[openstack-dev] [nova][glance][keystone] Philosophy of the service catalog (was: Who needs multiple api_servers?)

2017-05-10 Thread Mike Dorman
After discussion in the Large Deployments Team session this morning, we wanted 
to follow up on the earlier thread [1,2] about overriding endpoint URLs.

That topic is exposing an underlying implication about the purpose of the 
service catalog.  The LDT position is that the service catalog should be for 
end user clients to do endpoint discovery.  While it can also be used for 
discovery by other OpenStack services, we desire to maintain the ability to 
override (like that which was discussed in the previous thread about Glance.)  
In addition to the Glance to nova-compute use case, the feedback during the LDT 
session surfaced potential use cases for other services.

The point to raise here from LDT is that we would like to avoid a trend toward 
services *only* supporting discovery via the service catalog, with no ability 
to override in config.  I.e., we want to maintain the endpoint_override (and 
similar) options.
Thanks!

[1]  http://lists.openstack.org/pipermail/openstack-dev/2017-April/116028.html 
/ 
http://lists.openstack.org/pipermail/openstack-operators/2017-April/013272.html
[2]  
http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#116133 
/ 
http://lists.openstack.org/pipermail/openstack-operators/2017-May/thread.html#13309
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-04-28 Thread Mike Dorman
l exist. Instead of then deprecating the api_servers list, we keep 
it- but add a big doc warning listing the gotchas and limitations - but 
for those folks for whom they are not an issue, you've got an out.

Alternative Two - Hybrid Approach - optional list of URLs

We go ahead and move to service config being the standard way one lists 
how to consume a service from the catalog. One of the standard options 
for consuming services is "endpoint_override" - which is a way an API 
user can say "hi, please to ignore the catalog and use this endpoint 
I've given you instead". The endpoint in question is a full URL, so 
https/http and ports and whatnot are all handled properly.
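
To make that concrete, a service section that consumes the catalog but
overrides the endpoint might look roughly like this in nova.conf (a
sketch only; the option names are the generic keystoneauth-style ones
and the hostname is made up):

    [glance]
    service_type = image
    interface = internal
    endpoint_override = https://glance.internal.example.com:9292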

We also add an option "endpoint_override_list" which 
allows you to provide a list of URLs (not API servers) and if you 
provide that option, we'll keep the logic of choosing one at random at 
API call time. It's still a poor load balancer, and we'll still put 
warnings in the docs about it not being a featureful load balancing 
solution, but again would be available if needed.
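
Roughly, the selection logic would stay what it is today - a fresh
random pick on every API call, nothing smarter (illustrative sketch
only, not real nova code):

    import random

    endpoints = ['https://glance01.internal:9292',
                 'https://glance02.internal:9292']

    def pick_endpoint():
        # chosen per API call; a poor man's load balancer
        return random.choice(endpoints)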

Alternative Three - We ignore you and give you docs

I'm only including this in the name of completeness. But we 
could write a bunch of docs about a recommended way of putting your 
internal endpoints in a load balancer and registering that with the 
internal endpoint in keystone. (I would prefer to make the operators 
happy, so let's say whatever vote I have is not for this option)

Alternative Four - We update client libs to understand multiple values 
from keystone for endpoints

I _really_ don't like this one - as I think us doing dumb software 
load balancing client side is prone to a ton of failures. BUT - right now 
the assumption when consuming endpoints from the catalog is that one and 
only one endpoint will be returned for a given 
service_type/service_name/interface. Rather than special-casing the
url roundrobin in nova, we could move that round-robin to be in the base 
client library, update api consumption docs with round-robin 
recommendations and then have you register the list of endpoints with 
keystone.
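
In code terms the client-side change would be small - something like
cycling through whatever list of endpoints the catalog handed back
instead of assuming exactly one (toy sketch, not a real client library
interface):

    import itertools

    class EndpointRotator(object):
        """Rotate through all endpoints returned for a given
        service_type/service_name/interface."""

        def __init__(self, endpoints):
            self._cycle = itertools.cycle(endpoints)

        def next_endpoint(self):
            return next(self._cycle)

    rotator = EndpointRotator(['https://img01:9292', 'https://img02:9292'])
    print(rotator.next_endpoint())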

I know the keystone team has long been _very_ against using keystone as 
a list of all the endpoints, and I agree with them. Putting it here for 
sake of argument.

Alternative Five - We update keystone to round-robin lists of endpoints

Potentially even worse than four and even more unlikely given the 
keystone team's feelings, but we could have keystone continue to only 
return one endpoint, but have it do the round-robin selection at catalog 
generation time.


Sorry - you caught me in early morning brainstorm mode.

I am neither nova core nor keystone core. BUT:

I think honestly if adding a load balancer in front of your internal 
endpoints is an undue burden and/or the usefulness of the lists 
outweighs the limitations they have, we should go with One or Two. (I 
think three through five are all terrible)

My personal preference would be for Two - the round-robin code winds up 
being the same logic in both cases, but at least in Two folks who want 
to SSL all the way _can_, and it shouldn't be an undue extra burden on 
those of you using the api_servers now. We also don't have to do the 
funky things we currently have to do to turn the api_servers list into 
workable URLs.
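
(The "funky things" are along these lines - api_servers entries may be
bare host:port values, so a scheme has to be guessed before they can be
used; rough illustration only, not the actual nova code:)

    def to_url(api_server):
        # bare 'host:9292' entries get a scheme bolted on;
        # full URLs pass through untouched
        if '//' not in api_server:
            return 'http://' + api_server
        return api_server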


On 04/27/2017 11:50 PM, Blair Bethwaite wrote:
> We at Nectar are in the same boat as Mike. Our use-case is a little
> bit more about geo-distributed operations though - our Cells are in
> different States around the country, so the local glance-apis are
> particularly important for caching popular images close to the
> nova-computes. We consider these glance-apis as part of the underlying
> cloud infra rather than user-facing, so I think we'd prefer not to see
> them in the service-catalog returned to users either... is there going
> to be a (standard) way to hide them?
>
> On 28 April 2017 at 09:15, Mike Dorman <mdor...@godaddy.com> wrote:
>> We make extensive use of the [glance]/api_servers list.  We configure 
that on hypervisors to direct them to Glance servers which are more “local” 
network-wise (in order to reduce network traffic across security 
zones/firewalls/etc.)  This way nova-compute can fail over in case one of the 
Glance servers in the list is down, without putting them behind a load 
balancer.  We also don’t run https for these “internal” Glance calls, to save 
the overhead when transferring images.
>>
>> End-user calls to Glance DO go through a real load balancer and then are 
distributed out to the Glance servers on the 


Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-04-28 Thread Mike Dorman
Ok.  That would solve some of the problem for us, but we’d still be losing the 
redundancy.  We could do some HAProxy tricks to route around downed services, 
but it wouldn’t handle the case when that one physical box is down.

Is there some downside to allowing endpoint_override to remain a list?   That 
piece seems orthogonal to the spec and IRC discussion referenced, which are 
more around the service catalog.  I don’t think anyone in this thread is 
arguing against the idea that there should be just one endpoint URL in the 
catalog.  But it seems like there are good reasons to allow multiples on the 
override setting (at least for glance in nova-compute.)

Thanks,
Mike



On 4/28/17, 8:05 AM, "Eric Fried" <openst...@fried.cc> wrote:

Blair, Mike-

There will be an endpoint_override that will bypass the service
catalog.  It still only takes one URL, though.

Thanks,
Eric (efried)

On 04/27/2017 11:50 PM, Blair Bethwaite wrote:
> We at Nectar are in the same boat as Mike. Our use-case is a little
> bit more about geo-distributed operations though - our Cells are in
> different States around the country, so the local glance-apis are
> particularly important for caching popular images close to the
> nova-computes. We consider these glance-apis as part of the underlying
> cloud infra rather than user-facing, so I think we'd prefer not to see
> them in the service-catalog returned to users either... is there going
> to be a (standard) way to hide them?
> 
    > On 28 April 2017 at 09:15, Mike Dorman <mdor...@godaddy.com> wrote:
>> We make extensive use of the [glance]/api_servers list.  We configure 
that on hypervisors to direct them to Glance servers which are more “local” 
network-wise (in order to reduce network traffic across security 
zones/firewalls/etc.)  This way nova-compute can fail over in case one of the 
Glance servers in the list is down, without putting them behind a load 
balancer.  We also don’t run https for these “internal” Glance calls, to save 
the overhead when transferring images.
>>
>> End-user calls to Glance DO go through a real load balancer and then are 
distributed out to the Glance servers on the backend.  From the end-user’s 
perspective, I totally agree there should be one, and only one URL.
>>
>> However, we would be disappointed to see the change you’re suggesting 
implemented.  We would lose the redundancy we get now by providing a list.  Or 
we would have to shunt all the calls through the user-facing endpoint, which 
would generate a lot of extra traffic (in places where we don’t want it) for 
image transfers.
>>
>> Thanks,
>> Mike
>>
>>
>>
>> On 4/27/17, 4:02 PM, "Matt Riedemann" <mriede...@gmail.com> wrote:
>>
>> On 4/27/2017 4:52 PM, Eric Fried wrote:
>> > Y'all-
>> >
>> >   TL;DR: Does glance ever really need/use multiple endpoint URLs?
>> >
>> >   I'm working on bp use-service-catalog-for-endpoints[1], which 
intends
>> > to deprecate disparate conf options in various groups, and 
centralize
>> > acquisition of service endpoint URLs.  The idea is to introduce
>> > nova.utils.get_service_url(group) -- note singular 'url'.
>> >
>> >   One affected conf option is [glance]api_servers[2], which 
currently
>> > accepts a *list* of endpoint URLs.  The new API will only ever 
return *one*.
>> >
>> >   Thus, as planned, this blueprint will have the side effect of
>> > deprecating support for multiple glance endpoint URLs in Pike, and
>> > removing said support in Queens.
>> >
>> >   Some have asserted that there should only ever be one endpoint 
URL for
>> > a given service_type/interface combo[3].  I'm fine with that - it
>> > simplifies things quite a bit for the bp impl - but wanted to make 
sure
>> > there were no loudly-dissenting opinions before we get too far 
down this
>> > path.
>> >
>> > [1]
>> > 
https://blueprints.launchpad.net/nova/+spec/use-service-catalog-for-endpoints
>> > [2]
>> > 
https://github.com/openstack/nova/blob/7e7bdb198ed6412273e22dea72e37a6371fce8bd/nova/conf/glance.py#L27-L37
>> > [3]
>> > 
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-04-27.log.html#t2017-04-27T20:38:29
>> >


Re: [Openstack-operators] [openstack-dev] [nova][glance] Who needs multiple api_servers?

2017-04-27 Thread Mike Dorman
We make extensive use of the [glance]/api_servers list.  We configure that on 
hypervisors to direct them to Glance servers which are more “local” 
network-wise (in order to reduce network traffic across security 
zones/firewalls/etc.)  This way nova-compute can fail over in case one of the 
Glance servers in the list is down, without putting them behind a load 
balancer.  We also don’t run https for these “internal” Glance calls, to save 
the overhead when transferring images.
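
(For concreteness, that configuration looks roughly like the following
on each hypervisor; hostnames invented for the example:)

    [glance]
    api_servers = http://glance01.zone1.example.com:9292,http://glance02.zone1.example.com:9292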

End-user calls to Glance DO go through a real load balancer and then are 
distributed out to the Glance servers on the backend.  From the end-user’s 
perspective, I totally agree there should be one, and only one URL.

However, we would be disappointed to see the change you’re suggesting 
implemented.  We would lose the redundancy we get now by providing a list.  Or 
we would have to shunt all the calls through the user-facing endpoint, which 
would generate a lot of extra traffic (in places where we don’t want it) for 
image transfers.

Thanks,
Mike



On 4/27/17, 4:02 PM, "Matt Riedemann"  wrote:

On 4/27/2017 4:52 PM, Eric Fried wrote:
> Y'all-
>
>   TL;DR: Does glance ever really need/use multiple endpoint URLs?
>
>   I'm working on bp use-service-catalog-for-endpoints[1], which intends
> to deprecate disparate conf options in various groups, and centralize
> acquisition of service endpoint URLs.  The idea is to introduce
> nova.utils.get_service_url(group) -- note singular 'url'.
>
>   One affected conf option is [glance]api_servers[2], which currently
> accepts a *list* of endpoint URLs.  The new API will only ever return 
*one*.
>
>   Thus, as planned, this blueprint will have the side effect of
> deprecating support for multiple glance endpoint URLs in Pike, and
> removing said support in Queens.
>
>   Some have asserted that there should only ever be one endpoint URL for
> a given service_type/interface combo[3].  I'm fine with that - it
> simplifies things quite a bit for the bp impl - but wanted to make sure
> there were no loudly-dissenting opinions before we get too far down this
> path.
>
> [1]
> 
https://blueprints.launchpad.net/nova/+spec/use-service-catalog-for-endpoints
> [2]
> 
https://github.com/openstack/nova/blob/7e7bdb198ed6412273e22dea72e37a6371fce8bd/nova/conf/glance.py#L27-L37
> [3]
> 
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-04-27.log.html#t2017-04-27T20:38:29
>
> Thanks,
> Eric Fried (efried)
> .
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

+openstack-operators

-- 

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] OsOps Reboot

2017-01-23 Thread Mike Dorman
+1!  Thanks for driving this.


From: Edgar Magana 
Date: Friday, January 20, 2017 at 1:23 PM
To: "m...@mattjarvis.org.uk" , Melvin Hillsman 

Cc: OpenStack Operators 
Subject: Re: [Openstack-operators] OsOps Reboot

I super second this! Yes, looking forward to amazing contributions there.

Edgar

From: Matt Jarvis 
Reply-To: "m...@mattjarvis.org.uk" 
Date: Friday, January 20, 2017 at 12:33 AM
To: Melvin Hillsman 
Cc: OpenStack Operators 
Subject: Re: [Openstack-operators] OsOps Reboot

Great stuff Melvin ! Look forward to seeing this move forward.

On Fri, Jan 20, 2017 at 6:32 AM, Melvin Hillsman wrote:
Good day everyone,

As operators we would like to reboot the efforts started around OsOps. Initial 
things that may make sense to work towards are starting back meetings, 
standardizing the repos (like having a lib or common folder, READMEs include 
release(s) tool works with, etc), increasing feedback loop from operators in 
general, actionable work items, identifying teams/people with resources for 
continuous testing/feedback, etc.

We have got to a great place so let's increase the momentum and maximize all 
the work that has been done for OsOps so far. Please visit the following link [ 
https://goo.gl/forms/eSvmMYGUgRK901533
 ] to vote on day of the week and time (UTC) you would like to have OsOps 
meeting. And also visit this etherpad [ 
https://etherpad.openstack.org/p/osops-meeting
 ] to help shape the initial and ongoing agenda items.

Really appreciate you taking time to read through this email and looking 
forward to all the great things to come.

Also we started an etherpad for brainstorming around how OsOps could/would 
function; very rough draft/outline/ideas right now again please provide 
feedback: 
https://etherpad.openstack.org/p/osops-project-future


--
Kind regards,

Melvin Hillsman
Ops Technical Lead
OpenStack Innovation Center

mrhills...@gmail.com
phone: (210) 312-1267
mobile: (210) 413-1659
http://osic.org

Learner | Ideation | Belief | Responsibility | Command

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] RabbitMQ 3.6.x experience?

2017-01-09 Thread Mike Dorman
: 'autoheal'
> rabbitmq::config_variables:
>   'vm_memory_high_watermark': '0.6'
>   'collect_statistics_interval': 3
> rabbitmq::config_management_variables:
>   'rates_mode': 'none'
> rabbitmq::file_limit: '65535'
>
> Finally, if you do upgrade to 3.6.x please report back here with your
> results at scale!
>
>
> On Thu, Jan 5, 2017 at 8:49 AM, Mike Dorman <mdor...@godaddy.com> wrote:
>>
>> We are looking at upgrading to the latest RabbitMQ in an effort to ease
>> some cluster failover issues we’ve been seeing.  (Currently on 3.4.0)
>>
>>
>>
>> Anyone been running 3.6.x?  And what has been your experience?  Any
>> gotchas to watch out for?
>>
>>
>>
>> Thanks,
>>
>> Mike
>>
>>
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] RabbitMQ 3.6.x experience?

2017-01-05 Thread Mike Dorman
We are looking at upgrading to the latest RabbitMQ in an effort to ease some 
cluster failover issues we’ve been seeing.  (Currently on 3.4.0)

Anyone been running 3.6.x?  And what has been your experience?  Any gotchas to 
watch out for?

Thanks,
Mike

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [neutron] neutron-server high CPU and RAM load when updating DHCP ports

2016-12-23 Thread Mike Dorman
We noticed an issue in one of our larger clouds (~700 hypervisors and ovs 
agents) where (Liberty) neutron-server CPU and RAM load would spike up quite a 
bit whenever a DHCP agent port was updated.  So much load that processes were 
getting OOM killed on our API servers, and so many queries were going to the 
database that it was affecting performance of other APIs (sharing the same 
database cluster.)

Kris Lindgren determined what was happening: any time a DHCP port is changed, 
Neutron forces a complete refresh of all security group filter rules for all 
ports on the same network as the DHCP port.  We run only provider networks 
which VMs plug directly into, and our largest network has several thousand 
ports.  This was generating an avalanche of RPCs from the OVS agents, thus 
loading up neutron-server with a lot of work.

We only use DHCP as a backup network configuration mechanism in case something 
goes wrong with config-drive, so we are not huge users of it.  But DHCP agents 
are being scheduled and removed often enough, and our networks contain a large 
enough number of ports, that this has begun affecting us quite a bit.

Kevin Benton suggested a minor patch [1] to disable the blanket refresh of 
security group filters on DHCP port changes.  I tested it out in our staging 
environment and can confirm that:

-  Security group filters are indeed not refreshed on DHCP port changes

-  iptables rules generated for regular VM ports still include the 
generic rules to allow the DHCP ports 67 and 68 regardless of the presence of a 
DHCP agent on the network or not.  (This covers the scenario where VM ports are 
created while there are no DHCP agents, and a DHCP agent is added later.)
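
The linked gist has the actual diff; the gating idea boils down to
something like this (my illustration only, not the patch itself):

    DEVICE_OWNER_DHCP = 'network:dhcp'  # device_owner neutron sets on DHCP ports

    def triggers_sg_refresh(port):
        """Skip the network-wide security group filter refresh when the
        changed port is a DHCP port; the generic DHCP allow rules on VM
        ports cover it anyway."""
        return port.get('device_owner') != DEVICE_OWNER_DHCP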

I think there are some plans to deprecate this behavior.  As far as I know, it 
still exists in master neutron.  I’m happy to put the trivial patch up for 
review if people think this is a useful change to Neutron.

We are a bit of an edge case with such large number of ports per network.  We 
are also considering disabling DHCP altogether since it is really not used.  
But, wanted to share the experience with others in case people are running into 
the same issue.

Thanks,
Mike

[1] https://gist.github.com/misterdorm/37a8997aed43081bac8d12c7f101853b
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Question on coding style on osops-tools-contrib repo

2016-12-22 Thread Mike Dorman
There is no standard strategy for this particular repo.  Although I do agree 
with you that the neutron directory is probably not the place for this.

I would suggest creating a top level lib/ directory and placing it there.  
There is already a multi/ directory, but that’s for scripts for non-specific 
services.  So I think it’s best to put library type things elsewhere.

Thanks!
Mike


On 12/21/16, 3:36 AM, "Saverio Proto"  wrote:

Hello,

in the osops-tools-contrib repo so far I proposed always python
scripts that are contained in a single file.

Now I have this file:
openstackapi.py

that I reuse in many scripts, look at this:
https://review.openstack.org/#/c/401409/

but maybe is not the best idea to commit this generic file in the
neutron folder.

any advice how to handle this ? what is the accepted python import strategy 
?

thanks

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ceilometer/oslo.messaging connect to multiple RMQ endpoints

2016-11-03 Thread Mike Dorman
Just what I needed, thanks Sam.  And I can also confirm this works like a champ.

I was digging through oslo.messaging stuff looking for this, I completely 
overlooked the notification settings in ceilometer itself.

Appreciate pointing me in the right direction!
Mike


From: Sam Morrison <sorri...@gmail.com>
Date: Thursday, November 3, 2016 at 7:04 PM
To: Mike Dorman <mdor...@godaddy.com>
Cc: OpenStack Operators <openstack-operators@lists.openstack.org>
Subject: Re: [Openstack-operators] Ceilometer/oslo.messaging connect to 
multiple RMQ endpoints

That was me! and yes you can do it when consuming notifications with 
ceilometer-agent-notification

Eg in our ceilometer.conf we have

[notification]
workers=12
disable_non_metric_meters=true
store_events = true
batch_size = 50
batch_timeout = 5
messaging_urls = rabbit://XX:XX@rabbithost1:5671/vhost1
messaging_urls = rabbit://XX:XX@rabbithost2:5671/vhost2
messaging_urls = rabbit://XX:XX@rabbithost3:5671/vhost3


If no messaging_urls are set then it will fall back to the settings in the 
[oslo_messaging_rabbit] config section
Also if you set messaging_urls then it won’t consume from the rabbit specified 
in [oslo_messaging_rabbit] so you have to add it to messaging_urls too.

Cheers,
Sam




On 4 Nov. 2016, at 10:28 am, Mike Dorman 
<mdor...@godaddy.com> wrote:

I heard third hand from the summit that it’s possible to configure 
Ceilometer/oslo.messaging with multiple rabbitmq_hosts config entries, which 
will let you connect to multiple RMQ endpoints at the same time.

The scenario here is we use the Ceilometer notification agent to pipe events 
from OpenStack services into a Kafka queue for consumption by other team(s) in 
the company.  We also run Nova cells v1, so we have to run one Ceilometer agent 
for the API cell, as well as an agent for every compute cell (because they have 
independent RMQ clusters.)

Anyway, I tried configuring it this way and it still only connects to a single 
RMQ server.  We’re running Liberty Ceilometer and oslo.messaging, so I’m 
wondering if this behavior is only in a later version?  Can anybody shed any 
light?  I would love to get away from running so many Ceilometer agents.

Thanks!
Mike

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Updating oschecks

2016-11-03 Thread Mike Dorman
Absolutely agree.  The osops repos started as (and frankly, still are) mostly 
dumping grounds for tools folks had built and were running locally.  This was 
meant to be a first step at sharing and collaboration.  The kind of 
improvements you’re talking about is exactly the direction we want to take this 
stuff.

Thanks!!
Mike


From: Melvin Hillsman 
Organization: OpenStack Innovation Center
Reply-To: "mrhills...@gmail.com" 
Date: Thursday, November 3, 2016 at 12:48 PM
To: Lars Kellogg-Stedman , OpenStack Operators 

Subject: Re: [Openstack-operators] Updating oschecks

Hey Lars,

I think the needs you have are relevant to anyone who would use this tooling 
and I think you should definitely move forward with implementing what you have 
prototyped. I personally believe any improvements to the tools in osops repos 
are welcome. Bringing modularity to this as well is great from my perspective.

On 11/03/2016 01:03 PM, Lars Kellogg-Stedman wrote:

I've recently started working with the oscheck scripts in the
osops-tools-monitoring project [1], and I found that in their current
form they didn't quite meet my needs.  In particular:

- They don't share a common set of authentication options
- They can't read credentials from files
- Many of them require a priori configuration of the openstack
  environment, which means they can't be used to health check a new
  deployment

I've spent a little time recently prototyping a new set of health
check scripts, available here:

  https://github.com/larsks/oschecks

I'd like to emphasize that these *are not* currently meant as a usable
replacement for the existing checks; they were to prototype (a) the
way I'd like the user interface to work and (b) the way I'd like
things like credentials to work.

This project offers the following features:

- They use os_client_config for managing credentials, so they can be
  configured from a clouds.yaml file, or the environment, or the
  command line, and it all Just Works.

- Authentication is handled in just one place in the code for all the
  checks.

- The checks are extensible (using the cliff framework), which means
  that checks with different sets of requirements can be
  packaged/installed separately.  See, for example:

  https://github.com/larsks/oschecks_systemd

- For every supported service there is a simple "can I make an
  authenticated request to the API successfully" check that does not
  require any pre-existing resources to be created.

- They are (hopefully) structured such that it is relatively easy to
  write new checks that follow the same syntax and behavior of the
  other checks.

If people think this is a useful way of implementing these health
checks, I would be happy to do the work necessary to make them a mostly
drop-in replacement for the existing checks (adding checks that are
currently missing, and adding appropriate console-script entrypoints to
match the existing names, etc).

I would appreciate any feedback.  Sorry for the long message, and thanks
for taking the time to read this far!

[1]: https://github.com/openstack/osops-tools-monitoring/tree/master/monitoring-for-openstack/oschecks
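
To make the "can I make an authenticated request" style of check
concrete, here is a minimal sketch of the sort of thing described above
(my illustration, not code from the oschecks repo; it assumes a
clouds.yaml entry named 'mycloud'):

    import sys

    import os_client_config

    def check_keystone(cloud_name='mycloud'):
        """Nagios-style check: can we authenticate against keystone?"""
        try:
            cloud = os_client_config.OpenStackConfig().get_one_cloud(cloud=cloud_name)
            cloud.get_session().get_token()  # forces a real auth round-trip
        except Exception as exc:
            print('CRITICAL: keystone auth failed: %s' % exc)
            return 2
        print('OK: authenticated to keystone')
        return 0

    if __name__ == '__main__':
        sys.exit(check_keystone())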






___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



--
Kind regards,
--
Melvin Hillsman
Ops Technical Lead
OpenStack Innovation Center

mrhills...@gmail.com
mobile: (210) 413-1659
office: (210) 312-1267
Learner | Ideation | Belief | Responsibility | Command
http://osic.org
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova cells patches against Liberty

2016-05-19 Thread Mike Dorman
d I 
think vm_state are set at that point.  The name accessor method references 
self.id, which (for some reason) is not lazy loadable.

So I tried moving the _ensure_cells_system_metadata later in the save method, 
after the object is loaded from the database (here: 
https://github.com/openstack/nova/blob/stable/liberty/nova/objects/instance.py#L676
 )  That seems to work in practice, but it causes some of the tox tests to 
fail:  https://gist.github.com/misterdorm/cc7dfd235ebcc2a23009b9115b58e4d5

Anyways, I’m at a bit of a loss here and curious if anybody might have some 
better insights.

Thanks,
Mike



From:  Sam Morrison <sorri...@gmail.com>
Date:  Wednesday, May 4, 2016 at 6:23 PM
To:  Mike Dorman <mdor...@godaddy.com>
Cc:  OpenStack Operators <openstack-operators@lists.openstack.org>
Subject:  Re: [Openstack-operators] Nova cells patches against Liberty


Hi Mike,

I’ve also been working on these and have some updated patches at:

https://github.com/NeCTAR-RC/nova/commits/stable/liberty-cellsv1

There are a couple of patches that you have in your tree that need updating for 
Liberty. Mainly around supporting the v2.1 API and more things moved to 
objects. I have also written some tests for some more of them too. I haven’t 
tested all of these functionally
 yet but they pass all tox tests.

Cheers,
Sam




On 5 May 2016, at 4:19 AM, Mike Dorman <mdor...@godaddy.com> wrote:

I went ahead and pulled out the Nova cells patches we’re running against 
Liberty so that others can use them if so desired.

https://github.com/godaddy/openstack-nova-patches

Usual disclaimers apply here, your mileage may vary, these may not work as 
expected in your environment, etc.  We have tested these at a basic level (unit 
tests), but are not running these for Liberty in real production yet.

Mike






___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators






___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Nova cells patches against Liberty

2016-05-04 Thread Mike Dorman
I went ahead and pulled out the Nova cells patches we’re running against 
Liberty so that others can use them if so desired.

https://github.com/godaddy/openstack-nova-patches

Usual disclaimers apply here, your mileage may vary, these may not work as 
expected in your environment, etc.  We have tested these at a basic level (unit 
tests), but are not running these for Liberty in real production yet.

Mike




___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Large Deployments Team] Meeting tomorrow - 14:00 UTC

2016-04-20 Thread Mike Dorman
1600 UTC tomorrow (Thursday), right?  So if Outlook is calculating it right, 
that’s 1100 US Central, 0900 US Pacific?

Thanks,
Mike



From: Matt Van Winkle
Date: Wednesday, April 20, 2016 at 9:38 AM
To: OpenStack Operators
Subject: [Openstack-operators] [Large Deployments Team] Meeting tomorrow - 
14:00 UTC

Hello LDT folks,
So, I totally dropped the ball last month, but it sounds like most of us were 
busy running clouds and forgot about the meeting.  I want to make sure we get 
together tomorrow, though.  The Summit is next week and we have a good amount 
of time to work on things there, so I’d like to firm up our agenda for those 
sessions.  The etherpad for the summit sessions  is here - 
https://etherpad.openstack.org/p/AUS-ops-Large-Deployments-Team  For those team 
members who aren’t in timezone friendly spots for this month’s meetings, please 
make some notes in there for things you’d like discussed in one of the three 
sessions.

Thanks!
VW
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Ops Summit Call for Lightning Talks!

2016-04-13 Thread Mike Dorman
The time is upon us again to put together some great lightning talks for the 
Ops summit!  We’ll have a single block of 90 minutes this time, so will have 
time for lots of good content from our community.

Details and sign up on the etherpad:  
https://etherpad.openstack.org/p/AUS-ops-lightning-talks

* Monday 14:50 - 16:20

* Talks should be 5 - 10 minutes, INCLUDING time for a few questions

* Potential topic ideas:
  * Project updates for existing or new project
  * Architecture show-and-tell (these are always popular and really useful for 
people)
  * Talk about a problem or bug you encountered and how you addressed it
  * Tools/utilities you've developed to help your daily life of running a cloud
  * Success stories/anecdotes about working with the larger community

I highly encourage everyone to participate.  This is a great opportunity to 
share about what you’ve been working on recently, as well as get your feet wet 
with presenting if you’ve never done a formal summit talk before.

Check out the etherpad, and sign up!

Thanks,
Mike


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [operators] updates to osops readme files

2016-02-01 Thread Mike Dorman
I’ve created several reviews in the osops repos to update the README files with 
more details on how to contribute, etc.  Please review:

https://review.openstack.org/#/c/274772
https://review.openstack.org/#/c/274773
https://review.openstack.org/#/c/274777
https://review.openstack.org/#/c/274774
https://review.openstack.org/#/c/274776

Thanks,
Mike

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [puppet] [neutron] Serious bug in puppet neutron cli json output parsing.

2016-01-04 Thread Mike Dorman
I wonder if we should just refactor the Neutron provider to support either 
format?  That way we stay independent from whatever the particular 
installation’s cliff/tablib situation is.

We can probably safely assume that none of the Neutron objects will have 
attributes called ‘Field’ or ‘Value’, so it would be fairly easy to detect 
which format is there.
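
The detection itself is trivial; in Python terms it would be something
like the sketch below (the provider is Ruby, and I'm assuming one
formatter emits a plain attribute/value mapping while the other emits
rows of {'Field': ..., 'Value': ...} - that assumption is worth
double-checking against real cliff/clifftablib output):

    import json

    def normalize_show_output(raw):
        """Return a flat {attribute: value} dict from either JSON shape."""
        data = json.loads(raw)
        if isinstance(data, list) and data and set(data[0]) == {'Field', 'Value'}:
            # tabular rows -> flatten into a single mapping
            return dict((row['Field'], row['Value']) for row in data)
        return data  # already a plain mapping

    print(normalize_show_output('[{"Field": "name", "Value": "net1"}]'))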

Mike


From: Denis Egorenko
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, December 31, 2015 at 5:59 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [puppet] [neutron] Serious bug in puppet neutron 
cli json output parsing.

Last I checked, which was quite a while ago, openstackclient didn't support 
everything we were using from the neutron client.

That's true. Openstack client doesn't support all features of neutron client 
[1].

I would prefer option 3, but unfortunately I also don't see a way to
detect stevedore and cliff.

[1] 
https://github.com/openstack/python-openstackclient/blob/master/setup.cfg#L329-L339

2015-12-30 19:53 GMT+03:00 Colleen Murphy:


On Wed, Dec 30, 2015 at 8:37 AM, Sofer Athlan-Guyot wrote:
Hi,

I have added neutron people as they may help to find a solution.

After banging my head against the wall for a whole afternoon I
discovered a serious bug in puppet neutron module.

I'm not going to repeat the details of the bug report here [1].  Basically:
 - neutron_port
 - neutron_subnet
 - neutron_router
 - neutron_network

may break idempotency randomly and won't work at all when clifftablib is
removed from the package dependency of python-openstackclient
(Mitaka[2])

So the problem is that neutron cli json output on liberty (at least, and
maybe before) is not consistent and may come from cliff or clifftablib.
I didn't test it but the same may apply to openstack cli. As we don't
use the openstack cli json output it's not an issue (for puppet modules).

The available solutions I can see are:
 1. go back to parsing csv, shell output (revert [3])
 2. find a way to leverage openstacklib parsing for neutron as well
 3. keep json and parse the right output (cliff) and find a way to make
sure that it is always used by stevedore
 4. ?

Last I checked, which was quite a while ago, openstackclient didn't support 
everything we were using from the neutron client. I would like to reevaluate 
that and go with option 2 if we can. Otherwise option 1 seems reasonable.

From my point of view 3) is not an option, but others may disagree.

The problem is tricky and the fact that the CI cannot detect this is
trickier[4].

So before Mitaka, the json parsing should go.  I would love to see an
interface that all puppet modules would use (solution 2).  The current
openstacklib parses openstack client well enough.  The neutron command
is not that different and I think there is space for code reuse.  This
would be a long term solution.  It would bring the advantage of having
only one interface to change if it was decided to use the API directly
for instance[5]

In the meantime, a quick solution to this bug must be found.

Looking forward to your comments.

Regards,

[1] https://bugs.launchpad.net/puppet-neutron/+bug/1530163
[2] https://bugs.launchpad.net/python-neutronclient/+bug/1529914
[3] https://review.openstack.org/#/c/238156/
[4] https://review.openstack.org/#/c/262223/
[5] http://lists.openstack.org/pipermail/openstack-dev/2015-October/076439.html
--
Sofer Athlan-Guyot

Colleen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Best Regards,
Egorenko Denis,
Deployment Engineer
Mirantis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] Better error messages for API policy enforcements

2015-12-02 Thread Mike Dorman
We use some custom API policies (as in policy.json) to restrict certain 
operations to particular roles, or to require some fields on calls (i.e. we 
require that users give us an availability zone when booting an instance.)
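
(For context, the role-restriction half of that is just an ordinary
policy.json rule - something like the following, where the rule name
and role are only an example, not our real policy:)

    {
        "os_compute_api:os-admin-actions:reset_state": "role:support_admin"
    }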

When the policy causes the operation to be denied, the only response that goes 
back to the user is something like “operation is denied by policy.”  This is 
confusing and it’d be really nice if we could send back a response like “you 
need to have  role to do this”, or “availability zone is required.”

I was thinking about writing up a RFE bug for a feature that would allow 
configuration of a custom “policy denied” message in policy.json.  Would this 
be useful/desired by others?

Mike

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-operators][osops] tools-contrib is open for business!

2015-11-20 Thread Mike Dorman
First let me just say that I find nothing more infuriating than “software 
licensing.”  We burn so much effort on this, when all we want to do is share 
the code to help others.  

I agree with Joe, I thought the agreement was stuff in “contrib” wouldn’t need 
license, specific formatting, test, etc.  So I feel like a blanket notice in 
the README is sufficient there.  We should keep the bar to this very low 
(simple lint checking only, IMO.)  Then do get into the “curated” repos, you 
have to have a proper license, formatting, etc.

Since the Apache 2 doc says “SHOULD”, I would argue we don’t make a license 
required for contrib, but recommend that people do put it in there as best 
practice.  But we shouldn’t hold up reviews and not merge stuff into contrib 
just because of a lack of license.

As an end user who just wants to put my stuff out there, it’s this kind of crap 
that makes me abandon the effort and not try again.  (Here is another good 
example: https://review.openstack.org/#/c/247725/ )

Mike





On 11/19/15, 9:40 PM, "Tom Fifield"  wrote:

>On 20/11/15 12:29, Matt Fischer wrote:
>> Is there a reason why we can't license the entire repo with Apache2 and
>> if you want to contribute you agree to that? Otherwise it might become a
>> bit of a nightmare.  Or maybe at least do "Apache2 unless otherwise stated"?
>
>According to http://www.apache.org/dev/apply-license.html#new
>
>"Each original source document (code and documentation, but excluding 
>the LICENSE and NOTICE files) SHOULD include a short license header at 
>the top."
>
>
>> On Thu, Nov 19, 2015 at 9:17 PM, Joe Topjian > > wrote:
>>
>> Thanks, JJ!
>>
>> It looks like David Wahlstrom submitted a script and there's a
>> question about license.
>>
>> https://review.openstack.org/#/c/247823/
>>
>> Though contributions to contrib do not have to follow a certain
>> coding style, can be very lax on error handling, etc, should they at
>> least mention a license? Thoughts?
>>
>>
>> On Wed, Nov 18, 2015 at 2:38 PM, JJ Asghar > > wrote:
>>
>>
>>
>> Hey everyone,
>>
>> I just want to announce that tools-contrib[1] is now open for
>> submissions. Please take a moment to read the README[2] to get
>> yourself familiar with it. I'm hoping to see many scripts and tools
>> start to trickle in.
>>
>> Remember, by committing to this repository, even a simple bash
>> script
>> you wrote, you're helping out your future Operators. This is for
>> your
>> future you, and our community, so treat em nice ;)!
>>
>> [1]: https://github.com/openstack/osops-tools-contrib
>> [2]:
>> 
>> https://github.com/openstack/osops-tools-contrib/blob/master/README.rst
>>
>> - --
>> Best Regards,
>> JJ Asghar
>> c: 512.619.0722  t: @jjasghar irc: j^2
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> 
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
>___

[Openstack-operators] [Nova] Support for server groups in cells v1

2015-11-19 Thread Mike Dorman
We recently patched Nova for support of server groups under cells v1.  It’s 
pretty rudimentary, but it works.

For anyone that’s interested, the patch is here:  
https://gist.github.com/misterdorm/5e9513bb1211b49e551c and I did a short 
write-up with some details at 
http://www.dorm.org/blog/nova-cells-v1-support-for-server-groups/

Mike

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova] FYI, local conductor mode is deprecated, pending removal in N

2015-11-12 Thread Mike Dorman
We do have a backlog story to investigate this more deeply; we just have not 
had the time to do it yet.  For us, it’s been easier/faster to add more 
hardware to conductor to get over the hump temporarily.

We kind of have that work earmarked for after the Liberty upgrade, in hopes 
that maybe it’ll be fixed there.

If anybody else has done even some trivial troubleshooting already, it’d be 
great to get that info as a starting point.  I.e. which specific calls to 
conductor are causing the load, etc.

Mike





On 11/12/15, 9:36 AM, "Matt Riedemann"  wrote:

>
>
>On 11/12/2015 8:15 AM, Andrew Laski wrote:
>> On 11/12/15 at 09:31pm, Yaguang Tang wrote:
>>> There are still performance issues in large deployments using
>>> nova-conductor; how did we make the decision to deprecate local mode?
>>> I see the patch was merged only three days after it was submitted.
>>
>> There is no timeline for removal of local mode at this time for exactly
>> the reason you mention.  It will be removed no earlier than the N cycle
>> though.  The deprecation is intended to signal that deployers should
>> start thinking about how to move to remote conductor and not expect
>> local mode to be around forever.  And it was always intended to be
>> temporary while deployments transitioned.
>>
>> It would be helpful if you could report the performance issues that you
>> are seeing so that remote conductor can be improved to accommodate
>> deployments of all sizes.
>
>Yes, also because this came up in the operators list:
>
>http://lists.openstack.org/pipermail/openstack-operators/2015-October/008577.html
>
>Conductor was called out but I'm not sure if people have dug into the 
>main issues. I had replied to that other thread if we wanted to pick 
>this up there.
>
>>
>>>
>>> On Thu, Nov 12, 2015 at 10:51 AM, Matt Riedemann
>>> >>
 Details are in the change:

 https://review.openstack.org/#/c/242168/

 --

 Thanks,

 Matt Riedemann


 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

>>>
>>>
>>>
>>> --
>>> Yaguang Tang
>>> Technical Support, Mirantis China
>>>
>>> *Phone*: +86 15210946968
>>
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>-- 
>
>Thanks,
>
>Matt Riedemann
>
>
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Large Deployments Team] [nova] Cells v1 attach/detach blueprint

2015-11-10 Thread Mike Dorman
We met with nova last week [1] and it sounds like there is some support for merging cells 
v1 patches, provided they are covered by tests.

So for the attach/detach patch [2], there’s a nova cells + neutron job [3] 
that’s being worked on in order to cover that functionality.  Getting that 
test in place is the first step toward merging this one.

If you have experience with Devstack and/or gate testing, I’d encourage you to 
help review that test so we can move forward.

Thanks!


[1] 
http://eavesdrop.openstack.org/meetings/nova/2015/nova.2015-11-05-21.00.log.html
[2] https://review.openstack.org/#/c/215459/
[3] https://review.openstack.org/#/c/235485


From: Mike Dorman <mdor...@godaddy.com>
Date: Wednesday, November 4, 2015 at 3:29 PM
To: OpenStack Operators <openstack-operators@lists.openstack.org>
Subject: [Openstack-operators] [Large Deployments Team] [nova] Cells v1 
attach/detach blueprint

I reached out to nova today to see what is the best way forward on this patch 
[1].  mriedem had created a blueprint skeleton for this already [2].

It sounds like it’s unlikely that nova would accept it, because it will require 
a new CI job [3] and it goes against the philosophy of not adding to the v1 
code base.

They suggested we add it to the nova meeting agenda [4], by which I hope to get 
to a definite “yea” or “nay” on if this can go forward.

It’s tomorrow (Thursday 11/5) 2100 UTC, #openstack-meeting.  Please attend as 
you’re able.

Thanks,
Mike


[1] https://review.openstack.org/#/c/215459/
[2] https://blueprints.launchpad.net/nova/+spec/cells-v1-interface-events
[3] https://review.openstack.org/#/c/235485/
[4] https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [Large Deployments Team] Summit Meeting Notes & Actions

2015-11-04 Thread Mike Dorman
Thanks to everyone who was able to attend the LDT session in Tokyo!  I wanted 
to just briefly summarize some of the key points and action items.

* Neutron segmented/routed networks
  * Encourage you to follow & provide feedback on the Neutron spec [1]

* Cells v1 Patches
  * Many of the patches we gathered in [2] are no longer relevant or have been 
addressed some other way
  * Several will require a blueprint to get them into Nova
  * ACTION: mdorman to build a spec for vif plug patch [3].  If this goes well 
we can model future specs for other patches after this.
  * ACTION: sorrison to resubmit trivial cell name display patch [4]

* Public Clouds
  * Gathered good list of requirements/nice-to-haves in the etherpad [5]
  * Will go over these more in future IRC meeting to figure out what next steps 
to take on these.

* Glance Asks
  * List gathered in etherpad [6]
  * Also will discuss these in more detail in later meetings to determine 
appropriate RFE process.

I’ve also updated our couple wiki pages [7,8] with more current info based on 
results from the summit.

Our next meeting will be on Friday, November 20th at 3:00 UTC in 
#openstack-operators [8]  Please join us!

Thanks all!
Mike


[1] https://review.openstack.org/#/c/225384/
[2] https://etherpad.openstack.org/p/PAO-LDT-cells-patches
[3] https://review.openstack.org/#/c/235485/
[4] https://review.openstack.org/#/c/184158/
[5] https://etherpad.openstack.org/p/TYO-ops-large-deployments-team
[6] https://etherpad.openstack.org/p/LDT-glance-asks
[7] https://wiki.openstack.org/wiki/Large_Deployment_Team
[8] https://wiki.openstack.org/wiki/Meetings/LDT
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [Large Deployments Team] [nova] Cells v1 attach/detach blueprint

2015-11-04 Thread Mike Dorman
I reached out to nova today to see what is the best way forward on this patch 
[1].  mriedem had created a blueprint skeleton for this already [2].

It sounds like it’s unlikely that nova would accept it, because it will require 
a new CI job [3] and it goes against the philosophy of not adding to the v1 
code base.

They suggested we add it to the nova meeting agenda [4], by which I hope to get 
to a definite “yea” or “nay” on if this can go forward.

It’s tomorrow (Thursday 11/5) 2100 UTC, #openstack-meeting.  Please attend as 
you’re able.

Thanks,
Mike


[1] https://review.openstack.org/#/c/215459/
[2] https://blueprints.launchpad.net/nova/+spec/cells-v1-interface-events
[3] https://review.openstack.org/#/c/235485/
[4] https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Large Deployments Team] Summit Meeting Notes & Actions

2015-11-04 Thread Mike Dorman
Sorry, wrong link for [3] below.



On 11/4/15, 3:05 PM, "Mike Dorman" <mdor...@godaddy.com> wrote:

>Thanks to everyone who was able to attend the LDT session in Tokyo!  I wanted 
>to just briefly summarize some of the key points and action items.
>
>* Neutron segmented/routed networks
>  * Encourage you to follow & provide feedback on the Neutron spec [1]
>
>* Cells v1 Patches
>  * Many of the patches we gathered in [2] are no longer relevant or have been 
> addressed some other way
>  * Several will require a blueprint to get them into Nova
>  * ACTION: mdorman to build a spec for vif plug patch [3].  If this goes well 
> we can model future specs for other patches after this.
>  * ACTION: sorrison to resubmit trivial cell name display patch [4]
>
>* Public Clouds
>  * Gathered good list of requirements/nice-to-haves in the etherpad [5]
>  * Will go over these more in future IRC meeting to figure out what next 
> steps to take on these.
>
>* Glance Asks
>  * List gathered in etherpad [6]
>  * Also will discuss these in more detail in later meetings to determine 
> appropriate RFE process.
>
>I’ve also updated our couple wiki pages [7,8] with more current info based on 
>results from the summit.
>
>Our next meeting will be on Friday, November 20th at 3:00 UTC in 
>#openstack-operators [8]  Please join us!
>
>Thanks all!
>Mike
>
>
>[1] https://review.openstack.org/#/c/225384/
>[2] https://etherpad.openstack.org/p/PAO-LDT-cells-patches
>[3] https://review.openstack.org/#/c/215459/ (FIXED)
>[4] https://review.openstack.org/#/c/184158/
>[5] https://etherpad.openstack.org/p/TYO-ops-large-deployments-team
>[6] https://etherpad.openstack.org/p/LDT-glance-asks
>[7] https://wiki.openstack.org/wiki/Large_Deployment_Team
>[8] https://wiki.openstack.org/wiki/Meetings/LDT
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Action Required]: Mitaka Operators Policy Changes

2015-11-03 Thread Mike Dorman
A bit late to the party here, but I’ve added all the policy changes for Go 
Daddy in the etherpad.  Happy to talk about any of them in more detail if 
desired.

Thanks,
Mike


From: Lance Bragstad >
Date: Tuesday, October 27, 2015 at 2:09 AM
To: OpenStack Operators 
>
Subject: [Openstack-operators] [Action Required]: Mitaka Operators Policy 
Changes

Hello ops folks,

During our session on global admin [0] [1], we heard several operators were 
making changes to policy that worked towards a common goal. Unfortunately, our 
session ended before we could gather all deployer changes.

As a follow-up, we'd like to give all the operators who are making changes to 
policy the chance to record them [2]. Don't worry about duplicates. If you're 
making the same policy change as another deployer, add your changes and use 
cases anyway. We can prune and consolidate duplicates and it serves as helpful 
data. We' d like to see if there is a pattern we can abstract and move it 
upstream.

Please use the following template in the etherpad.

operating organization: organization
operator: name and IRC nick!
change made to policy: general description or a link to a diff works
why: descriptions and use cases describing why you need to change policy

Thanks!

Lance
irc: lbragstad

[0] https://etherpad.openstack.org/p/mitaka-cross-project-global-admin
[1] https://etherpad.openstack.org/p/mitaka-cross-project-service-catalog-tng
[2] https://etherpad.openstack.org/p/mitaka-ops-policy-modifications
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [puppet] Creating puppet-keystone-core and proposing Richard Megginson core-reviewer

2015-11-02 Thread Mike Dorman
+1

From: Clayton O'Neill >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Sunday, November 1, 2015 at 5:13 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [puppet] Creating puppet-keystone-core and 
proposing Richard Megginson core-reviewer

+1

Big thanks for Rich for all the work he’s done so far.

On Mon, Nov 2, 2015 at 3:34 AM, Adam Young 
> wrote:
On 10/31/2015 10:55 AM, Emilien Macchi wrote:
At the Summit we discussed scaling up our team.
We decided to investigate the creation of sub-groups specific to our modules 
that would have +2 power.

I would like to start with puppet-keystone:
https://review.openstack.org/240666

And propose Richard Megginson part of this group.

Rich has been leading puppet-keystone work since our Juno cycle. Without his 
leadership and skills, I'm not sure we would have Keystone v3 support in our 
modules.
He's a good Puppet reviewer and takes care of backward compatibility. He also 
has strong knowledge of how Keystone works. He's always willing to lead our 
roadmap regarding identity deployment in OpenStack.

Having him on-board is for us an awesome opportunity to be ahead of other 
deployment tools and support many features in Keystone that real deployments 
actually need.

I would like to propose him part of the new puppet-keystone-core group.

As a Keystone developer I have to say I am indebted to Rich for his efforts.  
+1 from me.


Thank you Rich for your work, which is very appreciated.
--
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Informal Ops Meetup?

2015-10-29 Thread Mike Dorman
I’ll be there.





On 10/29/15, 4:56 PM, "gustavo panizzo (gfa)"  wrote:

>+1
>
>On Thu, Oct 29, 2015 at 07:39:33 +, Kris G. Lindgren wrote:
>> Hello all,
>> 
>> I am not sure if you guys have looked at the schedule for Friday… but it's 
>> all working groups.  I was talking with a few other operators and the idea 
>> came up around doing an informal ops meetup tomorrow.  So I wanted to float 
>> this idea by the mailing list and see if anyone was interested in trying to 
>> do an informal ops meet up tomorrow.
>> 
>> ___
>> Kris Lindgren
>> Senior Linux Systems Engineer
>> GoDaddy
>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>-- 
>1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333
>
>keybase: http://keybase.io/gfa
>
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ops Lightning Talk schedule

2015-10-25 Thread Mike Dorman
Shouldn’t be a major problem; we’ll make it work.

Anyone currently on the Wednesday list available to move to Tuesday?  Obviously 
Tuesday has the most conflicts.

Thanks all!





On 10/23/15, 11:40 AM, "gustavo panizzo (gfa)" <g...@zumbi.com.ar> wrote:

>I moved from Tuesday to Wednesday, but I feel like Wednesday is already
>packed while Tuesday is not.
>
>On Tuesday I conflict with non-production environments. besides that I'm
>ok
>
>On Tue, Oct 20, 2015 at 07:48:28PM +, Mike Dorman wrote:
>> I’ve gone ahead and (somewhat arbitrarily) scheduled out the lightning talks 
>> between the sessions on Tuesday and Wednesday:
>> 
>> https://etherpad.openstack.org/p/TYO-ops-lightning-talks
>> 
>> Please confirm your timeslot, or if you need to go on the opposite day, 
>> update the etherpad.
>> 
>> A few talks have been dropped, so we still have time for more.  If you’d 
>> like a chance to talk about something of interest to you for 5 – 10 minutes, 
>> please sign up!
>> 
>> Thanks.
>> 
>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>-- 
>1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333
>
>keybase: http://keybase.io/gfa
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Ops Lightning Talk schedule

2015-10-20 Thread Mike Dorman
I’ve gone ahead and (somewhat arbitrarily) scheduled out the lightning talks 
between the sessions on Tuesday and Wednesday:

https://etherpad.openstack.org/p/TYO-ops-lightning-talks

Please confirm your timeslot, or if you need to go on the opposite day, update 
the etherpad.

A few talks have been dropped, so we still have time for more.  If you’d like a 
chance to talk about something of interest to you for 5 – 10 minutes, please 
sign up!

Thanks.

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-operators] OSOps we have a room in Toyko!

2015-10-20 Thread Mike Dorman
Could we get the etherpad link added to the sched page for this please?





On 10/20/15, 11:16 AM, "JJ Asghar"  wrote:

>
>
>Hey everyone!
>
>Thanks to Tom and believing in our project he's given us a room[1] to
>get some work done.  I've started a etherpad here[2] to start putting
>down some thoughts.
>
>This is great news, please put any topics or thoughts so we can make
>sure we hit everything that we need to.
>
>
>
>[1]:
>https://mitakadesignsummit.sched.org/event/25d0acfd326696198af1e209a3671d1f#.ViZv8JdgjXo
>[2]: https://etherpad.openstack.org/p/osops-toyko-2015
>
>- -- 
>Best Regards,
>JJ Asghar
>c: 512.619.0722 t: @jjasghar irc: j^2
>
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [Large Deployment Team] Action/Work Items Ahead of Summit, please review

2015-10-15 Thread Mike Dorman
During our meeting [1] today we discussed our agenda [2] and action items for 
the LDT working session [3] at the summit.

We’ve compiled a list of a few things folks can be working on before getting to 
Tokyo to ensure this session is productive:


1. Neutron Segmented/Routed Networks:
   * Review the current WIP spec: [4]
   * Neutron likely will have their own session on this topic [5], so we may 
opt to have most of the discussion there.  Schedule for that is TBD, watch the 
ether pads [2] [5] for updates.

2. Neutron General
   * Dev team is seeking “pain points” feedback, please add yours to the 
etherpad [6]

3. Common Cells v1 Patch sets
   * If you have not already done so, please add any cells patches you are 
carrying to the etherpad [7]
   * We’d like to assign the low-hanging-fruit patches out to folks to get into 
Nova reviews as a step toward merging them into upstream

4. Public Clouds
   * Plan to focus discussion during the session on identifying specific gaps 
and/or RFEs needed by this constituency
   * Please add these to the etherpad [2] and come ready to discuss at the 
summit

5. Glance Asks
   * Similar to Public Clouds, this will focus on current gaps and RFEs
   * Fill them in on the etherpad [8] and come prepared to discuss

6. Performance Issues
   * The cross project group has a session specifically for this [9] [10], so 
we will forego this discussion in LDT in lieu of that


Thanks for your participation in getting these work items moved forward.  We 
have a big agenda with only 90 minutes.  We can accomplish more in Tokyo if we 
prepare some ahead of time!


[1]  
http://eavesdrop.openstack.org/meetings/large_deployments_team_monthly_meeting/2015/large_deployments_team_monthly_meeting.2015-10-15-16.01.html
[2]  https://etherpad.openstack.org/p/TYO-ops-large-deployments-team
[3]  http://sched.co/4Nl4
[4]  https://review.openstack.org/#/c/225384/
[5]  https://etherpad.openstack.org/p/mitaka-neutron-next-network-model
[6]  https://etherpad.openstack.org/p/mitaka-neutron-next-ops-painpoints
[7]  https://etherpad.openstack.org/p/PAO-LDT-cells-patches
[8]  https://etherpad.openstack.org/p/LDT-glance-asks
[9]  http://sched.co/4Qds
[10] 
https://etherpad.openstack.org/p/mitaka-cross-project-performance-team-kick-off
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] Nova DB archiving script

2015-10-06 Thread Mike Dorman
I posted a patch against one of the Nova DB archiving scripts in the 
osops-tools-generic repo a few days ago to support additional tables:

https://review.openstack.org/#/c/229013/2

We’d like a few more folks to review to make sure it looks good.  Please take a 
few minutes and take a look.  Thanks!
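
For anyone unfamiliar with these scripts, the general shape is roughly this.
It is a sketch only, assuming the usual nova convention of a soft-delete
"deleted" column and a matching shadow_<table> for each table; the table list,
ordering and credentials below are placeholders, not the contents of the patch:

import pymysql

# Archive child tables before their parents to keep foreign keys happy; the
# real script covers more tables than this short example list.
TABLES = ['instance_faults', 'instance_metadata', 'instances']

conn = pymysql.connect(host='localhost', user='nova',
                       password='secret', db='nova')
try:
    with conn.cursor() as cur:
        for table in TABLES:
            # Copy soft-deleted rows into the shadow table, then purge them.
            cur.execute("INSERT INTO shadow_%s SELECT * FROM %s "
                        "WHERE deleted != 0" % (table, table))
            cur.execute("DELETE FROM %s WHERE deleted != 0" % table)
    conn.commit()
finally:
    conn.close()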

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack] [Neutron] DB migration error during production upgrade

2015-08-20 Thread Mike Dorman
Check that both tables use the same storage engine and collation (e.g., both 
InnoDB with the same character set).  Several of us have seen this same thing on 
foreign keys, and it’s because of an engine or collation mismatch between the tables.
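
A quick way to eyeball this, assuming a MySQL backend (host, credentials and
schema name below are placeholders); any table whose engine or collation
differs from the rest is a likely culprit for the errno 150 failure:

import pymysql

# List the engine and collation of every table in the Neutron schema so any
# odd one out is easy to spot.  The table being created by the migration
# inherits the server/migration defaults, so a mismatch shows up here.
conn = pymysql.connect(host='localhost', user='neutron',
                       password='secret', db='neutron')
try:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT TABLE_NAME, ENGINE, TABLE_COLLATION "
            "FROM information_schema.TABLES "
            "WHERE TABLE_SCHEMA = DATABASE() "
            "ORDER BY ENGINE, TABLE_COLLATION")
        for name, engine, collation in cur.fetchall():
            print("%-40s %-10s %s" % (name, engine, collation))
finally:
    conn.close()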





On 8/20/15, 9:25 AM, Jonathan Proulx j...@jonproulx.com wrote:

Hi All,

I'm hitting a DB migration error while attempting a production upgrade
(despite having successfully run the same upgrade on an only slightly
older copy of the database last week)

in:
INFO  [alembic.migration] Running upgrade 38495dc99731 - 4dbe243cd84d, nsxv

Failing:
sqlalchemy.exc.OperationalError: (OperationalError) (1005, Can't
create table 'csail_stata_neutron.nsxv_internal_networks' (errno:
150)) \nCREATE TABLE nsxv_internal_networks (\n\tnetwork_purpose
ENUM('inter_edge_net') NOT NULL, \n\tnetwork_id VARCHAR(36),
\n\tPRIMARY KEY (network_purpose), \n\tFOREIGN KEY(network_id)
REFERENCES networks (id) ON DELETE CASCADE\n)ENGINE=InnoDB\n\n ()

poking by hand I can verify the FOREIGN KEY bit is what's failing but
I definitely have a 'networks' table and it definitely has 'id' fields
so not sure what is wrong here.

My production control plane is currently off line so any suggestions
would be much appreciated.

-Jon

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack-operators] Draft Agenda for PAO Ops Meetup (August18, 19)

2015-08-13 Thread Mike Dorman
I have sent out the Zoom meeting for everybody that’s supposed to be on it, I 
believe.

If I missed you, or there are others that want to join, here is the link:

https://godaddy.zoom.us/j/468307641

Tuesday 8/18 1330 US Pacific time.

Mike


From: Matt Van Winkle
Date: Monday, August 10, 2015 at 7:56 AM
To: Mike Dorman
Cc: OpenStack Operators
Subject: Re: [Openstack-operators] Draft Agenda for PAO Ops Meetup (August18, 
19)

Sorry, folks – late to the party here.  I'm fine with keeping the Neutron bits 
as the first part of the LDT meeting.  Looks like Mike has a mechanism set up 
for the folks far off.

As for the question on public clouds, I'm open to combining if there is no 
other home for the latter.  Many of them (Rackspace, GoDaddy and HP) have been 
active in the LDT activities at the summits and other places, so there is some 
overlap.  I'm also fine organizing an informal chat one of the evenings to 
figure out where the common ground lies with Public Cloud providers and how/if 
we need to organize a group around  that – assuming an official spot hasn't 
been found and I just haven't found it in my inbox yet.

Thanks!
VW


From: Mike Dorman mdor...@godaddy.com
Date: Thursday, August 6, 2015 10:13 PM
Cc: OpenStack Operators openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] Draft Agenda for PAO Ops Meetup (August18, 
19)

Yep!  I have you on the list to add to the Hangout when I set it up (probably 
next week.)

From: Kevin Benton
Date: Thursday, August 6, 2015 at 8:29 PM
To: Ryan Moats
Cc: mord...@inaugust.commailto:mord...@inaugust.com, OpenStack Operators
Subject: Re: [Openstack-operators] Draft Agenda for PAO Ops Meetup (August18, 
19)

Even though I will be in China at the bug hackathon, I would like to call in 
for the network meeting since this will have a direct impact on Neutron 
development.

On Thu, Aug 6, 2015 at 4:35 PM, Ryan Moats 
rmo...@us.ibm.commailto:rmo...@us.ibm.com wrote:

The network discussion was going to be the first item for that breakout, so I 
think that might be possible (Mike, what say you?)

Ryan Moats

Geoff Arnold ge...@geoffarnold.commailto:ge...@geoffarnold.com wrote on 
08/06/2015 03:32:19 PM:

 From: Geoff Arnold ge...@geoffarnold.commailto:ge...@geoffarnold.com
 To: Ryan Moats/Omaha/IBM@IBMUS
 Cc: mord...@inaugust.commailto:mord...@inaugust.com, 
 openstack-operators@lists.openstack.orgmailto:openstack-operators@lists.openstack.org,
 Tom Fifield t...@openstack.orgmailto:t...@openstack.org
 Date: 08/06/2015 03:31 PM
 Subject: Re: [Openstack-operators] Draft Agenda for PAO Ops Meetup
 (August18, 19)


 I was afraid of that.

 We’ve got 90 minutes for the breakout sessions - can we treat Large
 Scale and Public as a single session, and divide it 45-45 or 60-30?

 Geoff

 On Aug 6, 2015, at 12:51 PM, Ryan Moats 
 rmo...@us.ibm.commailto:rmo...@us.ibm.com wrote:

 Part of the LDT is discussing networking issues with Neutron folks
 that will not be in the room, so moving it would require re-
 synchronizing schedules. I'd rather avoid that if possible...

 Ryan Moats

 Geoff Arnold ge...@geoffarnold.commailto:ge...@geoffarnold.com wrote on 
 08/06/2015 12:37:49 PM:

  From: Geoff Arnold ge...@geoffarnold.commailto:ge...@geoffarnold.com
  To: Tom Fifield t...@openstack.orgmailto:t...@openstack.org, 
  mord...@inaugust.commailto:mord...@inaugust.com
  Cc: 
  openstack-operators@lists.openstack.orgmailto:openstack-operators@lists.openstack.org
  Date: 08/06/2015 12:38 PM
  Subject: Re: [Openstack-operators] Draft Agenda for PAO Ops Meetup
  (August 18, 19)
 
  Well, I’d prefer not to put Public Clouds into the existing HPC
  slot, because I’m involved in the Product WG discussion that’s
  scheduled at the same time.
 
  Could we put Public Clouds in the Tuesday breakout mix, and Large
  Deployments in the Wednesday breakout (replacing the HPC WG).
 
  Geoff
 
  PS Will you be there, Monty? Any preferences?
 
 
   On Aug 5, 2015, at 7:31 PM, Tom Fifield 
   t...@openstack.orgmailto:t...@openstack.org wrote:
  
   Thanks Geoff.
  
   Which session would you propose to replace?
  
  
   Regards,
  
  
   Tom
  
   On 06/08/15 03:14, Geoff Arnold wrote:
   I’d like to see some time spent on specific issues associated with
   public cloud operations.  (This is not the same as Large Deployments.)
   As Stefano pointed out yesterday:
  
   http://maffulli.net/2015/08/04/a-new-push-for-openstack-public-clouds/
  
   this is an area which probably needs more attention.
  
   Cheers,
  
   Geoff


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




--
Kevin Benton
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http

[Openstack-operators] PAO lightning talks

2015-08-11 Thread Mike Dorman
Hi all,

If you plan on giving a lightning talk at the ops meetup next week, please 
confirm your contact details on the etherpad:  
https://etherpad.openstack.org/p/PAO-ops-lightning-talks  I’ll reach out to 
everybody off list once we get a little closer to coordinate the schedule.

We have 60 minutes total, so please figure on 5-10 minutes (including time for 
questions.)  Please have any slides or content available online, or send them 
to me, so we can all present off the same laptop and avoid the A/V context 
switches.

Thanks!
Mike

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Draft Agenda for PAO Ops Meetup (August18, 19)

2015-08-06 Thread Mike Dorman
Yep!  I have you on the list to add to the Hangout when I set it up (probably 
next week.)

From: Kevin Benton
Date: Thursday, August 6, 2015 at 8:29 PM
To: Ryan Moats
Cc: mord...@inaugust.com, OpenStack Operators
Subject: Re: [Openstack-operators] Draft Agenda for PAO Ops Meetup (August18, 
19)

Even though I will be in China at the bug hackathon, I would like to call in 
for the network meeting since this will have a direct impact on Neutron 
development.

On Thu, Aug 6, 2015 at 4:35 PM, Ryan Moats 
rmo...@us.ibm.commailto:rmo...@us.ibm.com wrote:

The network discussion was going to be the first item for that breakout, so I 
think that might be possible (Mike, what say you?)

Ryan Moats

Geoff Arnold ge...@geoffarnold.commailto:ge...@geoffarnold.com wrote on 
08/06/2015 03:32:19 PM:

 From: Geoff Arnold ge...@geoffarnold.commailto:ge...@geoffarnold.com
 To: Ryan Moats/Omaha/IBM@IBMUS
 Cc: mord...@inaugust.commailto:mord...@inaugust.com, 
 openstack-operators@lists.openstack.orgmailto:openstack-operators@lists.openstack.org,
 Tom Fifield t...@openstack.orgmailto:t...@openstack.org
 Date: 08/06/2015 03:31 PM
 Subject: Re: [Openstack-operators] Draft Agenda for PAO Ops Meetup
 (August18, 19)


 I was afraid of that.

 We’ve got 90 minutes for the breakout sessions - can we treat Large
 Scale and Public as a single session, and divide it 45-45 or 60-30?

 Geoff

 On Aug 6, 2015, at 12:51 PM, Ryan Moats 
 rmo...@us.ibm.commailto:rmo...@us.ibm.com wrote:

 Part of the LDT is discussing networking issues with Neutron folks
 that will not be in the room, so moving it would require re-
 synchronizing schedules. I'd rather avoid that if possible...

 Ryan Moats

 Geoff Arnold ge...@geoffarnold.commailto:ge...@geoffarnold.com wrote on 
 08/06/2015 12:37:49 PM:

  From: Geoff Arnold ge...@geoffarnold.commailto:ge...@geoffarnold.com
  To: Tom Fifield t...@openstack.orgmailto:t...@openstack.org, 
  mord...@inaugust.commailto:mord...@inaugust.com
  Cc: 
  openstack-operators@lists.openstack.orgmailto:openstack-operators@lists.openstack.org
  Date: 08/06/2015 12:38 PM
  Subject: Re: [Openstack-operators] Draft Agenda for PAO Ops Meetup
  (August 18, 19)
 
  Well, I’d prefer not to put Public Clouds into the existing HPC
  slot, because I’m involved in the Product WG discussion that’s
  scheduled at the same time.
 
  Could we put Public Clouds in the Tuesday breakout mix, and Large
  Deployments in the Wednesday breakout (replacing the HPC WG).
 
  Geoff
 
  PS Will you be there, Monty? Any preferences?
 
 
   On Aug 5, 2015, at 7:31 PM, Tom Fifield 
   t...@openstack.orgmailto:t...@openstack.org wrote:
  
   Thanks Geoff.
  
   Which session would you propose to replace?
  
  
   Regards,
  
  
   Tom
  
   On 06/08/15 03:14, Geoff Arnold wrote:
   I’d like to see some time spent on specific issues associated with
   public cloud operations.  (This is not the same as Large Deployments.)
   As Stefano pointed out yesterday:
  
   http://maffulli.net/2015/08/04/a-new-push-for-openstack-public-clouds/
  
   this is an area which probably needs more attention.
  
   Cheers,
  
   Geoff


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




--
Kevin Benton
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [Openstack-operators] [Neutron][L3] [Large Deployments Team] Representing a networks connected by routers

2015-08-04 Thread Mike Dorman
Ok, cool.  We plan to discuss this during the LDT time slot at 1330-1500 
Pacific on Tuesday 8/18.  We can have this as the first agenda item so there’s 
a defined start time for those who are remote.

I'll take ownership of setting up a hangout (or whatever.)  Do people have a 
preference on what videoconference tool to use?  Absent any opinions, I’ll just 
do a Google Hangout.

Thanks!
Mike


From: Kyle Mestery
Date: Tuesday, August 4, 2015 at 8:09 AM
To: Ryan Moats
Cc: Mike Dorman, OpenStack Development Mailing List (not for usage 
questions), OpenStack Operators
Subject: Re: [Openstack-operators] [openstack-dev] [Neutron][L3] [Large 
Deployments Team] Representing a networks connected by routers

Can you also try to have some sort of remote option? I'd like to attend this, 
and I'd like Carl to try and attend as well. Thanks!

On Tue, Aug 4, 2015 at 8:50 AM, Ryan Moats 
rmo...@us.ibm.commailto:rmo...@us.ibm.com wrote:

I will be there for my lightning talk, and I think armax and kevin_benton will 
be there - it would be good to find some time for us to pow-wow, along with 
some teleconference so that carl_baldwin and mestery can join in...

Ryan Moats (regXboi)

Mike Dorman mdor...@godaddy.commailto:mdor...@godaddy.com wrote on 
08/03/2015 10:07:23 PM:

 From: Mike Dorman mdor...@godaddy.commailto:mdor...@godaddy.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org,
  OpenStack Operators openstack-
 operat...@lists.openstack.orgmailto:operat...@lists.openstack.org
 Date: 08/03/2015 10:08 PM
 Subject: Re: [Openstack-operators] [openstack-dev] [Neutron][L3]
 [Large Deployments Team] Representing a networks connected by routers


 I hope we can keep this idea moving forward.  I was disappointed to see
 the spec abandoned.

 Some of us from the large deployers group will be at the Ops Meetup.  Will
 there be any representation from Neutron there that we could discuss with
 more?

 Thanks,
 Mike





 On 8/3/15, 12:27 PM, Carl Baldwin 
 c...@ecbaldwin.netmailto:c...@ecbaldwin.net wrote:

 Kevin, sorry for the delay in response.  Keeping up on this thread was
 getting difficult while on vacation.
 
 tl;dr:  I think it is worth it to talk through the idea of inserting
 some sort of a subnet group thing in the model to which floating ips
 (and router external gateways) will associate.  It has been on my mind
 for a long time now.  I didn't pursue it because a few informal
 attempts to discuss it with others indicated to me that it would be a
 difficult heavy-lifting job that others may not appreciate or
 understand.  Scroll to the bottom of this message for a little more on
 this.
 
 Carl
 
 On Tue, Jul 28, 2015 at 1:15 AM, Kevin Benton 
 blak...@gmail.commailto:blak...@gmail.com wrote:
 Also, in my proposal, it is more the router that is the grouping
 mechanism.
 
  I can't reconcile this with all of the points you make in the rest of
 your
  email. You want the collection of subnets that a network represents,
 but you
  don't want any other properties of the network.
 
 This is closer to what I'm trying to say but isn't quite there.  There
 are some subnets that should be associated with the segments
 themselves and there are some that should be associated with the
  collection of segments.  I want floating IPs that are not tied to an
 L2 network.  None of the alternate proposals that I'd heard addressed
 this.
 
 that the network object is currently the closest thing we have to
  representing the L3 part of the network.
 
  The L3 part of a network is the subnets. You can't have IP addresses
 without
  the subnets, you can't have floating IPs without the subnets, etc.
 
 You're right but in the current model you can't have IP addresses
 without the network either which is actually the point I'm trying to
 make.
 
  A Neutron network is an L2 construct that encapsulates L3 things. By
  encapsulating them it also provides an implicit grouping. The routed
  networks proposal basically wants that implicit grouping without the
  encapsulation or the L2 part.
 
 This sounds about right.  I think it is wrong to assume that we need
 an L2 network to encapsulate L3 things.  I'm feeling restricted by the
 model and the insistence that a neutron network is a purely L2
 construct.
 
 We don't associate floating ips with a network because we want to arp
 for
  them.  You're taking a consequence of the current model and its
 constraints
  and presenting that as the motivation for the model. We do so because
 there
  is no better L3 object to associate it to.
 
  Don't make assumptions about how people use floating IPs now just
 because it
  doesn't fit your use-case well. When an external network is implemented
 as a
  real Neutron network (leaving external_network_bridge blank like we
 suggest
  in the networking guide), normal ports can be attached and can
  co-exist/communicate with the floating IPs because it behaves as an L2

Re: [openstack-dev] [Neutron][L3] [Large Deployments Team] Representing a networks connected by routers

2015-08-03 Thread Mike Dorman
I hope we can keep this idea moving forward.  I was disappointed to see 
the spec abandoned.

Some of us from the large deployers group will be at the Ops Meetup.  Will 
there be any representation from Neutron there that we could discuss with 
more?

Thanks,
Mike





On 8/3/15, 12:27 PM, Carl Baldwin c...@ecbaldwin.net wrote:

Kevin, sorry for the delay in response.  Keeping up on this thread was
getting difficult while on vacation.

tl;dr:  I think it is worth it to talk through the idea of inserting
some sort of a subnet group thing in the model to which floating ips
(and router external gateways) will associate.  It has been on my mind
for a long time now.  I didn't pursue it because a few informal
attempts to discuss it with others indicated to me that it would be a
difficult heavy-lifting job that others may not appreciate or
understand.  Scroll to the bottom of this message for a little more on
this.

Carl

On Tue, Jul 28, 2015 at 1:15 AM, Kevin Benton blak...@gmail.com wrote:
Also, in my proposal, it is more the router that is the grouping 
mechanism.

 I can't reconcile this with all of the points you make in the rest of 
your
 email. You want the collection of subnets that a network represents, 
but you
 don't want any other properties of the network.

This is closer to what I'm trying to say but isn't quite there.  There
are some subnets that should be associated with the segments
themselves and there are some that should be associated with the
collection of segments.  I want floating IPs that are not tied to an
L2 network.  None of the alternate proposals that I'd heard addressed
this.

that the network object is currently the closest thing we have to
 representing the L3 part of the network.

 The L3 part of a network is the subnets. You can't have IP addresses 
without
 the subnets, you can't have floating IPs without the subnets, etc.

You're right but in the current model you can't have IP addresses
without the network either which is actually the point I'm trying to
make.

 A Neutron network is an L2 construct that encapsulates L3 things. By
 encapsulating them it also provides an implicit grouping. The routed
 networks proposal basically wants that implicit grouping without the
 encapsulation or the L2 part.

This sounds about right.  I think it is wrong to assume that we need
an L2 network to encapsulate L3 things.  I'm feeling restricted by the
model and the insistence that a neutron network is a purely L2
construct.

We don't associate floating ips with a network because we want to arp 
for
 them.  You're taking a consequence of the current model and its 
constraints
 and presenting that as the motivation for the model. We do so because 
there
 is no better L3 object to associate it to.

 Don't make assumptions about how people use floating IPs now just 
because it
 doesn't fit your use-case well. When an external network is implemented 
as a
 real Neutron network (leaving external_network_bridge blank like we 
suggest
 in the networking guide), normal ports can be attached and can
 co-exist/communicate with the floating IPs because it behaves as an L2
 network exactly as implied by the API. The current model works quite 
well if
 your fabric can extend the external network everywhere it needs to be.

Yes, when an external network is implemented as a real Neutron
network all of this is true and my proposal doesn't change any of
this.  I'm wasn't making any such assumptions.  I acknowledge that the
current model works well in this case and didn't intend to change it
for current use cases.  It is precisely that because it does not fit
my use case well that I'm pursuing this.

Notice that a network marked only as external doesn't allow normal
tenants to create ports.  It must also be marked shared to allow it.
Unless tenants are creating regular ports then they really don't care
if arp or anything else L2 is involved because such an external
network is meant to give access external to the cloud where L2 is
really just an implementation detail.  It is the deployer that cares
because of whatever infrastructure (like a gateway router) needs to
work with it.  If the L2 is important, then the deployer will not
attempt to use an L3 only network, she will use the same kinds of
networks as always.

The bad assumption here is that floating IPs need an explicit
association with an L2-only construct:  tenants allocate a floating
IP by selecting the Neutron network, and it is recorded in the DB that way.
Tenants aren't even allowed to see the subnets on an external
network.  This is counter-intuitive to me because I believe that, in
most cases, tenants want a floating IP to get L3 access to the world
(or a part of it) that is external to Openstack.  Yet, they can only
see the L2 object?  These are the factors that make me view the
Neutron network as an L2 + L3 construct.

 If you don't want floating IPs to be reachable on the network they are
 associated with, then let's stop associating them with a 

Re: [Openstack-operators] Draft Agenda for PAO Ops Meetup (August 18, 19)

2015-08-03 Thread Mike Dorman
Looks great, Tom, thanks.  Coupling breakouts with lunch is a good plan.


From: Tom Fifield
Date: Monday, August 3, 2015 at 4:48 AM
To: OpenStack Operators
Subject: [Openstack-operators] Draft Agenda for PAO Ops Meetup (August 18, 19)

Hi all,

Registrations are going well for our meetup in Palo Alto. If you're on the 
fence, hopefully this discussion will get you quickly over the line so you 
don't miss out!

http://www.eventbrite.com/e/openstack-ops-mid-cycle-meetup-tickets-17703258924

So, I've taken our suggestions and attempted to wrangle them into something 
that would fit in the space we have over 2 days.

As a reminder, we have two different kinds of sessions - General Sessions, which 
are discussions for the operator community aimed at producing actions (eg best 
practices, feedback on badness), and Working Groups, which focus on specific topics 
aiming to make concrete progress on tasks in that area.

As always, some stuff has been munged and mangled in an attempt to fit it in. 
For example, we'd expect to talk about Kolla more generally in the context of 
Using Containers for Deployment, because there are some other ways to do that 
too. Similarly, we'd expect the ops project discussion to be rolled into the 
session on the user committee.

Anyway, take a look at the below and reply with your comments! Is anything 
missing? Something look like a terrible idea? Want to completely change the 
room layout? There's still a little bit of flexibility at this stage.



Tuesday (rooms: Med II / Med III / Salon A / Salon B / Bacchus)

 9:00 - 10:00   Registration
10:00 - 10:30   Introduction
10:30 - 11:15   Burning Issues
11:15 - 11:55   Hypervisor Tuning
11:55 - 12:05   Breakout Explain
12:05 - 13:30   Lunch
13:30 - 15:00   Breakouts: Med II: Large Deployments Team | Med III: Burning Issues |
                Salon A: Logging WG | Salon B: Upgrades WG | Bacchus: Ops Guide Fixing
15:00 - 15:30   Coffee
15:30 - 16:00   Breakout Reports
16:00 - 17:00   Using Containers for Deployment
17:00 - 18:00   Lightning Talks

Wednesday (rooms: Med II / Med III / Salon A / Salon B / Bacchus)

 9:00 - 09:45   CMDB: use cases
 9:45 - 10:30   Deployment Tips - read only slaves? admin-only API servers?
10:30 - 11:15   What network model are you using? Are you happy?
11:15 - 11:30   Coffee
11:30 - 12:15   User Committee Discussion
12:15 - 12:20   Breakout Explain
12:20 - 13:30   Lunch
13:30 - 15:00   Breakouts: Med II: Tools and Monitoring | Med III: Product WG |
                Salon A: Packaging | Salon B: HPC Working Group | Bacchus: Ops Tags Team
15:00 - 15:30   Coffee
15:30 - 16:00   Breakout Reports
16:00 - 17:00   Feedback Session, Tokyo Planning


There will be a followup email shortly regarding moderators for the sessions - 
thanks to those who volunteered so far!


Regards,


Tom
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [puppet] module dependencies and different openstack versions

2015-07-28 Thread Mike Dorman
We use the OpenStack modules, but glue everything together with a 
monolithic composition module (our own.)  We do want to get to a place 
where we can upgrade/apply config/etc. each OpenStack component 
separately, but have’t tackled it yet.  I think it will be possible, but 
will take some work.  I have heard of a few others who have been working 
toward the same thing, though I don’t think there’s really anything 
concrete in the upstream modules yet.

WRT the dependencies, we use r10k with a manually populated Puppetfile, so 
we don’t rely on the module metadata to determine which modules to pull 
in.  That’s one way to get exactly what you want rather than all the 
dependency sprawl.

Mike





On 7/27/15, 5:10 PM, Sam Morrison sorri...@gmail.com wrote:


 On 27 Jul 2015, at 11:25 pm, Emilien Macchi emil...@redhat.com wrote:
 
 
 
 On 07/27/2015 02:32 AM, Sam Morrison wrote:
 We currently use our own custom puppet modules to deploy openstack, I 
have been looking into the official openstack modules and have a few 
barriers to switching.
 
 We are looking at doing this at a project at a time but the modules 
have a lot of dependencies. Eg. they all depend on the keystone module 
and try to do things in keystone suck as create users, service 
endpoints etc.
 
 This is a pain as I don’t want it to mess with keystone (for one we 
don’t support setting endpoints via an API) but also we don’t want to 
move to the official keystone module at the same time. We have some 
custom keystone stuff which means we’ll may never move to the official 
keystone puppet module.
 
 Well, in that case it's going to be very hard for you to use the
 modules. Trying to give up forks and catch-up to upstream is really
 expensive and challenging (Fuel is currently working on this).
 
 What I suggest is:
 1/ have a look at the diff between your manifests and upstream ones.
 2/ try to use upstream modules with the maximum number of classes, and
 put the rest in a custom module (or a manifest somewhere).
 3/ submit patches if you think we're missing something in the modules.
 The neutron module pulls in the vswitch module but we don’t use 
vswitch and it doesn’t seem to be a requirement of the module so maybe 
doesn’t need to be in metadata dependencies?
 
 AFIK there is no conditional in metadata.json, so we need the module
 anyway. It should not cause any trouble to you, except if you have a
 custom 'vswitch' module.

Yeah it would be nice if you could specify dependencies as well as 
recommendations, much like debian packages do. We use librarian-puppet to 
manage all our modules and you can’t disable it installing all the 
dependencies. But that is another issue…

 It looks as if all the openstack puppet modules are designed to all be 
used at once? Does anyone else have these kind of issues? It would be 
great if eg. the neutron module would just manage neutron and not try 
and do things in nova, keystone, mysql etc.
 
 We try to design our modules to work together because Puppet OpenStack
 is a single project composed of modules that are supposed to -together-
 deploy OpenStack.

All the puppet modules we use are very modular (hence the name); the 
openstack modules aren’t at this stage. Ideally each module would be self 
contained and then if people wanted to deploy “openstack” there could be 
an “openstack” module that would pull in all the individual project 
modules and make them work together.

It’s the first tip for writing a module listed at 
https://docs.puppetlabs.com/puppet/latest/reference/modules_fundamentals.html#tips

I guess I’m just wondering if other people are having the same issue I 
am, and if so, is there a way forward to make the puppet modules more 
modular, or do I just stick with my own modules?

 In your case, I would just install the module from source (git) and not
 trying to pull them from Puppetforge.
 
 
 The other issue we have is that we have different services in 
openstack running different versions. Currently we have Kilo, Juno and 
Icehouse versions of different bits in the same cloud. It seems as if 
the puppet modules are designed just to manage one openstack version? 
 Are there any thoughts on making it support different versions at the 
same time? Does this work?
 
 1/ you're running Kilo, Juno and Icehouse in the same cloud? Wow. You're
 brave!

We are a large deployment spanning multiple data centres and 1000+ hosts 
so upgrading in one big bang isn’t an option. I don’t think this is brave; 
it is the norm for people running large openstack clouds in production.

 2/ Puppet modules do not hardcode OpenStack packages version. Though our
 current master is targeting Liberty, but we have stable/kilo,
 stable/juno, etc. You can even disable the package dependency in most of
 the classes.

The packages aren’t the issue; it’s more the configs that get pushed out 
and so on. When config variables change location etc. with different 
versions, this becomes hard.

 I'm not sure this is an issue here, 

Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-27 Thread Mike Dorman

On 7/23/15, 8:54 AM, Carl Baldwin c...@ecbaldwin.net wrote:

On Thu, Jul 23, 2015 at 8:51 AM, Kevin Benton blak...@gmail.com wrote:
Or, migration scheduling would need to respect the constraint that a
 port may be confined to a set of hosts.  How can be assign a port to a
 different network?  The VM would wake up and what?  How would it know
 to reconfigure its network stack?

 Right, that's a big mess. Once a network is picked for a port I think we
 just need to rely on a scheduler filter that limits the migration to 
where
 that network is available.

+1.  That's where I was going.

Agreed, this seems reasonable to me for the migration scheduling case.
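
To make that concrete, here is a rough sketch of what such a filter could look
like.  It assumes the Kilo-era filter interface
(BaseHostFilter.host_passes(host_state, filter_properties)); the segment lookup
tables and the helper that extracts the requested network IDs are made up
purely for illustration, and in practice that data would come from your own
inventory/CMDB:

from nova.scheduler import filters

# Hypothetical example data: which L3 segment each hypervisor sits on, and
# which networks are reachable from each segment.
SEGMENT_BY_HOST = {'compute-rack1-01': 'rack1', 'compute-rack2-01': 'rack2'}
NETWORKS_BY_SEGMENT = {'rack1': {'net-a'}, 'rack2': {'net-b'}}


def _requested_network_ids(filter_properties):
    # Hypothetical helper: a real filter would dig the requested network IDs
    # out of the request spec; the exact location varies by release, so it is
    # stubbed out here.
    return filter_properties.get('requested_network_ids', [])


class NetworkSegmentFilter(filters.BaseHostFilter):
    """Only pass hosts whose segment can reach all requested networks."""

    def host_passes(self, host_state, filter_properties):
        segment = SEGMENT_BY_HOST.get(host_state.host)
        if segment is None:
            return False
        reachable = NETWORKS_BY_SEGMENT.get(segment, set())
        return all(net in reachable
                   for net in _requested_network_ids(filter_properties))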

I view the pre-created port scenario as an edge case.  By explicitly 
pre-creating a port and using it for a new instance (rather than letting 
nova create a port for you), you are implicitly stating that you have more 
knowledge about the networking setup.  In so doing, you’re removing the 
guard rails (of nova scheduling the instance to a good network for the 
host it's on), and therefore are at higher risk to crash and burn.  To me 
that’s an acceptable trade-off.

Mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] [Neutron] [Large Deployments Team] Discussion around routed networks

2015-07-27 Thread Mike Dorman
I wanted to bring this to the attention of anybody who may have missed it 
on openstack-dev.  Particularly the LDT team folks who have been talking 
about the routed networks/disparate L2 domains stuff [1] [2].

http://lists.openstack.org/pipermail/openstack-dev/2015-July/thread.html#70028

This is a discussion stemming from Carl’s segmented, routed networks spec 
[3].  I think the “ask” from operators has been somewhat well represented, 
but if others could review and chime in as appropriate, I think that could 
be useful.

Also somewhat related is this patch [4] for better scheduling DHCP agents 
on the appropriate L2 segment.  Might be worth a +1 if it would be useful 
to you as an operator.

[1] https://bugs.launchpad.net/neutron/+bug/1458890
[2] https://etherpad.openstack.org/p/Network_Segmentation_Usecases
[3] https://review.openstack.org/#/c/196812/
[4] https://review.openstack.org/#/c/205631/

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-27 Thread Mike Dorman

On 7/23/15, 9:42 AM, Carl Baldwin c...@ecbaldwin.net wrote:

On Wed, Jul 22, 2015 at 3:21 PM, Kevin Benton blak...@gmail.com wrote:
 The issue with the availability zone solution is that we now force
 availability zones in Nova to be constrained to network configuration. 
In
 the L3 ToR/no overlay configuration, this means every rack is its own
 availability zone. This is pretty annoying for users to deal with 
because
 they have to choose from potentially hundreds of availability zones and 
it
 rules out making AZs based on other things (e.g.  current phase, cooling
 systems, etc).

 I may be misunderstanding and you could be suggesting to not expose this
 availability zone to the end user and only make it available to the
 scheduler. However, this defeats one of the purposes of availability 
zones
 which is to let users select different AZs to spread their instances 
across
 failure domains.

I was actually talking with some guys at dinner during the Nova
mid-cycle last night (Andrew ??, Robert Collins, Dan Smith, probably
others I can't remember) about the relationship of these network
segments to AZs and cells.  I think we were all in agreement that we
can't confine segments to AZs or cells nor the other way around.


I just want to +1 this one from the operators’ perspective.  Network 
segments, availability zones, and cells are all separate constructs which 
are used for different purposes.  We prefer to not have any relationships 
forced between them.

Mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Nova migrate-flavor-data woes

2015-07-27 Thread Mike Dorman
I had this frustration, too, when doing this the first time.

FYI (and for the Googlers who stumble across this in the future), this 
patch [1] fixes the --max_number thing.

[1] https://review.openstack.org/#/c/175890/






On 7/27/15, 8:45 AM, Jay Pipes jaypi...@gmail.com wrote:

On 07/26/2015 01:15 PM, Lars Kellogg-Stedman wrote:
 So, the Kilo release notes say:

  nova-manage migrate-flavor-data

 But nova-manage says:

  nova-manage db migrate_flavor_data

 But that says:

  Missing arguments: max_number

 And the help says:

  usage: nova-manage db migrate_flavor_data [-h]
[--max-number number]

 Which indicates that --max-number is optional, but whatever, so you
 try:

  nova-manage db migrate_flavor_data --max-number 100

 And that says:

  Missing arguments: max_number

 So just for kicks you try:

  nova-manage db migrate_flavor_data --max_number 100

 And that says:

  nova-manage: error: unrecognized arguments: --max_number

 So finally you try:

  nova-manage db migrate_flavor_data 100

 And holy poorly implemented client, Batman, it works.

LOL. Well, the important thing is that the thing eventually worked. ;P

In all seriousness, though, yeah, the nova-manage CLI tool is entirely 
different from the main python-novaclient CLI tool. It's not been a 
priority whatsoever to clean it up, but I think it would be some pretty 
low-hanging fruit to make the CLI consistent with the design of, say, 
python-openstackclient...

Perhaps something we should develop a backlog spec for.

Best,
-jay

___
Mailing list: 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack-operators] Nova cells v2 and operational impacts

2015-07-21 Thread Mike Dorman
Seems reasonable.

For us already running v1, will we be creating another new cell database 
for v2?  Or will our existing v1 cell database become that second database 
under v2?

Somewhat beyond the scope of this thread, but my main concern is the 
acrobatics going from v1 in Kilo to the hybrid v1/v2 in Liberty, to full 
v2 in Mitaka.  I think we all realize there will be some amount of pain to 
get to v2, but as long as that case for us existing cells users can be 
handled in a somewhat sane way, I’m happy.  

Mike





On 7/21/15, 8:45 AM, Michael Still mi...@stillhq.com wrote:

Heya,

the nova developer mid-cycle meetup is happening this week. We've been
talking through the operational impacts of cells v2, and thought it
would be a good idea to mention them here and get your thoughts.

First off, what is cells v2? The plan is that _every_ nova deployment
will be running a new version of cells. The default will be a
deployment of a single cell, which will have the impact that existing
single cell deployments will end up having another mysql database that
is required by cells. However, you won't be required to bring up any
additional nova services at this point [1], as cells v2 lives inside
the nova-api service.

The advantage of this approach is that cells stops being a weird
special case run by big deployments. We're forced to implement
everything in cells, instead of the bits that a couple of bigger
players cared enough about, and we're also forced to test it better.
It also means that smaller deployments can grow into big deployments
much more easily. Finally, it also simplifies the nova code, which
will reduce our tech debt.

This is a large block of work, so cells v2 won't be fully complete in
Liberty. Cells v1 deployments will effectively run both cells v2 and
cells v1 for this release, with the cells v2 code thinking that there
is a single very large cell. We'll continue the transition for cells
v1 deployments to pure cells v2 in the M release.

So what's the actual question? We're introducing an additional mysql
database that every nova deployment will need to possess in Liberty.
We talked through having this data be in the existing database, but
that wasn't a plan that made us comfortable for various reasons. This
means that operators would need to do two db_syncs instead of one
during upgrades. We worry that this will be annoying to single cell
deployments.

We therefore propose the following:

 - all operators when they hit Liberty will need to add a new
connection string to their nova.conf which configures this new mysql
database, there will be a release note to remind you to do this.
 - we will add a flag which indicates if a db_sync should imply a sync
of the cells database as well. The default for this flag will be true.

This means that you can still do these syncs separately if you want,
but we're not forcing you to remember to do it if you just want it to
always happen at the same time.
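
To make that concrete, a rough sketch of what the extra connection string could
look like in nova.conf (the section and option names here are only illustrative,
not necessarily what will land upstream):

    [api_database]
    # hypothetical example: the new cells database, separate from the
    # existing [database]/connection option
    connection = mysql://nova:secret@db.example.com/nova_api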

Does this sound acceptable? Or are we over thinking this? We'd
appreciate your thoughts.

Cheers,
Michael

1: there is some talk about having a separate pool of conductors to
handle the cells database, but this won't be implemented in Liberty.

-- 
Rackspace Australia

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [puppet] puppet-designate POC implementation of virtualenv and docker support.

2015-07-15 Thread Mike Dorman
I have been meaning to ask you about this, so thanks for posting.

I like the approach.  Definitely a lot cleaner than the somewhat hardcoded 
dependencies and subscriptions that are in the modules now.

Do you envision that long term the docker/venv/whatever else implementation 
(like you have in designate_ext) would actually be part of the upstream Puppet 
module?  Or would we provide the hooks that you describe, and leave it up to 
other modules to handle the non-package-based installs?

Mike


From: Clayton O'Neill
Reply-To: OpenStack Development Mailing List (not for usage questions)
Date: Monday, July 13, 2015 at 8:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [puppet] puppet-designate POC implementation of 
virtualenv and docker support.

Last year I put together a virtualenv patch for the Designate puppet module, 
but the patch was too invasive of a change and too opinionated to be practical 
to merge.  I've taken another shot at this with the approach of implementing 
well defined hooks for various phases of the module. This should  allow 
external support for alternative ways of installing and running services (such 
as virtualenv, and docker).  I think this patch is now mostly ready for some 
outside reviews (we'll be running the virtualenv support in production soon).

The puppet-designate change to support this can be found here:  
https://review.openstack.org/#/c/197172/

The supporting puppet-designate_ext module can be found here: 
https://github.com/twc-openstack/puppet-designate_ext

The basic approach is to split the module dependency chain into 3 phases:

 * install begin/end
 * config begin/end
 * service begin/end

Each of these phases has a pair of corresponding anchors that are used 
internally for dependencies and notifications.  This allows external modules to 
hook into this flow without having to change the module.  For example, the 
virtualenv support will build the virtualenv environment between the 
designate::install::begin and designate::install::end anchors.  Additionally, 
the virtualenv support will notify the designate::install::end anchor.  This 
allows other resources to subscribe to this anchor without needing to know if 
the software is being installed as a package, virtualenv, or docker image.
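
As a rough sketch of the hook pattern (assuming something like the puppet-python
module's python::virtualenv define; the class name and path are illustrative
rather than the actual designate_ext code):

    class designate_ext::venv {
      # build the virtualenv between the install anchors; notifying the end
      # anchor lets downstream resources react without caring whether the
      # software came from a package, a virtualenv, or a docker image
      python::virtualenv { '/opt/designate':
        ensure  => present,
        require => Anchor['designate::install::begin'],
        notify  => Anchor['designate::install::end'],
      }
    }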

I think this approach could be applied mostly as is to at least some of the 
existing modules with similar levels of changes.  For example, horizon, 
keystone, and heat would probably be fairly straightforward.  I suspect this 
approach would need refinement for more complex services like neutron and nova. 
 We would need to work out how to manage things like external packages that 
would still be needed if running a virtualenv based install, but probably not 
needed if running a docker based install.  We would probably also want to 
consider how to be more granular about service notifications.

I'd love to get some feedback on this approach if people have time to look it 
over.  We're still trying to move away from using packages for service installs 
and I'd like to figure out how to do that without carrying heavyweight and 
fragile patches to the openstack puppet modules.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] cross-site console web socket proxies no longer work

2015-07-13 Thread Mike Dorman
I noticed in Kilo there’s a validation check in the console web socket proxies 
to ensure the hostnames from the Origin and Host headers match.  This was added as a 
result of CVE-2015-0259 (https://bugs.launchpad.net/nova/+bug/1409142).  
Effectively it disabled cross-site web socket connections.

This is OK for Horizon, but we also run our own custom UI that’s on a different 
hostname from the console proxy servers.  Therefore we need to have the 
cross-site connections work.  I have opened 
https://bugs.launchpad.net/nova/+bug/1474079 for this.

My thought is to add a new nova configuration parameter which would list 
additional allowed Origin hosts for the proxy servers.  And add those to the 
check at 
https://github.com/openstack/nova/blob/master/nova/console/websocketproxy.py#L116
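
For illustration, such a parameter might look roughly like this in nova.conf
(both the section and the option name are hypothetical at this point):

    [console]
    # hypothetical option: additional Origin hostnames the websocket
    # proxies should accept, on top of the proxy's own host
    allowed_origins = ui.example.com,console.example.com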

I will probably go ahead and implement that for us internally, but interested 
in opinions on this approach for upstream Nova purposes.  I’m happy to do the 
work, but want to make sure this is generally in line with what the community 
would accept first.

Thanks,
Mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] oslo.messaging version and RabbitMQ heartbeat support

2015-07-01 Thread Mike Dorman
As a follow up to the discussion during the IRC meeting yesterday, please vote 
for one of these approaches:

1)  Make the default for the rabbit_heartbeat_timeout_threshold parameter 60, 
which matches the default in Kilo oslo.messaging.  This will by default enable 
the RMQ heartbeat feature, which also matches the default in Kilo 
oslo.messaging.  Operators will need to set this parameter to 0 in order to 
disable the feature, which will be documented in the comments within the 
manifest.  We will reevaluate the default value for the Liberty release, since 
the oslo.messaging default most likely will change to 0 for that release.

2)  In addition to #1 above, also add a rabbit_enable_heartbeat parameter, 
which will default to false.  Setting that to true will be needed to explicitly 
enable the RMQ heartbeat feature, regardless of the value of 
rabbit_heartbeat_timeout_threshold.  By default the RMQ heartbeat feature will 
be disabled, which may be a marginally safer approach (due to the 
requirements.txt stuff, see below), but will not match the upstream Kilo 
oslo.messaging default.  This will also turn off the feature for people who 
have already been “getting it for free” if they do nothing and don’t update 
their composition module.

My vote is for #1.
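
To make #1 concrete, an operator who wants heartbeats off would have to set the
threshold to 0 explicitly, e.g. in puppet-nova (assuming the parameter is
exposed on the top-level class alongside the other rabbit settings):

    class { '::nova':
      # 0 disables the RMQ heartbeat feature; 60 matches the Kilo
      # oslo.messaging default
      rabbit_heartbeat_timeout_threshold => 0,
    }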

Let’s plan to close the voting by next week’s IRC meeting, so we can come to a 
final conclusion at that time.

Thanks,
Mike




From: Mike Dorman
Reply-To: OpenStack Development Mailing List (not for usage questions)
Date: Tuesday, June 23, 2015 at 5:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [puppet] oslo.messaging version and RabbitMQ heartbeat 
support

As a follow up to https://review.openstack.org/#/c/194399/ and the meeting 
discussion earlier today, I’ve determined that everybody (RDO, Ubuntu, Debian) 
is packaging oslo.messaging 1.8.2 or 1.8.3 with the Kilo build.  (This is also 
the version we get on our internal Anvil-based build.)  This is considerably 
lower than 1.11.0 where the default rabbit_heartbeat_timeout_threshold changes 
(from 60 to 0.)

If we go forward using the default rabbit_heartbeat_timeout_threshold value of 
60, that will be the correct default behavior up through oslo.messaging 1.10.x.

When people upgrade to 1.11.0 or higher, we’ll no longer match the upstream 
default behavior.  But, it should maintain the _actual_ behavior (heartbeating 
enabled) for people doing an upgrade.  Once Liberty is cut, we should 
reevaluate to make sure we’re matching whatever the default is at that time.

However, the larger problem I see is that oslo.messaging requirements.txt in 
<=1.10.x does not enforce the needed versions of kombu and amqp for heartbeat 
to work:
https://github.com/openstack/oslo.messaging/blob/1.8.2/requirements.txt#L25-L26 
 This is confusing as heartbeat is enabled by default!

I am not sure what the behavior is when heartbeat is enabled with older kombu 
or amqp.  Does anyone know?  If it silently fails, maybe we don’t care.  But if 
enabling heartbeat (by default, rabbit_heartbeat_timeout_threshold=60) actively 
breaks, that would be bad.

I see two options here:

1)  Make default rabbit_heartbeat_timeout_threshold=60 in the Puppet modules, 
to strictly follow the upstream default in Kilo.  Reevaluate this default value 
for Liberty.  Ignore the kombu/amqp library stuff and hope “it just works 
itself out naturally.”

2)  Add a rabbit_enable_heartbeat parameter to explicitly enable/disable the 
feature, and default to disable.  This goes against the current default 
behavior, but will match it for oslo.messaging >=1.11.0.  I think this is the 
safest path, as we won’t be automatically enabling heartbeat for people who 
don’t have a new enough kombu or amqp.

Personally, I like #1, because I am going to enable this feature, anyway.  And 
I can’t really imagine why one would _not_ want to enable it.  But I am fine 
implementing it either way, I just want to get it done so I can get off my 
local forks. :)

Thoughts?

Mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [puppet] OpenStack Puppet Modules Usage Questions

2015-07-01 Thread Mike Dorman
+1, I think we should just go ahead and do this.





On 7/1/15, 3:23 PM, Richard Raseley rich...@raseley.com wrote:

Matt Fischer wrote:
 We've been discussing this for 3 months now, so my vote is soon. Can
 we make the Puppet Labs ML have an auto-responder that redirects people?

Yes we can. I propose the following:

1) Put up an auto-responder ASAP with the following text:

---

Thank you for your message to the puppet-openstack mailing list. As part 
of our move under the OpenStack 'big tent'[0] we have transitioned to 
use of the official OpenStack mailing lists[1]. Please resend your 
message with the following considerations:

1) If your question or comment is related to the development, structure, 
or processes surrounding the OpenStack Puppet Modules, please send it to 
'openstack-...@lists.openstack.org' with the tag '[puppet]' as the first 
component of the subject.

2) if your question or comment is related to the operational application 
or usage of the OpenStack Puppet Modules, please send it to 
'openstack-operators@lists.openstack.org' with the tag '[puppet]' as the 
first component of the subject.

Please note that in order to successfully post a message to one of the 
lists above, you must first subscribe to it. To do this, please follow 
the instructions found on the OpenStack wiki[1].

If you have any questions or concerns, please drop by our 
#puppet-openstack channel on Freenode IRC.

[0] - http://ttx.re/the-way-forward.html
[1] - https://wiki.openstack.org/wiki/Mailing_Lists

---

2) Configure the setting to prevent outside emails to the list. The list 
will remain public to serve as an historical archive.

Regards,

Richard

-- 

To unsubscribe from this group and stop receiving emails from it, send an 
email to puppet-openstack+unsubscr...@puppetlabs.com.
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Scaling the Ops Meetup

2015-06-30 Thread Mike Dorman
I pretty much agree with everyone so far.  No vendor booths, distributed 
“underwriters”, modest registration fee, and sans evening event.  Not sure 
separate regional meetings are a good idea, but would be in favor of 
alternating North America vs. other region, like the summits.

I’ve been looking for approximate meal sponsorship costs, too.  We may have 
funds available for some sort of underwriting as well, but the first question I 
get when going to ask for that is “how much $?”  So it’s difficult to get 
sponsorship commitments without those details.  Could you let us know some 
ballpark figures based on past events, so we have some more data points?

Thanks!!
Mike


From: Jesse Keating
Date: Tuesday, June 30, 2015 at 1:06 PM
To: Matt Fischer
Cc: OpenStack Operators
Subject: Re: [Openstack-operators] Scaling the Ops Meetup

RE Evening event: I agree it was pretty crowded. Perhaps just a list of area 
venues for various activities and a sign up board somewhere. Let it happen 
organically, and everybody is on their own for paying for whatever they do. 
That way those that may not be into the bar scene don't feel left out because 
everybody else went and got drink/food. Lets eliminate the social pressure to 
put everybody into the same social event.


- jlk

On Tue, Jun 30, 2015 at 10:46 AM, Matt Fischer 
m...@mattfischer.com wrote:
My votes line up with Dave's and Joe's pretty much.

I think that vendor booth's are a bad idea as well.

As for registration, I think having a fee that covers the meals/coffee is fair. 
This is not a typical walk in off the street meeting. I don't think many 
companies would balk at an extra $100-$200 fee for registration. Especially if 
you're already paying for travel like 99% of us will be doing. I'm also +1 
canceling the evening event to cut costs, it was overcrowded last time and with 
300 people will be unmanageable.

Tom, What is the actual per-head price range for meals?

On Tue, Jun 30, 2015 at 11:36 AM, Joe Topjian 
j...@topjian.net wrote:

-1 on paid registration, I think we need to be mindful of the smaller openstack 
deployers, their voice is an important one, and their access to the larger 
operations teams is invaluable to them.  I like the idea of local teams showing 
up because it's in the neighborhood and they don't need to hassle their 
budgeting managers too much for travel approval / expenses.  This is more 
accessible currently than the summits for many operators.  Let's keep it that 
way.

I understand your point.

IMO, the Ops mid-cycle meetup is a little different than a normal local meetup 
you'll find at meetup.com. It's a multi-day event that 
includes meals and an evening event. Being able to attend for free, while a 
great goal, may not be practical. I would not imagine that the fee would be as 
much as a Summit ticket, nor even broken down to the daily cost of a Summit 
ticket. I see it as something that would go toward the cost of food and such.

The OpenStack foundation does a lot to ensure that people who are unable to pay 
registration fees are still able to attend summits. The same courtesy could be 
extended here as well. As an example, David M has mentioned that TWC may help 
(I understand that may not be official, just used as an example of how others 
may be willing to help with that area).

Joe

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [puppet] oslo.messaging version and RabbitMQ heartbeat support

2015-06-23 Thread Mike Dorman
As a follow up to https://review.openstack.org/#/c/194399/ and the meeting 
discussion earlier today, I’ve determined that everybody (RDO, Ubuntu, Debian) 
is packaging oslo.messaging 1.8.2 or 1.8.3 with the Kilo build.  (This is also 
the version we get on our internal Anvil-based build.)  This is considerably 
lower than 1.11.0 where the default rabbit_heartbeat_timeout_threshold changes 
(from 60 to 0.)

If we go forward using the default rabbit_heartbeat_timeout_threshold value of 
60, that will be the correct default behavior up through oslo.messaging 1.10.x.

When people upgrade to 1.11.0 or higher, we’ll no longer match the upstream 
default behavior.  But, it should maintain the _actual_ behavior (heartbeating 
enabled) for people doing an upgrade.  Once Liberty is cut, we should 
reevaluate to make sure we’re matching whatever the default is at that time.

However, the larger problem I see is that oslo.messaging requirements.txt in 
<=1.10.x does not enforce the needed versions of kombu and amqp for heartbeat 
to work:
https://github.com/openstack/oslo.messaging/blob/1.8.2/requirements.txt#L25-L26 
 This is confusing as heartbeat is enabled by default!

I am not sure what the behavior is when heartbeat is enabled with older kombu 
or amqp.  Does anyone know?  If it silently fails, maybe we don’t care.  But if 
enabling heartbeat (by default, rabbit_heartbeat_timeout_threshold=60) actively 
breaks, that would be bad.

I see two options here:

1)  Make default rabbit_heartbeat_timeout_threshold=60 in the Puppet modules, 
to strictly follow the upstream default in Kilo.  Reevaluate this default value 
for Liberty.  Ignore the kombu/amqp library stuff and hope “it just works 
itself out naturally.”

2)  Add a rabbit_enable_heartbeat parameter to explicitly enable/disable the 
feature, and default to disable.  This goes against the current default 
behavior, but will match it for oslo.messaging >=1.11.0.  I think this is the 
safest path, as we won’t be automatically enabling heartbeat for people who 
don’t have a new enough kombu or amqp.

Personally, I like #1, because I am going to enable this feature, anyway.  And 
I can’t really imagine why one would _not_ want to enable it.  But I am fine 
implementing it either way, I just want to get it done so I can get off my 
local forks. :)

Thoughts?

Mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Re: duplicate keystone endpoints

2015-06-17 Thread Mike Dorman
We’ve had this same problem, too, and I’d agree it should fail the Puppet run 
rather than just passing.  Would you mind writing up a bug report for this at 
https://launchpad.net/puppet-openstacklib ?

I have this on my list of stuff to fix when we go to Kilo (soon), so if 
somebody else doesn’t fix it, then I will.

Thanks!


From: Black, Matthew
Reply-To: 
puppet-openst...@puppetlabs.com
Date: Wednesday, June 17, 2015 at 12:54 PM
To: puppet-openst...@puppetlabs.com
Subject: duplicate keystone endpoints

I was digging around in the icehouse puppet code and I found what I believe is 
the cause of a duplicate endpoint creation during a short network disruption. 
In my environments the keystone servers do not reside in the same network as 
the regions. It looks like the puppet code fails the first request, sleeps 10 
seconds, tries again and if that fails it then returns with a nil. The code 
then returns an empty array to the provider which then is assumed to mean that 
the endpoint does not exist. If the network blip is over by that point it will 
attempt to create the endpoint and thus a duplicate endpoint in the catalog.

https://github.com/openstack/puppet-keystone/blob/stable/icehouse/lib/puppet/provider/keystone.rb#L139

https://github.com/openstack/puppet-keystone/blob/stable/icehouse/lib/puppet/provider/keystone.rb#L83-L88


Looking at the juno code, which uses openstacklib, the issue still
exists but in a slightly different fashion.

https://github.com/openstack/puppet-openstacklib/blob/master/lib/puppet/provider/openstack.rb#L55-L66

I believe this should be changed so that instead of breaking out of the loop it
should throw an exception.

--


To unsubscribe from this group and stop receiving emails from it, send an email 
to 
puppet-openstack+unsubscr...@puppetlabs.com.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Change abandonment policy

2015-06-04 Thread Mike Dorman
I vote #2, with a smaller N.

We can always adjust this policy in the future if find we have to manually 
abandon too many old reviews.


From: Colleen Murphy
Reply-To: 
puppet-openst...@puppetlabs.com
Date: Tuesday, June 2, 2015 at 12:39 PM
To: OpenStack Development Mailing List (not for usage questions), 
puppet-openst...@puppetlabs.com
Subject: [puppet] Change abandonment policy

In today's meeting we discussed implementing a policy for whether and when core 
reviewers should abandon old patches whose authors were inactive. (This 
doesn't apply to authors that want to abandon their own changes, only for core 
reviewers to abandon other people's changes.) There are a few things we could 
do here, with potential policy drafts for the wiki:

1) Never abandon

```
Our policy is to never abandon changes except for our own.
```

The sentiment here is that an old change in the queue isn't really hurting 
anything by just sitting there, and it is more visible if someone else wants to 
pick up the change.

2) Manually abandon after N months/weeks changes that have a -2 or were fixed 
in a different patch

```
If a change is submitted and given a -1, and subsequently the author becomes 
unresponsive for a few weeks, reviewers should leave reminder comments on the 
review or attempt to contact the original author via IRC or email. If the 
change is easy to fix, anyone should feel welcome to check out the change and 
resubmit it using the same change ID to preserve original authorship. Core 
reviewers will not abandon such a change.

If a change is submitted and given a -2, or it otherwise becomes clear that the 
change can not make it in (for example, if an alternate change was chosen to 
solve the problem), and the author has been unresponsive for at least 3 months, 
a core reviewer should abandon the change.
```

Core reviewers can click the abandon button only on old patches that are 
definitely never going to make it in. This approach has the advantage that it 
is easier for contributors to find changes and fix them up, even if the change 
is very old.

3) Manually abandon after N months/weeks changes that have a -1 that was never 
responded to

```
If a change is submitted and given a -1, and subsequently the author becomes 
unresponsive for a few weeks, reviewers should leave reminder comments on the 
review or attempt to contact the original author via IRC or email. If the 
change is easy to fix, anyone should feel welcome to check out the change and 
resubmit it using the same change ID to preserve original authorship. If the 
author is unresponsive for at least 3 months and no one else takes over the 
patch, core reviewers can abandon the patch, leaving a detailed note about how 
the change can be restored.

If a change is submitted and given a -2, or it otherwise becomes clear that the 
change can not make it in (for example, if an alternate change was chosen to 
solve the problem), and the author has been unresponsive for at least 3 months, 
a core reviewer should abandon the change.
```

Core reviewers can click the abandon button on changes that no one has shown an 
interest in in N months/weeks, leaving a message about how to restore the 
change if the author wants to come back to it. Puppet Labs does this for its 
module pull requests, setting N at 1 month.

4) Auto-abandon after N months/weeks if patch has a -1 or -2

```
If a change is given a -2 and the author has been unresponsive for at least 3 
months, a script will automatically abandon the change, leaving a message about 
how the author can restore the change and attempt to resolve the -2 with the 
reviewer who left it.
```

We would use a tool like this one[1] to automatically abandon changes meeting a 
certain criteria. We would have to decide whether we want to only auto-abandon 
changes with -2's or go as far as to auto-abandon those with -1's. The policy 
proposal above assumes -2. The tool would leave a canned message about how to 
restore the change.


Option 1 has the problem of leaving clutter around, which the discussion today 
seeks to solve.

Option 3 leaves the possibility that a change that is mostly good becomes 
abandoned, making it harder for someone to find and restore it.

 I don't think option 4 is necessary because there are not an overwhelming 
number of old changes (I count 9 that are currently over six months old). In 
working through old changes a few months ago I found that many of them are easy 
to fix up to remove a -1, and auto-abandoning removes the ability for a human 
to make that call. Moreover, if a patch has a procedural -2 that ought to be 
lifted after some point, auto-abandonment has the potential to accidentally 
throw out a change that was intended to be kept (though presumably the core 
reviewer who left the -2 would notice the abandonment and restore it if that 
was the case).

I am in favor of option 2. I think setting N 

Re: [openstack-dev] [puppet] Renaming the IRC channel to #openstack-puppet

2015-05-29 Thread Mike Dorman
+1 Let’s do it.


From: Matt Fischer
Reply-To: OpenStack Development Mailing List (not for usage questions)
Date: Friday, May 29, 2015 at 1:46 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [puppet] Renaming the IRC channel to 
#openstack-puppet

I would love to do this. +2!

On Fri, May 29, 2015 at 1:39 PM, Mathieu Gagné 
mga...@iweb.com wrote:
Hi,

We recently asked for our IRC channel (#puppet-openstack) to be logged
by the infra team. We happen to be the only channel suffixing the word
openstack instead of prefixing it. [1]

I would like to propose renaming our IRC channel to #openstack-puppet
to better fit the mold (convention) already in place and be more
intuitive for new comers to discover.

Jeremy Stanley (fungi) explained to me that previous IRC channel renames
were done following the Ubuntu procedure. [2] Last rename I remember of
was #openstack-stable to #openstack-release and it went smoothly without
any serious problem.

What do you guys think about the idea?

[1] http://eavesdrop.openstack.org/irclogs/
[2] https://wiki.ubuntu.com/IRC/MovingChannels

Note: I already registered the channel name for safe measures.

--
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] [Large Deployers Team] neutron ask for network segments

2015-05-22 Thread Mike Dorman
Kris and I along with Belmiro drafted up a description and ask to Neutron about 
the network segments topic.  We just put it at the end of the etherpad, others 
can review:

https://etherpad.openstack.org/p/YVR-ops-large-deployments

(In the “Feedback to Neutron team…” section.)

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] RabbitMQ ops session summary

2015-05-19 Thread Mike Dorman
Thought I’d post a quick summary/highlights of our RMQ session this morning.  
Thanks to all who attended and participated!  For reference, the etherpad link 
is at [1].

  *   A lot fewer people attended this time than at previous summits and ops 
meet ups.  I read that as an indication things are generally getting better 
wrt. RabbitMQ
  *   The heartbeat patch [2] (which is part of Kilo) is working well at Go 
Daddy and NeCTAR (under Juno.)
 *   Some interest in backporting this to Icehouse, but that may not be 
practical.
  *   There are still a couple outlying problems with oslo.messaging when 
connecting to RMQ cluster nodes through a load balancer.  Most people are using 
the multi RMQ server option and are not using a load balancer.
  *   Pivotal is taking a more active role in the OpenStack community and 
helping to make oslo.messaging, etc. better.  A big thanks to them for this 
effort!
 *   Best way to feed back experience/requests to them is via 
rabbitmq-users ML [3]

Thanks again to everyone for your help making the OpenStack experience with 
RabbitMQ better.  Please enjoy the remainder of the Summit!

Mike


[1] https://etherpad.openstack.org/p/YVR-ops-rabbitmq
[2] https://review.openstack.org/#/c/146047/
[3] https://groups.google.com/forum/#!forum/rabbitmq-users

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack] modify policy for security group on neutron

2015-05-17 Thread Mike Dorman
Yup.  This is exactly what we do, with Neutron policy.json.  I can confirm that 
this works and achieves what you need.

Mike


From: Salvatore Orlando
Date: Saturday, May 16, 2015 at 12:54 AM
To: Giuseppa Muscianisi
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] modify policy for security group on neutron

Perhaps you can achieve this by editing policy.json (located by default in 
/etc/neutron).

For instance you can allow only admin users to add security group rules to any 
security group by specifying the following:

create_security_group_rule: admin_only

Similar rules for update and deletion of security group rules will prevent you 
from modifying existing rules.
This same set of rules will anyway allow admin users to add rules to the 
default security group.
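
A minimal sketch of the relevant policy.json entries (the names beyond
create_security_group_rule are assumptions; check them against the policy.json
shipped with your Neutron release):

    {
        "create_security_group_rule": "rule:admin_only",
        "delete_security_group_rule": "rule:admin_only",
        "update_security_group": "rule:admin_only",
        "delete_security_group": "rule:admin_only"
    }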

Salvatore




On 15 May 2015 at 09:31, Giuseppa Muscianisi 
g.muscian...@cineca.it wrote:

Dear all,

in our openstack cluster, we would restrict the actions that users can do with 
security group and security group rules.

Here's what we'd like to achieve: 1. Lock down security group (and rules) so 
that only admin (or tenant admin?) can modify them. 2. Add additional rules to 
the default security group.

Can you please give me some advices on how to achieve these goals?

Thanks in advance, Giusy

--
---
 Consider your origin:
  you were not made to live as brutes,
  but to follow virtue and knowledge

Dante Alighieri
 Divina Commedia - Inferno - Canto XXVI
---

Giuseppa Muscianisi, Ph.D.
CINECA - SuperComputing, Applications and Innovation Department
Via Magnanelli 6/3, 40033 Casalecchio di Reno (BO) - Italy
Phone: +39 051 6171 775
www.cineca.it

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : 
openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack-operators] RabbitMQ Ops session at Vancouver Summit

2015-05-11 Thread Mike Dorman
Hello Operators!

Our RabbitMQ Ops summit session is on Tuesday at 11:15am, room 306 [1].

I’ve put together a preliminary agenda of discussion topics [2], but I’d like 
to gather any other items people want to cover.  Please review the etherpad and 
add any specific outcomes/topics you’d like to see discussed.

For sure we’ll go over experiences people have had with the heartbeat patch 
[3], especially any outstanding problems that are still going on even after 
applying the patch.

Travel safe and see you all next week!
Mike


[1] http://sched.co/3Bcc
[2] https://etherpad.openstack.org/p/YVR-ops-rabbitmq
[3] https://review.openstack.org/#/c/146047/

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [puppet] Proposal to configure Oslo libraries

2015-05-08 Thread Mike Dorman
+1 I agree we should do this, etc., etc.

I don’t have a strong preference for #1 or #2, either.  But I do think #1 
is slightly more complicated from a deployer/operator perspective.  It’s 
another module I have to manage, pull in, etc.  Granted this is a trivial 
amount of incremental work.

I confess I am not super familiar with openstacklib, but I don’t 
understand why We have to differentiate *common-in-OpenStack* and 
*common-in-our-modules*.”  To me, openstacklib is for _anything_ that’s 
common.  Maybe you could expand upon your thinking on this a little more, 
just so it’s a little more explicit?

Since others are not chomping at the bit to chime in here, I guess there 
is probably not many major preferences on this.  I would be happy with 
getting this done, regardless of how it’s implemented.

Thanks,
Mike






On 5/8/15, 7:50 AM, Rich Megginson rmegg...@redhat.com wrote:

On 05/08/2015 07:17 AM, Doug Hellmann wrote:
 Excerpts from Ben Nemec's message of 2015-05-07 15:57:48 -0500:
 I don't know much about the puppet project organization so I won't
 comment on whether 1 or 2 is better, but a big +1 to having a common
 way to configure Oslo opts.  Consistency of those options across all
 services is one of the big reasons we pushed so hard for the libraries
 to own their option definitions, so this would align well with the way
 the projects are consumed.

 - -Ben
 Well said, Ben.

 Doug

 On 05/07/2015 03:19 PM, Emilien Macchi wrote:
 Hi,

 I think one of the biggest challenges working on Puppet OpenStack
 modules is to keep code consistency across all our modules (~20).
 If you've read the code, you'll see there is some differences
 between RabbitMQ configuration/parameters in some modules and this
 is because we did not have the right tools to make it properly. A
 lot of the duplicated code we have comes from Oslo libraries
 configuration.

 Now, I come up with an idea and two proposals.

 Idea 

 We could have some defined types to configure oslo sections in
 OpenStack configuration files.

 Something like: define oslo::messaging::rabbitmq( $user, $password
 ) { ensure_resource($name, 'oslo_messaging_rabbit/rabbit_userid',
 {'value' => $user}) ... }

 Usage in puppet-nova: ::oslo::messaging::rabbitmq{'nova_config':
 user => 'nova', password => 'secrete', }

 And patch all our modules to consume these defines and finally
 have consistency at the way we configure Oslo projects (messaging,
 logging, etc).

 Proposals =

 #1 Creating puppet-oslo ... and having oslo::messaging::rabbitmq,
 oslo::messaging::qpid, ..., oslo::logging, etc. This module will be
 used only to configure actual Oslo libraries when we deploy
 OpenStack. To me, this solution is really consistent with how
 OpenStack works today and is scalable as soon we contribute Oslo
 configuration changes in this module.

+1 - For the Keystone authentication options, I think it is important to 
encapsulate this and hide the implementation from the other services as 
much as possible, to make it easier to use all of the different types of 
authentication supported by Keystone now and in the future.  I would 
think that something similar applies to the configuration of other 
OpenStack services.


 #2 Using puppet-openstacklib ... and having
 openstacklib::oslo::messaging::(...) A good thing is our modules
 already use openstacklib. But openstacklib does not configure
 OpenStack now, it creates some common defines  classes that are
 consumed in other modules.


 I personally prefer #1 because: * it's consistent with OpenStack. *
 I don't want openstacklib the repo where we put everything common.
 We have to differentiate *common-in-OpenStack* and
 *common-in-our-modules*. I think openstacklib should continue to be
 used for common things in our modules, like providers, wrappers,
 database management, etc. But to configure common OpenStack bits
 (aka Oslo©), we might want to create puppet-oslo.

 As usual, any thoughts are welcome,

 Best,



 __
 

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for 

Re: [Openstack] [nova] secure websocket (wss) and websocketproxy setup for serial console

2015-05-08 Thread Mike Dorman
Yeah, you will need:

DEFAULT/ssl_ca_file
DEFAULT/ssl_cert_file
DEFAULT/ssl_key_file

In nova.conf.  IIRC that’s all that’s needed to enable SSL on this.
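
For reference, roughly like this (paths are just examples):

    [DEFAULT]
    ssl_ca_file   = /etc/nova/ssl/ca.crt
    ssl_cert_file = /etc/nova/ssl/serialproxy.crt
    ssl_key_file  = /etc/nova/ssl/serialproxy.key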

I don’t remember exactly, but that may turn on SSL for other nova services 
as well (spice proxy, etc.)  So just be aware of that.

Mike










On 5/8/15, 7:05 AM, Markus Zoeller mzoel...@de.ibm.com wrote:

How do I setup a secure websocket connection (wss) for the 
nova-serialproxy service? I have the following setting on the 
compute node (nova.conf):
[serial_console]
enabled = True
base_url = wss://ip-of-controller-node:6083/  # wss !!
proxyclient_address = ip-of-compute-node

As soon as I want to use that with Horizon (via https) the 
nova-serialproxy service logs this trace (from the module 
nova.console.websocketproxy; timestamps and location truncated):

[...] [-] exception vmsg 
/usr/lib/python2.7/site-packages/websockify/websocket.py:824
 Traceback (most recent call last):
   File /usr/lib/python2.7/site-packages/websockify/websocket.py, 
line 874, in top_new_client
 client = self.do_handshake(startsock, address)
   File /usr/lib/python2.7/site-packages/websockify/websocket.py, 
line 786, in do_handshake
 keyfile=self.key)
   File /usr/lib/python2.7/site-packages/eventlet/green/ssl.py, 
line 
339, in wrap_socket
 return GreenSSLSocket(sock, *a, **kw)
   File /usr/lib/python2.7/site-packages/eventlet/green/ssl.py, 
line 
64, in __init__
 ca_certs, do_handshake_on_connect and six.PY2, *args, **kw)
   File /usr/lib64/python2.7/ssl.py, line 141, in __init__
 ciphers)
 SSLError: [Errno 336265225] _ssl.c:351: error:140B0009:SSL 
routines:SSL_CTX_use_PrivateKey_file:PEM lib

I assume that I have to set the nova.conf options cert and key 
([DEFAULT] section) on the controller node but I couldn't figure out
the right setup.

Thanks in advance!
Markus Zoeller (markus_z)


___
Mailing list: 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack-operators] Spec for Neutron Network IP Usage Extension API

2015-05-07 Thread Mike Dorman
Hi all,

For those of you who were interested in our network IP usage API 
extensions from the PHL meet up, we have this spec in play for it.  Please 
review and comment!

As a reminder, the raw code/patch for this is also available at 
https://github.com/godaddy/openstack-neutron/tree/network-ip-usage and 
https://github.com/godaddy/openstack-neutron/commit/fcf325f9f9f7a9f87ba6bc1c53f9212d0e2decee

I’ll briefly review this stuff during arch show and tell at the summit, so 
happy to discuss more at that time as well.  Thanks!

Thanks,
Mike





On 5/6/15, 6:34 PM, David Bingham (Code Review) rev...@openstack.org 
wrote:

David Bingham has uploaded a new change for review.

  https://review.openstack.org/180803

Change subject: Propose Network IP Usage Extension API for Liberty
..

Propose Network IP Usage Extension API for Liberty

Propose adding a simple new Network IP Usage API Extension for
operators to predict and react when networks reaching capacity.

Change-Id: Ifd42982aea5bbfc095e30bd2a221ec472d55519a
---
A specs/liberty/network-ip-usage-api.rst
1 file changed, 542 insertions(+), 0 deletions(-)


  git pull ssh://review.openstack.org:29418/openstack/neutron-specs 
refs/changes/03/180803/1
-- 
To view, visit https://review.openstack.org/180803
To unsubscribe, visit https://review.openstack.org/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: Ifd42982aea5bbfc095e30bd2a221ec472d55519a
Gerrit-PatchSet: 1
Gerrit-Project: openstack/neutron-specs
Gerrit-Branch: master
Gerrit-Owner: David Bingham dbing...@godaddy.com
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] expanding to 2nd location

2015-05-06 Thread Mike Dorman
+1 to second site = second region.

I would not recommend using cells unless you have a real nova scalability 
problem.  There are a lot of caveats/gotchas.  Cells v2 I think should come as 
an experimental feature in Liberty, and past that point cells will be the 
default mode of operation.  It will probably be much easier to go from no cells 
to cells v2 than cells v1 to v2.

Mike



From: Joseph Bajin
Date: Wednesday, May 6, 2015 at 8:06 AM
Cc: OpenStack Operators
Subject: Re: [Openstack-operators] expanding to 2nd location

Just to add in my $0.02, we run in multiple sites as well.  We are using 
regions to do this.  Cells at this point have a lot going for it, but we 
thought it wasn't there yet.  We also don't have the necessary resources to 
make our own changes to it like a few other places do.

With that, we said the only real thing that we should do is make sure items 
such as Tenant and User ID's remained the same. That allows us to do show-back 
reporting and it makes it easier on the user base when they want to deploy 
from one region to another.  With that requirement, we used galera in the 
same manner that many of the others mentioned.  We then deployed Keystone 
pointing to that galera DB.  That is the only DB that is replicated across 
sites.  Everything else such as Nova, Neutron, etc are all within its own 
location.

The only real confusing piece for our users is the dashboard.  When you first 
go to the dashboard, there is a dropdown to select a region.  Many users think 
that is going to send them to a particular location, so their information from 
that location is going to show up.  It is really to which region do you want to 
authenticate against.  Once you are in the dashboard, you can select which 
Project you want to see.  That has been a major point of confusion. I think our 
solution is to just rename that text.





On Tue, May 5, 2015 at 11:46 AM, Clayton O'Neill 
clay...@oneill.net wrote:
On Tue, May 5, 2015 at 11:33 AM, Curtis 
serverasc...@gmail.com wrote:
Do people have any comments or strategies on dealing with Galera
replication across the WAN using regions? Seems like something to try
to avoid if possible, though might not be possible. Any thoughts on
that?

We're doing this with good luck.  Few things I'd recommend being aware of:

Set galera_group_segment so that each site is in a separate segment.  This will 
make it smarter about doing replication and for state transfer.
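
If you are not using that parameter, the underlying Galera knob is the
gmcast.segment provider option; a rough my.cnf sketch, assuming the second site
uses segment 1 (note that wsrep_provider_options normally carries your other
provider settings too):

    [mysqld]
    wsrep_provider_options = "gmcast.segment=1"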

Make sure you look at the timers and tunables in Galera and make sure they make 
sense for your network.  We've got lots of BW and lowish latency (37ms), so the 
defaults have worked pretty well for us.

Make sure that when you do provisioning in one site, you don't have CM tools in 
the other site breaking things.  We ran into issues during our first deploy 
like this where Puppet was making a change in one site to a user, and Puppet in 
the other site reverted the change nearly immediately.  You may have to tweak 
your deployment process to deal with that sort of thing.

Make sure you're running Galera or Galera Arbitrator in enough sites to 
maintain quorum if you have issues.  We run 3 nodes in one DC, and 3 nodes in 
another DC for Horizon, Keystone and Designate.  We run a Galera arbitrator in 
a third DC to settle ties.

Lastly, the obvious one is just to stay up to date on patches.  Galera is 
pretty stable, but we have run into bugs that we had to get fixes for.

On Tue, May 5, 2015 at 11:33 AM, Curtis 
serverasc...@gmail.com wrote:
Do people have any comments or strategies on dealing with Galera
replication across the WAN using regions? Seems like something to try
to avoid if possible, though might not be possible. Any thoughts on
that?

Thanks,
Curtis.

On Mon, May 4, 2015 at 3:11 PM, Jesse Keating 
j...@bluebox.net wrote:
 I agree with Subbu. You'll want that to be a region so that the control
 plane is mostly contained. Only Keystone (and swift if you have that) would
 be doing lots of site to site communication to keep databases in sync.

 http://docs.openstack.org/arch-design/content/multi_site.html is a good read
 on the topic.


 - jlk

 On Mon, May 4, 2015 at 1:58 PM, Allamaraju, Subbu 
 su...@subbu.org wrote:

 I suggest building a new AZ (“region” in OpenStack parlance) in the new
 location. In general I would avoid setting up control plane to operate
 across multiple facilities unless the cloud is very large.

  On May 4, 2015, at 1:40 PM, Jonathan Proulx 
  j...@jonproulx.com wrote:
 
  Hi All,
 
  We're about to expand our OpenStack Cloud to a second datacenter.
  Anyone one have opinions they'd like to share as to what I would and
  should be worrying about or how to structure this?  Should I be
  thinking cells or regions (or maybe both)?  Any obvious or not so
  obvious pitfalls I should try to avoid?
 
  

Re: [openstack-dev] [puppet][operators] How to specify Keystone v3 credentials?

2015-05-06 Thread Mike Dorman
We also run all masterless/puppet apply.  And we just populate a bare 
bones keystone.conf on any box that does not have keystone installed, but 
Puppet needs to be able to create keystone resources.

Also agreed on avoiding puppetdb, for the same reasons.

(Something to note for those of us doing masterless today: there are plans 
from Puppet to move more of the manifest compiling functionality to run 
only in the puppet master process.  So at some point, it’s likely that 
masterless setups may not be possible.)

Mike




 If you do not wish to explicitly define Keystone resources for
 Glance on Keystone nodes but instead let Glance nodes manage
 their own resources, you could always use exported resources.

 You let Glance nodes export their keystone resources and then
 you ask Keystone nodes to realize them where admin credentials
 are available. (I know some people don't really like exported
 resources for various reasons)

 I'm not familiar with exported resources.  Is this a viable
 option that has less impact than just requiring Keystone
 resources to be realized on the Keystone node?
 
 I'm not in favor of having exported resources because it requires 
 PuppetDB, and a lot of people try to avoid that. For now, we've
 been able to setup all OpenStack without PuppetDB in TripleO and in
 some other installers, we might want to keep this benefit.
 
 +100
 
 We're looking at using these puppet modules in a bit, but we're also a
 few steps away from getting rid of our puppetmaster and moving to a
 completely puppet apply based workflow. I would be double-plus
 sad-panda if we were not able to use the openstack puppet modules to
 do openstack because they'd been done in such as way as to require a
 puppetmaster or puppetdb.

100% agree.

Even if you had a puppetmaster and puppetdb, you would still end up in
this eventual consistency dance of puppet runs.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][operators] How to specify Keystone v3 credentials?

2015-05-06 Thread Mike Dorman
Cool, fair enough.  Pretty glad to hear that actually!


From: Colleen Murphy
Reply-To: OpenStack Development Mailing List (not for usage questions)
Date: Wednesday, May 6, 2015 at 5:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [puppet][operators] How to specify Keystone v3 
credentials?


On Wed, May 6, 2015 at 4:26 PM, Mike Dorman 
mdor...@godaddy.com wrote:
We also run all masterless/puppet apply.  And we just populate a bare
bones keystone.conf on any box that does not have keystone installed, but
Puppet needs to be able to create keystone resources.

Also agreed on avoiding puppetdb, for the same reasons.

(Something to note for those of us doing masterless today: there are plans
from Puppet to move more of the manifest compiling functionality to run
only in the puppet master process.  So at some point, it’s likely that
masterless setups may not be possible.)
I don't think that's true. I think making sure puppet apply works is a priority 
for them, just the implementation as they move to a C++-based agent has yet to 
be figured out.

Colleen

Mike
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] rabbit/kombu settings deprecations

2015-04-16 Thread Mike Dorman
I feel somewhat responsible for this whole thing, since I landed the first 
review that kicked off all this.  We had gone to a kilo oslo-messaging for 
RMQ improvements, which was what spurred me to patch in order to get rid 
of the deprecation warnings.  I should have actually validated it against 
Juno, known it would break, and called that out.  Sorry about that.  (On 
the other hand, thanks to Gael for hitting up all the other modules that I 
did not do.)

But, I have to say that I’m sympathetic with Matt on this.  We also more 
or less track the master branches, and have the same challenge.

Emilien’s idea below for a bot creating the backport cherry pick is 
intriguing.  Tbh, from a contributor’s perspective, the main reason I 
would not create the cherry pick review is 1) lack of time, and, 2) I’m 
tracking master so I (selfishly) don’t necessarily care about the stable 
branch.  If we had a bot that would automate some of this process, that 
would reduce the resistance somewhat.  But I have no idea what the 
effort/feasibility is of setting up such a thing.  Is there a way in 
Gerrit to make tags more visible when viewing a review?  Like checkboxes 
or something, rather than just having to know the tag and typing it in?

For me, personally, I would be more open to tracking stable branches, too, 
if the backports were better/faster.  Once I was on a stable branch, I 
would be more motivated to do the cherry picks/backports as well.  So 
maybe somewhat of a chicken-and-egg thing.

In any case, definitely a challenge that we should come to some decision 
on.  Then at least there’ll be consistent behavior, one way or another, 
going forward.

Mike

 




On 4/16/15, 12:42 PM, Emilien Macchi emil...@redhat.com wrote:



On 04/16/2015 02:15 PM, Clayton O'Neill wrote:
 On Thu, Apr 16, 2015 at 10:50 AM, Emilien Macchi emil...@redhat.com wrote:
 
 We do our best now to backport what is backportable to stable/juno.
 
 
 This certainly has gotten much better, but I don't think it's 100% there
 yet either.  It's just a ton of work and we probably need better tooling
 around this to expect it to be as good as it should be.
  
 
 FWIW, even without the rabbit/kombu topic, master won't work on Juno;
 there are plenty of things that are brought in Kilo.
 
 
 That may be the case in some areas, but we're using it without issues
 (until now) on Ubuntu with the features we need.
  
 
 My opinion is we should follow other projects that use stable branches
 and do deprecation for one (or more?) release (currently our master),
 then drop the deprecations after some time.
 
 So I would propose this policy:
 * for new features, patch master with backward compatibility
 
 
 Agreed, I think some of these might also be candidates for backport if
 they're new module features.  For example, a new cinder backend that
 existed in the previous release might get backported if they're just
 adding a new class.
  

A solution could be to add a tag to commits that can be backported.
Something like:

backport-juno
backport-icehouse

or just:
backport-icehouse

And once the patch is merged, a bot would create the backport
auto-magically?

We would have to add a rule to our policy to ensure a patch has the tag
if needed (core reviewers will have to take care to check whether the tag
belongs there or not).
This is just a proposal; it could be entirely wrong.

 * backports relevant patches from master to stable branches (mainly
 bugs)
 
 Agreed.
  
 
 * in the case of rabbit or any update in OpenStack upstream, update
 master without backward compatibility, unless we accept having a lot
 of if/else in our code and a lot of backwards compatibility to support;
 I'm not in favor of that.
 
 
 I think I agree here also.  However, I'd like to see us avoid making
 breaking changes solely to avoid deprecation warnings until x amount of
 time after a release comes out.  If we're able to support some level of
 backwards compatibility, then it also makes upgrading between releases a
 lot easier.  Upgrading all of your packages, db schemas, etc is a lot
 less scary and easier to test than upgrading all that + every OpenStack
 puppet module you use at the same time.

Well, we also rely on OpenStack upstream (oslo, etc.), which tends to change
configuration parameters. But I agree with you, we should take more care
with this kind of change.

 
 

-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [Openstack-operators] [Neutron]floatingip with security group

2015-04-08 Thread Mike Dorman
Yup, you need to configure an “address pair” for the floating IP.  This isn’t 
specifically a security groups thing, but it will allow traffic to the floating 
IP to pass into the VM to which it is associated.

Under the covers, it’s implemented similarly to security groups, but is not 
directly a security groups function.
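
For reference, a minimal sketch of the two knobs being discussed, assuming
python-neutronclient of that era (the credentials, UUIDs and IP below are
made-up placeholders):

# Illustrative sketch; values are placeholders, not a recommendation.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://keystone.example.com:5000/v2.0')

# An ingress security group rule (e.g. to allow SSH to the instance):
neutron.create_security_group_rule({'security_group_rule': {
    'security_group_id': 'SECURITY_GROUP_UUID',
    'direction': 'ingress',
    'protocol': 'tcp',
    'port_range_min': 22,
    'port_range_max': 22,
    'remote_ip_prefix': '0.0.0.0/0',
}})

# And/or an allowed address pair for the floating IP on the VM's port:
neutron.update_port('PORT_UUID', {'port': {
    'allowed_address_pairs': [{'ip_address': '203.0.113.10'}],
}})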


From: LeeKies
Date: Wednesday, April 8, 2015 at 2:42 AM
To: OpenStack Operators
Subject: [Openstack-operators] [Neutron]floatingip with security group

I created a VM with the default security group, then created and associated a 
floating IP with this VM.
But I couldn't connect to the floating IP. I checked the security group, and I 
think it's the SG problem. I added a rule to the default SG, and then I could 
connect to the floating IP.

When I create a floating IP, do I have to add a rule to the security group to 
allow ingress to that IP?
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] max_age and until_refresh for fixing Nova quotas

2015-03-14 Thread Mike Dorman
Yeah the default is just ‘0’ for both, which disables the refresh.



The one downside is that it may not be 100% transparent to the user.  If 
the recorded quota usage is already (incorrectly) too high and exceeding the 
quota limit, the reservation that triggers the refresh will still fail.  I.e. 
the reservation is attempted based on the quota usage values _before_ the 
refresh.  But after that the usage should be fixed, and it will work 
again on the next reservation.

But my thinking is that most quota issues happen slowly over time.  If we 
are correcting them often and automatically, they hopefully never get to 
the point where they’re bad enough to manifest reservation errors to the 
user.

I don’t have any information re: db load.  I assume it regenerates based 
on what’s in the instances or reservations table.  I imagine the load for 
doing a single refresh is probably comparable to doing a ‘nova list’.
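
For reference, the knobs in question live in nova.conf (in the [DEFAULT]
section on the releases being discussed here) and look something like this;
the values are purely illustrative, tune to taste:

  [DEFAULT]
  # force a usage refresh after this many reservations ...
  until_refresh = 25
  # ... or when the usage record was last synced more than this many
  # seconds ago
  max_age = 3600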

Mike



On 3/14/15, 2:27 PM, Tim Bell tim.b...@cern.ch wrote:

Interesting... what are the defaults ?

Assuming no massive DB load, getting synced within a day would seem 
reasonable. Is the default no max age ?

Tim

 -Original Message-
 From: Jesse Keating [mailto:j...@bluebox.net]
 Sent: 14 March 2015 16:59
 To: openstack-operators@lists.openstack.org
 Subject: Re: [Openstack-operators] max_age and until_refresh for fixing 
Nova
 quotas
 
 On 3/14/15 8:11 AM, Mike Dorman wrote:
  I did a short write-up here http://t.co/Q5X1hTgJG1 if you are interested
  in the details.
 
 
 Thanks for sharing Matt! That's an excellent write up.
 
 --
 -jlk
 
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ops meetup

2015-03-13 Thread Mike Dorman
On 3/13/15, 3:20 PM, Jonathan Proulx j...@jonproulx.com wrote:


On Fri, Mar 13, 2015 at 4:56 PM, Subbu Allamaraju su...@subbu.org wrote:
 Regarding the discussion on tags, here is my take from the discussion on
 Monday.

 1. There is vehement agreement that having an integrated release consisting
 of a small “inner ring” set of services is desirable. There are 6-7 projects
 that a majority of operators deploy.

 2. There is a general concern that dropping the integrated release process
 entirely creates more operability hurdles than exist today.

There was definite concern about co-gating, being sure there is a
defined set of core project releases that work together. I don't think
fixing these in an integrated release was seen as the only solution.

For example, if Neutron is releasing versions foo, bar, and baz that all
gate with Nova foo, there's no pressing reason that Nova needs to make
three releases just to keep naming in sync (or vice versa, if Nova
moves at a higher velocity than Neutron).

The pressing need is the ability to express when the co-gating
set changes, so it is easily discoverable what set is tested together.
Keeping the core integrated may be the simplest solution, but I don't
think it was the only option in the room.


Personally, I like the WordPress plugin model for discovering the 
compatibility between versions.

See https://wordpress.org/plugins/akismet/ , right side near the bottom of 
the page.

Allows you to figure out if service A version x works with service B 
version y.




 3. Many other projects are not seeing adoption for reasons such as lacking a
 minimum viable feature set, lack of operational feedback, concerns about
 scalability, etc. Examples include designate, trove, zaqar, ceilometer, etc.

 4. There is also a concern about whether tags make the tent too large and
 chaotic, as any project can essentially claim to be an OpenStack project.
 This may leave it open for vendors and distro providers to define a product
 out of those OpenStack projects, eventually fragmenting the open source
 nature.

 5. Operators can certainly help shape an MVP feature set.

there is a list of email volunteers on
https://etherpad.openstack.org/p/PHL-ops-tags to make up an ops-tags
working group to continue these discussions, so anyone who couldn't
make it to PHL and is interested in being part of that working group
should probably sign on on the etherpad.

also those of us who have already signed on should probably figure out
what that means in terms of email discussion (should it be here or on
the user-committee list or somewhere else) and possibly IRC meetings or
what have you...

-Jon

 Subbu


 On Mar 13, 2015, at 12:29 PM, Daniel Comnea comnea.d...@gmail.com 
wrote:

 Tim,

 Are you aware of any other feedback etherpads for the other discussions?

 Cheers,
 Dani

 P.S. For Nova, Sean provided one of the best summaries.


 On Tue, Mar 10, 2015 at 10:04 AM, Tim Bell tim.b...@cern.ch wrote:

 I don't think there is any recording. The etherpads and summaries on
 superuser.openstack.org give a feeling for the discussions though and 
are an
 interesting read. The tags one looked particularly lively.

 Tim

  -Original Message-
  From: Adam Huffman [mailto:adam.huff...@gmail.com]
  Sent: 10 March 2015 10:50
  To: Daniel Comnea
  Cc: Tim Bell; openstack-operators@lists.openstack.org
  Subject: Re: [Openstack-operators] Ops meetup
 
  I was going to ask that too...
 
  Cheers,
  Adam
 
  On Tue, Mar 10, 2015 at 7:38 AM, Daniel Comnea 
comnea.d...@gmail.com
  wrote:
   Hi Tim,
  
   For those who can't be physically there, will there be any sort of
   recordings/output coming out of these sessions?
  
   Thanks,
   Dani
  
   On Sat, Mar 7, 2015 at 7:24 PM, Tim Bell tim.b...@cern.ch wrote:
  
  
  
   Great to see lots of input for the ops meetup next week. Feel free
   to add your items to the agenda.
  
  
  
  
   Session Etherpads:
  
   Draft Agenda:
   
    http://lists.openstack.org/pipermail/openstack-operators/2015-February/006268.html
  
  
  
   Monday
  
   https://etherpad.openstack.org/p/PHL-ops-OVS
  
   https://etherpad.openstack.org/p/PHL-ops-security
  
   https://etherpad.openstack.org/p/PHL-ops-app-eco-wg
  
   https://etherpad.openstack.org/p/PHL-ops-tools-wg
  
   https://etherpad.openstack.org/p/PHL-ops-large-deployments
  
   https://etherpad.openstack.org/p/PHL-ops-tags
  
   https://etherpad.openstack.org/p/PHL-ops-hardware
  
   https://etherpad.openstack.org/p/PHL-ops-arch-show-tell
  
  
  
   Tuesday
  
   https://etherpad.openstack.org/p/PHL-ops-rabbit-queue
  
   https://etherpad.openstack.org/p/PHL-ops-nova-feedback
  
   https://etherpad.openstack.org/p/PHL-ops-network-performance
  
   https://etherpad.openstack.org/p/PHL-ops-capacity-mgmt
  
   https://etherpad.openstack.org/p/PHL-ops-testing-interop
  
   https://etherpad.openstack.org/p/PHL-ops-burning-issues
  
   https://etherpad.openstack.org/p/PHL-ops-packaging
  
   

[Openstack-operators] bug for scheduler failures (was FW: [openstack-dev] [nova] readout from Philly Operators Meetup)

2015-03-12 Thread Mike Dorman
For those of you who don’t subscribe to openstack-dev, see below for the 
nova bug to track better logging of scheduler failures:  
https://bugs.launchpad.net/nova/+bug/1431291





On 3/12/15, 5:23 AM, Sean Dague s...@dague.net wrote:

On 03/11/2015 12:53 PM, Sylvain Bauza wrote:
 Reporting on Scheduler Fails
 

 Apparently, some time recently, we stopped logging scheduler fails
 above DEBUG, and that behavior also snuck back into Juno as well
 (https://etherpad.openstack.org/p/PHL-ops-nova-feedback L78). This
 has made tracking down root cause of failures far more difficult.

 Action: this should hopefully be a quick fix we can get in for Kilo
 and backport.
 It's unfortunate that failed scheduling attempts are providing only an
 INFO log. A quick fix could be at least to turn the verbosity up to WARN
 so it would be noticed more easily (including the whole filter stack
 with their results).
 That said, I'm pretty against any proposal which would expose those
 specific details (i.e. the number of hosts which are succeeding per
 filter) in an API endpoint, because it would also expose the underlying
 infrastructure capacity and would make DoS reconnaissance easier. A
 workaround could be to include in the ERROR message only the name of the
 filter that rejected the request, so the operators could very easily match
 what the user is saying with what they're seeing in the scheduler logs.
 
 Does that work for people ? I can provide changes for both.
 
 -Sylvain
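
(To make the proposal above concrete, the kind of log line being talked about
would look roughly like this -- a purely illustrative sketch, not the actual
nova scheduler code:)

# Illustrative only -- not the actual nova scheduler code.
import logging

LOG = logging.getLogger(__name__)


def log_filter_failure(filter_name, request_id):
    # WARN instead of INFO/DEBUG so operators see it by default, while
    # exposing only the filter name, not per-filter host counts.
    LOG.warning("Filter %s eliminated all remaining hosts for request %s",
                filter_name, request_id)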

Bug filed for tracking this here -
https://bugs.launchpad.net/nova/+bug/1431291 if any additional folks
want to add details.

It's been set as High and a kilo-3 item so it doesn't get lost.

   -Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Operators Summit: RabbitMQ

2015-03-05 Thread Mike Dorman
I’ll follow the other posts today with some info on the RabbitMQ session for 
Tuesday morning.

I’d like to start by quickly going over the different RMQ architectures that 
people run, and how you manage and maintain those.

Then I think it’ll be helpful to classify the general issues people typically 
have, which I think are fairly well known at this point, but could use a recap. 
 Then we can dive deeper into how folks are handling and working around them.  
I think this will be the most practical and useful topic for people.

As always, please add details/comments/other topics to the etherpad:  
https://etherpad.openstack.org/p/PHL-ops-rabbit-queue

If there are any reviews in flight around RMQ performance that we as operators 
can help comment on, please add those to the etherpad as well.  I only found 
one that seemed very relevant.

Thanks!
Mike

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [nova] The strange case of osapi_compute_unique_server_name_scope

2015-02-20 Thread Mike Dorman
I can report that we do use this option (the ‘global’ setting).  We have to 
enforce name uniqueness for instances’ integration with some external 
systems (namely AD and Spacewalk) which require unique naming.
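
For reference, the setting being discussed lives in nova.conf, e.g.:

  [DEFAULT]
  osapi_compute_unique_server_name_scope = global

(per the help text quoted below, valid values are empty, project or global).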

However, we also do some external name validation which I think 
effectively enforces uniqueness as well.  So if this were deprecated, I 
don’t know if it would directly affect us for our specific situation.

Other operators, anyone else using osapi_compute_unique_server_name_scope?

Mike





On 2/19/15, 3:18 AM, Matthew Booth mbo...@redhat.com wrote:

Nova contains a config variable osapi_compute_unique_server_name_scope.
Its help text describes it pretty well:

When set, compute API will consider duplicate hostnames invalid within
the specified scope, regardless of case. Should be empty, project or
global.

So, by default hostnames are not unique, but depending on this setting
they could be unique either globally or in the scope of a project.

Ideally a unique constraint would be enforced by the database but,
presumably because this is a config variable, that isn't the case here.
Instead it is enforced in code, but the code which does this predictably
races. My first attempt to fix this using the obvious SQL solution
appeared to work, but actually fails in MySQL as it doesn't support that
query structure[1][2]. SQLite and PostgreSQL do support it, but they
don't support the query structure which MySQL supports. Note that this
isn't just a syntactic thing. It looks like it's still possible to do
this if we compound the workaround with a second workaround, but I'm
starting to wonder if we'd be better off fixing the design.
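
(To make the race concrete, a simplified, purely illustrative check-then-insert
sketch -- not the actual nova code:)

# Simplified illustration (not the actual nova code) of why an
# application-level uniqueness check races without a DB constraint.
instances = []  # stands in for the instances table


def name_exists(project_id, name):
    # stand-in for "SELECT ... WHERE project_id = ? AND display_name = ?"
    return (project_id, name) in instances


def create_instance(project_id, name):
    # Check ...
    if name_exists(project_id, name):
        raise ValueError('duplicate hostname %s' % name)
    # ... then insert.  Two concurrent API workers can both pass the check
    # above before either has inserted, so both "inserts" succeed and you
    # end up with a duplicate -- unless the database itself enforces a
    # unique constraint, which is awkward here because the scope is a
    # config option rather than a fixed schema property.
    instances.append((project_id, name))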

First off, do we need this config variable? Is anybody actually using
it? I suspect the answer's going to be yes, but it would be extremely
convenient if it's not.

Assuming this configurability is required, is there any way we can
instead use it to control a unique constraint in the db at service
startup? This would be something akin to a db migration. How do we
manage those?

Related to the above, I'm not 100% clear on which services run this
code. Is it possible for different services to have a different
configuration of this variable, and does that make sense? If so, that
would preclude a unique constraint in the db.

Thanks,

Matt

[1] Which has prompted me to get the test_db_api tests running on MySQL.
See this series if you're interested:
https://review.openstack.org/#/c/156299/

[2] For specifics, see my ramblings here:
https://review.openstack.org/#/c/141115/7/nova/db/sqlalchemy/api.py,cm
line 2547
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Monitoring Discussions at the Mid-Cycle Meetup

2015-02-03 Thread Mike Dorman
Absolutely. I think demos and then follow on discussions would be 
fantastic.




On 2/3/15, 7:33 PM, Hochmuth, Roland M roland.hochm...@hp.com wrote:

Hi Folks, A number of us involved with the Monasca 
(https://wiki.openstack.org/wiki/Monasca) and StackTach 
(https://launchpad.net/stacktach) projects were discussing the possibility 
of having more in-depth discussions around monitoring, Monasca and 
StackTach at the Operators Mid-Cycle Meetup. Discussions could cover a 
variety of topics, ranging from monitoring in general, such as 
requirements, to going in-depth on Monasca and StackTach. A 
demo and tutorial are also possibilities.

We think we have an excellent platform to build on and the eco-system has 
been growing. We would like to start engaging more directly with the 
operator community and take our projects to the next level. There are 
lots of ways you can help us understand what to monitor and what 
features to work on next, and maybe even contribute to the projects. 
Additionally, we would like to raise general awareness around the work we 
are doing.

Would this be an area that folks are interested in covering at the meetup?

Regards --Roland





___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Large deployment] Meetings

2015-01-30 Thread Mike Dorman
Awesome, thanks for this update Tim!


From: Tim Bell tim.b...@cern.ch
Date: Friday, January 30, 2015 at 12:30 AM
To: OpenStack Operators 
openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [Large deployment] Meetings

The spec for quotas in Nova for hierarchical multitenancy was accepted for Kilo 
(https://review.openstack.org/#/c/129420/) and the code has now been dropped 
(https://review.openstack.org/#/c/149828/). Bit more polishing to do but 
hopeful it can make Kilo.

Tim

From: Matt Van Winkle [mailto:mvanw...@rackspace.com]
Sent: 30 January 2015 01:09
To: 
openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [Large deployment] Meetings

Updated the etherpad for today's meeting - 
https://etherpad.openstack.org/p/Large-Deployment-Team-Meetings Please add to 
it if you would like to discuss anything specific.

I'll see everyone online in a couple of hours.  Full disclosure, I'm heading to 
a little league tryout for my son that should be done by then, but there is 
an outside chance I might be a few minutes late.

See you all there!
Matt

From: Matt Van Winkle mvanw...@rackspace.com
Date: Tuesday, January 27, 2015 11:01 PM
Subject: [Openstack-operators] [Large deployment] Meetings

Hey folks,
I dropped the ball following the holidays and didn't get a doodle out to pick a 
time for the APAC friendly meeting this month.  And, I missed the 3rd Thursday 
to boot – sorry folks.

That being said, I'd still like to get together this week to catch up for 
January.  We can find out if anyone in the group went to the Nova mid-cycle 
and/or has caught up with attendees (I'll be trying to sync with some of our devs 
on Thursday as well).  Also, we can start planning what we want, as a working 
group, from the Ops mid-cycle.

Let's get together at 02:00 UTC on Friday, January 30th (20:00 Central on 
Thursday, January 29th) in #openstack-meeting-4.

We agreed in the Dec meeting that 3rd Thursdays/Fridays are best.  We also 
agreed to alternate between the 16:00 UTC / 10:00 Central slot and an APAC 
friendly time slot.  To be fair, I'll get a doodle out for the March meeting in 
case 02:00 UTC / 20:00 Central isn't ideal.

I'll get a rough agenda put together and get another update out tomorrow.

Thanks!
Matt
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators