Re: [openstack-dev] [oslo][keystone] oslo_config and wsgi middlewares

2015-08-07 Thread Joshua Harlow

Michael Krotscheck wrote:

On Thu, Aug 6, 2015 at 10:08 AM Mehdi Abaakouk sil...@sileht.net wrote:


Yes, but you can't use oslo.config without hardcoding the loading of the
middleware to pass the oslo.config object into the application.


Yes, and that is intentional, because the use of global variables of any
sort is bad. They're unconstrained, there's no access control to
guarantee the thing you want hasn't been modified, and in the case of
oslo.config, they require initialization before they can be used.

Writing any kind of logic that assumes that a magic global instance has
been initialized is brittle. The pastedeploy wsgi chain is a perfect
example, because the application is only created after the middleware
chain has been executed. This leaves you with - at best - a
malfunctioning piece of middleware that breaks because the global
oslo.config object isn't ready. At worst it's a security vulnerability
that permits bypassing things like keystone.

Passing the config object is a _good_ thing, because it doesn't rely on
magic. Magic is bad. If someone looks at the code and says: I wonder
how this piece of middleware gets its values, and they don't see the
config object being passed, they have to dig into the middleware itself
to figure out what's going on.
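To make the contrast concrete, here is a minimal WSGI sketch of the explicit style being argued for. It is illustrative only: AuthMiddleware and make_app are made-up names, and a plain dict stands in for an oslo.config ConfigOpts object.

```python
# Sketch: middleware that receives its configuration explicitly instead of
# reading a global. A plain dict stands in for an oslo.config ConfigOpts
# object; AuthMiddleware/make_app are illustrative names, not OpenStack code.

class AuthMiddleware(object):
    def __init__(self, app, conf):
        self.app = app
        # The dependency is visible at the call site, not hidden in a global.
        self.token = conf["auth_token"]

    def __call__(self, environ, start_response):
        if environ.get("HTTP_X_AUTH_TOKEN") != self.token:
            start_response("401 Unauthorized", [("Content-Type", "text/plain")])
            return [b"unauthorized"]
        return self.app(environ, start_response)

def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

def make_app(conf):
    # The config object is passed down the stack; nothing here relies on a
    # module-level CONF having been initialized first.
    return AuthMiddleware(application, conf)
```

Because the conf mapping is a constructor argument, a reader at the composition point can see exactly where the middleware gets its values.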


It only relies on the rest of the config object to 'magically' fetch the 
values of attributes from somewhere, organize them into some grouping, 
and perform the right type checking/conversion (type checking in python, 
woah), and the magic of digging into help strings instead of docstrings 
(which means the generated code docs of config object using components 
either have to replicate the help string or do something else)... but 
point taken ;)


(I've always preferred APIs that use the standard things you see in Python 
to document arguments, parameter types, and what their usage is; yes, I know 
the history here, just saying this is another different kind of magic.)




I'm clearly on the operator side too, and I am just trying to find a
solution that lets all middlewares be used without having to write code
for each one in each application, while still using oslo.config. Zaqar,
Gnocchi and Aodh are the first projects that do not use cfg.CONF, and they
can't load many middlewares without writing code for each one, when
middleware should be just something that a deployer enables and
configures. (Our middleware looks more like a lib than a middleware.)


Sorry, but you're talking from the point of view of someone who wants to
not have to write code for each. That's a developer. It's our job as
developers to write code until it's as easy as possible, and passing in
a config object is _dead simple_ in your application initialization.

Here's the thing. If the middleware is _optional_ like keystone auth,
then including it via paste.ini makes way more sense. In fact, keystone
auth has gone to great lengths to have no dependencies for that very
same reason. If, instead, the middleware is a feature that should ship
with the service - like CORS, or a simple caching layer - then it should
be baked into your application initialization directly.

Michael

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [cinder][nova][ci] Tintri Cinder CI failures after Nova change

2015-08-07 Thread Matt Riedemann



On 8/6/2015 3:30 PM, Skyler Berg wrote:

After the change cleanup NovaObjectDictCompat from virtual_interface
[1] was merged into Nova on the morning of August 5th, Tintri's CI for
Cinder started failing 13 test cases that involve a volume being
attached to an instance [2].

I have verified that the tests fail with the above mentioned change and
pass when running against the previous commit.

If anyone knows why this patch is causing an issue or is experiencing
similar problems, please let me know.

In the meantime, expect Tintri's CI to be either down or reporting
failures until a solution is found.

[1] https://review.openstack.org/#/c/200823/
[2] http://openstack-ci.tintri.com/tintri/refs-changes-06-201406-35/



From the n-cpu logs this is the TypeError:

2015-08-05 06:34:54.826 8 ERROR nova.compute.manager Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
    executor_callback))
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
    executor_callback)
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 129, in _do_dispatch
    result = func(ctxt, **new_args)
  File "/opt/stack/nova/nova/network/floating_ips.py", line 113, in allocate_for_instance
    **kwargs)
  File "/opt/stack/nova/nova/network/manager.py", line 496, in allocate_for_instance
    context, instance_uuid)
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 119, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/nova/nova/network/manager.py", line 490, in allocate_for_instance
    networks, macs)
  File "/opt/stack/nova/nova/network/manager.py", line 755, in _allocate_mac_addresses
    network['id'])
  File "/opt/stack/nova/nova/network/manager.py", line 774, in _add_virtual_interface
    vif.create()
  File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 205, in wrapper
    self[key] = field.from_primitive(self, key, value)
TypeError: 'VirtualInterface' object does not support item assignment
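For what it's worth, that final TypeError is Python's generic complaint when `obj[key] = value` is attempted on an object with no `__setitem__`, which is what you get once the dict-compat mixin is gone from an object class. A minimal illustration (the class names here are stand-ins, not Nova code):

```python
# Stand-in classes showing why removing dict-style access breaks item
# assignment with "TypeError: ... does not support item assignment".

class WithDictCompat(object):
    """Mimics an object still carrying a NovaObjectDictCompat-style mixin."""
    def __setitem__(self, key, value):
        setattr(self, key, value)

class WithoutDictCompat(object):
    """Mimics VirtualInterface after the mixin was removed."""

vif_old = WithDictCompat()
vif_old["address"] = "fa:16:3e:00:00:01"   # fine: routed through __setitem__

vif_new = WithoutDictCompat()
try:
    vif_new["address"] = "fa:16:3e:00:00:01"
except TypeError as exc:
    # 'WithoutDictCompat' object does not support item assignment
    print(exc)
```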

It looks like you're missing this change in whatever version of 
oslo.versionedobjects you have in your CI:


https://review.openstack.org/#/c/202200/

That should be in o.vo 0.6.0; latest is 0.7.0.  What version of 
oslo.versionedobjects is on this system?  It would be helpful to 
have the pip freeze output.
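In 2015 the usual answer was `pip freeze | grep oslo.versionedobjects`; on a modern Python the same question can be answered from code with the standard library. A small sketch, nothing Nova-specific:

```python
# Query an installed distribution's version without shelling out to pip.
from importlib import metadata  # stdlib since Python 3.8

def installed_version(dist_name):
    """Return the installed version string, or None if not installed."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

for name in ("oslo.versionedobjects", "oslo.messaging"):
    print(name, "->", installed_version(name))
```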


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Glance][Nova][Cinder] glance_store and glance

2015-08-07 Thread Matt Riedemann



On 8/7/2015 3:56 AM, Kuvaja, Erno wrote:

Hi,

Flagging Nova and Cinder in this discussion as they were the first intended 
adopters, iirc.

I don't have a big religious view about this topic. I wasn't a huge fan of the idea 
of separating it in the first place, and I'm not a huge fan of keeping it separate 
either.

After a couple of cycles we have so far witnessed only the downside of 
glance_store being on its own. We break even our own gate with our own lib 
releases, we have one extra bug tracker to look after, and, while not huge, it 
just increases the load on the release and stable teams as well.

In my understanding the interest within Nova to consume glance_store directly 
has pretty much died off since we separated it, please do correct me if I'm 
wrong.
I haven't heard anyone expressing any interest to consume glance_store directly 
within Cinder either.
So far I have failed to see a use case for glance_store alone apart from the Glance 
API Server, and the originally intended use cases/consumers have either not 
expressed interest whatsoever or directly expressed being not interested.

Do we have any reason whatsoever to keep doing the extra work to keep these 
two components separate? I'm more than happy to do so, or at least extend this 
discussion for a cycle, if there are projects out there planning to utilize it. I 
don't want to be in the middle of separating it again next cycle because someone 
wanted to consume it and forked out the old tree because we decided to kill it, but 
I'm not keen to take the overhead of it either without reason.

- Erno


-Original Message-
From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
Sent: Friday, August 07, 2015 6:21 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Glance] glance_store and glance

Hi,

During the mid-cycle we had another proposal that wanted to put back the
glance_store library back into the Glance repo and not leave it is as a
separate repo/project.

The questions outstanding are: what are the use cases that want it as a
separate library?

The original use cases that supported a separate lib have not seen much
progress or adoption yet. There have been complaints about the overhead of
maintaining it as a separate lib and tracking versions without much gain.
The proposals for the re-factor of the library are also a worrisome topic in
terms of the stability of the codebase.

The original use cases from my memory are:
1. Other projects consuming glance_store -- this has become less likely to be
useful.
2. another upload path for users for the convenience of tasks -- not
preferable as we don't want to expose this library to users.
3. ease of addition of newer drivers for the developers -- drivers are only
being removed since.
4. cleaner api / more methods that support backend store capabilities - a
separate library is not necessarily needed, smoother re-factor is possible
within Glance codebase.

Also, the authN/Z complexities and ACL restrictions on the back-end stores
can become potential security loopholes as the library and Glance evolve
separately.

In order to move forward smoothly on this topic in Liberty, I hereby request
input from all concerned developer parties. The decision to keep this as a
separate library will remain in effect if we do not come to resolution within 2
weeks from now. However, if there aren't any significant use cases we may
consider a port back of the same.

Please find some corresponding discussion from the latest Glance weekly
meeting:
http://eavesdrop.openstack.org/meetings/glance/2015/glance.2015-08-06-14.03.log.html#l-21

--

Thanks,
Nikhil







As far as I know no one is actively trying to integrate glance_store 
into nova like what the cinder team did with os-brick.  I'm not entirely 
sure how glance_store drops into nova either.  The os-brick integration 
was pretty seamless since it was mostly duplicate code.


I thought glance_store somehow got nova closer to using glance v2 but it 
seems that's not the case?


And now there is a separate proposal to work on a new thing in nova's 
tree that's not python-glanceclient but gets nova to use glance v2 
(and v3?), which seems like more splintering.


When the cinder team got nova to support cinder v2, it was Mike Perez 
taking over the change to add that support, so I'd expect the same type 
of effort from the glance team if they want to propagate newer versions 
of the glance API in order to deprecate v1.

[openstack-dev] [nova] python-novaclient support microversions now

2015-08-07 Thread Alex Xu
Hi, All,

Currently we have microversions support in python-novaclient! It's time to 
submit microversion support in the client if your nova api server-side patch 
has merged. There is an example of how to add a specific microversion in the 
client: https://review.openstack.org/#/c/136458/

If you run into any trouble or need any help, please feel free to ping Andrey 
(irc: andreykurilin) or me (irc: alex_xu).

And thanks to Andrey for his hard work on the microversions client implementation!
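Under the hood, microversion support boils down to comparing a requested "X.Y" version against the version each feature appeared in. The toy sketch below is not novaclient's actual code, just the comparison idea; the 2.6 feature version is a hypothetical example:

```python
# Toy sketch of microversion gating, NOT novaclient's real API: a requested
# "X.Y" version string is parsed numerically and compared against the
# minimum microversion a feature was introduced in.

def parse(ver):
    major, minor = ver.split(".")
    return (int(major), int(minor))

def supports(requested, introduced_in):
    """True if the requested microversion is >= the feature's minimum."""
    return parse(requested) >= parse(introduced_in)

# e.g. a hypothetical server-side feature added in microversion 2.6:
print(supports("2.10", "2.6"))  # True: 2.10 > 2.6 numerically, not lexically
print(supports("2.3", "2.6"))   # False
```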

Thanks
Alex


[openstack-dev] [Heat] creating a stack with a config_drive

2015-08-07 Thread Maish Saidel-Keesing
I have been looking for a working example to create a Heat stack with a 
config_drive attached.


I know it is possible to deploy a nova instance with the CLI [1]

I see that OS::Nova::Server has a config_drive property that is a 
Boolean value [2]


What I cannot find is how this can be used. Where is the path defined 
for the config file?

Or am I completely missing what and how this should be used?

Anyone with more info on this - I would be highly grateful.

Thanks.

[1] http://docs.openstack.org/user-guide/cli_config_drive.html
[2] 
http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Server



--
Best Regards,
Maish Saidel-Keesing



Re: [openstack-dev] [Heat] creating a stack with a config_drive

2015-08-07 Thread Randall Burt
config_drive: true just tells the instance to mount the drive. You pass data 
via the user_data property.
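Put together, a minimal template sketch looks like the following. This is an illustration, not a tested template: the image, flavor and file path are placeholders, and the properties used are the ones in the OS::Nova::Server reference linked upthread.

```yaml
heat_template_version: 2014-10-16
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: cirros-0.3.4        # placeholder image
      flavor: m1.tiny            # placeholder flavor
      config_drive: true         # expose metadata/user_data via a mounted drive
      user_data_format: RAW      # hand user_data to cloud-init untouched
      user_data: |
        #cloud-config
        write_files:
          - path: /etc/example.conf
            content: "populated from the config drive"
```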



Re: [openstack-dev] [Heat] creating a stack with a config_drive

2015-08-07 Thread Maish Saidel-Keesing

On 08/07/15 16:22, Randall Burt wrote:
config_drive: true just tells the instance to mount the drive. You 
pass data via the user_data property.



Thanks Randall that is what I was thinking.

But I am confused.

When booting an instance with nova boot, I can configure a local 
file/directory to be mounted as a config drive on the instance upon 
boot. I can also provide information and commands regularly through the 
user_data


Through Heat I can provide configuration through user_data. And I can 
also mount a config_drive.


Where do I define what that config_drive contains?





--
Best Regards,
Maish Saidel-Keesing


Re: [openstack-dev] [Heat] OS::Neutron::Port fails to set security group by name, no way to retrieve group ID from Neutron::SecurityGroup

2015-08-07 Thread jason witkowski
Thanks for the replies, guys.  The issue is that it is not working.  If you
take a look at the pastes I linked from the first email, I am using the
get_resource function in the security group resource. I am not sure whether
it is not resolving to an appropriate value, or whether it is resolving to
an appropriate value but then not assigning it to the port.  I am happy to
provide any more details or examples, but I'm not sure what else I can do
beyond the configuration examples I am using that are not working.
It's very possible my configurations are wrong, but I have scoured the
internet for any/all examples and it looks like what I have should be
working, yet it is not.
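For reference, the pattern under discussion, creating the group and passing it to the port via get_resource, looks like this in template form. A sketch only: the resource names and the network value are placeholders, and the rule shown is an arbitrary example.

```yaml
resources:
  web_secgroup:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - protocol: tcp
          port_range_min: 80
          port_range_max: 80
  web_port:
    type: OS::Neutron::Port
    properties:
      network: private                     # placeholder network name/id
      security_groups:
        - { get_resource: web_secgroup }   # resolves to the group's ID
```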


Best Regards,

Jason Witkowski

On Fri, Aug 7, 2015 at 3:42 AM, Kairat Kushaev kkush...@mirantis.com
wrote:

 Hello Jason,
 Agree with TianTian. It would be good if you provide more details about
 the error you have.
 Additionally, it would be perfect if you'll use heat IRC channel: #heat or
 ask.openstack.org to resolve such kind of questions.

 Best regards,
 Kairat Kushaev
 Software Engineer, Mirantis

 On Fri, Aug 7, 2015 at 9:43 AM, TIANTIAN tiantian...@163.com wrote:

 1) OS::Neutron::Port does not seem to recognize security groups by name
 --
 https://github.com/openstack/heat/blob/stable/kilo/heat/engine/resources/openstack/neutron/port.py#L303
 https://github.com/openstack/heat/blob/stable/kilo/heat/engine/clients/os/neutron.py#L111
 The client plugin can recognize a security group name here.
 2) OS::Neutron::SecurityGroup has no attributes so it can not return a
 security group ID
 --
 https://github.com/openstack/heat/blob/stable/kilo/heat/engine/resources/openstack/neutron/neutron.py#L133
 We can get the resource id (the security group id) with the function
 'get_resource'.
 So what do you want? And what are the problems?


 At 2015-08-07 11:10:37, jason witkowski jwit...@gmail.com wrote:

 Hey All,

 I am having issues on the Kilo branch creating an auto-scaling template
 that builds a security group and then adds instances to it.  I have tried
 every various method I could think of with no success.  My issues are as
 such:

 1) OS::Neutron::Port does not seem to recognize security groups by name
 2) OS::Neutron::SecurityGroup has no attributes so it can not return a
 security group ID

 These issues combined find me struggling to automate the building of a
 security group and instances in one heat stack.  I have read and looked at
 every example online and they all seem to use either the name of the
 security group or the get_resource function to return the security group
 object itself.  Neither of these work for me.

 Here are my heat template files:

 autoscaling.yaml - http://paste.openstack.org/show/412143/
 redirector.yaml - http://paste.openstack.org/show/412144/
 env.yaml - http://paste.openstack.org/show/412145/

 Heat Client: 0.4.1
 Heat-Manage: 2015.1.1

 Any help would be greatly appreciated.

 Best Regards,

 Jason




Re: [openstack-dev] [Heat] creating a stack with a config_drive

2015-08-07 Thread Randall Burt
The drive will contain the user data. It's an alternative to the metadata 
service and isn't a normal drive. It's created, mounted, and populated by Nova.



Re: [openstack-dev] [TripleO] [Puppet] [kolla] Deploying OpenStack with Puppet modules on Docker with Heat

2015-08-07 Thread Ryan Hallisey

 I have a few questions:

 * when do you run puppet? before starting the container so we can
 generate a configuration file?

We would run puppet before starting the container; that way puppet can
generate the config and the container can absorb it.

 * so iiuc, Puppet is only here to generate OpenStack configuration files
 and we noop all other operations. Right?

Correct.  So far this is the case.

 * from a Puppet perspective, I really prefer this approach:
 https://review.openstack.org/#/c/197172/ where we assign tags to
 resources so we can easily modify/drop Puppet resources using our
 modules. What do you think (for long term)?

It's possible.  I'd have to look into this, but it's something we can
explore.

 * how do you manage multiple configuration files? (e.g. if a controller is
 running multiple nova-api containers with different configuration files?)

In the heat template we could specify the exact config file to hand into
the container.  This is the same case for say neutron that has multiple
config files for a single service.

 Once I understand a bit more where we go, I'll be happy to help to make
 it happen in our modules, we already have folks deploying our modules
 with containers, I guess we can just talk and collaborate here.
 Also, I'll be interested to bringing containers support in our CI, but
 that's a next step :-)

Cool!  Will keep the thread posted on how the work progresses.
You can follow along with this patch: https://review.openstack.org/#/c/209505/1

-Ryan



Re: [openstack-dev] [ceilometer] [aodh] upgrade path

2015-08-07 Thread gord chung



On 07/08/2015 3:49 AM, Chris Dent wrote:


Despite our conversation in the meeting yesterday[1] I still remain a
bit confused about the upgrade path from alarming-in-ceilometer to
alarming provided by aodh and the availability of the older code in
released liberty.

Much of my confusion can probably be resolved by knowing the answer to
this question:

If someone installs aodh on a machine that already has ceilometer on it
and turns off ceilometer-alarm-notifier and ceilometer-alarm-evaluator
(in favor of aodh-notifier and aodh-evaluator) will they be able to run
those aodh services against their existing ceilometer.conf files[2]?

What if they were, in ceilometer, using specific config for their
alarming database (alarm_connection). Can aodh see and use this
config option?

Or will they need to copy and modify the existing conf files to allow
them to carry on using their existing database config?

I know that I can go try some of this in a devstack, but that's not
really the point of the question. The point is: What are we expecting
existing deployments to do?

I had assumed that the reason for keeping alarming in ceilometer for
the liberty release was to allow a deployment to upgrade to Liberty
across the project suites without having to go through modifying
alarming config and managing an alarming migration in the same step.
That migration ought to be pretty minor (tweak a config here and
there) but unless the answer to my first question is yes it has some
significance.


following up, after speaking with Chris, a critical question was not 
just what happens to those who upgrade but what happens to those who 
choose NOT to upgrade to Aodh. to clarify, it is Ceilometer's intent to 
have Aodh as the source of alarming functionality going forward -- no 
new features have been added or will be added to the existing alarming 
code in Ceilometer. also, any new feature must be added to Aodh.


with that said, for those who choose not to upgrade and are content with 
the existing alarming code, the code will exist as-is for Liberty. after 
speaking with the Nova team: there was a deprecation period after the 
Cinder/Glance splits before the split-out code was fully removed from 
packaging/code. Ceilometer will follow the same path but will target a more 
aggressive deprecation period, and the code will be removed in the M* cycle.


the code removal is dependent on Aodh being gated on, released and 
packaged. it is also dependent on any upgrade requirements being documented.


the goals for a short deprecation are:
- to avoid a slow, complicated divergence in code that will lead to 
difficult maintenance

- to allow time for packagers to package the new Aodh service
- to give operators, tracking the latest and greatest, the option of 
whether to upgrade to Aodh or not.


i hope that clarifies our intentions. this is our first split so if 
there are any noticeable gaps in logic, please feel free to chime in.


cheers,

--
gord




Re: [openstack-dev] stable is hosed

2015-08-07 Thread Kyle Mestery
On Fri, Aug 7, 2015 at 4:09 PM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:



 On 8/7/2015 3:52 PM, Matt Riedemann wrote:

 Well it's a Friday afternoon so you know what that means, emails about
 the stable branches being all busted to pieces in the gate.

 Tracking in the usual place:

 https://etherpad.openstack.org/p/stable-tracker

 Since things are especially fun the last two days I figured it was time
 for a notification to the -dev list.

 Both are basically Juno issues.

 1. The large ops job is busted because of some uncapped dependencies in
 python-openstackclient 1.0.1.

 https://bugs.launchpad.net/openstack-gate/+bug/1482350

 The fun thing here is g-r is capping osc<=1.0.1 and there is already a
 1.0.2 version of osc, so we can't simply cap osc in a 1.0.2 and raise
 that in g-r for stable/juno (we didn't leave ourselves any room for bug
 fixes).

 We talked about an osc 1.0.1.1 but pbr>=0.11 won't allow that because it
 breaks semver.
 breaks semver.

 The normal dsvm jobs are OK because they install cinder and cinder
 installs the dependencies that satisfy everything so we don't hit the
 osc issue.  The large ops job doesn't use cinder so it doesn't install it.

 Options:

 a) Somehow use a 1.0.1.post1 version for osc.  Would require input from
 lifeless.

 b) Install cinder in the large ops job on stable/juno.

 c) Disable the large ops job for stable/juno.


 2. grenade on kilo blows up because python-neutronclient 2.3.12 caps
 oslo.serialization at <=1.2.0, keystonemiddleware 1.5.2 is getting
 pulled in which pulls in oslo.serialization 1.4.0 and things fall apart.

 https://bugs.launchpad.net/python-neutronclient/+bug/1482758

 I'm having a hard time unwinding this one since it's a grenade job.  I
 know the failures line up with the neutronclient 2.3.12 release which
 caps requirements on stable/juno:

 https://review.openstack.org/#/c/204654/.


 OK, the problem is that neutronclient doesn't get updated on the new kilo
 side of grenade past 2.3.12 because it satisfies the requirement for kilo:


 https://github.com/openstack/requirements/blob/stable/kilo/global-requirements.txt#L132

 python-neutronclient>=2.3.11,<2.5.0

 But since neutronclient 2.3.12 caps things for juno, we can't use it on
 kilo due to the conflict and then kaboom.
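
(Editorial aside: the range arithmetic above can be sketched with a toy
checker -- dotted-tuple comparison only, not pip's full PEP 440 ordering.)

```python
def in_range(version, floor, ceiling):
    """True if floor <= version < ceiling, comparing dotted versions numerically."""
    t = lambda v: tuple(int(p) for p in v.split("."))
    return t(floor) <= t(version) < t(ceiling)

# Kilo's current range (>=2.3.11,<2.5.0) still admits the Juno-capped 2.3.12...
print(in_range("2.3.12", "2.3.11", "2.5.0"))
# ...whereas raising the floor to 2.4.0 would exclude it.
print(in_range("2.3.12", "2.4.0", "2.5.0"))
```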


So, 2.3.12 was explicitly for Juno, and not for Kilo. In fact, the
existing 2.3.11 client for Juno was failing due to some other oslo library
(I'd have to dig it out). It seems we want Kilo requirements to be this:

python-neutronclient>=2.4.0,<2.5.0

I won't be able to submit a patch which does this for a few more hours, if
someone beats me to it, please copy me on the patch and/or reply on this
thread.

Thanks for digging this one out Matt!

Kyle



 Need some help here.


 --

 Thanks,

 Matt Riedemann


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] change of day for API subteam meeting?

2015-08-07 Thread Anne Gentle
On Fri, Aug 7, 2015 at 11:48 AM, Sean Dague s...@dague.net wrote:

 Friday's have been kind of a rough day for the Nova API subteam. It's
 already basically the weekend for folks in AP, and the weekend is right
 around the corner for everyone else.

 I'd like to suggest we shift the meeting to Monday or Tuesday in the
 same timeslot (currently 12:00 UTC). Either works for me. Having this
 earlier in the week I also hope keeps the attention on the items we need
 to be looking at over the course of the week.

 If current regular attendees could speak up about day preference, please
 do. We'll make a change if this is going to work for folks.


I'd like to see a shift as well to earlier in the week, and keeping the
12:00 UTC is fine.

Anne


 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] installation of requirements not possible because of wrong pip version

2015-08-07 Thread Christian Berendt
According to requirements.txt we require pip>=6.0. Trying to install the 
requirements for nova with pip 6.1.1 is not possible at the moment 
because of the following issue:


$ virtualenv .venv
$ source .venv/bin/activate
$ pip install -r requirements.txt
You are using pip version 6.1.1, however version 7.1.0 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Double requirement given: Routes!=2.0,>=1.12.3 (from -r requirements.txt 
(line 14)) (already in Routes!=2.0,!=2.1,>=1.12.3 (from -r 
requirements.txt (line 13)), name='Routes')


It looks like pip 6.1.1 cannot handle the following 2 lines in 
requirements.txt:


Routes>=1.12.3,!=2.0,!=2.1;python_version=='2.7'
Routes>=1.12.3,!=2.0;python_version!='2.7'

After upgrading pip to the latest available version (7.1.0) with pip 
install --upgrade pip, everything works as expected.
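
(Editorial aside: the two lines rely on environment markers, which pip only
learned to handle cleanly in later releases -- once the marker is evaluated,
at most one Routes line is active per interpreter, so there is no "double
requirement" left. Below is a stdlib-only sketch of that selection; the toy
evaluator handles just the python_version form used here, not pip's real
marker grammar.)

```python
import re

def marker_matches(marker, python_version):
    """Toy evaluator for markers of the form python_version==/'!='<version>."""
    m = re.match(r"python_version(==|!=)'([^']+)'", marker.replace(" ", ""))
    op, value = m.groups()
    return (python_version == value) if op == "==" else (python_version != value)

def active(lines, python_version):
    """Keep only the requirement halves whose marker matches this interpreter."""
    out = []
    for line in lines:
        req, _, marker = line.partition(";")
        if marker_matches(marker, python_version):
            out.append(req)
    return out

lines = [
    "Routes>=1.12.3,!=2.0,!=2.1;python_version=='2.7'",
    "Routes>=1.12.3,!=2.0;python_version!='2.7'",
]

print(active(lines, "2.7"))  # exactly one Routes line survives here
print(active(lines, "3.4"))  # and the other one here
```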


Does this mean that we have to require at least pip>=7.1.0 in the global 
requirements?


Christian.

--
Christian Berendt
Cloud Solution Architect
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] installation of requirements not possible because of wrong pip version

2015-08-07 Thread Christian Berendt

On 08/07/2015 10:52 PM, Robert Collins wrote:

I don't know why Nova has a requirement expressed on pip, since
requirements.txt is evaluated by pip it's too late. Does Nova actually
consume pip itself?


pip is not listed in the requirements.txt file of nova. pip is listed in 
the global requirements:


https://github.com/openstack/requirements/blob/master/global-requirements.txt#L112

My understanding is that it is possible to use pip>=6.0. Is that wrong?

Is there another place where we documented which version of pip can be 
used with the requirement.txt files (based on global-requirements.txt)?


Christian.

--
Christian Berendt
Cloud Solution Architect
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] installation of requirements not possible because of wrong pip version

2015-08-07 Thread Robert Collins
Pip is listed there because we used to use pip from within pbr. We could
raise the pip version in global requirements safely I think.
On 8 Aug 2015 09:58, Christian Berendt christ...@berendt.io wrote:

 On 08/07/2015 10:52 PM, Robert Collins wrote:

 I don't know why Nova has a requirement expressed on pip, since
 requirements.txt is evaluated by pip it's too late. Does Nova actually
 consume pip itself?


 pip is not listed in the requirements.txt file of nova. pip is listed in
 the global requirements:


 https://github.com/openstack/requirements/blob/master/global-requirements.txt#L112

 My understanding is that it is possible to use pip>=6.0. Is that wrong?

 Is there another place where we documented which version of pip can be
 used with the requirement.txt files (based on global-requirements.txt)?

 Christian.

 --
 Christian Berendt
 Cloud Solution Architect
 Mail: bere...@b1-systems.de

 B1 Systems GmbH
 Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
 GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nova API improvement plan

2015-08-07 Thread Anne Gentle
On Tue, Aug 4, 2015 at 9:40 AM, Anne Gentle annegen...@justwriteclick.com
wrote:



 On Tue, Aug 4, 2015 at 7:48 AM, Sean Dague s...@dague.net wrote:

 On the plane home from the Nova midcycle meetup I spent a chunk of time
 reading our API docs that are now in tree:

 https://github.com/openstack/nova/blob/master/doc/source/v2/2.0_server_concepts.rst
 and it got me concerned that documentation improvements don't seem to be
 the top priority on the API infrastructure side.

 The API concept guide is a really useful (though currently horribly out
 of date) document for someone trying to use the Nova API without reading
 all the Nova code. I feel like without that kind of big picture view,
 the Nova API is quite hard to sort out.

 I'd like to get updating that much higher up the priority list for the
 API subteam. I realize there is this large json home patch series out
 there to add new hotness to the API, but I feel like json home is
 premature until we've got a concept picture of the Nova API in
 documentation.

 How do we get this ball rolling? Who's up for helping? How do we get the
 concept guide back onto developer.openstack.org once it's not horribly
 out of date?

 I don't feel like I've got a plan yet in my head, but I'd really like to
 get one developed over the next couple of weeks so that we can actually
 make some real progress here. So who is up for helping, and what format
 this plan will take, are the key bits.



 I'm up for helping, and it's related to our work on
 http://specs.openstack.org/openstack/docs-specs/specs/liberty/api-site.html,
 because ideally we'll build the API dev guides on developer.openstack.org
 .

 Can you use the plan we've got now and help out? What would you adjust,
 the narrative portion publishing?


Hi all,

I've adjusted the plan a bit here: https://review.openstack.org/#/c/209669/

Basically we want to ensure we can publish great conceptual guides to
ensure users understand the capabilities of our APIs. Plus, we want to
support the microversion shifts and provide more writing enablement! So,
feel free to review the revised plan, and please do write all you like
about your API while we work hard on the reference information automation
in parallel.

Thanks,
Anne




 Thanks,
 Anne



 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Anne Gentle
 Rackspace
 Principal Engineer
 www.justwriteclick.com




-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [oslo] troubling passing of unit tests on broken code

2015-08-07 Thread Mike Bayer

Just a heads up that this recently merged code is wrong:

https://review.openstack.org/#/c/192760/14/nova/tests/unit/db/test_migrations.py,cm

and here it is failing tests on my local env, as it does on my CI, as 
would be expected, there's a lot more if I keep it running:


http://paste.openstack.org/show/412236/

However, utterly weirdly, all those tests *pass* with the same versions 
of everything in the gate:


http://paste.openstack.org/show/412236/


I have no idea why this is.  This might be on the oslo.db side within 
the test_migrations logic, not really sure. If someone feels like 
digging in, that would be great.


The failure occurs with both Alembic 0.7.7 and the as-yet-unreleased 
Alembic 0.8.  I have a feeling that releasing Alembic 0.8 may or may not 
bump this failure to be more widespread, just because of its apparent 
heisenbuggy nature, and I'm really hoping to release 0.8 next week.  It 
was supposed to be this week but I got sidetracked.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][neutron] How to give nested VM access to outside network?

2015-08-07 Thread Rich Megginson

On 08/04/2015 01:44 AM, Andreas Scheuring wrote:

Can you try answer 1 of [1]?

I've never tried it, but I heard from folks who configured it like that.
With this masquerading, your vm should be able to reach your 192.x
network. But as it's NAT it won't work the other way round (e.g.
establishing a connection from outside into your vm)

The proper way would be to configure your provider network to match the
192.x subnet. In addition you would need to plug your 192.x interface
(eth0?) into the ovs br-ex. But be careful! This step breaks
connectivity via this interface. So be sure that you're logged in via
another interface or via some vnc session.


Thanks.  This works:
1) Add this to local.conf before running stack.sh:

[[local|localrc]]
ADMIN_PASSWORD=secret
ENABLED_SERVICES=g-api,g-reg,key,n-api,n-cpu,n-sch,n-cond,n-crt,n-novnc,mysql,rabbit,dstat,quantum,q-svc,q-agt,q-dhcp,q-l3,q-meta
SERVICE_HOST=127.0.0.1
NETWORK_GATEWAY=10.0.0.1
PUBLIC_NETWORK_GATEWAY=172.24.4.1
... other config ...

[[post-config|$Q_DHCP_CONF_FILE]]
[DEFAULT]
dnsmasq_dns_servers = 192.168.122.1

NOTE: If you are adding the above from a script as e.g. a here doc, 
don't forget to escape the $ e.g. [[post-config|\$Q_DHCP_CONF_FILE]]


2) Run this command after running stack.sh and before creating a vm:
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Now, the nested VM can ping external IP addresses, and name server 
lookups work.




If you have further questions regarding provider networks, feel free to
ask again!



[1]
https://ask.openstack.org/en/question/44266/connect-vm-in-devstack-to-external-network/


On Mo, 2015-08-03 at 22:07 -0600, Rich Megginson wrote:

I'm running devstack in a VM (Fedora 21 host, EL 7.1.x VM) with a static
IP address (because dhcp was not working):

  cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<EOF
DEVICE=eth0
BOOTPROTO=static
DHCPCLASS=
HWADDR=$VM_MAC
IPADDR=192.168.122.5
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
ONBOOT=yes
NM_CONTROLLED=no
TYPE=Ethernet
USERCTL=yes
PEERDNS=yes
DNS1=192.168.122.1
IPV6INIT=no
EOF

with Neutron networking enabled and Nova networking disabled:

[[local|localrc]]
IP_VERSION=4
ENABLED_SERVICES=g-api,g-reg,key,n-api,n-cpu,n-sch,n-cond,n-crt,n-novnc,mysql,rabbit,dstat,quantum,q-svc,q-agt,q-dhcp,q-l3,q-meta
SERVICE_HOST=127.0.0.1
NETWORK_GATEWAY=10.0.0.1
PUBLIC_NETWORK_GATEWAY=172.24.4.1
...

I've followed this some, but I don't want to use the provider network:
http://docs.openstack.org/developer/devstack/guides/neutron.html

I've hacked the floating_ips exercise to use neutron networking commands:

http://ur1.ca/ncjm6

I can ssh into the nested VM, I can assign it a floating IP.

However, it cannot see the outside world.  From it, I can ping the
10.0.0.1 network and the 172.24.4.1 network, and even 192.168.122.5, but
not 192.168.122.1 or anything outside of the VM.

route looks like this: http://ur1.ca/ncjog

ip addr looks like this: http://ur1.ca/ncjop

Here is the entire output of stack.sh:
https://rmeggins.fedorapeople.org/stack.out

Here is the entire output of the exercise:
https://rmeggins.fedorapeople.org/exercise.out


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-08-07 Thread Andrew Beekhof

 On 5 Aug 2015, at 1:34 am, Joshua Harlow harlo...@outlook.com wrote:
 
 Philipp Marek wrote:
 If we end up using a DLM then we have to detect when the connection to
 the DLM is lost on a node and stop all ongoing operations to prevent
 data corruption.
 
 It may not be trivial to do, but we will have to do it in any solution
 we use, even on my last proposal that only uses the DB in Volume Manager
 we would still need to stop all operations if we lose connection to the
 DB.
 
 Well, is it already decided that Pacemaker would be chosen to provide HA in
 OpenStack? There's been a talk "Pacemaker: the PID 1 of OpenStack", IIRC.
 
 I know that Pacemaker's been pushed aside in an earlier ML post, but IMO
 there's already *so much* been done for HA in Pacemaker that Openstack
 should just use it.
 
 All HA nodes need to participate in a Pacemaker cluster - and if one node
 loses connection, all services will get stopped automatically (by
 Pacemaker) - or the node gets fenced.
 
 
 No need to invent some sloppy scripts to do exactly the tasks (badly!) that
 the Linux HA Stack has been providing for quite a few years.
 
 
 Yes, Pacemaker needs learning - but not more than any other involved
 project, and there are already quite a few here, which have to be known to
 any operator or developer already.
 
 
 (BTW, LINBIT sells training for the Linux HA Cluster Stack - and yes,
  I work for them ;)
 
 So just a piece of information, but yahoo (the company I work for, with vms 
 in the tens of thousands, baremetal in the much more than that...) hasn't 
 used pacemaker, and in all honesty this is the first project (openstack) that 
 I have heard that needs such a solution. I feel that we really should be 
 building our services better so that they can be A-A vs having to depend on 
 another piece of software to get around our 'sloppiness' (for lack of a 
 better word).

HA is a deceptively hard problem.
There is really no need for every project to attempt to solve it on their own.
Having everyone consuming/calculating a different membership list is a very 
good way to go insane.

Aside from the usual bugs, the HA space lends itself to making simplifying 
assumptions early on, only to trap you with them down the road.
It's even worse if you're trying to bolt it on after the fact...

Perhaps try to think of Pacemaker as a distributed finite state machine instead 
of a cluster manager.
That is part of the value we bring to projects like galera and rabbitmq.

Sure they are A-A, and once they’re up they can survive many failures, but 
bringing them up can be non-trivial.
We also provide the additional context (eg. quorum and fencing) that allow more 
kinds of failures to be safely recovered from.

Something to think about perhaps.

— Andrew

 
 Nothing against pacemaker personally... IMHO it just doesn't feel like we are 
 doing this right if we need such a product in the first place.
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev][third-party-ci] Running custom code before tests

2015-08-07 Thread Eduard Matei
Hi,

I managed to get the jobs triggered. I read
https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers but I can't
figure out where to put the code for pre_test_hook so I can set up my
backend.

Thanks,

-- 

*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com
 | eduard.ma...@cloudfounders.com



*CloudFounders, The Private Cloud Software Company*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up, Doc? 7 August 2015

2015-08-07 Thread Lana Brindley
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi everyone,

Welcome to August! Today we can finally announce that the Cloud Admin
Guide is now completely converted to RST! I have been doing a fair bit
of 'behind the scenes' work this week, focusing mainly on Training
Guides and our docs licensing. We also welcome another new core team
member this week, and for those of you in the APAC region, get ready for
the Docs Swarm in Brisbane, where we'll be working on restructuring the
Architecture Design Guide.

== Progress towards Liberty ==

68 days to go!

* RST conversion:
** Install Guide: Conversion is nearly done, sign up here:
https://wiki.openstack.org/wiki/Documentation/Migrate#Installation_Guide_Migration
** Cloud Admin Guide: is complete! The new version will be available on
docs.openstack.org very soon.
** HA Guide: is also nearly done. Get in touch with Meg or Matt:
https://wiki.openstack.org/wiki/Documentation/HA_Guide_Update
** Security Guide: Conversion is now underway, sign up here:
https://etherpad.openstack.org/p/sec-guide-rst

* User Guides information architecture overhaul
** Waiting on the RST conversion of the Cloud Admin Guide to be complete

* Greater focus on helping out devs with docs in their repo
** Work has stalled on the Ironic docs, we need to pick this up again.
Contact me if you want to know more, or are willing to help out.

* Improve how we communicate with and support our corporate contributors
** I have been brainstorming ideas with Foundation, watch this space!

* Improve communication with Docs Liaisons
** I'm very pleased to see liaisons getting more involved in our bugs
and reviews. Keep up the good work!

* Clearing out old bugs
** Sadly, no action on the spotlight bugs this week. Perhaps we're all
worn out from the RST conversions? I'll keep the current three bugs for
this week, to give everyone a little more time.

== RST Migration ==

With the Cloud Admin Guide complete, we are now working on the Install
Guide, HA Guide, and the Security Guide. If you would like to assist,
please get in touch with the appropriate speciality team:

* Install Guide:
** Contact Karin Levenstein karin.levenst...@rackspace.com
** Sign up here:
https://wiki.openstack.org/wiki/Documentation/Migrate#Installation_Guide_Migration

* HA Guide
** Contact Meg McRoberts dreidellh...@yahoo.com or Matt Griffin
m...@mattgriffin.com
** Blueprint:
https://blueprints.launchpad.net/openstack-manuals/+spec/improve-ha-guide

* Security Guide
** Contact Nathaniel Dillon nathaniel.dil...@hp.com
** Info: https://etherpad.openstack.org/p/sec-guide-rst

For books that are now being converted, don't forget that any change you
make to the XML must also be made to the RST version until conversion is
complete. Our lovely team of cores will be keeping an eye out to make
sure loose changes to XML don't pass the gate, but try to help them out
by pointing out both patches in your reviews.

== Training Guides ==

I've been working with the Training Guides group and the docs core team
to determine the best way to move forward with the Training Guides
project. At this stage, we're planning on breaking the project up into a
few distinct parts, and bringing Training Guides back into the
documentation group as a speciality team. If you have any opinions or
ideas on this, feel free to contact me so I can make sure we're
considering all the options.

== APAC Docs Swarm ==

We're less than a week away from the APAC doc swarm! This time we'll be
working on the Architecture Design Guide. It's to be held at the Red Hat
office in Brisbane, on 13-14 August. Check out
http://openstack-swarm.rhcloud.com/ for all the info and to RSVP.

== Core Team Changes ==

This month, we welcome KATO Tomoyuki on to our docs core team. Thanks
for all your hard work Tomoyuki-san, and welcome to the team!

== Doc team meeting ==

The APAC meeting was held this week. Read the minutes here:
https://wiki.openstack.org/wiki/Documentation/MeetingLogs#2015-08-05

The next meetings are:
US: Wednesday 12 August, 14:00:00 UTC
APAC: Wednesday 19 August, 00:30:00 UTC

Please go ahead and add any agenda items to the meeting page here:
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

== Spotlight bugs for this week ==

Let's give these three a little more oxygen:

https://bugs.launchpad.net/openstack-manuals/+bug/1257018 VPNaaS isn't
documented in cloud admin

https://bugs.launchpad.net/openstack-manuals/+bug/1257656 VMware: add
support for VM diagnostics

https://bugs.launchpad.net/openstack-manuals/+bug/1261969 Document nova
server package

- --

Remember, if you have content you would like to add to this newsletter,
or you would like to be added to the distribution list, please email me
directly at openst...@lanabrindley.com, or visit:
https://wiki.openstack.org/w/index.php?title=Documentation/WhatsUpDoc

Keep on doc'ing!

Lana

- -- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com

Re: [openstack-dev] [Ceilometer][AODH] Timeout Event Alarms

2015-08-07 Thread Ryota Mibu
Hi,


Sorry for my late response and my absence from the weekly meetings...

I'm not sure whether I captured your idea correctly, but I prefer the second 
approach now.

I agree with the point Igor and liusheng mentioned: the second approach enables 
end users to have a configurable expire time.

From another point of view, the first approach may affect pipeline performance, 
since it has to keep event sequence state or query the DB for state each 
time an event is received. This is just my concern, but I think the event 
pipeline should stay as simple as possible, limited to the features common to 
event data storage, event alarming and other receivers like an audit system.


Thanks,
Ryota

 -Original Message-
 From: liusheng [mailto:liusheng1...@126.com]
 Sent: Wednesday, August 05, 2015 1:12 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Ceilometer][AODH] Timeout Event Alarms
 
 Hi,
 
 Maybe the event transformer is needed in some use cases to generate new 
 events or do transformations like the sample handling. But for this 
 timeout event alarming requirement, the 'timeout' of alarms will vary, 
 so it's not a good idea to change event_pipeline.yaml to generate new 
 events based on event timeouts whenever we need an event-timeout alarm. 
 Also, giving users access to the event pipeline definitions is 
 inadvisable. I personally think it'd be better to implement the 
 second option, based on Ryota's proposal.
 
 Best Regards
 Liusheng
 
 
 在 2015/8/5 3:36, gord chung 写道:
 
 
   hi Igor,
 
   i would suggest you go with second option as i believe your 
 implementation will overlap and reuse some of the
 functionality Ryota would code for his alarm spec [1]. also, since Aodh is 
 working on an independent release cycle, it'll
 give you some more time as i don't think we'd be able to get this into 
 Liberty if we went the pipeline route.
 
   [1] 
 http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/event-alarm-evaluator.html
 
 
   On 04/08/2015 10:00 AM, Igor Degtiarov wrote:
 
 
   Hi folks,
 
 
    At our meetup we agreed to add timeout event alarms 
  [1] (Event-Based Alarming part).
    In the ToDo task "Choose the optimal way for timeout alerting 
  implementation":
 
    Now we have two propositions for the implementation:
 
   - first is to add a timeout param in the event pipeline (transformer 
  part) [2]
   -- the weakness of this approach is that we cannot allow users to 
  change config files, so only administrators
  will be able to set rules for timeout event alarms, and that is not what we 
  are expecting from alarms.
 
   - second is additional optional parameters in the event alarm 
  description, like a sequence of required events
  and a timeout threshold. The event alarm evaluator looks through incoming 
  events and evaluates the alarm if even one event from the required
  sequence isn't received within the set timeout. [3]
 
 
    It seems that the second approach is better: it doesn't have 
  restrictions for the end user.
 
    Hope for your help in choosing the optimal way for the implementation.
    (In the specs review there is silence now.)
 
 
   [1] 
 https://wiki.openstack.org/wiki/Meetings/Ceilometer/Liberty_Virtual_Mid-Cycle
   [2] https://review.openstack.org/#/c/162167
   [3] https://review.openstack.org/#/c/199005
 
 
   Igor Degtiarov
   Software Engineer
   Mirantis Inc.
   www.mirantis.com
 
 
 
   
 __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
   --
   --
   gord
 
 
 
   
 __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Nova][Cinder] glance_store and glance

2015-08-07 Thread Kuvaja, Erno
Hi,

Flagged Nova and Cinder into this discussion as they were the first intended 
adopters iirc.

I don't have a strong religious view on this topic. I wasn't a huge fan of the 
idea of separating it in the first place and I'm not a huge fan of keeping it 
separate either.

After a couple of cycles we have so far witnessed only the downsides of 
glance_store being on its own. We break even our own gate with our own lib 
releases, we have one extra bug tracker to look after, and, while not huge, it 
also increases the load on the release and stable teams.

In my understanding the interest within Nova in consuming glance_store directly 
has pretty much died off since we separated it; please do correct me if I'm 
wrong.
I haven't heard anyone express interest in consuming glance_store directly 
within Cinder either.
So far I have failed to see a use case for glance_store alone apart from the 
Glance API server, and the originally intended consumers have either expressed 
no interest whatsoever or directly said they are not interested.

Do we have any reason whatsoever to keep doing the extra work of maintaining 
these two components separately? I'm more than happy to do so, or at least to 
extend this discussion for a cycle, if there are projects out there planning to 
utilize it. I don't want to be in the middle of separating it again next cycle 
because someone wanted to consume it after we decided to kill it, but I'm not 
keen to take on the overhead without reason either.

- Erno

 -Original Message-
 From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
 Sent: Friday, August 07, 2015 6:21 AM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Glance] glance_store and glance
 
 Hi,
 
 During the mid-cycle we had another proposal that wanted to put back the
 glance_store library back into the Glance repo and not leave it is as a
 separate repo/project.
 
 The questions outstanding are: what are the use cases that want it as a
 separate library?
 
 The original use cases that supported a separate lib have not had much
 progress or adoption yet. There have been complaints about overhead of
 maintaining it as a separate lib and version tracking without much gain.
 The proposals for the re-factor of the library are also a worrisome topic in
 terms of the stability of the codebase.
 
 The original use cases from my memory are:
 1. Other projects consuming glance_store -- this has become less likely to be
 useful.
 2. another upload path for users for the convenience of tasks -- not
 preferable as we don't want to expose this library to users.
 3. ease of addition of newer drivers for the developers -- drivers are only
 being removed since.
 4. cleaner api / more methods that support backend store capabilities - a
 separate library is not necessarily needed, smoother re-factor is possible
 within Glance codebase.
 
 Also, the authN/Z complexities and ACL restrictions on the back-end stores
 can be potential security loopholes with the library and Glance evolution
 separately.
 
 In order to move forward smoothly on this topic in Liberty, I hereby request
 input from all concerned developer parties. The decision to keep this as a
 separate library will remain in effect if we do not come to resolution within 
 2
 weeks from now. However, if there aren't any significant use cases we may
 consider a port back of the same.
 
 Please find some corresponding discussion from the latest Glance weekly
 meeting:
 http://eavesdrop.openstack.org/meetings/glance/2015/glance.2015-08-06-
 14.03.log.html#l-21
 
 --
 
 Thanks,
 Nikhil
 
 
 __
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-
 requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Heat] OS::Neutron::Port fails to set security group by name, no way to retrieve group ID from Neutron::SecurityGroup

2015-08-07 Thread TIANTIAN
1) OS::Neutron::Port does not seem to recognize security groups by name

-- 
https://github.com/openstack/heat/blob/stable/kilo/heat/engine/resources/openstack/neutron/port.py#L303

https://github.com/openstack/heat/blob/stable/kilo/heat/engine/clients/os/neutron.py#L111
security group names are recognized there.
2) OS::Neutron::SecurityGroup has no attributes so it can not return a security 
group ID  
-- 
https://github.com/openstack/heat/blob/stable/kilo/heat/engine/resources/openstack/neutron/neutron.py#L133
the resource ID (the security group ID) can be obtained with the
'get_resource' function.
So what are you trying to do, and what problems are you seeing?
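To illustrate the second point, here is a minimal HOT fragment (resource
names and the 'private' network are illustrative, not taken from Jason's
templates) that wires a port to a security group created in the same stack
via get_resource:

```yaml
resources:
  web_secgroup:
    type: OS::Neutron::SecurityGroup
    properties:
      description: Allow SSH
      rules:
        - protocol: tcp
          port_range_min: 22
          port_range_max: 22

  web_port:
    type: OS::Neutron::Port
    properties:
      network: private    # assumed network name
      security_groups:
        - { get_resource: web_secgroup }
```

Here get_resource yields the security group's physical resource ID, so no
attribute on OS::Neutron::SecurityGroup is needed to pass the ID around.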


At 2015-08-07 11:10:37, jason witkowski jwit...@gmail.com wrote:

Hey All,


I am having issues on the Kilo branch creating an auto-scaling template that 
builds a security group and then adds instances to it.  I have tried every 
method I could think of with no success.  My issues are as follows:


1) OS::Neutron::Port does not seem to recognize security groups by name  

2) OS::Neutron::SecurityGroup has no attributes so it can not return a security 
group ID


These issues combined leave me struggling to automate the building of a security 
group and its instances in one Heat stack.  I have read every example 
online, and they all seem to use either the name of the security group or the 
get_resource function to return the security group object itself.  Neither of 
these works for me.


Here are my heat template files:


autoscaling.yaml - http://paste.openstack.org/show/412143/

redirector.yaml - http://paste.openstack.org/show/412144/

env.yaml - http://paste.openstack.org/show/412145/



Heat Client: 0.4.1

Heat-Manage: 2015.1.1


Any help would be greatly appreciated.


Best Regards,


Jason


Re: [openstack-dev] [neutron][dvr] Removing fip namespace when restarting L3 agent.

2015-08-07 Thread Korzeniewski, Artur
Bug submitted:
https://bugs.launchpad.net/neutron/+bug/1482521

Thanks,
Artur

From: Oleg Bondarev [mailto:obonda...@mirantis.com]
Sent: Thursday, August 6, 2015 5:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][dvr] Removing fip namespace when 
restarting L3 agent.



On Thu, Aug 6, 2015 at 5:23 PM, Korzeniewski, Artur 
artur.korzeniew...@intel.commailto:artur.korzeniew...@intel.com wrote:
Thanks Kevin for that hint.
But it does not resolve the connectivity problem; it just prevents the 
namespace from being removed when deletion is requested.
The real question is, why do we invoke the 
/neutron/neutron/agent/l3/dvr_fip_ns.py FipNamespace.delete() method in the 
first place?

I’ve captured the traceback for this situation:
2015-08-06 06:35:28.469 DEBUG neutron.agent.linux.utils [-] Unable to access 
/opt/openstack/data/neutron/external/pids/8223e12e-837b-49d4-9793-63603fccbc9f.pid
 from (pid=70216) get_value_from_file 
/opt/openstack/neutron/neutron/agent/linux/utils.py:222
2015-08-06 06:35:28.469 DEBUG neutron.agent.linux.utils [-] Unable to access 
/opt/openstack/data/neutron/external/pids/8223e12e-837b-49d4-9793-63603fccbc9f.pid
 from (pid=70216) get_value_from_file 
/opt/openstack/neutron/neutron/agent/linux/utils.py:222
2015-08-06 06:35:28.469 DEBUG neutron.agent.linux.external_process [-] No 
process started for 8223e12e-837b-49d4-9793-63603fccbc9f from (pid=70216) 
disable /opt/openstack/neutron/neutron/agent/linux/external_process.py:113
Traceback (most recent call last):
 File /usr/local/lib/python2.7/dist-packages/eventlet/queue.py, line 117, in 
switch
self.greenlet.switch(value)
  File /usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py, line 
214, in main
result = function(*args, **kwargs)
  File /usr/local/lib/python2.7/dist-packages/oslo_service/service.py, line 
612, in run_service
service.start()
  File /opt/openstack/neutron/neutron/service.py, line 233, in start
self.manager.after_start()
  File /opt/openstack/neutron/neutron/agent/l3/agent.py, line 641, in 
after_start
self.periodic_sync_routers_task(self.context)
  File /opt/openstack/neutron/neutron/agent/l3/agent.py, line 519, in 
periodic_sync_routers_task
self.fetch_and_sync_all_routers(context, ns_manager)
  File /opt/openstack/neutron/neutron/agent/l3/namespace_manager.py, line 91, 
in __exit__
self._cleanup(_ns_prefix, ns_id)
  File /opt/openstack/neutron/neutron/agent/l3/namespace_manager.py, line 
140, in _cleanup
ns.delete()
  File /opt/openstack/neutron/neutron/agent/l3/dvr_fip_ns.py, line 147, in 
delete
raise TypeError(ss)
TypeError: ss

It seems that the fip namespace is not processed at startup of the L3 agent, 
and the cleanup removes the namespace…
It also removes the interface connecting to the local DVR router, so the VM 
has no internet access via its floating IP:
Command: ['ip', 'netns', 'exec', 'fip-8223e12e-837b-49d4-9793-63603fccbc9f', 
'ip', 'link', 'del', u'fpr-fe517b4b-d']

If the interface inside the fip namespace is not deleted, the VM has full 
internet access without any downtime.

Can we consider it a bug? I guess it is something in the startup/full-sync 
logic, since the log says:
/opt/openstack/data/neutron/external/pids/8223e12e-837b-49d4-9793-63603fccbc9f.pid

I think yes, we can consider it a bug. Can you please file one? I can take it 
and probably fix it.


And after finishing the sync loop, the fip namespace is deleted…

Regards,
Artur

From: Kevin Benton [mailto:blak...@gmail.commailto:blak...@gmail.com]
Sent: Thursday, August 6, 2015 7:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][dvr] Removing fip namespace when 
restarting L3 agent.

Can you try setting the following to False:
https://github.com/openstack/neutron/blob/dc0944f2d4e347922054bba679ba7f5d1ae6ffe2/etc/l3_agent.ini#L97

On Wed, Aug 5, 2015 at 3:36 PM, Korzeniewski, Artur 
artur.korzeniew...@intel.commailto:artur.korzeniew...@intel.com wrote:
Hi all,
During testing of Neutron upgrades, I have found that restarting the L3 agent 
in DVR mode causes VM network downtime for a configured floating IP.
The outage is visible when pinging the VM from the external network: 2-3 pings 
are lost.
The responsible place in code is:
DVR: destroy fip ns: fip-8223e12e-837b-49d4-9793-63603fccbc9f from (pid=156888) 
delete /opt/openstack/neutron/neutron/agent/l3/dvr_fip_ns.py:164

Can someone explain why the fip namespace is deleted? Can we work out a way 
to avoid any downtime of VM access?

Artur Korzeniewski

Intel Technology Poland sp. z o.o.
KRS 101882
ul. Slowackiego 173, 80-298 Gdansk



Re: [openstack-dev] [ceilometer] [aodh] upgrade path

2015-08-07 Thread Julien Danjou
On Fri, Aug 07 2015, Chris Dent wrote:

 If someone installs aodh on a machine that already has ceilometer on it
 and turns off ceilometer-alarm-notifier and ceilometer-alarm-evaluator
 (in favor of aodh-notifier and aodh-evaluator) will they be able to run
 those aodh services against their existing ceilometer.conf files[2]?

Yes, because none of the options have been removed, and those that have
been changed have deprecated aliases, e.g. deprecated_group=foobar.
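The deprecated-alias fallback can be sketched with a plain configparser
lookup (a stand-in for oslo.config's deprecated_group machinery, not
oslo.config itself; the 'aodh_alarm' group name is made up for the example):

```python
import configparser

def lookup(cfg, option, group, deprecated_group):
    # Mimic oslo.config's deprecated_group behaviour: try the new
    # group first, then fall back to the deprecated one.
    for sect in (group, deprecated_group):
        if cfg.has_section(sect) and cfg.has_option(sect, option):
            return cfg.get(sect, option)
    return None

cfg = configparser.ConfigParser()
# An old ceilometer.conf still using the deprecated [alarm] group.
cfg.read_string("""
[alarm]
evaluation_interval = 60
""")

# A consumer asking via the (hypothetical) new group still gets the value.
print(lookup(cfg, 'evaluation_interval', 'aodh_alarm', 'alarm'))  # -> 60
```

This is why an existing ceilometer.conf keeps working: the renamed options
resolve through their old group/name until deployers migrate.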

 What if they were, in ceilometer, using specific config for their
 alarming database (alarm_connection). Can aodh see and use this
 config option?

That's the one option where maybe I cleaned up a little bit too much, as
I think I removed alarm_connection from Aodh. I'll cook up a patch to
fix that.

 Or will they need to copy and modify the existing conf files to allow
 them to carry on using their existing database config?

Well, ultimately, as soon as they start using aodh they could copy
ceilometer.conf, remove what's ceilometer-only, and voilà. That
can be achieved by comparing with the default aodh.conf, I guess.

We could test that upgrade path using Grenade maybe?

I hope I made things clearer!

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info




Re: [openstack-dev] [Heat] OS::Neutron::Port fails to set security group by name, no way to retrieve group ID from Neutron::SecurityGroup

2015-08-07 Thread Kairat Kushaev
Hello Jason,
Agree with TianTian. It would be good if you provided more details about the
error you are seeing.
Additionally, it would be best to use the Heat IRC channel (#heat) or
ask.openstack.org for this kind of question.

Best regards,
Kairat Kushaev
Software Engineer, Mirantis

On Fri, Aug 7, 2015 at 9:43 AM, TIANTIAN tiantian...@163.com wrote:

 1) OS::Neutron::Port does not seem to recognize security groups by name
 --
 https://github.com/openstack/heat/blob/stable/kilo/heat/engine/resources/openstack/neutron/port.py#L303

 https://github.com/openstack/heat/blob/stable/kilo/heat/engine/clients/os/neutron.py#L111
 security group names are recognized there.
 2) OS::Neutron::SecurityGroup has no attributes so it can not return a
 security group ID
 --
 https://github.com/openstack/heat/blob/stable/kilo/heat/engine/resources/openstack/neutron/neutron.py#L133
 the resource ID (the security group ID) can be obtained with the
 'get_resource' function.
 So what are you trying to do, and what problems are you seeing?


 At 2015-08-07 11:10:37, jason witkowski jwit...@gmail.com wrote:

 Hey All,

 I am having issues on the Kilo branch creating an auto-scaling template
 that builds a security group and then adds instances to it.  I have tried
 every method I could think of with no success.  My issues are as
 follows:

 1) OS::Neutron::Port does not seem to recognize security groups by name
 2) OS::Neutron::SecurityGroup has no attributes so it can not return a
 security group ID

 These issues combined leave me struggling to automate the building of a
 security group and its instances in one Heat stack.  I have read every
 example online, and they all seem to use either the name of the
 security group or the get_resource function to return the security group
 object itself.  Neither of these works for me.

 Here are my heat template files:

 autoscaling.yaml - http://paste.openstack.org/show/412143/
 redirector.yaml - http://paste.openstack.org/show/412144/
 env.yaml - http://paste.openstack.org/show/412145/

 Heat Client: 0.4.1
 Heat-Manage: 2015.1.1

 Any help would be greatly appreciated.

 Best Regards,

 Jason






[openstack-dev] [CI] nodepool.DiskImageUpdater: Broken pipe error

2015-08-07 Thread Xie, Xianshan
Hi, everyone,

  I encountered the following "Broken pipe" errors when running the nodepoold 
command:

2015-08-08 12:31:27,360 INFO nodepool.DiskImageUpdater: Uploading dib image id: 
6755 from /opt/nodepool_dib/dpc-1439004811 for 
dpc-1439008287.template.openstack.org in local_01
2015-08-08 13:06:17,598 ERROR nodepool.DiskImageUpdater: Exception updating 
image dpc in local_01:
Traceback (most recent call last):
  File /usr/local/lib/python2.7/dist-packages/nodepool/nodepool.py, line 987, 
in _run
self.updateImage(session)
  File /usr/local/lib/python2.7/dist-packages/nodepool/nodepool.py, line 
1031, in updateImage
self.image.meta)
  …
  File /usr/local/lib/python2.7/dist-packages/glanceclient/v1/images.py, line 
360, in update
resp, body = self.client.put(url, headers=hdrs, data=image_data)
  File /usr/local/lib/python2.7/dist-packages/glanceclient/common/http.py, 
line 282, in put
return self._request('PUT', url, **kwargs)
  File /usr/local/lib/python2.7/dist-packages/glanceclient/common/http.py, 
line 218, in _request
resp = self.session.request(method,
CommunicationError: Error finding address for 
http://10.124.196.222:9292/v1/images/909eddee-e8d4-43a8-b8cf-617156b7a1ba: 
[Errno 32] Broken pipe


From the stack trace, it seems that glanceclient can't connect to the 
Glance server. In my case all connections go through a proxy, so I have 
made the following attempts to remove the error:

1.   Set up the http_proxy environment variable in the terminal.

2.   Add a session.proxies argument to the request method.
But neither of them works.
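One thing worth checking in a proxied setup (an assumption on my part, not a
confirmed fix): proxy environment variables also apply to the internal
glance-api endpoint unless it is listed in no_proxy, so the PUT may be sent
to the proxy and dropped there. A sketch, using the address from the log and
a hypothetical proxy host:

```shell
# Route outbound HTTP through the proxy (hypothetical proxy address),
# but exempt the internal glance-api endpoint so requests reach it directly.
export http_proxy=http://proxy.example.com:3128
export no_proxy=10.124.196.222,localhost,127.0.0.1
echo "$no_proxy"
```

The glance CLI working while nodepool's glanceclient fails could then come
down to which environment each process inherits.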


I then tried to access the Glance server with the glance CLI:

1.   export OS_AUTH_URL=http://10.124.196.222:35357/v2.0

2.   glance image-update (which also accesses the Glance server with the 
"PUT" method).
This works fine.

So, do you have any tips for eliminating this error?
Thanks in advance.

Xiexs


[openstack-dev] [ceilometer] [aodh] upgrade path

2015-08-07 Thread Chris Dent


Despite our conversation in the meeting yesterday[1] I still remain a
bit confused about the upgrade path from alarming-in-ceilometer to
alarming provided by aodh and the availability of the older code in
released liberty.

Much of my confusion can probably be resolved by knowing the answer to
this question:

If someone installs aodh on a machine that already has ceilometer on it
and turns off ceilometer-alarm-notifier and ceilometer-alarm-evaluator
(in favor of aodh-notifier and aodh-evaluator) will they be able to run
those aodh services against their existing ceilometer.conf files[2]?

What if they were, in ceilometer, using specific config for their
alarming database (alarm_connection). Can aodh see and use this
config option?

Or will they need to copy and modify the existing conf files to allow
them to carry on using their existing database config?

I know that I can go try some of this in a devstack, but that's not
really the point of the question. The point is: What are we expecting
existing deployments to do?

I had assumed that the reason for keeping alarming in ceilometer for
the liberty release was to allow a deployment to upgrade to Liberty
across the project suites without having to go through modifying
alarming config and managing an alarming migration in the same step.
That migration ought to be pretty minor (tweak a config here and
there) but unless the answer to my first question is yes it has some
significance.

[1] 
http://eavesdrop.openstack.org/meetings/ceilometer/2015/ceilometer.2015-08-06-15.01.log.html#l-110

[2] directly as in aodh-notifier --config-file /etc/ceilometer/ceilometer.conf

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [ceilometer] [aodh] upgrade path

2015-08-07 Thread Chris Dent

On Fri, 7 Aug 2015, Julien Danjou wrote:


On Fri, Aug 07 2015, Chris Dent wrote:


If someone installs aodh on a machine that already has ceilometer on it
and turns off ceilometer-alarm-notifier and ceilometer-alarm-evaluator
(in favor of aodh-notifier and aodh-evaluator) will they be able to run
those aodh services against their existing ceilometer.conf files[2]?


Yes, because none of the options have been removed, and those that have
been changed have deprecated aliases, e.g. deprecated_group=foobar.


Excellent, glad to hear it.


What if they were, in ceilometer, using specific config for their
alarming database (alarm_connection). Can aodh see and use this
config option?


That's the one option where maybe I cleaned up a little bit too much, as
I think I removed alarm_connection from Aodh. I'll cook up a patch to
fix that.


Glad we flushed that out[1].


We could test that upgrade path using Grenade maybe?


Yes[2]. We'd have to work out how we wanted to manage both the
installation of aodh and the config, but one of the advantages of
having grenade as a plugin would be that we can do what we need to
do (to some extent).


I hope I made things clearer!


Yes, thanks very much. IRC is dismal enough for reasoned discussion and
resolution but even worse for ensuring decisions have lasting and visible
effect.

[1] https://review.openstack.org/#/c/210286/
[2] Assuming we can get the various pieces to fall into place so
that we have working grenade plugins in all the right places.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



[openstack-dev] [glance][api][tc] Response when a illegal body is sent

2015-08-07 Thread Bunting, Niall
 Excerpts from Ian Cordasco's message of 2015-07-24 11:22:33 -0700:
 
  On 7/24/15, 13:16, Clint Byrum cl...@fewbar.com wrote:
 
  Excerpts from Ian Cordasco's message of 2015-07-24 08:58:06 -0700:
  
   On 7/23/15, 19:38, michael mccune m...@redhat.com wrote:
  
   On 07/23/2015 12:43 PM, Ryan Brown wrote:
On 07/23/2015 12:13 PM, Jay Pipes wrote:
On 07/23/2015 10:53 AM, Bunting, Niall wrote:
Hi,
   
Currently when a body is passed to an API operation that explicitly
does not allow bodies Glance throws a 500.
   
Such as in this bug report:
https://bugs.launchpad.net/glance/+bug/1475647 This is an example
  of
a GET however this also applies to other requests.
   
What should Glance do rather than throwing a 500, should it return
  a
400 as the user provided an illegal body
   
Yep, this.
   
+1, this should be a 400. It would also be acceptable (though less
preferable) to ignore any body on GET requests and execute the
  request
as normal.
   
Best,
-jay
   
   i'm also +1 on the 400 band wagon
  
   400 feels right for when Glance is operating without anything in front
  of
   it. However, let me present a hypothetical situation:
  
   Company X is operating Glance behind a load-balancing proxy. Most users
   talk to Glance behind the LB. If someone writes a quick script to send a
   GET and (for whatever reason) includes a body, they'll get a 200 with
  the
   data that would otherwise have been sent if they didn't include a body.
   This is because most such proxies will strip the body on a GET (even
   though RFC 7231 allows for bodies on a GET and explicitly refuses to
   define semantic meaning for them). If later that script is updated to
  work
   behind the load balancer it will be broken, because Glance is choosing
  to
   error instead of ignoring it.
  
   Note: I'm not arguing that the user is correct in sending a body when
   there shouldn't be one sent, just that we're going to confuse a lot of
   people with this.
  
   I'm also fine with either a 400 or a 200.
  
  
  Nice succinct description of an interesting corner case.
  
  This is indeed one of those scenarios that should be defended against
  at the edges, but it's worth considering what will make things simplest
  for users.
  
  If we believe in Postel's robustness principle[1], then Glance would
  probably just drop the body as something we liberally accept because
  it doesn't harm anything to do so. If we don't believe thats a good
  principle, then 400 or maybe 413 would be the right codes I think.
  
  So the real question is, do we follow Postel's principle or not? That
  might even be something to add to OpenStack's design principles... which
  I seem to remember at one time we had written down somewhere.
  
  [1] https://en.wikipedia.org/wiki/Robustness_principle
 
  Just to throw a monkey-wrench in,
  https://tools.ietf.org/html/draft-thomson-postel-was-wrong-00
 
 To be clear, I agree with Thomson, and think that's the way to go.
 
 However, I believe we haven't stated either in our principles (and if
 somebody has a link to those principles, or a clear assertion that we
 do not have them and why we don't have them, that would be helpful).
 
 Adding tc to bump the people most likely to respond to that.

It may not always be possible to check whether a body exists, as the body 
can sometimes end up being ignored, depending on the HTTP method, when 
chunked encoding is used. I don't know of a way to always check for a 
body; WebOb's implementation appears to use the HTTP method to make an 
informed guess.

If we try to return a 400, this could lead to inconsistent results, such as a 
body with non-chunked encoding returning a 400 while a body with chunked 
encoding does not. It may therefore be better to ignore the body in all 
cases, as that would mean the results are always the same regardless of 
encoding.
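To make the behavioural difference concrete, here is a minimal WSGI
middleware sketch (illustrative only, not Glance's actual code) that answers
400 when a bodyless method carries a body. Note it can only look at
Content-Length or a Transfer-Encoding header, which is exactly the
chunked-encoding blind spot described above:

```python
BODYLESS_METHODS = {'GET', 'HEAD', 'DELETE'}

class RejectUnexpectedBody(object):
    """Return 400 when a request body accompanies a bodyless method."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        method = environ.get('REQUEST_METHOD', 'GET')
        # Best-effort body detection: a non-zero Content-Length, or a
        # chunked Transfer-Encoding (whose actual length is unknowable here).
        has_length = bool(int(environ.get('CONTENT_LENGTH') or 0))
        chunked = 'chunked' in environ.get('HTTP_TRANSFER_ENCODING', '').lower()
        if method in BODYLESS_METHODS and (has_length or chunked):
            start_response('400 Bad Request',
                           [('Content-Type', 'text/plain')])
            return [b'Request body is not allowed for this method\n']
        return self.app(environ, start_response)
```

A GET routed through a body-stripping load balancer would never reach the
400 branch, which is the inconsistency described earlier in the thread.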

Niall


Re: [openstack-dev] [Glance][Nova][Cinder] glance_store and glance

2015-08-07 Thread Jesse Cook
I largely agree with the points made in the messages by Nikhil and Erno. A
few additional points.

One of the biggest use cases that I heard for glance_store (true, false,
or otherwise) was that Glance is a bottleneck and an unnecessary proxy to
the stores and consumers should be able to interface with the stores
directly. A few lessons learned from creating and subsequently killing
Glance Bypass internally (bypass Glance to interface directly with the
store i.e. Swift in our case):

1. The proxy is not free, but it's not the bottleneck (assuming you have
decent networking on your Glance API nodes)
2. Maintaining the code to interface directly with the object store is
expensive and reinvents what Glance already does, killing the value of
moving Glance out of the Nova tree
3. Loses the abstraction provided by Glance
4. Allows retrying uploads (this is being fixed in Glance in Liberty along
with other reliability and performance improvements)

My current perception is that glance_store raises confidence issues about
Glance outside the community and causes confusion about what to consume and
how to consume it. It's a leaky abstraction and leads to a path
of maintaining multiple APIs and circular dependencies. Glance should
provide a single clean API that ensures data consistency, is performant,
and has reliability guarantees.

One other thought: after seeing some of the discussion on IRC, I want to
remind people that the sunk cost fallacy can strongly influence one's
position, so please think carefully about the use cases and value beyond
the cost already put into splitting it out.

Thanks,

Jesse




On 8/7/15, 3:56 AM, Kuvaja, Erno kuv...@hp.com wrote:

Hi,

Flagged Nova and Cinder into this discussion as they were the first
intended adopters iirc.

I don't have a strong religious view on this topic. I wasn't a huge fan of
the idea of separating it in the first place, and I'm not a huge fan of
keeping it separate either.

After a couple of cycles we have so far witnessed only the downsides of
glance_store being on its own. We break even our own gate with our own
lib releases, we have one extra bug tracker to look after, and, while not
huge, it also increases the load on the release and stable teams.

In my understanding, the interest within Nova to consume glance_store
directly has pretty much died off since we separated it; please do
correct me if I'm wrong.
I haven't heard anyone expressing any interest in consuming glance_store
directly within Cinder either.
So far I have failed to see a use case for glance_store on its own apart
from the Glance API server, and the original intended consumers have
either expressed no interest whatsoever or directly stated that they are
not interested.

Do we have any reason whatsoever to keep doing the extra work of maintaining
these two components separately? I'm more than happy to do so, or at least
to extend this discussion for a cycle, if there are projects out there
planning to utilize it. I don't want to be in the middle of separating it
again next cycle because someone wanted to consume it and forked the old
tree after we decided to kill it, but I'm not keen to take on the overhead
without reason either.

- Erno

 -Original Message-
 From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
 Sent: Friday, August 07, 2015 6:21 AM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Glance] glance_store and glance
 
 Hi,
 
 During the mid-cycle we had another proposal to move the glance_store
 library back into the Glance repo rather than leaving it as a
 separate repo/project.
 
 The questions outstanding are: what are the use cases that want it as a
 separate library?
 
 The original use cases that supported a separate lib have not had much
 progress or adoption yet. There have been complaints about the overhead of
 maintaining it as a separate lib and version tracking without much gain.
 The proposals for refactoring the library are also a worrisome topic in
 terms of the stability of the codebase.
 
 The original use cases from my memory are:
 1. Other projects consuming glance_store -- this has become less likely
to be
 useful.
 2. another upload path for users for the convenience of tasks -- not
 preferable as we don't want to expose this library to users.
 3. ease of addition of newer drivers for the developers -- drivers are
only
 being removed since.
 4. cleaner api / more methods that support backend store capabilities -
a
 separate library is not necessarily needed, smoother re-factor is
possible
 within Glance codebase.
 
 Also, the authN/Z complexities and ACL restrictions on the back-end
stores
 can be potential security loopholes with the library and Glance
evolution
 separately.
 
 In order to move forward smoothly on this topic in Liberty, I hereby
request
 input from all concerned developer parties. The decision to keep this
as a
 separate library will remain in effect if we do not come to resolution
within 2
 weeks from now. However, if there 

[openstack-dev] [kolla][tripleo] Deprecating config-internal

2015-08-07 Thread Steven Dake (stdake)
James and Dan,

During the ansible-multi spec process that James Slagle reviewed, there was a 
serious commitment by the Kolla core team to maintain config-internal, pretty 
much for the tripleo use case.  We didn’t want to leave our partner projects in 
the lurch, and at the time Ryan/Ian’s implementation of tripleo containers was 
based upon config-internal.  It would be immensely helpful for Kolla if we 
could deprecate that model during l3, and I think Dan’s judgement is to use 
config-external (with some additional beefing up of some of the containers, like 
snmp+ceilometer compute, plus possibly some other minor solvable requirements).

Can I get a general ack from the tripleo community that deprecating 
config-internal is OK, so we can just remove it rather than being stuck with it 
for Liberty?  I don’t want to deprecate something we committed to supporting if 
there is still a requirement from the tripleo community to maintain it, but it 
would make our lives a lot easier, and thus far the config-internal case is 
really only for TripleO.

Comments welcome.

Thanks!
-steve


Re: [openstack-dev] [Glance][Nova][Cinder] glance_store and glance

2015-08-07 Thread Jesse Cook

On 8/7/15, 8:46 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:



On 8/7/2015 3:56 AM, Kuvaja, Erno wrote:
 Hi,

 Flagged Nova and Cinder into this discussion as they were the first
intended adopters iirc.

 I don't have big religious view about this topic. I wasn't huge fan of
the idea separating it in the first place and I'm not huge fan of
keeping it separate either.

 After couple of cycles we have so far witnessed only the downside of
glance_store being on it's own. We break even our own gate with our own
lib releases, we have one extra bug tracker to look after and even not
huge but it just increases the load on the release and stable teams as
well.

 In my understanding the interest within Nova to consume glance_store
directly has pretty much died off since we separated it, please do
correct me if I'm wrong.
 I haven't heard anyone expressing any interest to consume glance_store
directly within Cinder either.
 So far I have failed to see use-case for glance_store alone apart from
Glance API Server and the original intended use-cases/consumers have
either not expressed interest what so ever or directly expressed being
not interested.

 Do we have any reason what so ever keeping doing the extra work to keep
these two components separate? I'm more than happy to do so or at least
extend this discussion for a cycle if there is projects out there
planning to utilize it. I don't want to be in middle of separating it
again next cycle because someone wanted to consume and forked out the
old tree because we decided to kill it but I'm not keen to take the
overhead of it either without reason.

 - Erno

 -Original Message-
 From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
 Sent: Friday, August 07, 2015 6:21 AM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Glance] glance_store and glance

 Hi,

 During the mid-cycle we had another proposal that wanted to put back
the
 glance_store library back into the Glance repo and not leave it is as a
 separate repo/project.

 The questions outstanding are: what are the use cases that want it as a
 separate library?

 The original use cases that supported a separate lib have not had much
 progress or adoption yet. There have been complaints about overhead of
 maintaining it as a separate lib and version tracking without much
gain.
 The proposals for the re-factor of the library is also a worrysome
topic in
 terms of the stability of the codebase.

 The original use cases from my memory are:
 1. Other projects consuming glance_store -- this has become less
likely to be
 useful.
 2. another upload path for users for the convenience of tasks -- not
 preferable as we don't want to expose this library to users.
 3. ease of addition of newer drivers for the developers -- drivers are
only
 being removed since.
 4. cleaner api / more methods that support backend store capabilities
- a
 separate library is not necessarily needed, smoother re-factor is
possible
 within Glance codebase.

 Also, the authN/Z complexities and ACL restrictions on the back-end
stores
 can be potential security loopholes with the library and Glance
evolution
 separately.

 In order to move forward smoothly on this topic in Liberty, I hereby
 request input from all concerned developer parties. The decision to keep
 this as a separate library will remain in effect if we do not come to
 resolution within 2 weeks from now. However, if there aren't any
 significant use cases we may consider a port back of the same.

 Please find some corresponding discussion from the latest Glance weekly
 meeting:
 http://eavesdrop.openstack.org/meetings/glance/2015/glance.2015-08-06-
 14.03.log.html#l-21

 --

 Thanks,
 Nikhil


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 


As far as I know no one is actively trying to integrate glance_store
into nova like what the cinder team did with os-brick.  I'm not entirely
sure how glance_store drops into nova either.  The os-brick integration
was pretty seamless since it was mostly duplicate code.

I thought glance_store somehow got nova closer to using glance v2 but it
seems that's not the case?

I would agree. It is not the case.


And now there is a separate proposal to work on a new thing in nova's
tree that's not python-glanceclient but gets nova to use glance v2
(and v3?), which seems like more splintering.

I need to update that spec (maybe I'll do that now). The goal of "the
seam" is not to create yet another thing to 

Re: [openstack-dev] [kolla][tripleo] Deprecating config-internal

2015-08-07 Thread Dan Prince
On Fri, 2015-08-07 at 14:21 +, Steven Dake (stdake) wrote:
 James and Dan,
 
 During the ansible-multi spec process that James Slagle reviewed, 
 there was a serious commitment by the Kolla core team to maintain 
 config-internal, pretty much for the tripleo use case.  We didn’t 
 want to leave our partner projects in the lurch and at the time 
 Ryan/Ian’s implementation of tripleo containers were based upon 
 config-internal.  It would be immensely helpful for Kolla if we could 
 deprecate that model during l3, and I think Dan’s judgement is to use 
 config-external (with some additional beefing up of some of the 
 containers like snmp+ceilometer compute plus possibly some other 
 minor solvable requirements).

Correct. I'm heavily leaning towards using config-external assuming we
can make it support use of multiple config files, and then have a way
to tie that into starting the service with the same files (neutron ml2
agent for example uses multiple configs)

 
 Can I get a general ack from the tripleo community that deprecating 
 config-internal is a-ok so we can just remove it before being stuck 
 with it for Liberty?

++ from me

   I don’t want to deprecate something we committed to supporting if 
 there is still a requirement from the tripleo community to maintain it, 
 but it would make our lives a lot easier and thus far the config
 -internal case is really only for TripleO.
 
 Comments welcome.
 
 Thanks!
 -steve



[openstack-dev] [TripleO] [Heat] [kolla] Deploying containerized services with Heat

2015-08-07 Thread Jeff Peeler
This email is loosely related to the recent thread of docker, puppet, 
and Heat here:

http://lists.openstack.org/pipermail/openstack-dev/2015-August/071492.html

I'd really like to get some feedback about work done to use Heat for 
deploying Kolla containers. Ultimately, the hope is to use this work in 
tripleO. Here is an example of what the undercloud could look 
like [1]. The repo is currently messy and out of date, but illustrates 
bootstrapping Heat directly on a host with nothing installed except 
Docker. The idea was to utilize Kolla containers without any changes 
(and was also developed before the config external functionality came 
along).


A very incomplete template example for the undercloud 
(deploy-undercloud.yaml) exists in the heat-standalone directory.  
However, I've been struggling with the perceived template simplicity vs 
supportability due to the usage of the Heat docker driver.


A very similar approach of a containerized overcloud [2] was attempted 
on master [3] for the undercloud. My understanding was that it still has 
a problem with the os-*-config tools signaling back to Heat. I apologize 
I can't elaborate further on that. Note that it has been a primary 
objective to not use Nova for deploying containerized services (with 
both approaches) and I believe that contributed to the signaling problem 
mentioned before.


Thoughts on the above would be most welcome. Instead of further 
developing on something that ultimately goes nowhere, I'd rather consult 
the tripleO experts! Since there has been some contention for usage of 
the Heat docker driver, please also evaluate the Heat fat container for 
deployment usage.


Apologies in advance if any of this is unclear.

Jeff

[1] 
https://github.com/jpeeler/kolla/tree/694df62d47cf6d930bf04231386c917f5cb4da58$

[2] https://review.openstack.org/#/c/178840/
[3] https://github.com/jpeeler/kolla/commits/master$



Re: [openstack-dev] [TripleO] [Heat] [kolla] Deploying containerized services with Heat

2015-08-07 Thread Jeff Peeler

On Fri, Aug 07, 2015 at 11:31:52AM -0400, Jeff Peeler wrote:
[1] 
https://github.com/jpeeler/kolla/tree/694df62d47cf6d930bf04231386c917f5cb4da58

[2] https://review.openstack.org/#/c/178840/
[3] https://github.com/jpeeler/kolla/commits/master


If it's not obvious, please note that links 1 and 3 in the previous 
message need to have the terminating '$' character removed (modified 
links correctly here).




Re: [openstack-dev] [cinder][nova][ci] Tintri Cinder CI failures after Nova change

2015-08-07 Thread Matt Riedemann



On 8/7/2015 8:38 AM, Matt Riedemann wrote:



On 8/6/2015 3:30 PM, Skyler Berg wrote:

After the change "cleanup NovaObjectDictCompat from virtual_interface"
[1] was merged into Nova on the morning of August 5th, Tintri's CI for
Cinder started failing 13 test cases that involve a volume being
attached to an instance [2].

I have verified that the tests fail with the above mentioned change and
pass when running against the previous commit.

If anyone knows why this patch is causing an issue or is experiencing
similar problems, please let me know.

In the meantime, expect Tintri's CI to be either down or reporting
failures until a solution is found.

[1] https://review.openstack.org/#/c/200823/
[2] http://openstack-ci.tintri.com/tintri/refs-changes-06-201406-35/



 From the n-cpu logs this is the TypeError:

2015-08-05 06:34:54.826 8 ERROR nova.compute.manager Traceback (most
recent call last):
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager   File
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py,
line 142, in _dispatch_and_reply
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager
executor_callback))
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager   File
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py,
line 186, in _dispatch
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager executor_callback)
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager   File
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py,
line 129, in _do_dispatch
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager result =
func(ctxt, **new_args)
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager   File
/opt/stack/nova/nova/network/floating_ips.py, line 113, in
allocate_for_instance
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager **kwargs)
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager   File
/opt/stack/nova/nova/network/manager.py, line 496, in
allocate_for_instance
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager context,
instance_uuid)
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager   File
/usr/lib/python2.7/site-packages/oslo_utils/excutils.py, line 119, in
__exit__
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager
six.reraise(self.type_, self.value, self.tb)
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager   File
/opt/stack/nova/nova/network/manager.py, line 490, in
allocate_for_instance
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager networks,
macs)
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager   File
/opt/stack/nova/nova/network/manager.py, line 755, in
_allocate_mac_addresses
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager network['id'])
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager   File
/opt/stack/nova/nova/network/manager.py, line 774, in
_add_virtual_interface
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager vif.create()
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager   File
/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py, line
205, in wrapper
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager self[key] =
field.from_primitive(self, key, value)
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager TypeError:
'VirtualInterface' object does not support item assignment
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager
2015-08-05 06:34:54.826 8 ERROR nova.compute.manager

It looks like you're missing this change in whatever version of
oslo.versionedobjects you have in your CI:

https://review.openstack.org/#/c/202200/

That should be in o.vo 0.6.0, latest is 0.7.0.  What version of
oslo.versionedobjects is on this system?  It would be helpful to
have pip freeze output.
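The TypeError in the traceback above is the classic symptom of removing a dict-compat mixin: item-style assignment (`self[key] = value`) only works while the mixin supplies the mapping protocol. A minimal sketch of that failure mode, using hypothetical stand-in classes rather than the real Nova or oslo.versionedobjects types:

```python
class DictCompatMixin:
    """Stands in for NovaObjectDictCompat: routes item access to attributes."""

    def __setitem__(self, key, value):
        setattr(self, key, value)


class VirtualInterfaceOld(DictCompatMixin):
    """Before the change: dict-style assignment works via the mixin."""


class VirtualInterfaceNew:
    """After the mixin is removed: plain object, no mapping protocol."""


old = VirtualInterfaceOld()
old['address'] = 'fa:16:3e:00:00:01'   # dispatched to __setitem__
assert old.address == 'fa:16:3e:00:00:01'

new = VirtualInterfaceNew()
try:
    new['address'] = 'fa:16:3e:00:00:01'
except TypeError as exc:
    # Same failure mode as the CI traceback:
    # 'VirtualInterface' object does not support item assignment
    print(type(exc).__name__)  # prints: TypeError
```

This is why the library version matters: older callers (or older library code, as in the review Matt links) still use item assignment and only work against objects that retain the compat behavior.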



I proposed a change to global-requirements to raise the minimum required 
oslo.versionedobjects to >= 0.6.0 here:


https://review.openstack.org/#/c/210445/

--

Thanks,

Matt Riedemann




[openstack-dev] [nova] Bug *Review* Day - Liberty

2015-08-07 Thread Markus Zoeller
As freshly crowned bug czar I'd like to advertise the bug review day
which takes place next Wednesday, August the 12th [1].
The bug triage day last week did a good job of setting the priorities of
undecided bugs [2].
We can use [3] to get an overview of the current reviews for bugs. When
[4] is merged, the list will be sortable by type, so that we can focus
on the bug reviews with a high priority first.

If you have questions, contact me (markus_z) or another member of the
nova bug team on IRC #openstack-nova.

Regards,
Markus Zoeller (markus_z)

[1] 
https://wiki.openstack.org/wiki/Nova/Liberty_Release_Schedule#Special_review_days
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2015-August/071552.html
[3] http://status.openstack.org/reviews/#nova
[4] https://review.openstack.org/#/c/210481/




Re: [openstack-dev] [kolla][tripleo] Deprecating config-internal

2015-08-07 Thread Ryan Hallisey


- Original Message -
From: James Slagle james.sla...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Friday, August 7, 2015 11:20:59 AM
Subject: Re: [openstack-dev] [kolla][tripleo] Deprecating config-internal

On Fri, Aug 7, 2015 at 11:08 AM, Dan Prince dpri...@redhat.com wrote:
 On Fri, 2015-08-07 at 14:21 +, Steven Dake (stdake) wrote:
 James and Dan,

 During the ansible-multi spec process that James Slagle reviewed,
 there was a serious commitment by the Kolla core team to maintain
 config-internal, pretty much for the tripleo use case.  We didn’t
 want to leave our partner projects in the lurch and at the time
 Ryan/Ian’s implementation of tripleo containers were based upon
 config-internal.  It would be immensely helpful for Kolla if we could
 deprecate that model during l3, and I think Dan’s judgement is to use
 config-external (with some additional beefing up of some of the
 containers like snmp+ceilometer compute plus possibly some other
 minor solveable requirements).

 Correct. I'm heavily leaning towards using config-external assuming we
 can make it support use of multiple config files, and then have a way
 to tie that into starting the service with the same files (neutron ml2
 agent for example uses multiple configs)


 Can I get a general ack from the tripleo community that deprecating
 config-internal is a-ok so we can just remove it before being stuck
 with it for Liberty?

 ++ from me

I think using external config works well. The existing puppet recipes
are very advanced, so it would make more config options available to use
with the containerized services.

+1
-Ryan


   I don’t want to deprecate something we committed to supporting if
 there is still a requirement from the tripleo community to maintain it,
 but it would make our lives a lot easier and thus far the config
 -internal case is really only for TripleO.

 Comments welcome.

 Thanks!
 -steve




-- 
-- James Slagle
--



Re: [openstack-dev] [kolla][tripleo] Deprecating config-internal

2015-08-07 Thread Steven Dake (stdake)


On 8/7/15, 8:08 AM, Dan Prince dpri...@redhat.com wrote:

On Fri, 2015-08-07 at 14:21 +, Steven Dake (stdake) wrote:
 James and Dan,
 
 During the ansible-multi spec process that James Slagle reviewed,
 there was a serious commitment by the Kolla core team to maintain
 config-internal, pretty much for the tripleo use case.  We didn't
 want to leave our partner projects in the lurch and at the time
 Ryan/Ian's implementation of tripleo containers were based upon
 config-internal.  It would be immensely helpful for Kolla if we could
 deprecate that model during l3, and I think Dan's judgement is to use
 config-external (with some additional beefing up of some of the
 containers like snmp+ceilometer compute plus possibly some other
 minor solvable requirements).

Correct. I'm heavily leaning towards using config-external assuming we
can make it support use of multiple config files, and then have a way
to tie that into starting the service with the same files (neutron ml2
agent for example uses multiple configs)

Not sure if you missed my response to your earlier post about gaps in
kolla related to TripleO, but we already have a multiple-file config
feature. It's a little clunky (our config-external script copies each
file individually into the container from the bindmount) but preserves
immutability, which is critical imo :)

A more detailed response is in my response to your other email regarding
Kolla.


 
 Can I get a general ack from the tripleo community that deprecating
 config-internal is a-ok so we can just remove it before being stuck
 with it for Liberty?

++ from me

Nice thanks!


   I don¹t want to deprecate something we committed to supporting if
 there is still a requirement from the tripleo community to maintain it,
 but it would make our lives a lot easier and thus far the config
 -internal case is really only for TripleO.
 
 Comments welcome.
 
 Thanks!
 -steve




Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store and glance

2015-08-07 Thread Jay Pipes
Hi Nik, some comments inline, but tl;dr I am strongly against returning 
the glance_store library to the Glance source repository. Explanations 
inline...


On 08/07/2015 01:21 AM, Nikhil Komawar wrote:

Hi,

During the mid-cycle we had another proposal that wanted to put the
glance_store library back into the Glance repo rather than leave it as a
separate repo/project.

The questions outstanding are: what are the use cases that want it as a
separate library?

The original use cases that supported a separate lib have not had much
progress or adoption yet.


This is really only due to a lack of time to replace the current 
nova/image/download/* stuff with calls to the glance_store library. It's 
not that the use case has gone away; it's just a lack of time to work on it.


 There have been complaints about overhead of

maintaining it as a separate lib and version tracking without much gain.


I don't really see much overhead in maintaining a separate lib, 
especially when it represents functionality that can be used by Cinder 
and Nova directly.



The proposal for the re-factor of the library is also a worrisome topic
in terms of the stability of the codebase.


You have a link for this? I'm not familiar with this proposal and would 
like to read the spec...



The original use cases from my memory are:
1. Other projects consuming glance_store -- this has become less likely
to be useful.


How has this become less likely to be useful?


2. another upload path for users for the convenience of tasks -- not
preferable as we don't want to expose this library to users.


What do you mean by convenience of tasks above? Also, by expose this 
library to users, you are referring to normal tenants as users, right? 
Not administrative or service users, yes?



3. ease of addition of newer drivers for the developers -- drivers are
only being removed since.


I don't think this has anything to do with glance_store being a separate 
code repository.



4. cleaner api / more methods that support backend store capabilities -
a separate library is not necessarily needed, smoother re-factor is
possible within Glance codebase.


So, here's the crux of the issue. Nova and Cinder **do not want to speak 
the Glance REST API** to either upload or download image bits from 
storage. Streaming image bits through the Glance API endpoint is a 
needless and inefficient step, and Nova and Cinder would like to 
communicate directly with the backend storage systems.


glance_store IS the library that would enable Nova and Cinder to 
communicate directly with the backend storage systems. The Glance API 
will only be used by Nova and Cinder to get information *about* the 
images in backend storage, not the image bits themselves.


This is why I was hopeful that the Artifact Repository API would allow 
Glance to just focus on being an excellent repository for metadata, and 
get out of the business of transferring, transforming, uploading, or 
downloading image bits.


I'm a little disappointed that this does not seem to be the direction 
that the Glance team is moving, and would like to know a bit more about 
what the future direction of the Glance project is.



Also, the authN/Z complexities and ACL restrictions on the back-end
stores can become potential security loopholes if the library and Glance
evolve separately.


Sure, I understand that concern, but I believe that if the glance_store 
library interface is seen as essentially a privileged system library, 
and you prevent all tenant-facing usage of it (pretty easy to do), then 
we'll be fine.



In order to move forward smoothly on this topic in Liberty, I hereby
request input from all concerned developer parties. The decision to keep
this as a separate library will remain in effect if we do not come to
resolution within 2 weeks from now. However, if there aren't any
significant use cases we may consider a port back of the same.


Honestly, I'm a little perplexed why this is even being brought up. 
Aren't there quite a few high priority items in the Glance roadmap that 
would take precedence over this kind of move?


Best,
-jay



Re: [openstack-dev] [oslo][keystone] oslo_config and wsgi middlewares

2015-08-07 Thread Michael Krotscheck
On Thu, Aug 6, 2015 at 10:08 AM Mehdi Abaakouk sil...@sileht.net wrote:


 Yes, but you can't use oslo.config without hardcoding the loading of the
 middleware to pass the oslo.config object into the application.


Yes, and that is intentional, because the use of global variables of any
sort is bad. They're unconstrained, there's no access control to guarantee
the thing you want hasn't been modified, and in the case of oslo.config,
they require initialization before they can be used.

Writing any kind of logic that assumes that a magic global instance has
been initialized is brittle. The pastedeploy wsgi chain is a perfect
example, because the application is only created after the middleware chain
has been executed. This leaves you with - at best - a malfunctioning piece
of middleware that breaks because the global oslo.config object isn't
ready. At worst it's a security vulnerability that permits bypassing things
like keystone.

Passing the config object is a _good_ thing, because it doesn't rely on
magic. Magic is bad. If someone looks at the code and says: I wonder how
this piece of middleware gets its values, and they don't see the config
object being passed, they have to dig into the middleware itself to figure
out what's going on.
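The contrast Michael describes can be sketched in a few lines of plain WSGI. This is a minimal illustration, not a real oslo API: the middleware receives its config object explicitly at construction time, so the dependency is visible where the application is assembled and nothing relies on a global having been initialized first.

```python
class ExplicitConfMiddleware:
    """WSGI middleware that takes its config explicitly at wiring time."""

    def __init__(self, app, conf):
        self.app = app
        # The dependency is visible at the construction site; no magic
        # global needs to exist before this line runs.
        self.allowed_origin = conf.get('allowed_origin', '*')

    def __call__(self, environ, start_response):
        def add_header(status, headers, exc_info=None):
            headers.append(('Access-Control-Allow-Origin',
                            self.allowed_origin))
            return start_response(status, headers, exc_info)
        return self.app(environ, add_header)


def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok']


# Wiring happens in application initialization, not via a global object:
conf = {'allowed_origin': 'https://example.org'}
wrapped = ExplicitConfMiddleware(app, conf)
```

Anyone reading the initialization code sees exactly where the middleware's values come from, which is the point being argued above.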


 I'm clearly on the operator side too, and I just try to find a solution
 to be able to use all middlewares with oslo.config without having to
 write code for each one in each application. Zaqar, Gnocchi and Aodh are
 the first projects that do not use cfg.CONF, and they can't load many
 middlewares without writing code for each, when middleware should just
 be something that a deployer enables and configures. (Our middleware
 looks more like a lib than a middleware.)


Sorry, but you're talking from the point of view of someone who wants to
not have to write code for each. That's a developer. It's our job as
developers to write code until it's as easy as possible, and passing in a
config object is _dead simple_ in your application initialization.

Here's the thing. If the middleware is _optional_ like keystone auth, then
including it via paste.ini makes way more sense. In fact, keystone auth has
gone to great lengths to have no dependencies for that very same reason.
If, instead, the middleware is a feature that should ship with the service
- like CORS, or a simple caching layer - then it should be baked into your
application initialization directly.
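The two wiring styles in that paragraph can be sketched side by side. The `filter_factory` shape follows the PasteDeploy convention for deployer-enabled filters; the middleware class and names here are illustrative stand-ins, not any real project's code.

```python
class AuditMiddleware:
    """Illustrative middleware; stands in for any real filter."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        environ['audit.seen'] = True  # placeholder for real behavior
        return self.app(environ, start_response)


# Style 1: optional middleware, enabled by the deployer through paste.ini.
# PasteDeploy calls this factory with the global config section plus the
# filter's own options, then applies the returned callable to the app.
def filter_factory(global_conf, **local_conf):
    def _factory(app):
        return AuditMiddleware(app)
    return _factory


# Style 2: middleware that ships with the service, baked into the app
# factory so a deployer cannot accidentally drop it from the pipeline.
def create_app(conf):
    def app(environ, start_response):
        start_response('200 OK', [])
        return [b'ok']
    return AuditMiddleware(app)
```

In style 1 the pipeline is a deployment decision; in style 2 it is an application decision, which is the distinction drawn between keystone auth and features like CORS.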

Michael


[openstack-dev] [nova] change of day for API subteam meeting?

2015-08-07 Thread Sean Dague
Fridays have been kind of a rough day for the Nova API subteam. It's
already basically the weekend for folks in AP, and the weekend is right
around the corner for everyone else.

I'd like to suggest we shift the meeting to Monday or Tuesday in the
same timeslot (currently 12:00 UTC). Either works for me. Having this
earlier in the week I also hope keeps the attention on the items we need
to be looking at over the course of the week.

If current regular attendees could speak up about day preference, please
do. We'll make a change if this is going to work for folks.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [nova] change of day for API subteam meeting?

2015-08-07 Thread Jay Pipes

On 08/07/2015 12:48 PM, Sean Dague wrote:

Fridays have been kind of a rough day for the Nova API subteam. It's
already basically the weekend for folks in AP, and the weekend is right
around the corner for everyone else.

I'd like to suggest we shift the meeting to Monday or Tuesday in the
same timeslot (currently 12:00 UTC). Either works for me. Having this
earlier in the week I also hope keeps the attention on the items we need
to be looking at over the course of the week.

If current regular attendees could speak up about day preference, please
do. We'll make a change if this is going to work for folks.


+1 from me.

-jay



[openstack-dev] [ceilometer] Do we have test coverage goals?

2015-08-07 Thread Chris Dent


The recent split of the unit and functional tests in ceilometer
shows some interesting test coverage data. With just what are now
called the unit tests, coverage is a meek 58%. With both the unit
and functional tests (what used to be the standard coverage run),
coverage is a still somewhat meek 84%.

Do we have, or do we want to have, a particular coverage target for
the code. Or if not all the code then at least some sections of it?

Do we want that metric of coverage to be against just the unit tests
or unit and functional?

Note that these numbers are thrown off quite a bit by various
artifacts like database migration files so are not a super accurate
overview of true test coverage, just a (potentially) useful indicator.
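One way to reduce the migration-file skew mentioned above is to exclude those paths in the coverage configuration, and a target can be enforced there too. A hedged sketch of a `.coveragerc` fragment; the omit path and the threshold are illustrative, not agreed values:

```ini
# .coveragerc sketch -- the omit glob below is illustrative; adjust it to
# wherever ceilometer's migration modules actually live.
[run]
branch = True
source = ceilometer
omit = ceilometer/storage/sqlalchemy/migrate_repo/*

[report]
# Fail the coverage run if combined unit+functional coverage drops below
# an agreed target (coverage.py's fail_under option).
fail_under = 84
```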

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [neutron][dvr] Removing fip namespace when restarting L3 agent.

2015-08-07 Thread Oleg Bondarev
On Fri, Aug 7, 2015 at 10:24 AM, Korzeniewski, Artur 
artur.korzeniew...@intel.com wrote:

 Bug submitted:

 https://bugs.launchpad.net/neutron/+bug/1482521


Ok, here is the fix: https://review.openstack.org/210539
Thanks!

Oleg




 Thanks,

 Artur



 *From:* Oleg Bondarev [mailto:obonda...@mirantis.com]
 *Sent:* Thursday, August 6, 2015 5:18 PM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron][dvr] Removing fip namespace when
 restarting L3 agent.







 On Thu, Aug 6, 2015 at 5:23 PM, Korzeniewski, Artur 
 artur.korzeniew...@intel.com wrote:

 Thanks Kevin for that hint.

 But it does not resolve the connectivity problem, it is just not removing
 the namespace when it is asked to.

 The real question is, why do we invoke the 
 /neutron/neutron/agent/l3/dvr_fip_ns.py
 FipNamespace.delete() method in the first place?



 I’ve captured the traceback for this situation:

 2015-08-06 06:35:28.469 DEBUG neutron.agent.linux.utils [-] Unable to
 access
 /opt/openstack/data/neutron/external/pids/8223e12e-837b-49d4-9793-63603fccbc9f.pid
 from (pid=70216) get_value_from_file
 /opt/openstack/neutron/neutron/agent/linux/utils.py:222

 2015-08-06 06:35:28.469 DEBUG neutron.agent.linux.utils [-] Unable to
 access
 /opt/openstack/data/neutron/external/pids/8223e12e-837b-49d4-9793-63603fccbc9f.pid
 from (pid=70216) get_value_from_file
 /opt/openstack/neutron/neutron/agent/linux/utils.py:222

 2015-08-06 06:35:28.469 DEBUG neutron.agent.linux.external_process [-] No
 process started for 8223e12e-837b-49d4-9793-63603fccbc9f from (pid=70216)
 disable /opt/openstack/neutron/neutron/agent/linux/external_process.py:113

 Traceback (most recent call last):

  File /usr/local/lib/python2.7/dist-packages/eventlet/queue.py, line
 117, in switch

 self.greenlet.switch(value)

   File /usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py,
 line 214, in main

 result = function(*args, **kwargs)

   File /usr/local/lib/python2.7/dist-packages/oslo_service/service.py,
 line 612, in run_service

 service.start()

   File /opt/openstack/neutron/neutron/service.py, line 233, in start

 self.manager.after_start()

   File /opt/openstack/neutron/neutron/agent/l3/agent.py, line 641, in
 after_start

 self.periodic_sync_routers_task(self.context)

   File /opt/openstack/neutron/neutron/agent/l3/agent.py, line 519, in
 periodic_sync_routers_task

 self.fetch_and_sync_all_routers(context, ns_manager)

   File /opt/openstack/neutron/neutron/agent/l3/namespace_manager.py,
 line 91, in __exit__

 self._cleanup(_ns_prefix, ns_id)

   File /opt/openstack/neutron/neutron/agent/l3/namespace_manager.py,
 line 140, in _cleanup

 ns.delete()

   File /opt/openstack/neutron/neutron/agent/l3/dvr_fip_ns.py, line 147,
 in delete

 raise TypeError(ss)

 TypeError: ss



 It seems that the fip namespace is not processed at startup of L3 agent,
 and the cleanup is removing the namespace…

 It is also removing the interface to local dvr router connection so… VM
 has no internet access with floating IP:

 Command: ['ip', 'netns', 'exec',
 'fip-8223e12e-837b-49d4-9793-63603fccbc9f', 'ip', 'link', 'del',
 u'fpr-fe517b4b-d']



 If the interface inside the fip namespace is not deleted, the VM has full
 internet access without any downtime.



 Can we consider it a bug? I guess it is something in the
 startup/full-sync logic since the log is saying:


 /opt/openstack/data/neutron/external/pids/8223e12e-837b-49d4-9793-63603fccbc9f.pid



 I think yes, we can consider it a bug. Can you please file one? I can take
 and probably fix it.





 And after finishing the sync loop, the fip namespace is deleted…



 Regards,

 Artur



 *From:* Kevin Benton [mailto:blak...@gmail.com]
 *Sent:* Thursday, August 6, 2015 7:40 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron][dvr] Removing fip namespace when
 restarting L3 agent.



 Can you try setting the following to False:


 https://github.com/openstack/neutron/blob/dc0944f2d4e347922054bba679ba7f5d1ae6ffe2/etc/l3_agent.ini#L97



 On Wed, Aug 5, 2015 at 3:36 PM, Korzeniewski, Artur 
 artur.korzeniew...@intel.com wrote:

 Hi all,

 During testing of Neutron upgrades, I have found that restarting the L3
 agent in DVR mode is causing the VM network downtime for configured
 floating IP.

 The lockdown is visible when pinging the VM from external network, 2-3
 pings are lost.

 The responsible place in code is:

 DVR: destroy fip ns: fip-8223e12e-837b-49d4-9793-63603fccbc9f from
 (pid=156888) delete
 /opt/openstack/neutron/neutron/agent/l3/dvr_fip_ns.py:164



 Can someone explain why the fip namespace is deleted? Can we workout the
 situation, when there is no downtime of VM access?



 Artur Korzeniewski

 

 Intel Technology Poland sp. z o.o.

 KRS 101882

 ul. Slowackiego 173, 80-298 Gdansk




 

Re: [openstack-dev] [fuel][puppet] The state of collaboration: 7 weeks

2015-08-07 Thread Jay Pipes
Dmitry, just a quick note to say I'm very pleased to see the progress 
from the Fuel team in collaborating with the Puppet OpenStack upstream 
team. Great to see puppet-librarian-simple starting to reduce the 
duplication and forking of Puppet modules in Fuel.


Kudos.

Best,
-jay

On 08/03/2015 10:19 PM, Dmitry Borodaenko wrote:

Two weeks ago we had a discussion of where things stand in the
collaboration
between Fuel and Puppet OpenStack projects [0].

[0]
http://lists.openstack.org/pipermail/openstack-dev/2015-July/069925.html

Things that were good at that point:
- number of proposed patch sets

Things that needed further improvement:
- proposed patch sets to merged commits ratio
- stuck commits
- quality of code reviews
- participation in weekly IRC meetings

The patch sets metric has continued to improve: the share of patch
sets pushed by Fuel developers has increased from 11.5% to 17.4%.

The patch sets to commits ratio doesn't look that good: only two
commits were merged last week. This number is too small to be
statistically significant, but it does increase the ratio from 13.5
to 19, which is a large change in the wrong direction. The average for
Puppet OpenStack last week was 7.6; that's what we should be aiming
for.

The stuck commits problem was addressed by introducing the
Disagreement section into the review inbox [1] and bringing up the
problematic commits in the weekly meetings. Since last week, there
were no commits from Fuel team that were held back by disagreements
in review for more than a few days.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-July/070072.html

This means that it's now up to Fuel team to push higher quality
patch sets that can be merged faster.

The number of reviews done for commits in Puppet OpenStack by Fuel team has
jumped from 6.4% to 21.8% over the past two weeks. Comparing the +/- and
disagreements ratios of top Mirantis reviewers over 90 and 30 days also
shows consistent improvement:

Bogdan Dobrelia: 64.5% -> 67.2% (disagreements 9.2% -> 4.9%)
Denis Egorenko: 97.7% -> 97% (disagreements 16.3% -> 12.1%)
Alex Schultz: 81.2% -> 80% (disagreements 25% -> 20%)
Sergey Kolekonov: 95.5% -> 91.7% (disagreements 13.6% -> 8.3%)
Sergii Golovatiuk: 100% -> 100% (disagreements 36.4% -> 33.3%)
Ivan Berezovskiy: 100% -> 100% (disagreements 15.8% -> 0%)
Vasyl Saienko: 100% -> 100% (disagreements 20% -> 16.7%)

Bogdan is setting an excellent example with his #6 position at 61
reviews in last 30 days. It will take some time for others to catch
up, but at least they're all moving in the right direction (more -1's
with less disagreements).

As I already mentioned, participation in weekly IRC meetings has also
improved:

Jul-14: 1 of 16 participants, 10 of 295 lines
Jul-21: 5 of 17 participants, 89 of 291 lines
Jul-28: 7 of 18 participants, 26 of 193 lines

Finally, this week we've also made huge progress on getting rid of forked
copies of upstream modules [2]. We've landed the initial support for
puppet-librarian-simple and dropped in-place forks of 3 modules (stdlib,
concat, inifile), with 7 more modules lined up [3].

[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-July/069906.html
[3]
http://lists.openstack.org/pipermail/openstack-dev/2015-July/071106.html

Kudos to Alex for preparing this patch series and writing an
excellent guide on how to work with modules managed by
puppet-librarian-simple [4].

[4] https://wiki.openstack.org/wiki/Fuel/Library_and_Upstream_Modules

To sum up, Fuel team has made a lot of progress over the past two
weeks in most areas, however patch sets to commits ratio remains the
most important problem and has seen no improvement so far.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][tripleo] Deprecating config-internal

2015-08-07 Thread James Slagle
On Fri, Aug 7, 2015 at 11:08 AM, Dan Prince dpri...@redhat.com wrote:
 On Fri, 2015-08-07 at 14:21 +, Steven Dake (stdake) wrote:
 James and Dan,

 During the ansible-multi spec process that James Slagle reviewed,
 there was a serious commitment by the Kolla core team to maintain
 config-internal, pretty much for the tripleo use case.  We didn’t
 want to leave our partner projects in the lurch and at the time
 Ryan/Ian’s implementation of tripleo containers were based upon
 config-internal.  It would be immensely helpful for Kolla if we could
 deprecate that model during l3, and I think Dan’s judgement is to use
 config-external (with some additional beefing up of some of the
 containers like snmp+ceilometer compute plus possibly some other
 minor solvable requirements).

 Correct. I'm heavily leaning towards using config-external assuming we
 can make it support use of multiple config files, and then have a way
 to tie that into starting the service with the same files (neutron ml2
 agent for example uses multiple configs)


 Can I get a general ack from the tripleo community that deprecating
 config-internal is a-ok so we can just remove it before being stuck
 with it for Liberty?

 ++ from me

I'm going to defer to others on this one and support their consensus,
given that I haven't been able to be as closely involved with it as I
would have liked. I'll leave it up to Dan, Ryan, and Ian to provide
the right input here. From a cursory glance, config-external sounds
like the right move to me.


   I don’t want to deprecate something we committed to supporting if
 there is still a requirement from the tripleo community to maintain it,
 but it would make our lives a lot easier, and thus far the
 config-internal case is really only for TripleO.

 Comments welcome.

 Thanks!
 -steve

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] installation of requirements not possible because of wrong pip version

2015-08-07 Thread Robert Collins
I don't know why Nova has a requirement expressed on pip; since
requirements.txt is evaluated by pip, it's too late. Does Nova actually
consume pip itself?
On 8 Aug 2015 8:31 am, Christian Berendt christ...@berendt.io wrote:

 According to requirements.txt we require pip>=6.0. Trying to install the
 requirements for nova with pip 6.1.1 is not possible at the moment because
 of the following issue:

 $ virtualenv .venv
 $ source .venv/bin/activate
 $ pip install -r requirements.txt
 You are using pip version 6.1.1, however version 7.1.0 is available.
 You should consider upgrading via the 'pip install --upgrade pip' command.
 Double requirement given: Routes!=2.0,>=1.12.3 (from -r requirements.txt
 (line 14)) (already in Routes!=2.0,!=2.1,>=1.12.3 (from -r requirements.txt
 (line 13)), name='Routes')

 It looks like pip 6.1.1 cannot handle the following 2 lines in
 requirements.txt:

 Routes>=1.12.3,!=2.0,!=2.1;python_version=='2.7'
 Routes>=1.12.3,!=2.0;python_version!='2.7'

 After upgrading pip to the latest available version (7.1.0) with pip
 install --upgrade pip everything is working like expected.
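The two Routes lines use environment markers, which older pip releases don't evaluate (they see two conflicting requirements for the same project). A toy sketch of what a marker-aware pip does with them — this is an illustrative evaluator handling only these two marker forms, not pip's implementation:

```python
import re
import sys

def marker_matches(marker):
    # Handles only the python_version==/!= markers used in the Routes lines.
    m = re.match(r"python_version(==|!=)'([^']+)'$", marker)
    if not m:
        raise ValueError('unsupported marker: %s' % marker)
    op, wanted = m.groups()
    current = '%d.%d' % sys.version_info[:2]
    return (current == wanted) if op == '==' else (current != wanted)

requirements = [
    ("Routes>=1.12.3,!=2.0,!=2.1", "python_version=='2.7'"),
    ("Routes>=1.12.3,!=2.0", "python_version!='2.7'"),
]

# A marker-aware pip keeps exactly one line per interpreter; pip 6.1.1
# kept both, hence the "Double requirement given" error for Routes.
active = [req for req, marker in requirements if marker_matches(marker)]
```

On any interpreter exactly one of the two markers is true, so `active` always holds a single Routes requirement.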

 Does this mean that we have to require at least pip>=7.1.0 in the global
 requirements?

 Christian.

 --
 Christian Berendt
 Cloud Solution Architect
 Mail: bere...@b1-systems.de

 B1 Systems GmbH
 Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
 GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] stable is hosed

2015-08-07 Thread Matt Riedemann
Well it's a Friday afternoon so you know what that means, emails about 
the stable branches being all busted to pieces in the gate.


Tracking in the usual place:

https://etherpad.openstack.org/p/stable-tracker

Since things are especially fun the last two days I figured it was time 
for a notification to the -dev list.


Both are basically Juno issues.

1. The large ops job is busted because of some uncapped dependencies in 
python-openstackclient 1.0.1.


https://bugs.launchpad.net/openstack-gate/+bug/1482350

The fun thing here is g-r is capping osc<=1.0.1 and there is already a 
1.0.2 version of osc, so we can't simply cap osc in a 1.0.2 and raise 
that in g-r for stable/juno (we didn't leave ourselves any room for bug 
fixes).


We talked about an osc 1.0.1.1 but pbr>=0.11 won't allow that because it 
breaks semver.


The normal dsvm jobs are OK because they install cinder and cinder 
installs the dependencies that satisfy everything so we don't hit the 
osc issue.  The large ops job doesn't use cinder so it doesn't install it.


Options:

a) Somehow use a 1.0.1.post1 version for osc.  Would require input from 
lifeless.


b) Install cinder in the large ops job on stable/juno.

c) Disable the large ops job for stable/juno.
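On option (a): in PEP 440 terms, a post-release sorts after its base version but before the next patch release, which is what makes 1.0.1.post1 attractive even though pbr rejects a four-number 1.0.1.1 as non-semver. A hand-rolled model of that ordering (not pbr/pip code, and it only parses the `.postN` form used here):

```python
# Toy PEP 440 ordering for the versions in play: 1.0.1 < 1.0.1.post1 < 1.0.2.
def pep440_key(version):
    base, _, post = version.partition('.post')
    release = tuple(int(p) for p in base.split('.'))
    # a version without a post-release sorts before any post-release of it
    return release + ((int(post),) if post else (-1,))
```

So a fixed osc released as 1.0.1.post1 slots strictly between 1.0.1 and 1.0.2 in the ordering, which is why the option needs input from lifeless rather than a plain version bump.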


2. grenade on kilo blows up because python-neutronclient 2.3.12 caps 
oslo.serialization at <=1.2.0, keystonemiddleware 1.5.2 is getting 
pulled in which pulls in oslo.serialization 1.4.0 and things fall apart.


https://bugs.launchpad.net/python-neutronclient/+bug/1482758

I'm having a hard time unwinding this one since it's a grenade job.  I 
know the failures line up with the neutronclient 2.3.12 release which 
caps requirements on stable/juno:


https://review.openstack.org/#/c/204654/.

Need some help here.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Puppet] [kolla] Deploying OpenStack with Puppet modules on Docker with Heat

2015-08-07 Thread Emilien Macchi


On 08/05/2015 02:33 PM, Ryan Hallisey wrote:
 Tagging kolla so the kolla community also sees it.
 Pardon the top posting.
 
 -Ryan
 
 - Original Message -
 From: Dan Prince dpri...@redhat.com
 To: openstack-dev openstack-dev@lists.openstack.org
 Sent: Wednesday, August 5, 2015 2:29:13 PM
 Subject: [openstack-dev] [TripleO] [Puppet] Deploying OpenStack with Puppet 
 modules on Docker with Heat
 
 Hi,
 
 There is a lot of interest in getting support for container based
 deployment within TripleO and many different ideas and opinions on how
 to go about doing that.
 
 One idea on the table is to use Heat to help orchestrate the deployment
 of docker containers. This would work similar to our tripleo-heat
 -templates implementation except that when using docker you would swap
 in a nested stack template that would configure containers on
 baremetal. We've even got a nice example that shows what a
 containerized TripleO overcloud might look like here [1]. The approach
 outlines how you might use kolla docker containers alongside of the
 tripleo-heat-templates to do this sort of deployment.
 
 This is all cool stuff but one area of concern is how we do the actual
 configuration of the containers. The above implementation relies on
 passing environment variables into kolla built docker containers which
 then self configure all the required config files and start the
 service. This sounds like a start... but creating (and maintaining)
 another from scratch OpenStack configuration tool isn't high on my list
 of things to spend time on. Sure there is already a kolla community
 helping to build and maintain this configuration tooling (mostly
 thinking config files here) but this sounds a bit like what tripleo
 -image-elements initially tried to do and it turns out there are much
 more capable configuration tools out there.
 
 Since we are already using a good bit of Puppet in tripleo-heat
 -templates the idea came up that we would try to configure Docker
 containers using Puppet. Again, here there are several ideas in the
 Puppet community with regards to how docker might best be configured
 with Puppet. Keeping those in mind we've been throwing some ideas out
 on an etherpad here [2] that describes using Heat for orchestration,
 Puppet for configuration, and Kolla docker images for containers.
 
 A quick outline of the approach is:
 
 -Extend the heat-container-agent [3] that runs os-collect-config and
 all the required hooks we require for deployment. This includes
 docker-compose, bash scripts, and Puppet. NOTE: As described in the etherpad
 I've taken to using DIB to build this container. I found this to be
 faster from a TripleO development baseline.
 
 -To create config files the heat-container-agent would run a puppet
 manifest for a given role and generate a directory tree of config files
 (/var/lib/etc-data for example).

I have a few questions:

* when do you run puppet? before starting the container so we can
generate a configuration file?
* so iiuc, Puppet is only here to generate OpenStack configuration files
and we noop all other operations. Right?
* from a Puppet perspective, I really prefer this approach:
https://review.openstack.org/#/c/197172/ where we assign tags to
resources so we can easily modify/drop Puppet resources using our
modules. What do you think (for long term)?
* how do you manage multiple configuration files? (e.g. if a controller is
running multiple nova-api containers with different configuration files)

Once I understand a bit more where we go, I'll be happy to help to make
it happen in our modules, we already have folks deploying our modules
with containers, I guess we can just talk and collaborate here.
Also, I'll be interested to bringing containers support in our CI, but
that's a next step :-)

Thanks Dan for this work,

 
 -We then run a docker-compose software deployment that mounts those
 configuration file(s) into a read only volume and uses them to start
 the containerized service.
 
 The approach could look something like this [4]. This nice thing about
 this is that it requires no modification to OpenStack Puppet modules.
 We can use those today, as-is. Additionally, although Puppet runs in
 the agent container we've created a mechanism to set all the resources
 to noop mode except for those that generate config files. And lastly,
 we can use exactly the same role manifest for docker that we do for
 baremetal. Lots of re-use here... and although we are disabling a lot
 of Puppet functionality in setting all the non-config resources to noop
 the Kolla containers already do some of that stuff for us (starting
 services, etc.).
 
 
 
 All that said (and trying to keep this short) we've still got a bit of
 work to do around wiring up externally created config files to kolla
 build docker containers. A couple of issues are:
 
 -The external config file mechanism for Kolla containers only seems to
 support a single config file. Some services (Neutron) can have 

Re: [openstack-dev] [puppet][keystone] To always use or not use domain name?

2015-08-07 Thread Rich Megginson

On 08/05/2015 07:48 PM, Gilles Dubreuil wrote:


On 06/08/15 10:16, Jamie Lennox wrote:


- Original Message -

From: Adam Young ayo...@redhat.com
To: openstack-dev@lists.openstack.org
Sent: Thursday, August 6, 2015 1:03:55 AM
Subject: Re: [openstack-dev] [puppet][keystone] To always use or not use domain 
name?

On 08/05/2015 08:16 AM, Gilles Dubreuil wrote:

While working on trust provider for the Keystone (V3) puppet module, a
question about using domain names came up.

Shall we allow or not to use names without specifying the domain name in
the resource call?

I have this trust case involving a trustor user, a trustee user and a
project.

For each user/project the domain can be explicit (mandatory):

trustor_name::domain_name

or implicit (optional):

trustor_name[::domain_name]

If a domain isn't specified the domain name can be assumed (intuited)
from either the default domain or the domain of the corresponding
object, if unique among all domains.
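For illustration, the two title forms could be parsed like this (a hypothetical helper, not puppet-keystone's actual parser; `None` stands in for the "intuit the domain" fallback described above):

```python
# 'name::domain' is the explicit form; a bare name is the implicit form,
# where the caller must decide which default domain to assume.
def parse_title(title, default_domain=None):
    name, sep, domain = title.rpartition('::')
    if sep:
        return name, domain      # explicit: last '::' segment is the domain
    return title, default_domain  # implicit: fall back to a default
```

The ambiguity discussed in the thread lives entirely in the implicit branch: whatever value fills in `default_domain` is a guess on the user's behalf.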

If you are specifying project by name, you must specify domain either
via name or id.  If you specify project by ID, you run the risk of
conflicting if you provide a domain specifier (ID or name).


Although allowing to not use the domain might seems easier at first, I
believe it could lead to confusion and errors. The latter being harder
for the user to detect.

Therefore it might be better to always pass the domain information.

Probably a good idea, as it will catch if you are making some
assumption.  I.e., I say DomainX ProjectQ but I mean DomainQ ProjectQ.

Agreed. Like it or not domains are a major part of using the v3 api and if you 
want to use project names and user names we should enforce that domains are 
provided.
Particularly at the puppet level (dealing with users who should understand this 
stuff) anything that tries to guess what the user means is a bad idea and going 
to lead to confusion when it breaks.


I totally agree.

Thanks for participating


Would someone who actually has to deploy/maintain puppet manifests and 
supporting code chime in here?  How do you feel about having to ensure 
that every domain scoped Keystone resource name must end in ::domain?  
At the very least, if not using domains, and not changing the default 
domain, you would have to ensure something::Default _everywhere_ - and 
I do mean everywhere - every user and tenant name use, including in 
keystone_user_role, and in other, higher level classes/defines that 
refer to keystone users and tenants.


Anyone?

I also wonder how the Ansible folks are handling this, as they move to 
support domains and other Keystone v3 features in openstack-ansible code?






I believe using the full domain name approach is better.
But it's difficult to tell because puppet-keystone and
puppet-openstacklib now rely on python-openstackclient (OSC) to
interface with Keystone. The fact that we can use OSC defaults
(OS_DEFAULT_DOMAIN or equivalent to set the default domain) doesn't
necessarily make it the best approach. For example, the hard-coded value [1]
makes it flaky.

[1]
https://github.com/openstack/python-openstackclient/blob/master/openstackclient/shell.py#L40

To help determine the approach to use, any feedback will be appreciated.

Thanks,
Gilles


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stable is hosed

2015-08-07 Thread Matt Riedemann



On 8/7/2015 3:52 PM, Matt Riedemann wrote:

Well it's a Friday afternoon so you know what that means, emails about
the stable branches being all busted to pieces in the gate.

Tracking in the usual place:

https://etherpad.openstack.org/p/stable-tracker

Since things are especially fun the last two days I figured it was time
for a notification to the -dev list.

Both are basically Juno issues.

1. The large ops job is busted because of some uncapped dependencies in
python-openstackclient 1.0.1.

https://bugs.launchpad.net/openstack-gate/+bug/1482350

The fun thing here is g-r is capping osc<=1.0.1 and there is already a
1.0.2 version of osc, so we can't simply cap osc in a 1.0.2 and raise
that in g-r for stable/juno (we didn't leave ourselves any room for bug
fixes).

We talked about an osc 1.0.1.1 but pbr>=0.11 won't allow that because it
breaks semver.

The normal dsvm jobs are OK because they install cinder and cinder
installs the dependencies that satisfy everything so we don't hit the
osc issue.  The large ops job doesn't use cinder so it doesn't install it.

Options:

a) Somehow use a 1.0.1.post1 version for osc.  Would require input from
lifeless.

b) Install cinder in the large ops job on stable/juno.

c) Disable the large ops job for stable/juno.


2. grenade on kilo blows up because python-neutronclient 2.3.12 caps
oslo.serialization at <=1.2.0, keystonemiddleware 1.5.2 is getting
pulled in which pulls in oslo.serialization 1.4.0 and things fall apart.

https://bugs.launchpad.net/python-neutronclient/+bug/1482758

I'm having a hard time unwinding this one since it's a grenade job.  I
know the failures line up with the neutronclient 2.3.12 release which
caps requirements on stable/juno:

https://review.openstack.org/#/c/204654/.


OK, the problem is that neutronclient doesn't get updated on the new 
kilo side of grenade past 2.3.12 because it satisfies the requirement 
for kilo:


https://github.com/openstack/requirements/blob/stable/kilo/global-requirements.txt#L132

python-neutronclient>=2.3.11,<2.5.0

But since neutronclient 2.3.12 caps things for juno, we can't use it on 
kilo due to the conflict and then kaboom.
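The conflict can be modeled as an empty intersection between the two packages' effective constraints on oslo.serialization (a toy sketch with the bounds described above, not pip's resolver):

```python
# Hypothetical helper: check one version against simple bound constraints.
def satisfies(version, op, bound):
    return {'<=': version <= bound, '>=': version >= bound}[op]

constraints = [
    ('python-neutronclient 2.3.12', '<=', (1, 2, 0)),  # the juno cap
    ('keystonemiddleware 1.5.2',    '>=', (1, 4, 0)),  # what it pulls in
]

installed = (1, 4, 0)  # the oslo.serialization that actually gets installed
violators = [name for name, op, bound in constraints
             if not satisfies(installed, op, bound)]
```

No single oslo.serialization version satisfies both bounds, so whichever one lands, one of the two packages is broken — hence the grenade kaboom.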




Need some help here.



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-07 Thread Adam Young

On 08/06/2015 07:09 PM, Dolph Mathews wrote:


On Thu, Aug 6, 2015 at 11:25 AM, Lance Bragstad lbrags...@gmail.com wrote:




On Thu, Aug 6, 2015 at 10:47 AM, Dolph Mathews
dolph.math...@gmail.com wrote:


On Wed, Aug 5, 2015 at 6:54 PM, Jamie Lennox
jamielen...@redhat.com wrote:



- Original Message -
 From: David Lyle dkly...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Sent: Thursday, August 6, 2015 5:52:40 AM
 Subject: Re: [openstack-dev] [Keystone] [Horizon]
Federated Login

 Forcing Horizon to duplicate Keystone settings just makes 
everything much
 harder to configure and much more fragile. Exposing
whitelisted, or all,
 IdPs makes much more sense.

 On Wed, Aug 5, 2015 at 1:33 PM, Dolph Mathews dolph.math...@gmail.com wrote:



 On Wed, Aug 5, 2015 at 1:02 PM, Steve Martinelli steve...@ca.ibm.com wrote:





 Some folks said that they'd prefer not to list all
associated idps, which i
 can understand.
 Why?

So the case i heard and i think is fairly reasonable is
providing corporate logins to a public cloud. Taking the
canonical coke/pepsi example if i'm coke, i get asked to
login to this public cloud, i then have to scroll through
all the providers to find the COKE.COM domain and i can
see for example that PEPSI.COM is also providing logins
to this cloud.
Ignoring the corporate privacy implications this list has
the potential to get long. Think about for example how you
can do a corporate login to gmail, you certainly don't
pick from a list of auth providers for gmail - there would
be thousands.

My understanding of the usage then would be that coke
would have been provided a (possibly branded) dedicated
horizon that backed onto a public cloud and that i could
then from horizon say that it's only allowed access to the
COKE.COM domain (because the UX for inputting a domain at
login is not great so per customer dashboards i think make
sense) and that for this instance of horizon i want to
show the 3 or 4 login providers that COKE.COM is going
to allow.

Anyway you want to list or whitelist that in keystone is
going to involve some form of IdP tagging system where we
have to say which set of idps we want in this case and i
don't think we should.


That all makes sense, and I was admittedly only thinking of
the private cloud use case. So, I'd like to discuss the public
and private use cases separately:

In a public cloud, is there a real use case for revealing
*any* IdPs publicly? If not, the entire list should be made
private using policy.json, which we already support today.


The user would be required to know the id of the IdP in which they
want to federate with, right?


As a federated end user in a public cloud, I'd be happy to have a 
custom URL / bookmark for my IdP / domain (like 
http://customer-x.cloud.example.com/ or 
http://cloud.example.com/customer-x) that I need to know to kickoff 
the correct federated handshake with my IdP using a single button 
press (Login).



Are we going about this backwards?  Wouldn't it make more sense to tell 
a new customer:


you get https://coke.cloudprovider.net

And have that hard coded to a UI.

For larger organizations, I suspect it would make more sense that the UI 
should be owned by Coke, and run on a server managed by Coke, and talk 
to multiple OpenStack instances.


The UI should not be Provider specific, but consumer specific.






In a private cloud, is there a real use case for fine-grained
public/private attributes per IdP? (The stated use case was
for a public cloud.) It seems the default behavior should be
that horizon fetches the entire list from keystone.


@David - when you add a new IdP to the university network
are you having to provide a new mapping each time? I know
the CERN answer to this with websso was to essentially
group many IdPs behind 

Re: [openstack-dev] [ceilometer] [aodh] upgrade path

2015-08-07 Thread gord chung



On 07/08/2015 3:49 AM, Chris Dent wrote:


Despite our conversation in the meeting yesterday[1] I still remain a
bit confused about the upgrade path from alarming-in-ceilometer to
alarming provided by aodh and the availability of the older code in
released liberty.

Much of my confusion can probably be resolved by knowing the answer to
this question:

If someone installs aodh on a machine that already has ceilometer on it
and turns off ceilometer-alarm-notifier and ceilometer-alarm-evaluator
(in favor of aodh-notifier and aodh-evaluator) will they be able to run
those aodh services against their existing ceilometer.conf files[2]?


it also depends on how you're consuming ceilometer. if you're installing 
ceilometer services via packages, there will never be aodh or 
ceilometer-alarm... just one alarming service. once we get aodh 
released/packaged, from a package pov, the current, deprecated code will 
be inaccessible.


http://eavesdrop.openstack.org/irclogs/%23openstack-meeting/%23openstack-meeting.2015-08-06.log.html#t2015-08-06T15:46:17 



--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] What are problems of Distributed SNAT?

2015-08-07 Thread Miyagishi, Takanori
Hi, Brian Haley

 This is a huge increase in IP consumption from today though, which is only
 [number of tenants], I'm not sure most deployers have [tenants * compute
 nodes] IPs at their disposal.  And in the worst-case this becomes Assign
 a Floating IP to all VMs.

Before suggest my proposal, I considered following proposal:
 * True Distributed SNAT
 * Carrier Grade NAT
 
These proposal listed on etherpad. These can solve IP consumption problem.
However, these need configure on upstream router to send to the correct node.
It is no precedent in Neutron.
And these have other problems:
 * True Distributed SNAT
   I don't understand how to distinguish l3-agents that share the same IP address.
   If using BGP, in my understanding, BGP can't establish peers with the same IP address.
 
 * Carrier Grade NAT
   This proposal use ISP shared address on external network.
   There is also affect on floating IP.
   
Therefore, I suggested my proposal.
In the ideal case, this proposal can reduce IP consumption.
However, in some cases this proposal incurs high IP consumption, as you mentioned.
So I also think this proposal is not good.
And I can't think of a better proposal right away...
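For comparison, the back-of-the-envelope IP math behind the concern, with invented cloud sizes (the counts below are illustrative, not from any deployment):

```python
# Worst-case external IP consumption of the two SNAT schemes.
tenants, compute_nodes = 200, 100

centralized_snat = tenants                     # today: one SNAT IP per tenant router
per_tenant_per_node = tenants * compute_nodes  # proposed: one SNAT IP per tenant per node
```

With these numbers the proposed scheme needs 20,000 external IPs against 200 today — the hundredfold blow-up Brian points out below.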

I considered network performance and avoiding a single point of failure.
Then, I considered the case of operating using only floating IPs.
If we can tune the default value of enable_snat, we can limit SNAT.
Therefore we can operate using only floating IPs.

Alternative proposals for Distributed SNAT can be considered again in or after
Mitaka; then I'd like to add a tuning parameter for SNAT (enable_snat) in Liberty.


Best regards,
Takanori Miyagishi

 -Original Message-
 From: Brian Haley [mailto:brian.ha...@hp.com]
 Sent: Saturday, July 25, 2015 2:56 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron] What are problems of Distributed
 SNAT?
 
 On 07/24/2015 08:17 AM, Miyagishi, Takanori wrote:
  Dear Carl,
 
  Thank you for your information!
 
  I checked the etherpad, and I propose a new idea of Distributed SNAT
 implementation.
  Following figure is my idea, Shared SNAT Address per Tenant per Compute
 Node.
 
 I think this is the One IP Address per Router per Host item in the etherpad
 since each distributed router will consume one IP.
 
  [Figure: one external network (10.0.0.0/24) shared by TenantA and TenantB;
  each compute node runs per-tenant routers (R) attached to a per-tenant
  SNAT namespace holding one external IP (TenantA: 10.0.0.100, TenantB:
  10.0.0.101 on Compute Node 1), with the same layout repeated on Compute
  Nodes 2..N, each with its own VMs.]
 
  * R = Router
 
 This picture doesn't look right, there should only be one Router for
 TenantA even with two VMs on a compute node.  You can verify this by looking
 at how many qrouter namespaces are created, but I only see one on my system.
 
  In this idea, SNAT will be created for each tenant.
  So, IP consumption in this case is:
    [number of tenants] * [number of compute nodes]
 
  Therefore, this idea can be a reduction in IP consumption compared to
  creating one per router per compute node.
 
 This is a huge increase in IP consumption from today though, which is only
 [number of tenants], I'm not sure most deployers have [tenants * compute
 nodes] IPs at their disposal.  And in the worst-case this becomes Assign
 a Floating IP to all VMs.
 
 -Brian
 
  And, can be avoid security concerns because don't share SNAT between
 tenant.
 
  I'd like to implement SNAT for Liberty cycle with this idea.
 
  Best regards,
  Takanori Miyagishi
 
  -Original Message-
  From: Carl Baldwin [mailto:c...@ecbaldwin.net]
  Sent: Tuesday, July 07, 2015 2:29 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Neutron] What are problems of
  Distributed SNAT?
 
  Hi,
 
  There was some discussion a while back on this subject.  Some
  alternatives were captured on etherpad [1] with pros and cons.  Sorry
  for the delay in responding.  The initial implementation of DVR went
  with centralized SNAT to reduce the scope of the effort and because
  of a lack of consensus around which alternative to choose.
 
  Carl
 
  

Re: [openstack-dev] [TripleO] Moving instack upstream

2015-08-07 Thread Dan Prince
On Thu, 2015-08-06 at 16:54 -0400, James Slagle wrote:
 On Thu, Aug 6, 2015 at 8:12 AM, Dan Prince dpri...@redhat.com 
 wrote:
 
  
  One more side effect is that I think it also means we no longer 
  have
  the capability to test arbitrary Zuul refspecs for projects like 
  Heat,
  Neutron, Nova, or Ironic in our undercloud CI jobs. We've relied on 
  the
  source-repositories element to do this for us in the undercloud and
  since most of the instack stuff uses packages I think we would 
  lose
  this capability.
  
  I'm all for testing with packages mind you... would just like to 
  see us
  build packages for any projects that have Zuul refspecs inline, 
  create
  a per job repo, and then use that to build out the resulting 
  instack
  undercloud.
  
  This to me is the biggest loss in our initial switch to instack
  undercloud for CI. Perhaps there is a middle ground here where 
  instack
  (which used to support tripleo-image-elements itself) could still
  support use of the source-repositories element in one CI job until 
  we
  get our package building processes up to speed?
 
 Isn't this what's happening at line 89 in
 https://review.openstack.org/#/c/185151/6/toci_devtest_instack.sh ?
 
 Or would $ZUUL_CHANGES not be populated when check-experimental is 
 run?

Oh. Cool. I missed that one... so we are probably good here then. Thanks
for pointing it out, James.

Dan

 
 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] glance_store and glance

2015-08-07 Thread Nicolas Trangez
On Fri, 2015-08-07 at 01:21 -0400, Nikhil Komawar wrote:
 3. ease of addition of newer drivers for the developers -- drivers 
 are
 only being removed since.

The OpenStack team at Scality developed a Glance Store driver for RING,
currently out-of-tree to get to a first fully-working version (
https://github.com/scality/scality-glance-store), but with the
intention from day 1 to propose this driver for inclusion in the
upstream glance-store project, following the standard OpenStack
processes.

Whilst as you say new drivers haven't been proposed since the split,
this could be explained by the fact the way Glance is designed now
explicitly supports out-of-tree drivers (in glance-store or elsewhere)?

We have a couple of questions related to this proposal:

- Would folding glance-store back into glance have any impact on the
process (or reluctance) to include new third-party drivers?

- Will the glance-store core reviewer team be merged back into glance,
focusing on the store drivers?

Nicolas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova][ci] Tintri Cinder CI failures after Nova change

2015-08-07 Thread Skyler Berg
As Matt found, the problem was out-of-date requirements. Going
forward I would advise any third-party CI that is not spinning up a new
VM for every job to purge all Python packages after each run. This makes
devstack reinstall everything, avoiding this type of problem. Although
the problem was with global requirements, only our CI was failing
because everyone else was getting the newest version of each package
each time.
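One way to implement that purge on a long-lived CI slave is a cleanup hook along these lines (a sketch, not Tintri's actual job script; it assumes `pip` is on PATH and ignores corner cases like editable installs):

```shell
#!/bin/sh
# Purge every pip-installed package so the next devstack run
# reinstalls from scratch against the current requirements pins.

# List installed distribution names, one per line.
purge_list() {
    pip freeze | cut -d'=' -f1
}

# Uninstall everything purge_list reports.
purge_all() {
    purge_list | xargs -r -n1 pip uninstall -y
}

# Preview only; call purge_all in the real post-job cleanup.
purge_list
```

Running `purge_all` between jobs trades a few minutes of reinstall time for never testing against stale dependency versions.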

We are still failing on one test (test_ec2_instance_run.InstanceRunTest)
and we are not sure what the cause is.

Here is a log from a recent run:
http://openstack-ci.tintri.com/tintri/refs-changes-88-210588-1/

Here is the failing test:

ft1.280: setUpClass
(tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest)_StringException:
Traceback (most recent call last):
  File "tempest/test.py", line 272, in setUpClass
    six.reraise(etype, value, trace)
  File "tempest/test.py", line 265, in setUpClass
    cls.resource_setup()
  File "tempest/thirdparty/boto/test_ec2_instance_run.py", line 91, in resource_setup
    state = wait.state_wait(_state, "available")
  File "tempest/thirdparty/boto/utils/wait.py", line 51, in state_wait
    (dtime, final_set, status))
AssertionError: State change timeout exceeded! (196s) While waiting for
set(['available']) at failed

From n-crt log:

2015-08-07 15:21:58.237 ERROR oslo_messaging.rpc.dispatcher
[req-d4ce0001-0754-461f-8fdc-57908baf88f7
tempest-InstanceRunTest-1110235717 tempest-InstanceRunTest-357311946]
Exception during message handling:
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher
Traceback (most recent call last):
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher   File
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py,
line 142, in _dispatch_and_reply
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher
executor_callback))
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher   File
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py,
line 186, in _dispatch
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher
executor_callback)
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher   File
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py,
line 129, in _do_dispatch
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher
result = func(ctxt, **new_args)
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher   File
/opt/stack/nova/nova/cert/manager.py, line 70, in decrypt_text
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher
return crypto.decrypt_text(project_id, base64.b64decode(text))
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher   File
/opt/stack/nova/nova/crypto.py, line 200, in decrypt_text
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher
return priv_key.decrypt(text, padding.PKCS1v15())
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher   File
/usr/lib/python2.7/site-packages/cryptography/hazmat/backends/openssl/rsa.py,
line 536, in decrypt
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher
return _enc_dec_rsa(self._backend, self, ciphertext, padding)
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher   File
/usr/lib/python2.7/site-packages/cryptography/hazmat/backends/openssl/rsa.py,
line 76, in _enc_dec_rsa
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher
return _enc_dec_rsa_pkey_ctx(backend, key, data, padding_enum)
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher   File
/usr/lib/python2.7/site-packages/cryptography/hazmat/backends/openssl/rsa.py,
line 105, in _enc_dec_rsa_pkey_ctx
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher
_handle_rsa_enc_dec_error(backend, key)
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher   File
/usr/lib/python2.7/site-packages/cryptography/hazmat/backends/openssl/rsa.py,
line 145, in _handle_rsa_enc_dec_error
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher
assert errors[0].reason in decoding_errors
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher
AssertionError
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher

On 08/07/2015 09:01, Matt Riedemann wrote:
 
 
 On 8/7/2015 8:38 AM, Matt Riedemann wrote:
 
 
 On 8/6/2015 3:30 PM, Skyler Berg wrote:
 After the change cleanup NovaObjectDictCompat from virtual_interface
 [1] was merged into Nova on the morning of August 5th, Tintri's CI for
 Cinder started failing 13 test cases that involve a volume being
 attached to an instance [2].
 
 I have verified that the tests fail with the above mentioned change and
 pass when running against the previous commit.
 
 If anyone knows why this patch is causing an issue or is experiencing
 similar problems, please let me know.
 
 In the meantime, expect Tintri's CI to be either down or reporting
 failures until a solution is found.
 
 [1] 

[openstack-dev] [murano] Questions on creating and importing HOT packages

2015-08-07 Thread Vahid S Hashemian
Hello,

These two subjects have probably been discussed already, but I couldn't 
find any info on them, so I would very much appreciate it if someone could 
clarify.

Why is the HOT template that is fed to the package-create command renamed to 
template.yaml? Is there any issue with keeping the original name?

Why is HOT syntax validation deferred until package import time? Why not 
do it when creating the package?
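For context on the first question: a Murano HOT package is essentially a zip whose `manifest.yaml` points at the template, and the template is stored under the fixed name `template.yaml` — presumably so the manifest never needs a per-package file name. A rough stdlib sketch of what package-create produces (the manifest fields are from memory of the HOT package format and may not match exactly):

```python
import zipfile

def build_hot_package(hot_template_path, name, out_path):
    """Bundle a HOT template into a Murano-style package zip.

    The template is stored under the fixed name template.yaml so the
    manifest can always reference it without configuration.
    """
    manifest = (
        "Format: Heat.HOT/1.0\n"
        "Type: Application\n"
        "FullName: {0}\n"
        "Name: {0}\n".format(name)
    )
    with zipfile.ZipFile(out_path, "w") as pkg:
        pkg.write(hot_template_path, arcname="template.yaml")
        pkg.writestr("manifest.yaml", manifest)
    return out_path
```

Under that layout, keeping the original file name would require recording it in the manifest instead — possible, but a bigger change than the rename.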

Thanks.

Regards,
-
Vahid Hashemian, Ph.D.
Advisory Software Engineer, IBM Cloud Labs


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] 7.0 Soft Code Freeze in action

2015-08-07 Thread Eugene Bogdanov

Hello everyone,

I'd like to inform you that the Soft Code Freeze[1] for the 7.0 release is now 
officially effective. From now on we stop accepting fixes for Medium 
priority bugs and focus on Critical and High priority bugs. There are 
plenty of them now, and we need your help fixing them so we approach 
Hard Code Freeze in good shape.


With Soft Code Freeze effective, we still have 7.0 blueprints[2] that 
are not implemented. Feature Leads, Component leads and Core Reviewers - 
please help with sorting this out:


1. Please ensure that the status of your blueprints is up to date.
2. Blueprints that are not implemented should be moved to the next
   milestone.
3. If a blueprint is obsolete, the right update is to set its definition to
   Obsolete with no milestone target selected.

Thank you for your continued contributions.

--
EugeneB

[1] Soft Code Freeze definition: 
https://wiki.openstack.org/wiki/Fuel/Soft_Code_Freeze

[2] https://blueprints.launchpad.net/fuel/7.0.x

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][ptl] ATC passes for big tent projects won't be generated if repos are predictively listed

2015-08-07 Thread Steven Dake (stdake)
Hey folks,

I just wanted to give a heads up to fellow new members of the big tent (and the 
TC) whose projects don't have the correct repo listed today in the projects.yaml 
file in the governance repo: according to fungi [1], those projects' contributors 
will not receive ATC passes.  According to clarkb in the same log, no rename is 
planned at the moment.  My solution to this temporary problem is to correct 
projects.yaml to reference the existing Kolla repository, and not predictively 
expect a rename to happen ahead of the ATC pass generation.  I hope the 
foundation can take into account a rename and ATC pass generation so [2] can be 
reverted with appropriate review time for the TC.

The specific quote I am speaking of is

[15:46:26]  fungi sdake_: clarkb: yes, it has everything to do with repos 
being listed (correctly, not predictively) in the governance repo

[15:48:00]  fungi so if it's called stackforge/foo and your tc-recognized 
project team has stackforge/foo listed in the governance repo in a deliverable 
it will count. if it's stackforge/foo in gerrit and the governance repo has it 
listed as openstack/foo because you're expecting it to eventually be correct 
after the repo is renamed, that won't help
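For anyone making the same fix, the entry in question in the governance repo's reference/projects.yaml looks roughly like this (an illustrative fragment from memory, not the exact file); the point is that the `repos` value must name the repository as Gerrit knows it today:

```yaml
kolla:
  ptl: Steven Dake (sdake)
  deliverables:
    kolla:
      repos:
        # Must match the current Gerrit name; predictively listing
        # the post-rename openstack/kolla here means no ATC credit.
        - stackforge/kolla
```

After the actual Gerrit rename lands, the entry gets updated to the openstack/ namespace in the same file.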


[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2015-08-07.log.html
[2] https://review.openstack.org/210636

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [oslo] troubling passing of unit tests on broken code

2015-08-07 Thread Davanum Srinivas
Mike,

I edited my nova tox.ini like so:
http://paste.openstack.org/show/412245/

and it seems to be working for me:
http://paste.openstack.org/show/412246/

-- dims

On Fri, Aug 7, 2015 at 6:42 PM, Mike Bayer mba...@redhat.com wrote:

 Just a heads up that this recently merged code is wrong:


 https://review.openstack.org/#/c/192760/14/nova/tests/unit/db/test_migrations.py,cm

 and here it is failing tests on my local env, as it does on my CI, as
 would be expected, there's a lot more if I keep it running:

 http://paste.openstack.org/show/412236/

 However, utterly weirdly, all those tests *pass* with the same versions of
 everything in the gate:

 http://paste.openstack.org/show/412236/


 I have no idea why this is.  This might be on the oslo.db side within the
 test_migrations logic, not really sure.  If someone feels like digging
 in, that would be great.

 The failure occurs with both Alembic 0.7.7 and Alembic 0.8 as yet
 unreleased.  I have a feeling that releasing Alembic 0.8 may or may not
 bump this failure to be more widespread, just because of its apparent
 heisenbuggy nature, and I'm really hoping to release 0.8 next week.  It was
 supposed to be this week but I got sidetracked.



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] change of day for API subteam meeting?

2015-08-07 Thread Alex Xu

 On Aug 8, 2015, at 12:48 AM, Sean Dague s...@dague.net wrote:
 
 Fridays have been kind of a rough day for the Nova API subteam. It's
 already basically the weekend for folks in AP, and the weekend is right
 around the corner for everyone else.
 
 I'd like to suggest we shift the meeting to Monday or Tuesday in the
 same timeslot (currently 12:00 UTC). Either works for me. Having this
 earlier in the week I also hope keeps the attention on the items we need
 to be looking at over the course of the week.

Either works for me.

 
 If current regular attendees could speak up about day preference, please
 do. We'll make a change if this is going to work for folks.
 
   -Sean
 
 -- 
 Sean Dague
 http://dague.net
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stable is hosed

2015-08-07 Thread Matt Riedemann



On 8/7/2015 5:27 PM, Kyle Mestery wrote:

On Fri, Aug 7, 2015 at 4:09 PM, Matt Riedemann
mrie...@linux.vnet.ibm.com mailto:mrie...@linux.vnet.ibm.com wrote:



On 8/7/2015 3:52 PM, Matt Riedemann wrote:

Well it's a Friday afternoon so you know what that means, emails
about
the stable branches being all busted to pieces in the gate.

Tracking in the usual place:

https://etherpad.openstack.org/p/stable-tracker

Since things are especially fun the last two days I figured it
was time
for a notification to the -dev list.

Both are basically Juno issues.

1. The large ops job is busted because of some uncapped
dependencies in
python-openstackclient 1.0.1.

https://bugs.launchpad.net/openstack-gate/+bug/1482350

The fun thing here is g-r is capping osc<=1.0.1 and there is
already a
1.0.2 version of osc, so we can't simply cap osc in a 1.0.2 and
raise
that in g-r for stable/juno (we didn't leave ourselves any room
for bug
fixes).

We talked about an osc 1.0.1.1 but pbr>=0.11 won't allow that
because it
breaks semver.

The normal dsvm jobs are OK because they install cinder and cinder
installs the dependencies that satisfy everything so we don't
hit the
osc issue.  The large ops job doesn't use cinder so it doesn't
install it.

Options:

a) Somehow use a 1.0.1.post1 version for osc.  Would require
input from
lifeless.

b) Install cinder in the large ops job on stable/juno.

c) Disable the large ops job for stable/juno.
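The version-number bind behind option (a) can be checked mechanically. A sketch using the `packaging` library (the same version semantics pip and pbr build on): a PEP 440 post-release slots between the capped release and the already-published next patch, while the rejected four-component `1.0.1.1` form parses fine under PEP 440 but is not valid semver, which is what pbr enforces at release time.

```python
from packaging.version import Version

v_base = Version("1.0.1")        # the release g-r caps at
v_post = Version("1.0.1.post1")  # option (a): a post-release
v_next = Version("1.0.2")        # already taken on PyPI

# A post-release sorts after 1.0.1 but before 1.0.2, so a stable-branch
# fix could ship as 1.0.1.post1 without touching the 1.0.2 slot.
assert v_base < v_post < v_next

# The four-component form also parses and sorts the same way under
# PEP 440 -- pbr rejects it on semver grounds, not ordering ones.
assert v_base < Version("1.0.1.1") < v_next
```

This is why leaving no room between a cap and the next released version is so painful: the only escape hatches are post-releases or raising the cap.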


2. grenade on kilo blows up because python-neutronclient 2.3.12 caps
oslo.serialization at <=1.2.0, keystonemiddleware 1.5.2 is getting
pulled in which pulls in oslo.serialization 1.4.0 and things
fall apart.

https://bugs.launchpad.net/python-neutronclient/+bug/1482758

I'm having a hard time unwinding this one since it's a grenade
job.  I
know the failures line up with the neutronclient 2.3.12 release
which
caps requirements on stable/juno:

https://review.openstack.org/#/c/204654/.


OK, the problem is that neutronclient doesn't get updated on the new
kilo side of grenade past 2.3.12 because it satisfies the
requirement for kilo:


https://github.com/openstack/requirements/blob/stable/kilo/global-requirements.txt#L132

python-neutronclient>=2.3.11,<2.5.0

But since neutronclient 2.3.12 caps things for juno, we can't use it
on kilo due to the conflict and then kaboom.
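The trap is easy to see with pip's own specifier logic (a sketch; version numbers are the ones from the thread): 2.3.12 satisfies the kilo range, so the upgrade side of grenade has no reason to move off it, yet 2.3.12's own Juno cap on oslo.serialization then collides with what keystonemiddleware 1.5.2 pulls in.

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# stable/kilo global-requirements pin for python-neutronclient.
kilo_range = SpecifierSet(">=2.3.11,<2.5.0")

# The Juno-capped 2.3.12 release is inside the kilo range, so it is
# never upgraded during grenade's juno->kilo transition...
assert Version("2.3.12") in kilo_range

# ...but its own cap on oslo.serialization rejects the 1.4.0 that
# keystonemiddleware 1.5.2 drags in -- hence the kaboom.
juno_serialization_cap = SpecifierSet("<=1.2.0")
assert Version("1.4.0") not in juno_serialization_cap
```

In other words, a release cut to cap one branch's requirements leaks into the next branch whenever it still satisfies that branch's range.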


So, 2.3.12 was explicitly for Juno, and not for Kilo. In fact, the
existing 2.3.11 client for Juno was failing due to some other oslo
library (I'd have to dig it out). It seems we want Kilo requirements to
be this:

python-neutronclient>=2.4.0,<2.5.0


adam_g and I talked about this in IRC as a solution, but I want to avoid 
raising the minimum required version of a library in stable, mostly in 
case that screws up packagers that are frozen for stable releases and 
aren't shipping newer versions of libraries as long as the old minimum 
version satisfied the code dependencies.


Since there are no code issues requiring bumping the minimum required 
version on stable/kilo, just our dep processing issues, I'd really like 
to avoid that.


However, at this point I'm not sure what other alternatives there are - 
kind of fried from looking at this stuff for two days.




I won't be able to submit a patch which does this for a few more hours,
if someone beats me to it, please copy me on the patch and/or reply on
this thread.

Thanks for digging this one out Matt!

Kyle



Need some help here.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stable is hosed

2015-08-07 Thread Matt Riedemann



On 8/7/2015 7:41 PM, Matt Riedemann wrote:



On 8/7/2015 5:27 PM, Kyle Mestery wrote:

On Fri, Aug 7, 2015 at 4:09 PM, Matt Riedemann
mrie...@linux.vnet.ibm.com mailto:mrie...@linux.vnet.ibm.com wrote:



On 8/7/2015 3:52 PM, Matt Riedemann wrote:

Well it's a Friday afternoon so you know what that means, emails
about
the stable branches being all busted to pieces in the gate.

Tracking in the usual place:

https://etherpad.openstack.org/p/stable-tracker

Since things are especially fun the last two days I figured it
was time
for a notification to the -dev list.

Both are basically Juno issues.

1. The large ops job is busted because of some uncapped
dependencies in
python-openstackclient 1.0.1.

https://bugs.launchpad.net/openstack-gate/+bug/1482350

The fun thing here is g-r is capping osc<=1.0.1 and there is
already a
1.0.2 version of osc, so we can't simply cap osc in a 1.0.2 and
raise
that in g-r for stable/juno (we didn't leave ourselves any room
for bug
fixes).

We talked about an osc 1.0.1.1 but pbr>=0.11 won't allow that
because it
breaks semver.

The normal dsvm jobs are OK because they install cinder and
cinder
installs the dependencies that satisfy everything so we don't
hit the
osc issue.  The large ops job doesn't use cinder so it doesn't
install it.

Options:

a) Somehow use a 1.0.1.post1 version for osc.  Would require
input from
lifeless.

b) Install cinder in the large ops job on stable/juno.

c) Disable the large ops job for stable/juno.


2. grenade on kilo blows up because python-neutronclient
2.3.12 caps
oslo.serialization at <=1.2.0, keystonemiddleware 1.5.2 is
getting
pulled in which pulls in oslo.serialization 1.4.0 and things
fall apart.

https://bugs.launchpad.net/python-neutronclient/+bug/1482758

I'm having a hard time unwinding this one since it's a grenade
job.  I
know the failures line up with the neutronclient 2.3.12 release
which
caps requirements on stable/juno:

https://review.openstack.org/#/c/204654/.


OK, the problem is that neutronclient doesn't get updated on the new
kilo side of grenade past 2.3.12 because it satisfies the
requirement for kilo:


https://github.com/openstack/requirements/blob/stable/kilo/global-requirements.txt#L132


python-neutronclient>=2.3.11,<2.5.0

But since neutronclient 2.3.12 caps things for juno, we can't use it
on kilo due to the conflict and then kaboom.


So, 2.3.12 was explicitly for Juno, and not for Kilo. In fact, the
existing 2.3.11 client for Juno was failing due to some other oslo
library (I'd have to dig it out). It seems we want Kilo requirements to
be this:

python-neutronclient>=2.4.0,<2.5.0


adam_g and I talked about this in IRC as a solution, but I want to avoid
raising the minimum required version of a library in stable, mostly in
case that screws up packagers that are frozen for stable releases and
aren't shipping newer versions of libraries as long as the old minimum
version satisfied the code dependencies.

Since there are no code issues requiring bumping the minimum required
version on stable/kilo, just our dep processing issues, I'd really like
to avoid that.

However, at this point I'm not sure what other alternatives there are -
kind of fried from looking at this stuff for two days.



I won't be able to submit a patch which does this for a few more hours,
if someone beats me to it, please copy me on the patch and/or reply on
this thread.

Thanks for digging this one out Matt!

Kyle



Need some help here.


--

Thanks,

Matt Riedemann



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





What I do know is we need to be better about bumping the minor version 
in a release rather than the patch version all of the time - we've kind 
of painted ourselves into a corner a few times here with leaving no 
wiggle room for patch releases on stable branches.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [nova] change of day for API subteam meeting?

2015-08-07 Thread Ken'ichi Ohmichi
Nice idea :-)

On Sat, Aug 8, 2015 at 9:05, Alex Xu hejie...@intel.com wrote:


  On Aug 8, 2015, at 12:48 AM, Sean Dague s...@dague.net wrote:
 
  Fridays have been kind of a rough day for the Nova API subteam. It's
  already basically the weekend for folks in AP, and the weekend is right
  around the corner for everyone else.
 
  I'd like to suggest we shift the meeting to Monday or Tuesday in the
  same timeslot (currently 12:00 UTC). Either works for me. Having this
  earlier in the week I also hope keeps the attention on the items we need
  to be looking at over the course of the week.

 Either works for me.

 
  If current regular attendees could speak up about day preference, please
  do. We'll make a change if this is going to work for folks.
 
-Sean
 
  --
  Sean Dague
  http://dague.net
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [oslo] troubling passing of unit tests on broken code

2015-08-07 Thread Mike Bayer



On 8/7/15 8:00 PM, Davanum Srinivas wrote:

Mike,

I edited my nova tox.ini like so:
http://paste.openstack.org/show/412245/

and it seems to be working for me:
http://paste.openstack.org/show/412246/
OK, I can see why the gate passes: the error is exposed only by Alembic 
0.8, because the Column() in the remove operation has no Table 
associated with it.


But the code is still wrong and should be fixed.
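A minimal illustration of the mismatch Mike describes (my reconstruction with plain SQLAlchemy, not the actual nova test code): a free-standing `Column()` carries no parent `Table`, whereas the same column reached through a `Table` does — and newer Alembic relies on that parent when diffing a remove-column operation.

```python
import sqlalchemy as sa

# A Column created in isolation -- e.g. listed directly in a migration
# test's exclusion data -- is not attached to any Table:
free = sa.Column('deleted', sa.Integer)
assert free.table is None

# The same column declared on a Table carries its parent, which is what
# the comparison logic needs when it builds a "remove column" op:
meta = sa.MetaData()
vif = sa.Table('virtual_interfaces', meta,
               sa.Column('deleted', sa.Integer))
assert vif.c.deleted.table is vif
```

So the fix is to reference columns through their tables rather than constructing detached `Column()` objects, independent of which Alembic version happens to tolerate the detached form.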





-- dims

On Fri, Aug 7, 2015 at 6:42 PM, Mike Bayer mba...@redhat.com 
mailto:mba...@redhat.com wrote:


Just a heads up that this recently merged code is wrong:


https://review.openstack.org/#/c/192760/14/nova/tests/unit/db/test_migrations.py,cm

and here it is failing tests on my local env, as it does on my CI,
as would be expected, there's a lot more if I keep it running:

http://paste.openstack.org/show/412236/

However, utterly weirdly, all those tests *pass* with the same
versions of everything in the gate:

http://paste.openstack.org/show/412236/


I have no idea why this is.  This might be on the oslo.db side
within the test_migrations logic, not really sure. If someone
feels like digging in, that would be great.

The failure occurs with both Alembic 0.7.7 and Alembic 0.8 as yet
unreleased.  I have a feeling that releasing Alembic 0.8 may or
may not bump this failure to be more widespread, just because of
its apparent heisenbuggy nature, and I'm really hoping to release
0.8 next week.  It was supposed to be this week but I got sidetracked.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Davanum Srinivas :: https://twitter.com/dims


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [oslo] troubling passing of unit tests on broken code

2015-08-07 Thread Davanum Srinivas
Mike,

Sounds great! +1 to fix the code.

-- dims

On Fri, Aug 7, 2015 at 9:55 PM, Mike Bayer mba...@redhat.com wrote:



 On 8/7/15 8:00 PM, Davanum Srinivas wrote:

 Mike,

 I edited my nova tox.ini like so:
 http://paste.openstack.org/show/412245/

 and it seems to be working for me:
 http://paste.openstack.org/show/412246/

 OK I can see why the gate passes, the error is exposed only by Alembic
 0.8, because the Column() in the remove operation has no Table associated
 with it.

 But the code is still wrong and should be fixed.




 -- dims

 On Fri, Aug 7, 2015 at 6:42 PM, Mike Bayer mba...@redhat.com wrote:

 Just a heads up that this recently merged code is wrong:


 https://review.openstack.org/#/c/192760/14/nova/tests/unit/db/test_migrations.py,cm

 and here it is failing tests on my local env, as it does on my CI, as
 would be expected, there's a lot more if I keep it running:

 http://paste.openstack.org/show/412236/

 However, utterly weirdly, all those tests *pass* with the same versions
 of everything in the gate:

 http://paste.openstack.org/show/412236/


 I have no idea why this is.  This might be on the oslo.db side within the
  test_migrations logic, not really sure.  If someone feels like digging
 in, that would be great.

 The failure occurs with both Alembic 0.7.7 and Alembic 0.8 as yet
 unreleased.  I have a feeling that releasing Alembic 0.8 may or may not
 bump this failure to be more widespread, just because of its apparent
 heisenbuggy nature, and I'm really hoping to release 0.8 next week.  It was
 supposed to be this week but I got sidetracked.



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Davanum Srinivas :: https://twitter.com/dims


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Merge back of QoS and pecan branches

2015-08-07 Thread Kyle Mestery
As we're beginning to wind down Liberty-3 in a few weeks, I'd like to
present the rough, high level plan to merge back the QoS and pecan branches
into Neutron. Ihar has been doing a great job shepherding the QoS work, and
I believe once we're done landing the final patches this weekend [1], we
can look to merge this branch back next week.

The pecan branch [2] has a few patches left, but it's also my understanding
from prior patches we'll need some additional testing done. Kevin, what
else is left here? I'd like to see if we could merge this branch back the
following week. I'd also like to hear your comments on enabling the pecan
WSGI layer by default for Liberty and what additional testing is needed (if
any) to make that happen.

Thanks!
Kyle

[1]
https://review.openstack.org/#/q/project:openstack/neutron+branch:feature/qos+status:open,n,z
[2]
https://review.openstack.org/#/q/project:openstack/neutron+branch:feature/pecan+status:open,n,z
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ptl] ATC passes for big tent projects won't be generated if repos are predictively listed

2015-08-07 Thread Jeremy Stanley
On 2015-08-07 23:35:33 + (+), Steven Dake (stdake) wrote:
 I just wanted to give a heads-up to fellow new members of the big
 tent (and the TC): for projects that don't have the correct repo
 listed today in the projects.yaml file in the governance repo,
 according to fungi [1], their contributors will not receive ATC
 passes.
 According to clarkb in the same log, no rename is planned at the
 moment. My solution to this temporary problem is to correct
 projects.yaml to reference the existing Kolla repository, and not
 predictively expect a rename to happen ahead of the ATC pass
 generation. I hope the foundation can take into account a rename
 and ATC pass generation so [2] can be reverted with appropriate
 review time for the TC.
[...]

I've worked out a fix to remap repos which only differ by namespace,
which is probably the majority. https://review.openstack.org/210685

I'll attempt to check proposed renames which also change project
shortnames and work around them manually, but that's no guarantee.
If you have contributions only to projects which are misnamed in the
governance repo, and as such don't receive a code in this coming
week's batch, please let me know so I can try to solve whatever
additional mappings may be missing.
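The actual fix is the review linked above; as an illustrative sketch only (the
function name and data are invented here, not taken from that change), the core
idea of "remap repos which only differ by namespace" can be shown as matching
old and current repo names by their shortname, i.e. the part after the last
slash:

```python
# Hypothetical sketch: map old repo names (e.g. a stackforge/ namespace)
# to current ones that share the same shortname. Illustrative only; see
# the linked review for the real change.
def remap_by_namespace(old_names, current_names):
    """Return {old_name: current_name} for names differing only by namespace."""
    by_short = {}
    for name in current_names:
        short = name.rsplit("/", 1)[-1]
        by_short.setdefault(short, name)

    mapping = {}
    for old in old_names:
        short = old.rsplit("/", 1)[-1]
        new = by_short.get(short)
        if new and new != old:
            mapping[old] = new
    return mapping

if __name__ == "__main__":
    current = ["openstack/kolla", "openstack/neutron"]
    old = ["stackforge/kolla", "openstack/neutron", "stackforge/gone"]
    print(remap_by_namespace(old, current))
    # → {'stackforge/kolla': 'openstack/kolla'}
```

Renames that also change the shortname fall outside this mapping, which is
exactly the case Jeremy says needs manual handling.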

With over 750 repositories in our Gerrit now and at least some in
the middle of being renamed at any given point in time, it's likely
there will be some gaps. Thanks for your patience and understanding!
-- 
Jeremy Stanley




[openstack-dev] [Congress] Mid-cycle sprint summary

2015-08-07 Thread Tim Hinrichs
Hi all,

We just finished up a great 2 day sprint focusing on a new distributed
architecture for Congress.  Details can be found in the etherpad:

https://etherpad.openstack.org/p/congress-liberty-sprint

Here's the summary.

1. Architecture.  Each datasource driver will run in its own process; each
policy engine will run in its own process; the API will run in its own
process.  All processes will communicate using oslo-messaging.  We decided
to remove the functionality for creating/deleting datasources, since that
functionality will be handled by the operating system.
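As a rough sketch of the process-per-component shape described above — using
Python's multiprocessing and a queue in place of oslo.messaging, with invented
component logic and data, so this is an illustration of the topology rather
than Congress's actual design:

```python
# Sketch: datasource driver and policy engine in separate processes,
# communicating over a queue (standing in for oslo.messaging).
import multiprocessing as mp

def find_violations(rows):
    # Toy "policy": flag any row whose state is not ACTIVE.
    return [r for r in rows if r["state"] != "ACTIVE"]

def datasource_driver(bus):
    # Each datasource driver runs in its own process and publishes rows.
    bus.put([{"id": 1, "state": "ACTIVE"}, {"id": 2, "state": "ERROR"}])

def policy_engine(bus, results):
    # The policy engine consumes published rows and evaluates policy.
    rows = bus.get()
    results.put(find_violations(rows))

if __name__ == "__main__":
    bus, results = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=datasource_driver, args=(bus,)),
             mp.Process(target=policy_engine, args=(bus, results))]
    for p in procs:
        p.start()
    print(results.get())  # → [{'id': 2, 'state': 'ERROR'}]
    for p in procs:
        p.join()
```

With real oslo.messaging, the queue becomes RPC/notification topics, and
starting or stopping a datasource is just starting or stopping its process —
which is why create/delete-datasource APIs can be dropped in favor of the
operating system.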

2. Blueprints.  The blueprints we created all start with "dist-".  Please
sign up if you're interested.  If you attended the sprint and volunteered
for a blueprint but did not sign up, please do so.

https://blueprints.launchpad.net/congress

3. Code commits.  We're making changes in place on master.  The plan is to
make most of the changes without breaking existing functionality.  Then
there will be one or two changes at the end that swap out the current,
single-process architecture for the new, multi-process architecture.

4. Timelines.  We're hoping to have the basic functionality in place by
Tokyo.  We will release the current architecture for Liberty and the new
architecture for M.

Thanks for a great sprint, everyone!

Let me know if you have questions.
Tim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev