Re: [openstack-dev] [Glance][Oslo] Pulling glance.store out of glance. Where should it live?

2013-12-23 Thread Flavio Percoco

On 21/12/13 00:41 -0500, Jay Pipes wrote:

On 12/20/2013 10:42 AM, Flavio Percoco wrote:

Greetings,

In the last Glance meeting, it was proposed to pull out glance's
stores[0] code into its own package. There are a couple of other
scenarios where using this code is necessary and it could also be
useful for other consumers outside OpenStack itself.

That being said, it's not clear where this new library should live:

   1) Oslo: it's the place for common code and incubation, although
   this code has been pretty stable over the last release.

   2) glance.stores under the Image program: as said in #1, the API
   has been pretty stable - and it falls perfectly into what Glance's
   program covers.


What about:

3) Cinder

Cinder is for block storage. Images are just a bunch of blocks, and 
all the store drivers do is take a chunked stream of input blocks and 
store them to disk/swift/s3/rbd/toaster and stream those blocks back 
out again.


So, perhaps the most appropriate place for this is in Cinder-land.
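[The store-driver contract described above - take a chunked stream of input blocks, persist it, and stream the blocks back out - can be sketched roughly as follows. This is an illustrative sketch only; the names are not the actual glance.store API.]

```python
import abc


class StoreDriver(abc.ABC):
    """Illustrative sketch of a store driver: consume a chunked
    stream of blocks, persist it, and stream it back out again."""

    @abc.abstractmethod
    def add(self, image_id, data_iter):
        """Persist an iterator of byte chunks under image_id."""

    @abc.abstractmethod
    def get(self, image_id, chunk_size=4096):
        """Yield the stored bytes back as chunks of chunk_size."""


class InMemoryStore(StoreDriver):
    """Toy backend; real drivers target disk/swift/s3/rbd/etc."""

    def __init__(self):
        self._blobs = {}

    def add(self, image_id, data_iter):
        self._blobs[image_id] = b''.join(data_iter)

    def get(self, image_id, chunk_size=4096):
        blob = self._blobs[image_id]
        for offset in range(0, len(blob), chunk_size):
            yield blob[offset:offset + chunk_size]
```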


This is an interesting suggestion.

I wouldn't mind putting it there, although I still prefer it to be
under Glance, for historical reasons and because the Glance team knows
that code.

How would it work if this lib falls under Block Storage program?

Should the glance team be added as core contributors of this project?
or Just some of them interested in contributing / reviewing those
patches?

Thanks for the suggestion. I'd like John and Mark to weigh in too.

Cheers,
FF

--
@flaper87
Flavio Percoco


pgpoUXdfJuR51.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel-dev] [Fuel] [Oslo] Add APP-NAME (RFC5424) for Oslo syslog logging

2013-12-23 Thread Bogdan Dobrelya
On 12/21/2013 11:20 AM, Sergey Vasilenko wrote:
 Do you propose to patch the system Python in solution #2?
 
In solution 2, as you can see from
https://review.openstack.org/#/c/63094/11/openstack/common/log.py, we
define a class
RFCSysLogHandler(logging.handlers.SysLogHandler) which overrides
its base __init__ and format methods, adding a prefix to the MSG part
of the message (the prefix represents the APP-NAME field according to
RFC 5424):
msg = self.binary_name + ' ' + msg

Thus, without any modifications to the system Python, that change would
backport the 'ident' solution from Python 3.3
(http://hg.python.org/cpython/rev/6baa90fa2b6d), but for Oslo logging
only. I don't see any compatibility issues here.
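[A minimal standalone sketch of that override, simplified from the review linked above - the real patch integrates with Oslo's log module and derives binary_name itself:]

```python
import logging
import logging.handlers


class RFCSysLogHandler(logging.handlers.SysLogHandler):
    """Prefix the MSG part with an APP-NAME field (RFC 5424) so syslog
    receivers can tell which OpenStack binary emitted the record,
    without patching the system Python."""

    def __init__(self, binary_name, *args, **kwargs):
        self.binary_name = binary_name
        super(RFCSysLogHandler, self).__init__(*args, **kwargs)

    def format(self, record):
        # Let the base class render the record, then prepend APP-NAME.
        msg = super(RFCSysLogHandler, self).format(record)
        return self.binary_name + ' ' + msg
```

[This mirrors the 'ident' parameter that landed in Python 3.3's SysLogHandler, scoped to Oslo logging only.]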

In solution 1, we use a context adapter instead, to allow users to
extend log_format with %(binary_name)s as well. I'm not sure this
method is good, though, because there might be some issues with
middlewares (e.g. this trace: http://paste.openstack.org/show/55519/),
but it would probably be OK for Icehouse; I'm going to test it on
DevStack as well.

Stackers, please don't hesitate to discuss which method we should use
to ensure RFC 5424 is honored, at least (and at last) for the syslog
handler.

-- 
Best regards,
Bogdan Dobrelya,
Researcher TechLead, Mirantis, Inc.
+38 (066) 051 07 53
Skype bogdando_at_yahoo.com
Irc #bogdando
38, Lenina ave.
Kharkov, Ukraine
www.mirantis.com
www.mirantis.ru
bdobre...@mirantis.com



Re: [openstack-dev] [Glance][Oslo] Pulling glance.store out of glance. Where should it live?

2013-12-23 Thread Thierry Carrez
Flavio Percoco wrote:
 On 21/12/13 00:41 -0500, Jay Pipes wrote:
 Cinder is for block storage. Images are just a bunch of blocks, and
 all the store drivers do is take a chunked stream of input blocks and
 store them to disk/swift/s3/rbd/toaster and stream those blocks back
 out again.

 So, perhaps the most appropriate place for this is in Cinder-land.
 
 This is an interesting suggestion.
 
 I wouldn't mind putting it there, although I still prefer it to be
 under glance for historical reasons and because Glance team knows that
 code.
 
 How would it work if this lib falls under Block Storage program?
 
 Should the glance team be added as core contributors of this project?
 or Just some of them interested in contributing / reviewing those
 patches?
 
 Thanks for the suggestion. I'd like John and Mark to weigh in too.

Programs are a team of people on a specific mission. If the stores code
is maintained by a completely separate group (glance devs), then it
doesn't belong in the Block Storage program... unless the Cinder devs
intend to adopt it over the long run (and therefore the contributors of
the Block Storage program form a happy family rather than two separate
groups).

Depending on the exact nature of the couple of other scenarios where
using this code is necessary, I think it would either belong in Glance
or in Oslo.

-- 
Thierry Carrez (ttx)





[openstack-dev] [qa] changes to interacting with Tempest config

2013-12-23 Thread Sean Dague
One of the longer-standing issues we've had in Tempest is that Tempest
is a set of tests without its own UI (we just use tox to call testr),
yet it *needs* a config file. This creates some interesting
chicken-and-egg issues about when we read the config file. There has
been a long evolution here, and this kept causing issues as we
refactored toward cleaner ways to integrate with testr.

As of Friday we're taking a different approach, which will hopefully
make everything much simpler. We've implemented a proxy class in front
of our oslo config object which lazily evaluates the config file the
first time someone asks for an attribute. This lets us get the parsing
entirely out of the class hierarchy of the tests.

For tempest developers and reviewers, here are the things you should be
looking for.

1) No one should call TempestConfig() themselves any more.

The correct way to get the tempest config is:

from tempest import config

CONF = config.CONF

 inside some class ...
if CONF.service_available.neutron:

CONF is actually usable in a lot more places now because of the
evaluation order, which means we're going to be able to move a lot more
of the skip logic into decorators. Yay!
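[The lazy-evaluation pattern described above can be sketched like this. This is a simplified stand-in; the real implementation lives in tempest/config.py and wraps an oslo config object rather than an arbitrary loader:]

```python
class LazyConfigProxy(object):
    """Defer parsing the config file until the first attribute access,
    so importing test modules never requires a config file."""

    def __init__(self, loader):
        self._loader = loader  # callable that parses and returns the real config
        self._conf = None

    def __getattr__(self, name):
        # __getattr__ only fires for attributes not found normally
        # (i.e. anything other than _loader/_conf), so the first real
        # config read triggers the parse, exactly once.
        if self._conf is None:
            self._conf = self._loader()
        return getattr(self._conf, name)
```

[A module can then do CONF = LazyConfigProxy(parse_config) at import time; the file is only read when a test first touches CONF.something.]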

2) Classes should stop having self.config or cls.config

Because of the way setupClass was working, there was a ton of setting
self.config / cls.config in class setup. We should stop that. The
cleanups here are good low hanging fruit for new contributors.

I'm sure there are optimizations to make our proxy object pattern
better -
https://github.com/openstack/tempest/blob/master/tempest/config.py#L738
- comments welcomed there. However, it has at least decoupled config
from the class hierarchy now, so it will be easier to work on the
problems separately instead of coupled.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [Horizon] Support for Django 1.6

2013-12-23 Thread Thomas Goirand
On 12/20/2013 04:39 PM, Matthias Runge wrote:
 On 12/19/2013 04:45 PM, Thomas Goirand wrote:
 Hi,

 Sid has Django 1.6. Is it planned to add support for it? I currently
 don't know what to do with the Horizon package, as it's currently
 broken... :(

 Thomas
 Yes, there are two patches available, one for horizon[1] and one for
 django_openstack_auth[2]
 
 If both are in, we can start gating on django-1.6 as well.
 
 [1] https://review.openstack.org/#/c/58947/
 [2] https://review.openstack.org/#/c/58561/
 
 Matthias

Hi Matthias,

Thanks a lot for these pointers. I tried patching openstack-auth. While
it did work on Wheezy (with Django 1.4), all 80 unit tests are
failing on Sid, with the following error:

ImportError: No module named defaults

while trying to do:

from django.conf.urls.defaults import patterns, url

Is there anything that I missed? Maybe a missing Django python module?

Cheers,

Thomas
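[For reference: django.conf.urls.defaults was deprecated in Django 1.4 and removed in 1.6, which is what produces the ImportError above. A compatibility shim along these lines keeps the code importable on both versions - the names are unchanged, only the module path moved. The final fallback is only there so the sketch runs even without Django installed:]

```python
# 'patterns' and 'url' moved from django.conf.urls.defaults (removed
# in Django 1.6) to django.conf.urls (available since Django 1.4).
try:
    from django.conf.urls import patterns, url          # Django >= 1.4
except ImportError:
    try:
        from django.conf.urls.defaults import patterns, url  # Django < 1.4
    except ImportError:
        patterns = url = None  # Django not installed; shim is illustrative
```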




Re: [openstack-dev] [tempest] negative tests

2013-12-23 Thread Anna Kamyshnikova
Hello!

I'm working on creating tests in tempest according to this etherpad page
https://etherpad.openstack.org/p/icehouse-summit-qa-neutron.

It is mentioned there that we should add negative tests, for example for
floating ips, but as I understand it (according to a comment on
https://bugs.launchpad.net/bugs/1262113) negative tests will be added
automatically. In that case, is work on tests such as
- Negative: create a floating ip specifying a non public network
- Negative: create a floating ip specifying a floating ip address out of
the external network subnet range

- Negative: create a floating ip specifying a floating ip address that is
in use

- Negative: create / update a floating ip address specifying an invalid
internal port

- Negative: create / update a floating ip address specifying an internal
port with no ip address

- Negative: create / update a floating ip with an internal port with
multiple ip addresses, specifying an invalid

- Negative create / associate a floating ip with an internal port with
multiple ip addresses, when the ip address

- Negative: delete an invalid floating ip

- Negative: show a non-existing floating ip

needed or not?
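[For illustration, the first case on the list could take roughly this shape as a negative test. The fake client below is purely a stand-in for tempest's real Neutron clients, and modeling the failure as ValueError is an assumption - the real API returns an error response:]

```python
import unittest


class FakeFloatingIPClient(object):
    """Stand-in for tempest's Neutron client; purely illustrative."""

    EXTERNAL_NETS = {'public-net'}

    def create_floatingip(self, network_id):
        # The real service rejects non-external networks with an error
        # response; we model that here with ValueError.
        if network_id not in self.EXTERNAL_NETS:
            raise ValueError('%s is not an external network' % network_id)
        return {'id': 'fip-1', 'floating_network_id': network_id}


class FloatingIPNegativeTest(unittest.TestCase):
    """Sketch of: create a floating ip specifying a non-public network."""

    def test_create_floating_ip_on_non_public_network(self):
        client = FakeFloatingIPClient()
        self.assertRaises(ValueError,
                          client.create_floatingip, 'private-net')
```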

Ann.


On Mon, Dec 23, 2013 at 2:56 PM, Sean Dague s...@dague.net wrote:

 Please take this to a public list

 On 12/23/2013 03:42 AM, Anna Kamyshnikova wrote:
  Hello!
 
  I'm working on creating tests in tempest according to this etherpad
  page https://etherpad.openstack.org/p/icehouse-summit-qa-neutron.
 
  Here is mentioned that we should be add negative tests, for example, for
  floating ips, but as I understand (according to your comment
  to https://bugs.launchpad.net/bugs/1262113) negative tests will be added
  automatically. In this case, is work on such tests as
  - Negative: create a floating ip specifying a non public network
  - Negative: create a floating ip specifying a floating ip address out of
  the external network subnet range
 
  - Negative: create a floating ip specifying a floating ip address that
  is in use
 
  - Negative: create / update a floating ip address specifying an invalid
  internal port
 
  - Negative: create / update a floating ip address specifying an internal
  port with no ip address
 
  - Negative: create / update a floating ip with an internal port with
  multiple ip addresses, specifying an invalid
 
  - Negative create /assciate a floating ip with an internal port with
  multiple ip addresses, when the ip address
 
  - Negative: delete an invalid floating ip
 
  - Negative: show non existing floating ip
 
   needed or not?
 
  Ann.


 --
 Sean Dague
 http://dague.net




Re: [openstack-dev] [heat] Nomination for heat-core

2013-12-23 Thread Bartosz Górski

Hi all,

I would like to thank you for the nomination, your votes, and the trust
you have placed in me.

I know that with great power comes great responsibility.
I will do my best and I will not let you down.

Thanks,
Bartosz

On 12/19/2013 03:21 AM, Steve Baker wrote:
I would like to nominate Bartosz Górski to be a heat-core reviewer. 
His reviews to date have been valuable and his other contributions to 
the project have shown a sound understanding of how heat works.


Here is his review history:
https://review.openstack.org/#/q/reviewer:bartosz.gorski%2540ntti3.com+project:openstack/heat,n,z

If you are heat-core please reply with your vote.

cheers




Re: [openstack-dev] [Glance][Oslo] Pulling glance.store out of glance. Where should it live?

2013-12-23 Thread Jay Pipes

On 12/23/2013 05:42 AM, Thierry Carrez wrote:

Flavio Percoco wrote:

On 21/12/13 00:41 -0500, Jay Pipes wrote:

Cinder is for block storage. Images are just a bunch of blocks, and
all the store drivers do is take a chunked stream of input blocks and
store them to disk/swift/s3/rbd/toaster and stream those blocks back
out again.

So, perhaps the most appropriate place for this is in Cinder-land.


This is an interesting suggestion.

I wouldn't mind putting it there, although I still prefer it to be
under glance for historical reasons and because Glance team knows that
code.

How would it work if this lib falls under Block Storage program?

Should the glance team be added as core contributors of this project?
or Just some of them interested in contributing / reviewing those
patches?

Thanks for the suggestion. I'd like John and Mark to weigh in too.


Programs are a team of people on a specific mission. If the stores code
is maintained by a completely separate group (glance devs), then it
doesn't belong in the Block Storage program... unless the Cinder devs
intend to adopt it over the long run (and therefore the contributors of
the Block Storage program form a happy family rather than two separate
groups).


Understood. The reason I offered this up as a suggestion is that 
currently Cinder uses the Glance REST API to store and retrieve volume 
snapshots, and it would be more efficient to just give Cinder the 
ability to directly retrieve the blocks from one of the underlying store 
drivers (same goes for Nova's use of Glance). ...and, since the 
glance.store drivers are dealing with blocks, I thought it made more 
sense in Cinder.



Depending on the exact nature of the couple of other scenarios where
using this code is necessary, I think it would either belong in Glance
or in Oslo.


Perhaps something in oslo then. oslo.blockstream? oslo.blockstore?

Best,
-jay



Re: [openstack-dev] [Neutron] Availability of external testing logs

2013-12-23 Thread Jay Pipes

On 12/22/2013 06:30 AM, Salvatore Orlando wrote:

Hi,

The patch: https://review.openstack.org/#/c/63558/ failed mellanox
external testing.
Subsequent patch sets have not been picked up by the mellanox testing
system.

I would like to see why the patch failed the job; if it breaks mellanox
plugin for any reason, I would be happy to fix it. However, the logs are
not publicly accessible.

I would suggest that external jobs should not vote until logs are
publicly accessible, otherwise contributors would have no reason to
understand where the negative vote came from.


+1!

-jay




Re: [openstack-dev] [Glance][Oslo] Pulling glance.store out of glance. Where should it live?

2013-12-23 Thread Flavio Percoco

On 23/12/13 07:57 -0500, Jay Pipes wrote:

On 12/23/2013 05:42 AM, Thierry Carrez wrote:

Flavio Percoco wrote:

On 21/12/13 00:41 -0500, Jay Pipes wrote:

Cinder is for block storage. Images are just a bunch of blocks, and
all the store drivers do is take a chunked stream of input blocks and
store them to disk/swift/s3/rbd/toaster and stream those blocks back
out again.

So, perhaps the most appropriate place for this is in Cinder-land.


This is an interesting suggestion.

I wouldn't mind putting it there, although I still prefer it to be
under glance for historical reasons and because Glance team knows that
code.

How would it work if this lib falls under Block Storage program?

Should the glance team be added as core contributors of this project?
or Just some of them interested in contributing / reviewing those
patches?

Thanks for the suggestion. I'd like John and Mark to weigh in too.


Programs are a team of people on a specific mission. If the stores code
is maintained by a completely separate group (glance devs), then it
doesn't belong in the Block Storage program... unless the Cinder devs
intend to adopt it over the long run (and therefore the contributors of
the Block Storage program form a happy family rather than two separate
groups).


Understood. The reason I offered this up as a suggestion is that 
currently Cinder uses the Glance REST API to store and retrieve volume 
snapshots, and it would be more efficient to just give Cinder the 
ability to directly retrieve the blocks from one of the underlying 
store drivers (same goes for Nova's use of Glance). ...and, since the 
glance.store drivers are dealing with blocks, I thought it made more 
sense in Cinder.



Depending on the exact nature of the couple of other scenarios where
using this code is necessary, I think it would either belong in Glance
or in Oslo.


Perhaps something in oslo then. oslo.blockstream? oslo.blockstore?


What about just oslo.store or oslo.objstore?

I'm leaning towards Oslo as well. I know Mark preferred Glance so I'd
like him to chime in too.

In order to do this, though, we'll need to add some Glance developers
to the group of reviewers of this library at least during the Ith
release cycle. This will help with providing enough reviews. It'll
also help with sharing the knowledge / history about this package.

Cheers,
FF

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Glance][Oslo] Pulling glance.store out of glance. Where should it live?

2013-12-23 Thread Mark Washenberger
On Mon, Dec 23, 2013 at 12:11 AM, Flavio Percoco fla...@redhat.com wrote:

 On 21/12/13 00:41 -0500, Jay Pipes wrote:

 On 12/20/2013 10:42 AM, Flavio Percoco wrote:

 Greetings,

 In the last Glance meeting, it was proposed to pull out glance's
 stores[0] code into its own package. There are a couple of other
 scenarios where using this code is necessary and it could also be
 useful for other consumers outside OpenStack itself.

 That being said, it's not clear where this new library should live in:

1) Oslo: it's the place for common code, incubation, although this
code has been pretty stable in the last release.

2) glance.stores under Image program: As said in #1, the API has
been pretty stable - and it falls perfectly into what Glance's
program covers.


 What about:

 3) Cinder

 Cinder is for block storage. Images are just a bunch of blocks, and all
 the store drivers do is take a chunked stream of input blocks and store
 them to disk/swift/s3/rbd/toaster and stream those blocks back out again.

 So, perhaps the most appropriate place for this is in Cinder-land.


 This is an interesting suggestion.

 I wouldn't mind putting it there, although I still prefer it to be
 under glance for historical reasons and because Glance team knows that
 code.

 How would it work if this lib falls under Block Storage program?

 Should the glance team be added as core contributors of this project?
 or Just some of them interested in contributing / reviewing those
 patches?

 Thanks for the suggestion. I'd like John and Mark to weigh in too.


I think Jay's suggestion makes a lot of sense. I don't know if the Cinder
folks want to take it on, however. I think it's going to be easier, in a
process sense, to just keep it in the Glance/Images program. Oslo doesn't
seem like the right fit to me, just because this already has a clear owner
and, as you said, it doesn't really need an unstable-API cleanup phase (I
know you were not proposing it start out in copy-around mode).




 Cheers,
 FF

 --
 @flaper87
 Flavio Percoco





Re: [openstack-dev] [Glance][Oslo] Pulling glance.store out of glance. Where should it live?

2013-12-23 Thread Mark Washenberger
On Mon, Dec 23, 2013 at 4:57 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 12/23/2013 05:42 AM, Thierry Carrez wrote:

 Flavio Percoco wrote:

 On 21/12/13 00:41 -0500, Jay Pipes wrote:

 Cinder is for block storage. Images are just a bunch of blocks, and
 all the store drivers do is take a chunked stream of input blocks and
 store them to disk/swift/s3/rbd/toaster and stream those blocks back
 out again.

 So, perhaps the most appropriate place for this is in Cinder-land.


 This is an interesting suggestion.

 I wouldn't mind putting it there, although I still prefer it to be
 under glance for historical reasons and because Glance team knows that
 code.

 How would it work if this lib falls under Block Storage program?

 Should the glance team be added as core contributors of this project?
 or Just some of them interested in contributing / reviewing those
 patches?

 Thanks for the suggestion. I'd like John and Mark to weigh in too.


 Programs are a team of people on a specific mission. If the stores code
 is maintained by a completely separate group (glance devs), then it
 doesn't belong in the Block Storage program... unless the Cinder devs
 intend to adopt it over the long run (and therefore the contributors of
 the Block Storage program form a happy family rather than two separate
 groups).


 Understood. The reason I offered this up as a suggestion is that currently
 Cinder uses the Glance REST API to store and retrieve volume snapshots, and
 it would be more efficient to just give Cinder the ability to directly
 retrieve the blocks from one of the underlying store drivers (same goes for
 Nova's use of Glance). ...and, since the glance.store drivers are dealing
 with blocks, I thought it made more sense in Cinder.


True, Cinder and Nova should be talking more directly to the underlying
stores--however their direct interface should probably be through
glanceclient. (Glanceclient could evolve to use the glance.store code I
imagine.)



  Depending on the exact nature of the couple of other scenarios where
 using this code is necessary, I think it would either belong in Glance
 or in Oslo.


 Perhaps something in oslo then. oslo.blockstream? oslo.blockstore?


 Best,
 -jay




Re: [openstack-dev] [Glance][Oslo] Pulling glance.store out of glance. Where should it live?

2013-12-23 Thread Jay Pipes

On 12/23/2013 08:48 AM, Mark Washenberger wrote:




On Mon, Dec 23, 2013 at 4:57 AM, Jay Pipes jaypi...@gmail.com
mailto:jaypi...@gmail.com wrote:

On 12/23/2013 05:42 AM, Thierry Carrez wrote:

Flavio Percoco wrote:

On 21/12/13 00:41 -0500, Jay Pipes wrote:

Cinder is for block storage. Images are just a bunch of
blocks, and
all the store drivers do is take a chunked stream of
input blocks and
store them to disk/swift/s3/rbd/toaster and stream those
blocks back
out again.

So, perhaps the most appropriate place for this is in
Cinder-land.


This is an interesting suggestion.

I wouldn't mind putting it there, although I still prefer it
to be
under glance for historical reasons and because Glance team
knows that
code.

How would it work if this lib falls under Block Storage program?

Should the glance team be added as core contributors of this
project?
or Just some of them interested in contributing / reviewing
those
patches?

Thanks for the suggestion. I'd like John and Mark to weigh
in too.


Programs are a team of people on a specific mission. If the
stores code
is maintained by a completely separate group (glance devs), then it
doesn't belong in the Block Storage program... unless the Cinder
devs
intend to adopt it over the long run (and therefore the
contributors of
the Block Storage program form a happy family rather than two
separate
groups).


Understood. The reason I offered this up as a suggestion is that
currently Cinder uses the Glance REST API to store and retrieve
volume snapshots, and it would be more efficient to just give Cinder
the ability to directly retrieve the blocks from one of the
underlying store drivers (same goes for Nova's use of Glance).
...and, since the glance.store drivers are dealing with blocks, I
thought it made more sense in Cinder.


True, Cinder and Nova should be talking more directly to the underlying
stores--however their direct interface should probably be through
glanceclient. (Glanceclient could evolve to use the glance.store code I
imagine.)


Hmm, that is a very interesting suggestion: glanceclient containing the 
store drivers. I like it. It will be a bit weird, though, having 
glanceclient call the Glance API server to get the storage location 
details, which then calls the glanceclient code to store/retrieve the 
blocks :)


-jay



Re: [openstack-dev] [Glance][Oslo] Pulling glance.store out of glance. Where should it live?

2013-12-23 Thread Flavio Percoco

On 23/12/13 09:00 -0500, Jay Pipes wrote:

On 12/23/2013 08:48 AM, Mark Washenberger wrote:




On Mon, Dec 23, 2013 at 4:57 AM, Jay Pipes jaypi...@gmail.com
mailto:jaypi...@gmail.com wrote:

   On 12/23/2013 05:42 AM, Thierry Carrez wrote:

   Flavio Percoco wrote:

   On 21/12/13 00:41 -0500, Jay Pipes wrote:

   Cinder is for block storage. Images are just a bunch of
   blocks, and
   all the store drivers do is take a chunked stream of
   input blocks and
   store them to disk/swift/s3/rbd/toaster and stream those
   blocks back
   out again.

   So, perhaps the most appropriate place for this is in
   Cinder-land.


   This is an interesting suggestion.

   I wouldn't mind putting it there, although I still prefer it
   to be
   under glance for historical reasons and because Glance team
   knows that
   code.

   How would it work if this lib falls under Block Storage program?

   Should the glance team be added as core contributors of this
   project?
   or Just some of them interested in contributing / reviewing
   those
   patches?

   Thanks for the suggestion. I'd like John and Mark to weigh
   in too.


   Programs are a team of people on a specific mission. If the
   stores code
   is maintained by a completely separate group (glance devs), then it
   doesn't belong in the Block Storage program... unless the Cinder
   devs
   intend to adopt it over the long run (and therefore the
   contributors of
   the Block Storage program form a happy family rather than two
   separate
   groups).


   Understood. The reason I offered this up as a suggestion is that
   currently Cinder uses the Glance REST API to store and retrieve
   volume snapshots, and it would be more efficient to just give Cinder
   the ability to directly retrieve the blocks from one of the
   underlying store drivers (same goes for Nova's use of Glance).
   ...and, since the glance.store drivers are dealing with blocks, I
   thought it made more sense in Cinder.


True, Cinder and Nova should be talking more directly to the underlying
stores--however their direct interface should probably be through
glanceclient. (Glanceclient could evolve to use the glance.store code I
imagine.)


Hmm, that is a very interesting suggestion. glanceclient containing 
the store drivers. I like it. Will be a bit weird, though, having the 
glanceclient call the Glance API server to get the storage location 
details, which then calls the glanceclient code to store/retrieve the 
blocks :)


Exactly. This is part of the original idea: allow Glance, Nova,
glanceclient, and Cinder to interact with the store code.


--
@flaper87
Flavio Percoco




Re: [openstack-dev] [neutron] packet forwarding

2013-12-23 Thread Abbass MAROUNI
Hello Ian,

I found some anti-spoofing rules in ebtables (ebtables -t nat -L) on the
compute host where my router VM is located. These rules are automatically
generated by libvirt for each VM, usually from a preset of rules
(anti-ip-spoofing.xml). Disabling this rule didn't help, as I later found
that there are also some iptables chains on the compute host doing
anti-spoofing filtering (iptables -t filter -L).
So one needs to disable both the libvirt anti-ip-spoofing and the
iptables anti-spoofing.
I disabled the libvirt anti-ip-spoofing by removing the filter from
nova-base (virsh nwfilter-edit nova-base) and manually added a rule to
iptables.

Thanks a lot.
Abbass.



 Randy has it spot on.  The antispoofing rules prevent you from doing this
 in Neutron.  Clearly a router transmits traffic that isn't from it, and
 receives traffic that isn't addressed to it - and the port filtering
 discards them.

 You can disable them for the entire cloud by judiciously tweaking the Nova
 config settings, or if you're using the Nicira plugin you'll find it has
 extensions for modifying firewall behaviour (they could do with porting
 around, or even becoming core, but at the moment they're Nicira-specific).
 --
 Ian.



Re: [openstack-dev] [Nova] All I want for Christmas is one more +2 ...

2013-12-23 Thread Matt Riedemann



On 12/12/2013 8:22 AM, Day, Phil wrote:

Hi Cores,

The “Stop, Rescue, and Delete should give guest a chance to shutdown”
change https://review.openstack.org/#/c/35303/ was approved a couple of
days ago, but failed to merge because the RPC version had moved on.
It's rebased and sitting there with one +2 and a bunch of +1s - it would
be really nice if it could land before it needs another rebase, please.

Thanks

Phil






Since this is happening to others requesting reviews on the mailing 
list, even on patches with several +1s and a +2, and it's way after 
the fact, I'm going to link this:


http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html

Maybe we should update the blurb here also to say 'in IRC' to nix any 
confusion about the mailing list.


https://wiki.openstack.org/wiki/ReviewChecklist#Notes_for_Non-Core_Developers

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Glance][Oslo] Pulling glance.store out of glance. Where should it live?

2013-12-23 Thread Zhi Yan Liu
On Mon, Dec 23, 2013 at 10:26 PM, Flavio Percoco fla...@redhat.com wrote:
 On 23/12/13 09:00 -0500, Jay Pipes wrote:

 On 12/23/2013 08:48 AM, Mark Washenberger wrote:




 On Mon, Dec 23, 2013 at 4:57 AM, Jay Pipes jaypi...@gmail.com
 mailto:jaypi...@gmail.com wrote:

On 12/23/2013 05:42 AM, Thierry Carrez wrote:

Flavio Percoco wrote:

On 21/12/13 00:41 -0500, Jay Pipes wrote:

Cinder is for block storage. Images are just a bunch of
blocks, and
all the store drivers do is take a chunked stream of
input blocks and
store them to disk/swift/s3/rbd/toaster and stream those
blocks back
out again.

So, perhaps the most appropriate place for this is in
Cinder-land.


This is an interesting suggestion.

I wouldn't mind putting it there, although I still prefer it
to be
under glance for historical reasons and because Glance team
knows that
code.

How would it work if this lib falls under Block Storage
 program?

Should the glance team be added as core contributors of this
project?
or Just some of them interested in contributing / reviewing
those
patches?

Thanks for the suggestion. I'd like John and Mark to weigh
in too.


Programs are a team of people on a specific mission. If the
stores code
is maintained by a completely separate group (glance devs), then
 it
doesn't belong in the Block Storage program... unless the Cinder
devs
intend to adopt it over the long run (and therefore the
contributors of
the Block Storage program form a happy family rather than two
separate
groups).


Understood. The reason I offered this up as a suggestion is that
currently Cinder uses the Glance REST API to store and retrieve
volume snapshots, and it would be more efficient to just give Cinder
the ability to directly retrieve the blocks from one of the
underlying store drivers (same goes for Nova's use of Glance).
...and, since the glance.store drivers are dealing with blocks, I
thought it made more sense in Cinder.


 True, Cinder and Nova should be talking more directly to the underlying
 stores--however their direct interface should probably be through
 glanceclient. (Glanceclient could evolve to use the glance.store code I
 imagine.)


 Hmm, that is a very interesting suggestion. glanceclient containing the
 store drivers. I like it. Will be a bit weird, though, having the
 glanceclient call the Glance API server to get the storage location details,
 which then calls the glanceclient code to store/retrieve the blocks :)


 Exactly. This is part of the original idea. Allow Glance, nova,
 glanceclient and cinder to interact with the store code.


Actually, I think this Glance store stuff can be packaged as a
dedicated common lib belonging to Glance; maybe we can put it into
glanceclient if we don't want to create a new sub-lib. IMO it would work
just like Cinder's current brick lib, in the short term.

In the long term we can move all of this to oslo when it is stable
enough (if we ever see that day ;) ), and organize it not by project
POV but by storage type: oslo.blockstore (or another name) for block
storage backend handling, and oslo.objectstore for object storage.
Upper-layer projects would just delegate all real storage device
operation requests to those libs, e.g. mount/attach, unmount/detach,
read/write.

zhiyan


 --
 @flaper87
 Flavio Percoco



Re: [openstack-dev] [Glance][Oslo] Pulling glance.store out of glance. Where should it live?

2013-12-23 Thread Flavio Percoco

On 23/12/13 22:46 +0800, Zhi Yan Liu wrote:

On Mon, Dec 23, 2013 at 10:26 PM, Flavio Percoco fla...@redhat.com wrote:

On 23/12/13 09:00 -0500, Jay Pipes wrote:


On 12/23/2013 08:48 AM, Mark Washenberger wrote:





On Mon, Dec 23, 2013 at 4:57 AM, Jay Pipes jaypi...@gmail.com wrote:

   On 12/23/2013 05:42 AM, Thierry Carrez wrote:

   Flavio Percoco wrote:

   On 21/12/13 00:41 -0500, Jay Pipes wrote:

   Cinder is for block storage. Images are just a bunch of
   blocks, and
   all the store drivers do is take a chunked stream of
   input blocks and
   store them to disk/swift/s3/rbd/toaster and stream those
   blocks back
   out again.

   So, perhaps the most appropriate place for this is in
   Cinder-land.


   This is an interesting suggestion.

   I wouldn't mind putting it there, although I still prefer it
   to be
   under glance for historical reasons and because Glance team
   knows that
   code.

   How would it work if this lib falls under Block Storage
program?

   Should the glance team be added as core contributors of this
   project?
   or Just some of them interested in contributing / reviewing
   those
   patches?

   Thanks for the suggestion. I'd like John and Mark to weigh
   in too.


   Programs are a team of people on a specific mission. If the
   stores code
   is maintained by a completely separate group (glance devs), then
it
   doesn't belong in the Block Storage program... unless the Cinder
   devs
   intend to adopt it over the long run (and therefore the
   contributors of
   the Block Storage program form a happy family rather than two
   separate
   groups).


   Understood. The reason I offered this up as a suggestion is that
   currently Cinder uses the Glance REST API to store and retrieve
   volume snapshots, and it would be more efficient to just give Cinder
   the ability to directly retrieve the blocks from one of the
   underlying store drivers (same goes for Nova's use of Glance).
   ...and, since the glance.store drivers are dealing with blocks, I
   thought it made more sense in Cinder.


True, Cinder and Nova should be talking more directly to the underlying
stores--however their direct interface should probably be through
glanceclient. (Glanceclient could evolve to use the glance.store code I
imagine.)



Hmm, that is a very interesting suggestion. glanceclient containing the
store drivers. I like it. Will be a bit weird, though, having the
glanceclient call the Glance API server to get the storage location details,
which then calls the glanceclient code to store/retrieve the blocks :)



Exactly. This is part of the original idea. Allow Glance, nova,
glanceclient and cinder to interact with the store code.



Actually, I think this Glance store stuff can be packaged as a
dedicated common lib belonging to Glance; maybe we can put it into
glanceclient if we don't want to create a new sub-lib. IMO it would work
just like Cinder's current brick lib, in the short term.


I don't like the idea of having it in the client. I'd prefer the
client to just consume it.

IMHO, glance.store sounds like the way to go here.



In the long term we can move all of this to oslo when it is stable
enough (if we ever see that day ;) ), and organize it not by project
POV but by storage type: oslo.blockstore (or another name) for block
storage backend handling, and oslo.objectstore for object storage.
Upper-layer projects would just delegate all real storage device
operation requests to those libs, e.g. mount/attach, unmount/detach,
read/write.



mhh, not sure. That sounds like way more than what the lib should do.
IMHO, this lib shouldn't take care of any admin operations; it should
be just about getting / putting data into those stores.

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Horizon] Support for Django 1.6

2013-12-23 Thread Tim Schnell
On 12/23/13 5:02 AM, Thomas Goirand z...@debian.org wrote:


On 12/20/2013 04:39 PM, Matthias Runge wrote:
 On 12/19/2013 04:45 PM, Thomas Goirand wrote:
 Hi,

 Sid has Django 1.6. Is it planned to add support for it? I currently
 don't know what to do with the Horizon package, as it's currently
 broken... :(

 Thomas
 Yes, there are two patches available, one for horizon[1] and one for
 django_openstack_auth[2]
 
 If both are in, we can start gating on django-1.6 as well.
 
 [1] https://review.openstack.org/#/c/58947/
 [2] https://review.openstack.org/#/c/58561/
 
 Matthias

Hi Matthias,

Thanks a lot for these pointers. I tried patching openstack-auth. While
it did work in Wheezy (with Django 1.4), all the 80 unit tests are
failing in Sid, with the following error:

ImportError: No module named defaults

while trying to do:

from django.conf.urls.defaults import patterns, url

Is there anything that I missed? Maybe a missing Django python module?

It looks like the defaults module has been removed in Django 1.6. It was
deprecated in Django 1.4. You should be able to just change these imports
to:

from django.conf.urls import patterns, url

https://docs.djangoproject.com/en/dev/releases/1.4/#django-conf-urls-defaults


-Tim

Cheers,

Thomas




[openstack-dev] [oslo.messaging] bug 1257293: QPID broadcast RPC requests to all servers for a given topic

2013-12-23 Thread Ihar Hrachyshka
Hi all,

I'm new to OpenStack and qpid, and trying to get more insight into messaging. I've 
attempted to verify the fix for bug 1257293 in LP, but the scenario fails 
for me, even though the oslo.messaging fix has reached the github master I used 
for my verification attempt.

Briefly, I've repeated the actions from the bug, and I still see that topic 
messages with multiple listening servers are handed to all topic subscribers 
when using qpid's topology=2, as if it were fanout. For the topology=1 case, it 
works as expected (messages are handled by subscribers in turn, in a 
round-robin fashion). [The same behaviour was observed by the reporter before 
the fix.]

The bug in question: https://bugs.launchpad.net/oslo/+bug/1257293
My latest comment with more details on my verification steps: 
https://bugs.launchpad.net/oslo/+bug/1257293/comments/31

If you know what I could miss, or have other comments on the matter, please 
comment.

Thanks,
/Ihar



Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-12-23 Thread Jose Gavine Cueto
Hi,

I would just like to share my idea on managing SR-IOV networking attributes
in Neutron (e.g. MAC address, IP address, VLAN).  I've had experience
implementing this before the PCI-passthrough feature in Nova existed.
Basically, Nova still did the plugging and unplugging of VIFs, and Neutron did
all the provisioning of networking attributes.  At that time, the best hack I
could do was to treat SR-IOV NICs as ordinary VIFs that were distinguishable
by Nova and Neutron.  To implement that, when booting an instance in Nova, a
certain SR-IOV-VF-specific extra_spec was used (e.g. vfs := 1) indicating the
number of SR-IOV VFs to create, eventually represented as mere VIF objects in
Nova.  In Nova, the SR-IOV VFs were represented as VIFs, but a special
exception was made so that SR-IOV VFs aren't really plugged, because of course
that isn't necessary.  In effect, the VIFs that represented the VFs were
accounted for in the DB, including their IP and MAC addresses and VLAN tags.
With respect to L2 isolation, the VLAN tags were retrieved when booting the
instance through the Neutron API and were applied in the libvirt XML.  To
summarize, the networking attributes such as IP and MAC addresses and VLAN
tags were applied normally to VFs, thus preserving the normal OpenStack way of
managing these like ordinary VIFs.

However, since it's just a hack, some consequences and issues surfaced:
proper migration of these networking attributes wasn't tested; libvirt seemed
to mistakenly swap the MAC addresses when rebooting the instances; and, most
importantly, the VIFs that represented the VFs lacked passthrough-specific
information.  Since OpenStack today already has the concept of
PCI passthrough, I'm thinking this could be combined with the idea of a
VF represented by a VIF to get a complete abstraction of a
manageable SR-IOV VF.  I have not read the preceding replies thoroughly,
so this idea might be redundant or irrelevant already.

Cheers,
Pepe


On Thu, Oct 17, 2013 at 4:32 AM, Irena Berezovsky ire...@mellanox.com wrote:

  Hi,

 As one of the next steps for PCI pass-through I would like to discuss is
 the support for PCI pass-through vNIC.

 While nova takes care of PCI pass-through device resources  management and
 VIF settings, neutron should manage their networking configuration.

 I would like to register a summit proposal to discuss the support for PCI
 pass-through networking.

 I am not sure what would be the right topic to discuss the PCI
 pass-through networking, since it involve both nova and neutron.

 There is already a session registered by Yongli on nova topic to discuss
 the PCI pass-through next steps.

 I think PCI pass-through networking is quite a big topic and it worth to
 have a separate discussion.

 Is there any other people who are interested to discuss it and share their
 thoughts and experience?



 Regards,

 Irena







-- 
To stop learning is like to stop loving.


Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

2013-12-23 Thread laserjetyang
I think this could prompt a more general discussion on how to collect physical
equipment information, and on what should be collected.
Right now ceilometer only tracks the VM level, and when we use ironic,
we expect that ironic can give us some good information on the deployed
physical machines.


On Mon, Dec 23, 2013 at 10:17 AM, Gao, Fengqian fengqian@intel.com wrote:

  Hi, Pradipta,

 From personal experience, I think lm-sensors is not as good as IPMI. I have
 to configure it manually, and the sensor data it can get is also less than
 what IPMI provides.

 So, I prefer to use IPMI. Did you use it before? Maybe you can share your
 experience.



 Best wishes



 --fengqian



 *From:* Pradipta Banerjee [mailto:bprad...@yahoo.com]
 *Sent:* Friday, December 20, 2013 10:52 PM
 *To:* openstack-dev@lists.openstack.org

 *Subject:* Re: [openstack-dev] [Nova] [Ironic] Get power and temperature
 via IPMI



 On 12/19/2013 12:30 AM, Devananda van der Veen wrote:

   On Tue, Dec 17, 2013 at 10:00 PM, Gao, Fengqian fengqian@intel.com
 wrote:

  Hi, all,

 I am planning to extend bp
 https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling
 with power and temperature. In other words, power and temperature can be
 collected and used by nova-scheduler just as CPU utilization is.

   This is a good idea and has definite use cases where one might want to
 optimize provisioning based on power consumption

 I have a question here. As you know, IPMI is used to get power and
 temperature and baremetal implements IPMI functions in Nova. But baremetal
 driver is being split out of nova, so if I want to change something to the
 IPMI, which part should I choose now? Nova or Ironic?





 Hi!



 A few thoughts... Firstly, new features should be geared towards Ironic,
 not the nova baremetal driver as it will be deprecated soon (
 https://blueprints.launchpad.net/nova/+spec/deprecate-baremetal-driver).
 That being said, I actually don't think you want to use IPMI for what
 you're describing at all, but maybe I'm wrong.



 When scheduling VMs with Nova, in many cases there is already an agent
 running locally, eg. nova-compute, and this agent is already supplying
 information to the scheduler. I think this is where the facilities for
 gathering power/temperature/etc (eg, via lm-sensors) should be placed, and
 it can reported back to the scheduler along with other usage statistics.

 +1

 Using lm-sensors or equivalent seems better.
 Have a look at the following blueprint
 https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking



 If you think there's a compelling reason to use Ironic for this instead of
 lm-sensors, please clarify.



 Cheers,

 Devananda










  --

 Regards,

 Pradipta




[openstack-dev] [nova] HyperV-CI

2013-12-23 Thread Gary Kotton
Hi,
There seems to be an issue with the HyperV CI. Please see 
https://review.openstack.org/#/c/52687/. This code is not related to the 
HyperV driver.
Thanks
Gary


[openstack-dev] [Infra] Next two infra meetings canceled

2013-12-23 Thread James E. Blair
Hi,

Since they fall on the evenings of some major holidays, we're canceling
the next two Project Infrastructure meetings.  Enjoy the holidays!

-Jim



Re: [openstack-dev] Fwd: ./run_test.sh Fails

2013-12-23 Thread Ben Nemec
 

On 2013-12-21 01:45, Sayali Lunkad wrote: 

 Subject: ./run_test.sh fails to build environment 
 
 Hello,
 
 I get this error when I try to set the environment for Horizon. Any idea why 
 this is happening? I am running Devstack on a VM with Ubuntu 12.04.
 
 sayali@sayali:/opt/stack/horizon$ ./run_tests.sh 
 
 [snip] 
 
 Downloading/unpacking iso8601>=0.1.8 (from -r 
 /opt/stack/horizon/requirements.txt (line 9))
 Error <urlopen error [Errno -2] Name or service not known> while getting 
 https://pypi.python.org/packages/source/i/iso8601/iso8601-0.1.8.tar.gz#md5=b207ad4f2df92810533ce6145ab9c3e7
 (from https://pypi.python.org/simple/iso8601/)
 Cleaning up...
 Exception:
 Traceback (most recent call last):
   File "/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/pip/basecommand.py", line 134, in main
     status = self.run(options, args)
   File "/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/pip/commands/install.py", line 236, in run
     requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
   File "/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/pip/req.py", line 1092, in prepare_files
     self.unpack_url(url, location, self.is_download)
   File "/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/pip/req.py", line 1238, in unpack_url
     retval = unpack_http_url(link, location, self.download_cache, self.download_dir)
   File "/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/pip/download.py", line 602, in unpack_http_url
     resp = _get_response_from_url(target_url, link)
   File "/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/pip/download.py", line 638, in _get_response_from_url
     resp = urlopen(target_url)
   File "/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/pip/download.py", line 176, in __call__
     response = self.get_opener(scheme=scheme).open(url)
   File "/usr/lib/python2.7/urllib2.py", line 400, in open
     response = self._open(req, data)
   File "/usr/lib/python2.7/urllib2.py", line 418, in _open
     '_open', req)
   File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain
     result = func(*args)
   File "/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/pip/download.py", line 155, in https_open
     return self.do_open(self.specialized_conn_class, req)
   File "/usr/lib/python2.7/urllib2.py", line 1177, in do_open
     raise URLError(err)
 URLError: <urlopen error [Errno -2] Name or service not known>
 
 Storing complete log in /home/sayali/.pip/pip.log
 Command tools/with_venv.sh pip install --upgrade -r 
 /opt/stack/horizon/requirements.txt -r 
 /opt/stack/horizon/test-requirements.txt failed.
 None

This looks like a simple download failure. It happens sometimes with
PyPI. It's probably not a bad idea to just configure pip to use our
mirror, as it's generally more stable. You can see what we do in
tripleo-image-elements here:
https://github.com/openstack/tripleo-image-elements/blob/master/elements/pypi-openstack/pre-install.d/00-configure-openstack-pypi-mirror
Mostly I think you just need to look at the pip.conf part. 
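
The pip.conf change referred to above boils down to a couple of lines. A
sketch (the mirror URL below is illustrative only; use whatever the
tripleo element configures, or a mirror you trust):

```ini
# ~/.pip/pip.conf (per user) or /etc/pip.conf (system-wide)
[global]
index-url = http://your-pypi-mirror.example.org/simple/
```

With that in place, `pip install` resolves packages from the mirror instead of
pypi.python.org, which avoids the intermittent name-resolution failures seen
in the traceback.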

-Ben 
 



Re: [openstack-dev] Fwd: [openstack-community] A bug on OpenStack

2013-12-23 Thread Ben Nemec
 

This sounds more appropriate for the openstack list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack 

It's not clear to me at this point whether this is a bug or a
misconfiguration, and the configuration discussion should happen there. 

Thanks. 

-Ben 

On 2013-12-22 12:26, Sean Roberts wrote: 

 I've reposted your query below to openstack-dev for a greater pool of 
 eyeballs. 
 
 ~sean 
 
 Begin forwarded message:
 
 FROM: 武田 剛征 hisayuki.tak...@oceanearth-corp.com
 DATE: December 22, 2013 at 6:17:28 PST
 TO: commun...@lists.openstack.org
 SUBJECT: [OPENSTACK-COMMUNITY] A BUG ON OPENSTACK
 
 Hi OpenStack experts, 
 
 Could you please help me out on this? I need your expert advice. 
 
 I am trying to build a VlanManager environment with OpenStack Grizzly on 
 CentOS 6.4, but it doesn't work. 
 
 I am using all-in-one architecture (single-host). FlatDHCPManager 
 successfully worked. 
 
 So, I am assuming that there is a bug/defect such that VlanManager can't work 
 with a single-host architecture. 
 
 Does anyone know if there is such a bug related to VlanManager on 
 single-host architecture? 
 
 Any pointer, or letting me know of anyone who is familiar with this, would be 
 highly appreciated. 
 
 Thank you so much in advance, 
 
 Bird Kafka
 
 ___
 Community mailing list
 commun...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/community
 

 



[openstack-dev] [Nova] No meeting this week

2013-12-23 Thread Russell Bryant
No Nova meeting this week.  We will resume on Thursday, January 2, at
21:00 UTC.

https://wiki.openstack.org/wiki/Meetings/Nova

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Murano Release 0.4 Announcement

2013-12-23 Thread Timur Sufiev
I'm very glad to announce that a new stable version of Murano, v0.4
(https://launchpad.net/murano/1.0/0.4), has been released!

The most noticeable change Murano Team is proud of is the Metadata
Repository feature, consisting of a new Web UI for managing metadata
objects via Horizon panel and the murano-repository service itself (along
with all the required changes in Dashboard and Conductor components). This
new feature moves Murano several steps closer to an implementation of its new
mission (https://wiki.openstack.org/wiki/Murano/ApplicationCatalog), though
we're still at the beginning of a long road.

Among other improvements are full Havana and Neutron support, as well as
numerous improvements in Conductor's networking machinery (also known as
Advanced Networking). The latter allows Conductor to work with either Nova
Networking or Neutron by changing just one parameter in Conductor's config
file.

A full list of changes and many more details can be found in the Release
Notes (https://wiki.openstack.org/wiki/Murano/ReleaseNotes_v0.4) on
the project wiki.

-- 
Timur Sufiev


Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2013-12-23 Thread Ben Nemec

On 2013-12-21 07:24, Matt Riedemann wrote:

On 12/19/2013 8:51 AM, John Garbutt wrote:

On 4 December 2013 17:10, Russell Bryant rbry...@redhat.com wrote:

I think option 3 makes the most sense here (pending anyone saying we
should run away screaming from mox3 for some reason).  It's actually
what I had been assuming since this thread a while back.

This means that we don't need to *require* that tests get converted 
if
you're changing one.  It just gets you bonus imaginary internet 
points.


Requiring mock for new tests seems fine.  We can grant exceptions in
specific cases if necessary.  In general, we should be using mock for
new tests.


I have lost track a bit here.

The above seems like a sane approach. Do we all agree on that now?

Can we add the above text into here:
https://wiki.openstack.org/wiki/ReviewChecklist#Nova_Review_Checklist

John




Yeah, at some point I wanted to cleanup the various testing guides but
until then I like the idea of just putting something simple into the
nova review checklist. Basically, use mock for new tests; mox can be
used in exceptional cases. What I've considered exceptional so far
includes changes that will be backported to a stable release where
mock isn't being used and cases where you basically have to bend over
backwards to work new mock tests into an existing test class that has
lots of existing setUp with mox. However, even in the latter case you
can usually use mock after resetting the mox setup via
self.mox.ResetAll() in the new test case(s).


I went ahead and added this to the wiki, so it's now an absolutely 
inviolate policy.  Unless, ya know, someone edits the wiki after me. ;-)


Also, I put it in the common section because this doesn't seem like 
something we should be doing differently per-project.  If anyone 
objects, feel free to add to the discussion here as to why.


Thanks.

-Ben



[openstack-dev] [Mistral] Community meeting minutes - 12/23/2013

2013-12-23 Thread Renat Akhmerov
Hi,

Here the links to minutes and logs for the IRC meeting that we had today:

Minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2013/mistral.2013-12-23-16.00.html
Log: 
http://eavesdrop.openstack.org/meetings/mistral/2013/mistral.2013-12-23-16.00.log.html

Feel free to join us next time.

Renat Akhmerov
@ Mirantis Inc.





Re: [openstack-dev] [trove] Delivering datastore logs to customers

2013-12-23 Thread Daniel Morris
Vipul,

I know we discussed this briefly in the Wednesday meeting, but I still have a 
few questions.  I am not bought into the idea that we do not need to maintain 
records of saved logs.  I agree that we do not need to enable users to 
download and manipulate the logs themselves via Trove ( that can be left to 
Swift), but at a minimum, I believe that the system will still need to maintain 
a mapping of where the logs are stored in swift.  This is a simple addition to 
the list of available logs per datastore (an additional field of its swift 
location – a location exists, you know the log has been saved).  If we do not 
do this, how then does the user know where to find the logs they have saved or 
if they even exist in Swift without searching manually?  It may be that this is 
covered, but I don't see this represented in the BP.  Is the assumption that it 
is some known path?  I would expect to see the Swift location retuned on a GET 
of the available logs types for a specific instance (there is currently only a 
top-level GET for logs available per datastore type).

I am also assuming in this case, and per the BP, that the user does not have 
the ability to select the storage location in Swift, and that this is controlled 
exclusively by the deployer; and that you would only allow one occurrence of 
the log per datastore / instance, and that the behavior of writing a log more 
than once to the same location is that it will overwrite / append, but this is 
not detailed in the BP.

Thanks,
Daniel
From: Vipul Sabhaya vip...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday, December 20, 2013 2:14 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [trove] Delivering datastore logs to customers

Yep agreed, this is a great idea.

We really only need two API calls to get this going:
- List available logs to ‘save’
- Save a log (to swift)

Some additional points to consider:
- We don’t need to create a record of every Log ‘saved’ in Trove.  These 
entries, treated as a Trove resource aren’t useful, since you don’t actually 
manipulate that resource.
- Deletes of Logs shouldn’t be part of the Trove API, if the user wants to 
delete them, just use Swift.
- A deployer should be able to choose which logs can be ‘saved’ by their users


On Wed, Dec 18, 2013 at 2:02 PM, Michael Basnight mbasni...@gmail.com wrote:
I think this is a good idea and I support it. In today's meeting [1] there were 
some questions, and I encourage them to get brought up here. My only question 
is in regard to the tail of a file we discussed in IRC. After talking about 
it w/ other trovesters, I think it doesn't make sense to tail the log for most 
datastores. I can't imagine finding anything useful in, say, a Java application's 
last 100 lines (especially if a stack trace was present). But I don't want to 
derail, so let's try to focus on the deliver-to-swift option first.

[1] 
http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-12-18-18.13.log.txt

On Wed, Dec 18, 2013 at 5:24 AM, Denis Makogon dmako...@mirantis.com wrote:

Greetings, OpenStack DBaaS community.


I'd like to start discussion around a new feature in Trove. The feature I 
would like to propose covers manipulating  database log files.


Main idea. Give user an ability to retrieve database log file for any 
purposes.

Goals to achieve. Suppose we have an application (a binary application, 
without source code) which requires a DB connection to perform data 
manipulations, and a user would like to perform development and debugging of the 
application; logs would also be useful for an audit process. Trove itself provides 
access only for CRUD operations inside the database, so the user cannot access 
the instance directly and analyze its log files. Therefore, Trove should be 
able to provide ways to allow a user to download the database log for analysis.


Log manipulations are designed to let the user perform log investigations. 
Since Trove is a PaaS-level project, its users cannot interact with the 
compute instance directly, only with the database through the provided API 
(database operations).

I would like to propose the following API operations:

  1.  Create DBLog entries.

  2.  Delete DBLog entries.

  3.  List DBLog entries.

Possible API, models, server, and guest configurations are described at wiki 
page. [1]

[1] https://wiki.openstack.org/wiki/TroveDBInstanceLogOperation





--
Michael Basnight


Re: [openstack-dev] oslo.config error on running Devstack

2013-12-23 Thread Ben Nemec
 

On 2013-12-18 09:26, Sayali Lunkad wrote: 

 Hello,
 
 I get the following error when I run stack.sh on Devstack
 
 Traceback (most recent call last):
   File "/usr/local/bin/ceilometer-dbsync", line 6, in <module>
     from ceilometer.storage import dbsync
   File "/opt/stack/ceilometer/ceilometer/storage/__init__.py", line 23, in <module>
     from oslo.config import cfg
 ImportError: No module named config
 ++ failed
 ++ local r=1
 +++ jobs -p
 ++ kill
 ++ set +o xtrace
 
 A search shows oslo.config is installed. Please let me know of any solution.

Devstack pulls oslo.config from git, so if you have it installed on the
system through pip or something it could cause problems. If you can
verify that it's only in /opt/stack/oslo.config, you might try deleting
that directory and rerunning devstack to pull down a fresh copy. I don't
know for sure what the problem is, but those are a couple of things to
try. 

-Ben 


Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2013-12-23 Thread John Griffith
On Thu, Dec 5, 2013 at 8:38 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 12/04/2013 12:10 PM, Russell Bryant wrote:

 On 12/04/2013 11:16 AM, Nikola Đipanov wrote:

 Resurrecting this thread because of an interesting review that came up
 yesterday [1].

 It seems that our lack of a firm decision on what to do with the mocking
 framework has left people confused. In hope to help - I'll give my view
 of where things are now and what we should do going forward, and
 hopefully we'll reach some consensus on this.

 Here's the breakdown:

 We should abandon mox:
 * It has not had a release in over 3 years [2] and a patch upstream for 2
 * There are bugs that are impacting the project with it (see above)
 * It will not be ported to python 3

 Proposed path forward options:
 1) Port nova to mock now:
* Literally unmanageable - huge review overhead and regression risk
 for not so much gain (maybe) [1]

 2) Opportunistically port nova (write new tests using mock, when fixing
 tests, move them to mock):
   * Will take a really long time to move to mock, and is not really a
 solution since we are stuck with mock for an undetermined period of time
 - it's what we are doing now (kind of).

 3) Same as 2) but move current codebase to mox3
   * Buys us py3k compat, and fresher code
   * Mox3 and mox have diverged and we would need to backport mox fixes
 onto the mox3 tree and become de-facto active maintainers (as per Peter
 Feiner's last email - that may not be so easy).

 I think we should follow path 3) if we can, but we need to:

 1) Figure out what is the deal with mox3 and decide if owning it will
 really be less trouble than porting nova. To be honest - I was unable to
 even find the code repo for it, only [3]. If anyone has more info -
 please weigh in. We'll also need volunteers

 2) Make better testing guidelines when using mock, and maybe add some
 testing helpers (like we do already have for mox) that will make porting
 existing tests easier. mreidem already put this on this weeks nova
 meeting agenda - so that might be a good place to discuss all the issues
 mentioned here as well.

 We should really take a stronger stance on this soon IMHO, as this comes
 up with literally every commit.


 I think option 3 makes the most sense here (pending anyone saying we
 should run away screaming from mox3 for some reason).  It's actually
 what I had been assuming since this thread a while back.


 What precisely is the benefit of moving the existing code to mox3 versus
 moving the existing code to mock? Is mox3 so similar to mox that the
 transition would be minimal?


 This means that we don't need to *require* that tests get converted if
 you're changing one.  It just gets you bonus imaginary internet points.

 Requiring mock for new tests seems fine.  We can grant exceptions in
 specific cases if necessary.  In general, we should be using mock for
 new tests.


 My vote would be to use mock for everything new (no brainer), keep old mox
 stuff around and slowly port it to mock. I see little value in bringing in
 another mox3 library, especially if we'd end up having to maintain it.

FWIW this is exactly what the Cinder team agreed upon a while back and
the direction we've been going.  There hasn't really been any
push-back on this and in most cases the response from people has been
"Wow, using mock was so much easier/more straightforward."
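For illustration, a typical new-style test with mock stubs out a collaborator in a couple of lines. This is a generic sketch (the function and names are invented for the example, not taken from any Nova or Cinder test); `unittest.mock` is stdlib on Python 3, and the standalone `mock` package on Python 2:

```python
from unittest import mock  # on py2.x this was the standalone "mock" package


def fetch_status(client):
    """Code under test: asks an HTTP-ish client for a status field."""
    return client.get("/status").json()["state"]


def test_fetch_status():
    # Mock auto-creates attributes, so the whole call chain can be stubbed
    # in one line instead of hand-writing a fake client class.
    client = mock.Mock()
    client.get.return_value.json.return_value = {"state": "ACTIVE"}

    assert fetch_status(client) == "ACTIVE"
    client.get.assert_called_once_with("/status")
```

The auto-speccing call chain above is the main ergonomic difference people notice when coming from mox's record/replay style.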


 Best,
 -jay






Re: [openstack-dev] Process for proposing patches attached to launchpad bugs?

2013-12-23 Thread Dean Troyer
On Mon, Dec 23, 2013 at 3:50 AM, Robert Collins
robe...@robertcollins.net wrote:

 On 23 December 2013 17:35, Chet Burgess c...@metacloud.com wrote:
  It's unclear to me what exactly constitutes writing a new patch. I can
 check
  out oslo.messaging, and without trying to merge the patch just go and
 make
  the same change (it's literally a 2-line change). I can write the tests,
 and
  I can submit it (which I'm happy to do, I really want this bug fixed).
  Honestly though this change is so trivial I don't see how my patch would
  look all that different from the one already posted. I know there is
 prior
  art. The mixin class that kombu provides does the exact same thing. Is
 that


Research the term 'de minimis' WRT copyright and decide (with the help of
actual legal advice if necessary) when to just go ahead and submit a patch.

Prior art is a patent concept, not related to copyright. Copyright is


+1...this stuff gets confused too much these days...

dt

-- 

Dean Troyer
dtro...@gmail.com


[openstack-dev] [Neutron][IPv6] No IRC Meeting this week

2013-12-23 Thread Collins, Sean
See you all next week!

-- 
Sean M. Collins


Re: [openstack-dev] [trove] Delivering datastore logs to customers

2013-12-23 Thread Denis Makogon
Good day, Daniel. Thanks for the response.

Today, before your message, I updated the wiki page [1]. Now, on POST, the user
would receive a DBLog response object which would contain the location URL of
the saved log file.
About the way files are stored, I described in [1], in the guest-side
configuration, that each file inside the container would contain a timestamp, and
I'm not going to limit the user to a specific number of files inside
Swift.
I hope I have answered all your questions.

[1] https://wiki.openstack.org/wiki/TroveDBInstanceLogOperation#API_Calls

Best regards, Denis Makogon.


2013/12/23 Daniel Morris daniel.mor...@rackspace.com

   Vipul,

  I know we discussed this briefly in the Wednesday meeting but I still
 have a few questions.   I am not bought in to the idea that we do not need
 to maintain the records of saved logs.   I agree that we do not need to
 enable users to download and manipulate the logs themselves via Trove (
 that can be left to Swift), but at a minimum, I believe that the system
 will still need to maintain a mapping of where the logs are stored in
 swift.  This is a simple addition to the list of available logs per
 datastore (an additional field of its swift location – a location exists,
 you know the log has been saved).  If we do not do this, how then does the
 user know where to find the logs they have saved or if they even exist in
 Swift without searching manually?  It may be that this is covered, but I
 don't see this represented in the BP.  Is the assumption that it is some
 known path?  I would expect to see the Swift location retuned on a GET of
 the available logs types for a specific instance (there is currently only a
 top-level GET for logs available per datastore type).

   I am also assuming in this case, and per the BP, that the user does
 not have the ability to select the storage location in Swift and that this is
 controlled exclusively by the deployer.  And that you would only allow one
 occurrence of the log, per datastore / instance and that the behavior of
 writing a log more than once to the same location is that it will overwrite
 / append, but it is not detailed in the BP.

   Thanks,
 Daniel
 From: Vipul Sabhaya vip...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Friday, December 20, 2013 2:14 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [trove] Delivering datastore logs to
 customers

   Yep agreed, this is a great idea.

  We really only need two API calls to get this going:
 - List available logs to ‘save’
 - Save a log (to swift)

  Some additional points to consider:
  - We don’t need to create a record of every Log ‘saved’ in Trove.  These
 entries, treated as a Trove resource aren’t useful, since you don’t
 actually manipulate that resource.
 - Deletes of Logs shouldn’t be part of the Trove API, if the user wants to
 delete them, just use Swift.
 - A deployer should be able to choose which logs can be ‘saved’ by their
 users


 On Wed, Dec 18, 2013 at 2:02 PM, Michael Basnight mbasni...@gmail.com wrote:

  I think this is a good idea and I support it. In today's meeting [1]
 there were some questions, and I encourage them to get brought up here. My
 only question is in regard to the tail of a file we discussed in irc.
 After talking about it w/ other trovesters, I think it doesn't make sense to
 tail the log for most datastores. I can't imagine finding anything useful in,
 say, a Java application's last 100 lines (especially if a stack trace was
 present). But I don't want to derail, so let's try to focus on the "deliver
 to swift" first option.

  [1]
 http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-12-18-18.13.log.txt

 On Wed, Dec 18, 2013 at 5:24 AM, Denis Makogon dmako...@mirantis.com wrote:

  Greetings, OpenStack DBaaS community.

  I'd like to start a discussion around a new feature in Trove. The
 feature I would like to propose covers manipulating database log files.



  Main idea. Give the user the ability to retrieve database log files for
 any purpose.

 Goals to achieve. Suppose we have an application (a binary
 application, without source code) which requires a DB connection to perform
 data manipulations, and a user would like to develop and debug that
 application; logs would also be useful for the audit process. Trove
 itself provides access only to CRUD operations inside the database, so the
 user cannot access the instance directly and analyze its log files.
 Therefore, Trove should be able to provide a way for a user to download
 the database log for analysis.


  Log manipulations are designed to let the user investigate
 logs. Since Trove is a PaaS-level project, its user cannot
 interact with the compute instance directly, only with the database through the
 provided API (database operations).

 I would like to propose 

Re: [openstack-dev] Neutron Distributed Virtual Router

2013-12-23 Thread Gary Duan
Regarding using 'provider' in the L3 router, for the BP 'L3 service integration
with service framework', I've submitted some code for review, which uses
'provider' in the same way as the other advanced services. I am not sure
whether it can be reused to describe 'centralized' and 'distributed' behavior.

https://review.openstack.org/#/c/59242/

Thanks,
Gary
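For reference, service providers for the existing advanced services are declared in neutron.conf roughly like this (Havana-era LBaaS format; the commented router entry is purely hypothetical, to show what reusing the mechanism for L3 might look like):

```ini
[service_providers]
# Real Havana-era format for an advanced service (LBaaS with haproxy):
service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

# Hypothetical analogue if L3 routers were integrated with the framework
# (driver path invented for illustration):
# service_provider = ROUTER:distributed:neutron.services.l3.drivers.distributed_driver.DistributedRouterDriver
```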


 On Wed, Dec 11, 2013 at 2:17 AM, Salvatore Orlando sorla...@nicira.com wrote:

 I generally tend to agree that once the distributed router is available,
 nobody would probably want to use a centralized one.
 Nevertheless, I think it is correct that, at least for the moment, some
 advanced services would only work with a centralized router.
 There might also be unforeseen scalability/security issues which might
 arise from the implementation, so it is worth giving users a chance to
  choose which routers they'd like.

 In the case of the NSX plugin, this was provided as an extended API
 attribute in the Havana release with the aim of making it the default
 solution for routing in the future.
  One thing worth adding is that at the time we explored the ability of
  leveraging service providers for having a centralized router
 provider and a distributed one; we had working code, but then we
 reverted to the extended attribute. Perhaps it would be worth exploring
 whether this is a feasible solution, and whether it might be even possible
 to define flavors which characterise how routers and advanced services
 are provided.

 Salvatore


 On 10 December 2013 18:09, Nachi Ueno na...@ntti3.com wrote:

 I'm +1 for 'provider'.

 2013/12/9 Akihiro Motoki mot...@da.jp.nec.com:
  Neutron defines provider attribute and it is/will be used in advanced
 services (LB, FW, VPN).
  Doesn't it fit for a distributed router case? If we can cover all
 services with one concept, it would be nice.
 
  According to this thread, we assume at least two types: edge and
 distributed.
  Though edge and distributed are types of implementation, I think
 they are kinds of provider.
 
  I just would like to add an option. I am open to provider vs
 distributed attributes.
 
  Thanks,
  Akihiro
 
  (2013/12/10 7:01), Vasudevan, Swaminathan (PNB Roseville) wrote:
  Hi Folks,
 
  We are in the process of defining the API for the Neutron Distributed
 Virtual Router, and we have a question.
 
  Just wanted to get the feedback from the community before we implement
 and post for review.
 
  We are planning to use the “distributed” flag for the routers that are
 supposed to be routing traffic locally (both East West and North South).
  This “distributed” flag is already there in the “neutronclient” API,
 but currently only utilized by the “Nicira Plugin”.
  We would like to go ahead and use the same “distributed” flag and add
 an extension to the router table to accommodate the “distributed flag”.
 
  Please let us know your feedback.
 
  Thanks.
 
  Swaminathan Vasudevan
  Systems Software Engineer (TC)
  HP Networking
  Hewlett-Packard
  8000 Foothills Blvd
  M/S 5541
  Roseville, CA - 95747
  tel: 916.785.0937
  fax: 916.785.1815
  email: swaminathan.vasude...@hp.com mailto:
 swaminathan.vasude...@hp.com
 








Re: [openstack-dev] [Neutron] Availability of external testing logs

2013-12-23 Thread Sean Dague
On 12/23/2013 12:10 PM, Collins, Sean wrote:
 On Sun, Dec 22, 2013 at 12:30:50PM +0100, Salvatore Orlando wrote:
 I would suggest that external jobs should not vote until logs are publicly
 accessible, otherwise contributors would have no reason to understand where
 the negative vote came from.
 
 +1
 
 I've had Tail-F NCS Jenkins -1 some things that the OpenStack
 Jenkins has +1'd, and other times where I've seen it +1 things that
 OpenStack Jenkins -1'd.

Agreed.

I also think we need to have these systems prove themselves on
reliability before they post votes back. A misconfigured CI system can
easily -1 the entire patch stream, and many of us use -Verified-1 as a
filter criteria on reviews, which effectively makes that a DOS attack.

Detailed public results need to come first. After those look reliable,
voting can be allowed.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature


Re: [openstack-dev] oslo.config error on running Devstack

2013-12-23 Thread Sean Dague
On 12/23/2013 11:52 AM, Ben Nemec wrote:
 On 2013-12-18 09:26, Sayali Lunkad wrote:
 
 Hello,

 I get the following error when I run stack.sh on Devstack

  Traceback (most recent call last):
    File "/usr/local/bin/ceilometer-dbsync", line 6, in <module>
      from ceilometer.storage import dbsync
    File "/opt/stack/ceilometer/ceilometer/storage/__init__.py", line 23, in <module>
      from oslo.config import cfg
  ImportError: No module named config
 ++ failed
 ++ local r=1
 +++ jobs -p
 ++ kill
 ++ set +o xtrace

  A search shows oslo.config is installed. Please let me know of any
 solution.
 
 
 Devstack pulls oslo.config from git, so if you have it installed on the
 system through pip or something it could cause problems.  If you can
 verify that it's only in /opt/stack/oslo.config, you might try deleting
 that directory and rerunning devstack to pull down a fresh copy.  I
 don't know for sure what the problem is, but those are a couple of
 things to try.

We actually try to resolve that here:

https://github.com/openstack-dev/devstack/blob/master/lib/oslo#L43

However, have I said how terrible python packaging is recently?
Basically you can very easily get yourself in a situation where *just
enough* of the distro package is left behind that pip thinks it's there,
so it won't install it, but the python loader doesn't see it, so imports won't work.

Then much sadness.

If anyone has a more foolproof way to fix this, suggestions appreciated.
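One way to see which copy of a package the import system would actually load is to inspect its module spec. A generic Python 3 diagnostic sketch, shown with a stdlib module so it runs anywhere; on an affected host you would query oslo.config instead and compare the reported path against /opt/stack/oslo.config vs. the system site-packages:

```python
import importlib.util


def module_origin(name):
    """Return the file the import system would load `name` from, or None."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None


# Prints the path the loader would use; a leftover distro copy shows up
# here even when pip believes the package is "already installed".
print(module_origin("json"))
```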

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [tempest] negative tests

2013-12-23 Thread Miguel Lavalle
Ann,

You are correct. We WILL NOT develop negative tests by hand anymore. We
will take a generative approach in the future. I will update the etherpad
to reflect this.
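To make "generative" concrete: invalid inputs are enumerated as data and a single loop yields all the negative cases, instead of one hand-written test per case. An illustrative sketch only, not the actual Tempest generator; the validator and the address range are invented stand-ins:

```python
def create_floating_ip(network, address=None):
    # Stand-in for the API call under test (not a real Neutron client call).
    if network != "public":
        raise ValueError("network must be external")
    if address is not None and not address.startswith("172.24."):
        raise ValueError("address outside external subnet")
    return {"network": network, "address": address}


# Each dict is one negative case; adding a case means adding a line of data.
INVALID_INPUTS = [
    {"network": "private"},                        # non-public network
    {"network": "public", "address": "10.0.0.5"},  # out-of-range address
]


def run_negative_cases():
    rejected = 0
    for kwargs in INVALID_INPUTS:
        try:
            create_floating_ip(**kwargs)
        except ValueError:
            rejected += 1  # expected rejection
    return rejected
```

A real generator would additionally derive the invalid inputs from the API's JSON schema rather than listing them by hand.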

Regards


On Mon, Dec 23, 2013 at 5:12 AM, Anna Kamyshnikova 
akamyshnik...@mirantis.com wrote:

 Hello!

 I'm working on creating tests in tempest according to this etherpad page
 https://etherpad.openstack.org/p/icehouse-summit-qa-neutron.

 It is mentioned there that we should add negative tests, for example, for
 floating ips, but as I understand (according to a comment on
 https://bugs.launchpad.net/bugs/1262113) negative tests will be added
 automatically. In this case, is work on such tests as
 - Negative: create a floating ip specifying a non public network
 - Negative: create a floating ip specifying a floating ip address out of
 the external network subnet range

 - Negative: create a floating ip specifying a floating ip address that is
 in use

 - Negative: create / update a floating ip address specifying an invalid
 internal port

 - Negative: create / update a floating ip address specifying an internal
 port with no ip address

 - Negative: create / update a floating ip with an internal port with
 multiple ip addresses, specifying an invalid

 - Negative: create / associate a floating ip with an internal port with
 multiple ip addresses, when the ip address

 - Negative: delete an invalid floating ip

 - Negative: show non existing floating ip
  needed or not?

 Ann.


 On Mon, Dec 23, 2013 at 2:56 PM, Sean Dague s...@dague.net wrote:

 Please take this to a public list

 On 12/23/2013 03:42 AM, Anna Kamyshnikova wrote:
  Hello!
 
  I'm working on creating tests in tempest according to this etherpad
  page https://etherpad.openstack.org/p/icehouse-summit-qa-neutron.
 
  It is mentioned there that we should add negative tests, for example, for
  floating ips, but as I understand (according to your comment
  to https://bugs.launchpad.net/bugs/1262113) negative tests will be
 added
  automatically. In this case, is work on such tests as
  - Negative: create a floating ip specifying a non public network
  - Negative: create a floating ip specifying a floating ip address out of
  the external network subnet range
 
  - Negative: create a floating ip specifying a floating ip address that
  is in use
 
  - Negative: create / update a floating ip address specifying an invalid
  internal port
 
  - Negative: create / update a floating ip address specifying an internal
  port with no ip address
 
  - Negative: create / update a floating ip with an internal port with
  multiple ip addresses, specifying an invalid
 
  - Negative: create / associate a floating ip with an internal port with
  multiple ip addresses, when the ip address
 
  - Negative: delete an invalid floating ip
 
  - Negative: show non existing floating ip
 
   needed or not?
 
  Ann.


 --
 Sean Dague
 http://dague.net







Re: [openstack-dev] oslo.config error on running Devstack

2013-12-23 Thread Ben Nemec

On 2013-12-23 13:18, Sean Dague wrote:

On 12/23/2013 11:52 AM, Ben Nemec wrote:

On 2013-12-18 09:26, Sayali Lunkad wrote:


Hello,

I get the following error when I run stack.sh on Devstack

Traceback (most recent call last):
  File "/usr/local/bin/ceilometer-dbsync", line 6, in <module>
    from ceilometer.storage import dbsync
  File "/opt/stack/ceilometer/ceilometer/storage/__init__.py", line 23, in <module>
    from oslo.config import cfg
ImportError: No module named config
++ failed
++ local r=1
+++ jobs -p
++ kill
++ set +o xtrace

A search shows oslo.config is installed. Please let me know of any
solution.



Devstack pulls oslo.config from git, so if you have it installed on the
system through pip or something it could cause problems.  If you can
verify that it's only in /opt/stack/oslo.config, you might try deleting
that directory and rerunning devstack to pull down a fresh copy.  I
don't know for sure what the problem is, but those are a couple of
things to try.


We actually try to resolve that here:

https://github.com/openstack-dev/devstack/blob/master/lib/oslo#L43

However, have I said how terrible python packaging is recently?
Basically you can very easily get yourself in a situation where *just
enough* of the distro package is left behind that pip thinks it's there,
so it won't install it, but the python loader doesn't see it, so imports won't work.

Then much sadness.

If anyone has a more foolproof way to fix this, suggestions appreciated.


-Sean


Ah, good to know.  I haven't actually run into this problem recently, so 
this was kind of a shot in the dark.


-Ben



Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the FloatingIPChecker control point

2013-12-23 Thread Brent Eagles

Salvatore Orlando wrote:

Before starting this post I confess I did not read with the required level
of attention all this thread, so I apologise for any repetition.

I just wanted to point out that floating IPs in neutron are created
asynchronously when using the l3 agent, and I think this is clear to
everybody.
So when the create floating IP call returns, this does not mean the
floating IP has actually been wired, ie: IP configured on qg-interface and
SNAT/DNAT rules added.

Unfortunately, neutron lacks a concept of operational status for a floating
IP which would tell clients, including nova (which acts as a client wrt the
neutron api here), when a floating IP is ready to be used. I started work in this
direction, but it has been suspended now for a week. If anybody wants to
take over and deems this a reasonable thing to do, it will be great.


Unless somebody picks it up before I get from the break, I'd like to 
discuss this further with you.



I think neutron tests checking connectivity might return more meaningful
failure data if they would gather the status of the various components
which might impact connectivity.
These are:
- The floating IP
- The router internal interface
- The VIF port
- The DHCP agent


I agree wholeheartedly. In fact, I think if we are going to rely on 
timeouts for pass/fail we need to do more for post-mortem details.



Collecting info about the latter is very important but a bit trickier. I
discussed with Sean and Maru that, for a starter, it would be great to grep the
console log to check whether the instance obtained an IP.
Other things to consider would be:
- adding an operational status to a subnet, which would express whether the
DHCP agent is in sync with that subnet (this information won't make sense
for subnets with dhcp disabled)
- working on a 'debug' administrative API which could return, for instance,
for each DHCP agent the list of configured networks and leases.


Interesting!


Regarding timeouts, I think it's fair for tempest to define a timeout and
ask that everything from VM boot to Floating IP wiring completes within
that timeout.

Regards,
Salvatore


I would agree. It would be impossible to have reasonable automated 
testing otherwise.
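Until such an operational status exists, tests can only poll against a deadline. A generic helper of the kind tempest relies on might look like this; a sketch with assumed names, not tempest's actual implementation:

```python
import time


def wait_for(condition, timeout=60.0, interval=1.0, on_timeout=None):
    """Poll `condition` until it returns truthy or `timeout` seconds pass.

    `on_timeout`, if given, is called to gather post-mortem details
    (floating IP state, router interfaces, VIF port, console log, ...)
    before the test fails, per the discussion above.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if condition():
            return True
        time.sleep(interval)
    if on_timeout is not None:
        on_timeout()
    raise AssertionError("condition not met within %.1fs" % timeout)
```

The `on_timeout` hook is the piece most timeout-based tests skip, and it is exactly the failure data Salvatore argues the connectivity tests should gather.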


Cheers,

Brent



[openstack-dev] [qa][neutron] Please remember to mark changed tests as 'smoke'

2013-12-23 Thread David Kranz
We like all code submitted to tempest to actually run. Since the neutron 
gate jobs are still running only smoke tests, please mark any test that 
is added or whose code has changed as smoke. Note that 'smoke' has no 
real other meaning now since it was applied haphazardly in the first 
place and has been added for lots of neutron tests. Once the full 
neutron test suite is running in the gate, we are going to rework the 
smoke tags to mean a set of sanity tests that can run in a short amount 
of time.


 -David



Re: [openstack-dev] Neutron Distributed Virtual Router

2013-12-23 Thread Bob Melander (bmelande)
I agree. With your patch it ought to be possible to make the distributed router 
a provider type so to me it seems like a good match.

Thanks,
Bob

From: Gary Duan gd...@varmour.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday 23 December 2013 19:17
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Neutron Distributed Virtual Router

Regarding using 'provider' in the L3 router, for the BP 'L3 service integration 
with service framework', I've submitted some code for review, which uses 
'provider' in the same way as the other advanced services. I am not sure whether it can 
be reused to describe 'centralized' and 'distributed' behavior.

https://review.openstack.org/#/c/59242/

Thanks,
Gary


On Wed, Dec 11, 2013 at 2:17 AM, Salvatore Orlando sorla...@nicira.com wrote:
I generally tend to agree that once the distributed router is available, nobody 
would probably want to use a centralized one.
Nevertheless, I think it is correct that, at least for the moment, some 
advanced services would only work with a centralized router.
There might also be unforeseen scalability/security issues which might arise 
from the implementation, so it is worth giving users a chance to choose 
which routers they'd like.

In the case of the NSX plugin, this was provided as an extended API attribute 
in the Havana release with the aim of making it the default solution for 
routing in the future.
One thing worth adding is that at the time we explored the ability 
of leveraging service providers for having a centralized router provider and 
a distributed one; we had working code, but then we reverted to the extended 
attribute. Perhaps it would be worth exploring whether this is a feasible 
solution, and whether it might be even possible to define flavors which 
characterise how routers and advanced services are provided.

Salvatore


On 10 December 2013 18:09, Nachi Ueno na...@ntti3.com wrote:
I'm +1 for 'provider'.

2013/12/9 Akihiro Motoki mot...@da.jp.nec.com:
 Neutron defines provider attribute and it is/will be used in advanced 
 services (LB, FW, VPN).
 Doesn't it fit for a distributed router case? If we can cover all services 
 with one concept, it would be nice.

 According to this thread, we assume at least two types: edge and 
 distributed.
 Though edge and distributed are types of implementation, I think they 
 are kinds of provider.

 I just would like to add an option. I am open to provider vs distributed 
 attributes.

 Thanks,
 Akihiro

 (2013/12/10 7:01), Vasudevan, Swaminathan (PNB Roseville) wrote:
 Hi Folks,

 We are in the process of defining the API for the Neutron Distributed 
 Virtual Router, and we have a question.

 Just wanted to get the feedback from the community before we implement and 
 post for review.

 We are planning to use the “distributed” flag for the routers that are 
 supposed to be routing traffic locally (both East West and North South).
 This “distributed” flag is already there in the “neutronclient” API, but 
 currently only utilized by the “Nicira Plugin”.
 We would like to go ahead and use the same “distributed” flag and add an 
 extension to the router table to accommodate the “distributed flag”.

 Please let us know your feedback.

 Thanks.

 Swaminathan Vasudevan
 Systems Software Engineer (TC)
 HP Networking
 Hewlett-Packard
 8000 Foothills Blvd
 M/S 5541
 Roseville, CA - 95747
 tel: 916.785.0937
 fax: 916.785.1815
 email: swaminathan.vasude...@hp.com








Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-12-23 Thread Jay Pipes

On 12/17/2013 10:09 AM, Ian Wells wrote:

Reiterating from the IRC meeting, largely, so apologies.

Firstly, I disagree that
https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support is an
accurate reflection of the current state.  It's a very unilateral view,
largely because the rest of us had been focussing on the google document
that we've been using for weeks.

Secondly, I totally disagree with this approach.  This assumes that
description of the (cloud-internal, hardware) details of each compute
node is best done with data stored centrally and driven by an API.  I
don't agree with either of these points.

Firstly, the best place to describe what's available on a compute node
is in the configuration on the compute node.  For instance, I describe
which interfaces do what in Neutron on the compute node.  This is
because when you're provisioning nodes, that's the moment you know how
you've attached it to the network and what hardware you've put in it and
what you intend the hardware to be for - or conversely your deployment
puppet or chef or whatever knows it, and Razor or MAAS has enumerated
it, but the activities are equivalent.  Storing it centrally distances
the compute node from its descriptive information for no good purpose
that I can see and adds the complexity of having to go make remote
requests just to start up.

Secondly, even if you did store this centrally, it's not clear to me
that an API is very useful.  As far as I can see, the need for an API is
really the need to manage PCI device flavors.  If you want that to be
API-managed, then the rest of a (rather complex) API cascades from that
one choice.  Most of the things that API lets you change (expressions
describing PCI devices) are the sort of thing that you set once and only
revisit when you start - for instance - deploying new hosts in a
different way.

Look at the parallel in Neutron provider networks.  They're config driven,
largely on the compute hosts.  Agents know what ports on their machine
(the hardware tie) are associated with provider networks, by provider
network name.  The controller takes 'neutron net-create ...
--provider:network 'name'' and uses that to tie a virtual network to the
provider network definition on each host.  What we absolutely don't do
is have a complex admin API that lets us say 'in host aggregate 4,
provider network x (which I made earlier) is connected to eth6'.


FWIW, I could not agree more. The Neutron API already suffers from 
overcomplexity. There's really no need to make it even more complex than 
it already is, especially for a feature that more naturally fits in 
configuration data (Puppet/Chef/etc) and isn't something that you would 
really ever change for a compute host once set.


Best,
-jay



Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2013-12-23 Thread Ben Nemec
 

I added it to the page John linked earlier:
https://wiki.openstack.org/wiki/ReviewChecklist 

-Ben 

On 2013-12-23 17:08, Shawn Hartsock wrote: 

 Where in the wiki is this written down? Maybe I should read some of these 
 entries. I have looked but I can't find it. 
 On Dec 23, 2013 11:56 AM, Ben Nemec openst...@nemebean.com wrote:
 On 2013-12-21 07:24, Matt Riedemann wrote:
 On 12/19/2013 8:51 AM, John Garbutt wrote:
 On 4 December 2013 17:10, Russell Bryant rbry...@redhat.com wrote:
 I think option 3 makes the most sense here (pending anyone saying we
 should run away screaming from mox3 for some reason). It's actually
 what I had been assuming since this thread a while back.
 
 This means that we don't need to *require* that tests get converted if
 you're changing one. It just gets you bonus imaginary internet points.
 
 Requiring mock for new tests seems fine. We can grant exceptions in
 specific cases if necessary. In general, we should be using mock for
 new tests. 
 I have lost track a bit here.
 
 The above seems like a sane approach. Do we all agree on that now?
 
 Can we add the above text into here:
 https://wiki.openstack.org/wiki/ReviewChecklist#Nova_Review_Checklist [1]
 
 John
 

 Yeah, at some point I wanted to cleanup the various testing guides but
 until then I like the idea of just putting something simple into the
 nova review checklist. Basically use mock for new tests, mox can be
 used in exceptional cases. What I've considered exceptional so far
 includes changes that will be backported to a stable release where
 mock isn't being used and cases where you basically have to bend over
 backwards to work new mock tests into an existing test class that has
 lots of existing setUp with mox. However, even in the latter case you
 can usually use mock after resetting the mox setup via
 self.mox.ResetAll() in the new test case(s). 
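 By way of illustration, a new-style test using mock might look like this (the
 test class and scenario are hypothetical; at the time this would have been the
 external `mock` package rather than `unittest.mock`):

```python
import os
import unittest
from unittest import mock


class TestVolumeAttach(unittest.TestCase):
    """Hypothetical new test written in the preferred mock style."""

    @mock.patch('os.path.exists', return_value=True)
    def test_attach_checks_device_path(self, mock_exists):
        # The code under test would call os.path.exists(); we call it
        # directly here to keep the example self-contained.
        self.assertTrue(os.path.exists('/dev/vdb'))
        mock_exists.assert_called_once_with('/dev/vdb')
```

 The decorator form patches only for the duration of the test method, which
 avoids the mox-style setUp/VerifyAll bookkeeping entirely.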
 I went ahead and added this to the wiki, so it's now an absolutely
inviolate policy. Unless, ya know, someone edits the wiki after me. ;-)

 Also, I put it in the common section because this doesn't seem like
something we should be doing differently per-project. If anyone objects,
feel free to add to the discussion here as to why.

 Thanks.

 -Ben


 

Links:
--
[1]
https://wiki.openstack.org/wiki/ReviewChecklist#Nova_Review_Checklist
[2] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] Todays' meeting log: PCI pass-through network support

2013-12-23 Thread Ian Wells
On autodiscovery and configuration, we agree that each compute node finds
out what it has based on some sort of list of match expressions; we just
disagree on where they should live.

I know we've talked APIs for setting that matching expression, but I would
prefer that compute nodes are responsible for their own physical
configuration - generally this seems wiser on the grounds that configuring
new hardware correctly is a devops problem and this pushes the problem into
the installer, clear devops territory.  It also makes the (I think likely)
assumption that the config may differ per compute node without having to
add more complexity to the API with host aggregates and so on.  And it
means that a compute node can start working without consulting the central
database or reporting its entire device list back to the central controller.
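A minimal sketch of the node-local match expressions being described, with
illustrative device dicts and expression format (assumptions for the example,
not the actual nova implementation):

```python
# Match expressions a compute node could read from its own config file.
MATCH_EXPRESSIONS = [
    {"vendor_id": "8086", "product_id": "10fb"},  # e.g. Intel 82599 VFs
]

def device_matches(device, expressions):
    """True if the device dict satisfies at least one match expression."""
    return any(all(device.get(key) == value for key, value in expr.items())
               for expr in expressions)

# Hypothetical PCI devices enumerated locally on this compute node.
local_devices = [
    {"address": "0000:06:00.1", "vendor_id": "8086", "product_id": "10fb"},
    {"address": "0000:07:00.0", "vendor_id": "15b3", "product_id": "1003"},
]

# Only matching devices would be exposed to the scheduler; everything else
# stays invisible, with no round-trip to a central database at startup.
exposed = [dev for dev in local_devices
           if device_matches(dev, MATCH_EXPRESSIONS)]
```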

On PCI groups, I think it is a good idea to have them declared centrally
(their name, not their content).  Now, I would use config to define them
and maybe an API for the tenant to list their names, personally; that's
simpler and easier to implement and doesn't preclude adding an (admin) API
in the future.  But I don't imagine the list of groups will change
frequently so any update API would be very infrequently used, and if
someone really feels they want to implement it I'm not going to stop them.

On nova boot, I completely agree that we need a new argument to --nic to
specify the PCI group of the NIC.  The rest of the arguments - I'm
wondering if we could perhaps do this in two stages:
1. Neutron will read those arguments (attachment type, additional stuff
like port group where relevant) from the port during an attach and pass
relevant information to the plugging driver in Nova
2. We add a feature to nova so that you can specify other properties in the
--nic section line and they're passed straight to the port-create called
from within nova.

This is not specific to passthrough at all, just a useful general purpose
feature.  However, it would simplify both the problem and design here,
because these parameters, whatever they are, are now entirely the
responsibility of Neutron and Nova's simply transporting them into it.  A
PCI aware Neutron will presumably understand the attachment type, the port
group and so on, or will reject them if they're meaningless to it, and
we've even got room for future expansion without changing Nova or Neutron,
just the plugin.  We can propose it now and independently, put in a patch
and have it ready before we need it.  I think anything that helps to
clarify and divide the responsibilities of Nova and Neutron will be
helpful, because then we don't end up with too many
cross-project-interrelated patches.

I'm going to ignore the allocation problem for now.  If a single user can
allocate all the NICs in the cluster to himself, we still have a more
useful solution than the one now where he can't use them, so it's not the
top of our list.


Time seems to be running out for Icehouse. We need to come to agreement
 ASAP. I will be out from wednesday until after new year. I'm thinking that
 to move it forward after the new year, we may need to have the IRC meeting
 in a daily basis until we reach agreement. This should be one of our new
 year's resolutions?


Whatever gets it done.
-- 
Ian.


Re: [openstack-dev] [Horizon] Support for Django 1.6

2013-12-23 Thread Thomas Goirand
On 12/23/2013 11:23 PM, Tim Schnell wrote:
 It looks like the defaults module has been removed in Django 1.6. It was
 deprecated in Django 1.4. You should be able to just change these imports
 to:
 
 from django.conf.urls import patterns, url
 
 https://docs.djangoproject.com/en/dev/releases/1.4/#django-conf-urls-defaults
 
 
 -Tim

Indeed, this was the problem. Thanks Tim, I have been able to upload the
package to Sid, without breaking Wheezy backport, thanks to this. \o/

Thomas Goirand (zigo)




Re: [openstack-dev] [Ceilometer] Complex query BP implementation

2013-12-23 Thread Jay Pipes

On 12/16/2013 03:54 PM, Ildikó Váncsa wrote:

Hi guys,

The first working version of the Complex filter expressions in API
queries blueprint [1] was pushed for review[2].

We implemented a new query REST resource in order to provide rich query
functionality for samples, alarms and alarm history. The future plans
(in separated blueprints) with this new functionality is extending it to
support Statistics and stored queries. The new feature is documented on
Launchpad wiki[3], with an example for how to use the new query on the API.

What is your opinion about this solution?

I would appreciate some review comments and/or feedback on the
implementation. :)


Hi Ildiko, thanks for your proposed API for complex querying in 
Ceilometer. Unfortunately, I'm not a fan of the approach taken, but I do 
see some definite need/use cases here.


My main objection to the proposed solution is that it violates the 
principle in all of the OpenStack REST APIs that a POST request 
*creates* a resource. In the proposed API, you use:


POST /query/$resource

to actually retrieve records of type $resource. In all the other 
OpenStack REST APIs, the above request would create a $resource 
subresource of a query resource. And, to be honest, people expect HTTP 
REST APIs to use the GET HTTP method for querying, not POST. It's an 
anti-pattern to use POST in this way.


Now, that said... I think that the advanced query interface you propose 
does indeed have real-world, demanded use cases. Right now, you are 100% 
correct that the existing GET request filters are simplistic and don't 
support either aggregation or advanced union or intersection queries.


I would definitely be supportive of using POST to store saved queries 
(as you mention in your wiki page). However, the main query interface 
should remain the GET HTTP method, including for ad-hoc advanced querying.


So, what I would like to see is essentially a removal of the query 
resource, and instead tack on your advanced Ceilometer domain-specific 
language to the supported GET query arguments. This would have two 
advantages:


1) You will not need to re-implement the orderby and limit expressions. 
Virtually all other OpenStack APIs (including Ceilometer) use the 
limit and sort_by query parameters already, so those should be used 
as-is.


2) Users will already be familiar with the standard GET /samples, GET 
/alarms, etc query interface, and all they would need to learn is how to 
encode the advanced query parameters properly. No need to 
learn/implement new resource endpoints.


You used this English-language example of an advanced query:

Check for cpu_util samples reported between 18:00-18:15 or between 
18:30 - 18:45 where the utilization is between 23 and 26 percent.


and had the following POST request JSON-ified body:

POST /query/meters
[and,
  [and,
    [and,
      [=, counter_name, cpu_util],
      [>, counter_volume, 0.23]],
    [and,
      [=, counter_name, cpu_util],
      [<, counter_volume, 0.26]]],
  [or,
    [and,
      [>, timestamp, 2013-12-01T18:00:00],
      [<, timestamp, 2013-12-01T18:15:00]],
    [and,
      [>, timestamp, 2013-12-01T18:30:00],
      [<, timestamp, 2013-12-01T18:45:00]]]]

(note that the above doesn't actually correspond to the English-language 
query... you would not want the third-level ands and you would want >= 
and <= for the temporal BETWEEN clause...)
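
To make the semantics of the prefix tree concrete, here is a toy evaluator
for such expressions against a single sample dict (purely illustrative, not
Ceilometer code):

```python
import operator

# Comparison operators supported by the toy evaluator.
OPS = {"=": operator.eq, "<": operator.lt, ">": operator.gt,
       "<=": operator.le, ">=": operator.ge}

def evaluate(expr, sample):
    """Recursively evaluate a prefix-style filter expression."""
    op = expr[0]
    if op == "and":
        return all(evaluate(sub, sample) for sub in expr[1:])
    if op == "or":
        return any(evaluate(sub, sample) for sub in expr[1:])
    field, value = expr[1], expr[2]
    return OPS[op](sample[field], value)

# ISO-8601 timestamps compare correctly as plain strings.
sample = {"counter_name": "cpu_util", "counter_volume": 0.24,
          "timestamp": "2013-12-01T18:05:00"}
query = ["and",
         ["=", "counter_name", "cpu_util"],
         [">", "counter_volume", 0.23],
         ["<", "counter_volume", 0.26],
         ["or",
          ["and",
           [">", "timestamp", "2013-12-01T18:00:00"],
           ["<", "timestamp", "2013-12-01T18:15:00"]],
          ["and",
           [">", "timestamp", "2013-12-01T18:30:00"],
           ["<", "timestamp", "2013-12-01T18:45:00"]]]]
```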


The equivalent GET request might be encoded like so:

GET 
/meters?expr=(expr1%20or%20expr2)%20and%20expr3&expr1=(timestamp%3E%3D2013-12-01T18%3A00%3A00%20and%20timestamp%3C%3D2013-12-01T18%3A15%3A00)&expr2=(timestamp%3E%3D2013-12-01T18%3A30%3A00%20and%20timestamp%3C%3D2013-12-01T18%3A45%3A00)&expr3=(counter_name%3D%27cpu_util%27%20and%20(counter_volume%20%3E%200.23%20and%20counter_volume%20%3C%200.26))


Which is just the following, with the values url-encoded:

expr = (expr1 or expr2) and expr3
expr1 = (timestamp>=2013-12-01T18:00:00 and timestamp<=2013-12-01T18:15:00)
expr2 = (timestamp>=2013-12-01T18:30:00 and timestamp<=2013-12-01T18:45:00)
expr3 = (counter_name='cpu_util' and (counter_volume > 0.23 and 
counter_volume < 0.26))


I know the expression might not look as nice as POST /query/meters, but 
it is in-line with Internet custom.
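
For instance, the encoded request above can be produced mechanically with the
standard library (parameter names taken from the example; the endpoint itself
is the proposal under discussion):

```python
from urllib.parse import urlencode

# The four query parameters from the ad-hoc advanced query example.
params = {
    "expr":  "(expr1 or expr2) and expr3",
    "expr1": "(timestamp>=2013-12-01T18:00:00 and timestamp<=2013-12-01T18:15:00)",
    "expr2": "(timestamp>=2013-12-01T18:30:00 and timestamp<=2013-12-01T18:45:00)",
    "expr3": "(counter_name='cpu_util' and (counter_volume > 0.23 and counter_volume < 0.26))",
}
url = "/meters?" + urlencode(params)
```

Note that `urlencode` escapes spaces as `+` rather than `%20`; both forms are
equivalent in a query string.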


I would definitely support the use of your POST with JSON-encoded query 
DSL for stored views, though. For example, to save a stored query for 
the above report:


POST /reports
[and,
  [and,
    [=, counter_name, cpu_util],
    [between, counter_volume, 0.23, 0.26]],
  [or,
    [and,
      [>=, timestamp, 2013-12-01T18:00:00],
      [<=, timestamp, 2013-12-01T18:15:00]],
    [and,
      [>=, timestamp, 2013-12-01T18:30:00],
      [<=, timestamp, 2013-12-01T18:45:00]]]]

And then you could issue the same query using the GET of the returned 
report UUID or report name...


GET /reports/$REPORT_ID

Best,
-jay


[1]
https://blueprints.launchpad.net/ceilometer/+spec/complex-filter-expressions-in-api-queries


[2]

[openstack-dev] [State-Management] No meeting this week

2013-12-23 Thread Joshua Harlow
Since its xmas in most of the world lets skip the IRC meeting this week 
(normally on thursdays).

See you all soon and have a great vacation!

P.S. #openstack-state-management if u feel the need to chat :-)

--

Joshua Harlow

It's openstack, relax... | harlo...@yahoo-inc.com



Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the FloatingIPChecker control point

2013-12-23 Thread Yair Fried


- Original Message -
 From: Brent Eagles beag...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Monday, December 23, 2013 10:48:50 PM
 Subject: Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the 
 FloatingIPChecker control point
 
 Salvatore Orlando wrote:
  Before starting this post I confess I did not read with the
  required level
  of attention all this thread, so I apologise for any repetition.
 
  I just wanted to point out that floating IPs in neutron are created
  asynchronously when using the l3 agent, and I think this is clear
  to
  everybody.
  So when the create floating IP call returns, this does not mean the
  floating IP has actually been wired, ie: IP configured on
  qg-interface and
  SNAT/DNAT rules added.
 
  Unfortunately, neutron lacks a concept of operational status for a
  floating
  IP which would tell clients, including nova (it acts as a client
  wrt nova
  api), when a floating IP is ready to be used. I started work in
  this
  direction, but it has been suspended now for a week. If anybody
  wants to
  take over and deems this a reasonable thing to do, it will be
  great.
 
 Unless somebody picks it up before I get from the break, I'd like to
 discuss this further with you.
 
  I think neutron tests checking connectivity might return more
  meaningful
  failure data if they would gather the status of the various
  components
  which might impact connectivity.
  These are:
  - The floating IP
  - The router internal interface
  - The VIF port
  - The DHCP agent
I wrote something addressing at least some of these points: 
https://review.openstack.org/#/c/55146/
 
 I agree wholeheartedly. In fact, I think if we are going to rely on
 timeouts for pass/fail we need to do more for post-mortem details.
 
  Collecting info about the latter is very important but a bit
  trickier. I
  discussed with Sean and Maru that it would be great, for a starter, to
  grep the console log to check whether the instance obtained an IP.
  Other things to consider would be:
  - adding an operational status to a subnet, which would express
  whether the
  DHCP agent is in sync with that subnet (this information won't make
  sense
  for subnets with dhcp disabled)
  - working on a 'debug' administrative API which could return, for
  instance,
  for each DHCP agent the list of configured networks and leases.
 
 Interesting!
 
  Regarding timeouts, I think it's fair for tempest to define a
  timeout and
  ask that everything from VM boot to Floating IP wiring completes
  within
  that timeout.
 
  Regards,
  Salvatore
 
 I would agree. It would be impossible to have reasonable automated
 testing otherwise.
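 
 A generic polling helper of the kind being discussed for such timeouts
 might be sketched as follows (names and the commented usage are
 hypothetical, not tempest's actual helpers):
 
```python
import time

def wait_for(predicate, timeout=60, interval=1,
             clock=time.monotonic, sleep=time.sleep):
    """Poll predicate() until it returns truthy or the timeout elapses."""
    deadline = clock() + timeout
    while clock() < deadline:
        if predicate():
            return True
        sleep(interval)
    return False

# Hypothetical usage in a scenario test, once floating IPs expose an
# operational status:
# wait_for(lambda: client.show_floatingip(fip_id)['status'] == 'ACTIVE',
#          timeout=build_timeout)
```
 
 On timeout the caller can then dump the status of the floating IP, router
 interface, VIF port and DHCP agent for the post-mortem detail mentioned above.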
 
 Cheers,
 
 Brent
 



Re: [openstack-dev] [nova] [neutron] Todays' meeting log: PCI pass-through network support

2013-12-23 Thread Irena Berezovsky
Please, see inline

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Tuesday, December 24, 2013 1:38 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] Todays' meeting log: PCI 
pass-through network support

On autodiscovery and configuration, we agree that each compute node finds out 
what it has based on some sort of list of match expressions; we just disagree 
on where they should live.

I know we've talked APIs for setting that matching expression, but I would 
prefer that compute nodes are responsible for their own physical configuration 
- generally this seems wiser on the grounds that configuring new hardware 
correctly is a devops problem and this pushes the problem into the installer, 
clear devops territory.  It also makes the (I think likely) assumption that the 
config may differ per compute node without having to add more complexity to the 
API with host aggregates and so on.  And it means that a compute node can start 
working without consulting the central database or reporting its entire device 
list back to the central controller.
[IrenaB] Totally agree on this. For both auto-discovery and configuration, we 
need to close on the format and content that will be available to nova.
My concern here is whether there is a way to provide auto-discovery based on 
network connectivity (something like what neutron has, i.e. 
'physical_interface_mappings').
For configuration, it may be worth providing a reference flow for managing it 
by the installer.
On PCI groups, I think it is a good idea to have them declared centrally (their 
name, not their content).  Now, I would use config to define them and maybe an 
API for the tenant to list their names, personally; that's simpler and easier 
to implement and doesn't preclude adding an (admin) API in the future.  But I 
don't imagine the list of groups will change frequently so any update API would 
be very infrequently used, and if someone really feels they want to implement 
it I'm not going to stop them.

[IrenaB] The issue we need to resolve is nova scheduler taking its decision 
that satisfies network connectivity

On nova boot, I completely agree that we need a new argument to --nic to 
specify the PCI group of the NIC.  The rest of the arguments - I'm wondering if 
we could perhaps do this in two stages:
1. Neutron will read those arguments (attachment type, additional stuff like 
port group where relevant) from the port during an attach and pass relevant 
information to the plugging driver in Nova
[IrenaB] Do you mean via 'neutron port-create' before 'nova boot'? Hopefully we 
can close the details during the discussion today.
2. We add a feature to nova so that you can specify other properties in the 
--nic section line and they're passed straight to the port-create called from 
within nova.
[IrenaB] I like this option. This should also allow requesting a virtio versus 
an SR-IOV NIC. It should be possible to have both options available on the same 
host.
This is not specific to passthrough at all, just a useful general purpose 
feature.  However, it would simplify both the problem and design here, because 
these parameters, whatever they are, are now entirely the responsibility of 
Neutron and Nova's simply transporting them into it.  A PCI aware Neutron will 
presumably understand the attachment type, the port group and so on, or will 
reject them if they're meaningless to it, and we've even got room for future 
expansion without changing Nova or Neutron, just the plugin.  We can propose it 
now and independently, put in a patch and have it ready before we need it.  I 
think anything that helps to clarify and divide the responsibilities of Nova 
and Neutron will be helpful, because then we don't end up with too many 
cross-project-interrelated patches.
[IrenaB] +2
I'm going to ignore the allocation problem for now.  If a single user can 
allocate all the NICs in the cluster to himself, we still have a more useful 
solution than the one now where he can't use them, so it's not the top of our 
list.
[IrenaB] Agree
Time seems to be running out for Icehouse. We need to come to agreement ASAP. I 
will be out from wednesday until after new year. I'm thinking that to move it 
forward after the new year, we may need to have the IRC meeting in a daily 
basis until we reach agreement. This should be one of our new year's 
resolutions?

Whatever gets it done.
[IrenaB] Fine with me. If we reach required decisions today regarding  neutron, 
I can start to dive into the details of SR-IOV mechanism driver assuming ML2 
plugin.

BR,
Irena
--
Ian.


[openstack-dev] [Tempest][qa] Adding tags to commit messages

2013-12-23 Thread Yair Fried
Hi,
Suggestion: Please consider tagging your Tempest commit messages the same way 
you do your mails in the mailing list

Explanation: Since tempest is a single project testing multiple OpenStack 
projects, we have a very diverse collection of patches as well as reviewers. 
Tagging our commit messages will allow us to classify patches and thus:
1. Allow reviewers to focus on patches related to their area of expertise
2. Track trends in patches - I think we all know that we lack Neutron testing, 
for example, but can we assess how many network-related patches are awaiting 
review?
3. Future automation of flagging interesting patches

You can usually tell all of this from reviewing the patch, but by then you've 
spent time on a patch you might not even be qualified to review.
I suggest we tag our patches with, to start with, the components we are looking 
to test and the type of test (scenario, api, ...), and that reviewers should -1 
untagged patches.

I think the tagging should be the 2nd line in the message:

==
Example commit message

[Neutron][Nova][Network][Scenario]

Explanation of how this scenario tests both Neutron and Nova
Network performance

Chang-id XXX
===

I would like this to start immediately but what do you guys think?
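
A trivial sketch of the automation in point 3, assuming tags appear as
bracketed words on their own line of the commit message as in the example
above:

```python
import re

def extract_tags(commit_message):
    """Return tags like ['Neutron', 'Scenario'] from a commit message whose
    tag line consists solely of bracketed words."""
    for line in commit_message.splitlines():
        stripped = line.strip()
        # A tag line is one or more [Tag] groups and nothing else.
        if stripped and re.fullmatch(r"(\[[^\]\[]+\])+", stripped):
            return re.findall(r"\[([^\]\[]+)\]", stripped)
    return []

msg = """Example commit message

[Neutron][Nova][Network][Scenario]

Explanation of how this scenario tests both Neutron and Nova
network performance
"""
```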
