Re: [openstack-dev] [nova][manila] latest microversion considered dangerous

2015-08-28 Thread Dmitry Tantsur

On 08/28/2015 09:34 AM, Valeriy Ponomaryov wrote:

Dmitriy,

New tests that cover new functionality already know which API version
they require. So, even in testing, it is not needed. All other existing
tests do not require an API update.


Yeah, but you can't be sure that your change does not break the world 
until you merge it and start updating tests. Probably it's not that 
important for projects that have their integration tests in-tree, though.




So, I raise my hand for restricting 'latest'.

On Fri, Aug 28, 2015 at 10:20 AM, Dmitry Tantsur dtant...@redhat.com wrote:

On 08/27/2015 09:38 PM, Ben Swartzlander wrote:

Manila recently implemented microversions, copying the implementation
from Nova. I really like the feature! However, I noticed that it's
legal for clients to transmit 'latest' instead of a real version number.

THIS IS A TERRIBLE IDEA!

I recommend removing support for 'latest' and forcing clients to
request a specific version (or accept the default).


I think 'latest' is needed for integration testing. Otherwise you
have to update your tests each time a new version is introduced.



Allowing clients to request the 'latest' microversion guarantees
undefined (and likely broken) behavior* in every situation where a
client talks to a server that is newer than it.

Every client can only understand past and present API implementations,
not future implementations. Transmitting 'latest' implies an
assumption that the future is not so different from the present. This
assumption about future behavior is precisely what we don't want
clients to make, because it prevents forward progress. One of the main
reasons microversions are a valuable feature is that they allow
forward progress by letting us make major changes without breaking old
clients.

If clients are allowed to assume that nothing will change too much in
the future (which is what asking for 'latest' implies), then the
server will be right back in the situation it was trying to get out of
-- it can never change any API in a way that might break old clients.

I can think of no situation where transmitting 'latest' is better than
transmitting the highest version that existed at the time the client
was written.

-Ben Swartzlander

* Undefined/broken behavior unless the server restricts itself to
never making any backward-compatibility-breaking change of any kind.
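
Ben's argument can be made concrete with a toy version negotiation.
This is a minimal sketch, not actual Nova or Manila code; the function
name and the default minimum version are assumptions for illustration:

```python
def negotiate_version(requested, server_max, server_min="2.1"):
    """Resolve a client-requested microversion against a server's range.

    Rejecting the literal 'latest' forces clients to pin a version they
    were actually written against, which is the behaviour recommended
    in this thread.
    """
    def parse(version):
        major, minor = version.split(".")
        return (int(major), int(minor))

    if requested == "latest":
        raise ValueError("'latest' is not accepted: pin an explicit version")
    if not (parse(server_min) <= parse(requested) <= parse(server_max)):
        raise ValueError("version %s is outside %s..%s"
                         % (requested, server_min, server_max))
    return requested

# A client pinned to 2.4 keeps working against a newer (2.10) server:
assert negotiate_version("2.4", server_max="2.10") == "2.4"
```

With 'latest' allowed, the same client would instead be handed whatever
the newest server speaks, including changes it has never seen.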



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Blazar] Anyone interested?

2015-08-28 Thread Ildikó Váncsa
Hi All,

The resource reservation topic pops up from time to time on different forums to 
cover use cases in both IT and NFV. The Blazar project was intended to address 
this need, but to my knowledge the work has stopped due to earlier integration 
and other difficulties.

My question is: who would be interested in resurrecting the Blazar project 
and/or working on a reservation system in OpenStack?

Thanks and Best Regards,
Ildikó

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] (no subject)

2015-08-28 Thread Adrian Otto
Let's get these details into the QuickStart doc so anyone else hitting this can 
be clued in.

--
Adrian

On Aug 27, 2015, at 9:38 PM, Vikas Choudhary choudharyvika...@gmail.com wrote:


Hi Stanislaw,


I also faced a similar issue. The reason might be that the OpenStack Heat 
service is not reachable from inside the master instance.
Please check /var/log/cloud-init.log for any connectivity-related error 
messages and, if you find one, retry the failed command manually with the 
correct URL.



If this is the issue, you need to set the correct HOST_IP in localrc.



-Vikas Choudhary


___
Hi Stanislaw,

Your Fedora host should have a special config file that sends a
signal to the WaitCondition.
For a good example, please take a look at this template:
https://github.com/openstack/heat-templates/blob/819a9a3fc9d6f449129c8cefa5e087569340109b/hot/native_waitcondition.yaml

Also, I suppose the best place for such questions is
https://ask.openstack.org/en/questions/

Regards,
Sergey.
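
For reference, completing a native wait condition amounts to POSTing a
small JSON document to the handle's signal URL (this is what the
wc_notify helper wraps). Below is a rough sketch of building that
payload; the field names are my assumption of what Heat's native wait
condition accepts, so verify them against your Heat version:

```python
import json

def build_wc_signal(status="SUCCESS", reason="instance configured",
                    data="", signal_id="1"):
    # Assumed payload shape for OS::Heat::WaitConditionHandle signals:
    # status is SUCCESS or FAILURE; id distinguishes multiple signals.
    return json.dumps({
        "status": status,
        "reason": reason,
        "data": data,
        "id": signal_id,
    })

payload = build_wc_signal()
# The instance would POST this payload to the handle's signal URL. That
# request needs network reachability to the Heat API endpoint, which is
# why a wrong HOST_IP makes master_wait_condition time out.
```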

On 26 August 2015 at 09:23, Pitucha, Stanislaw Izaak 
stanislaw.pitucha at hp.com wrote:

 Hi all,

 I'm trying to stand up magnum according to the quickstart instructions
 with devstack.

 There's one resource which times out and fails: master_wait_condition. The
 kube master (fedora) host seems to be created, I can login to it via ssh,
 other resources are created successfully.



 What can I do from here? How do I debug this? I tried to look for the
 wc_notify itself to try manually, but I can't even find that script.



 Best Regards,

 Stanisław Pitucha



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-request at 
 lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.orgmailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][manila] latest microversion considered dangerous

2015-08-28 Thread Dmitry Tantsur

On 08/27/2015 09:38 PM, Ben Swartzlander wrote:

Manila recently implemented microversions, copying the implementation
from Nova. I really like the feature! However, I noticed that it's legal
for clients to transmit 'latest' instead of a real version number.

THIS IS A TERRIBLE IDEA!

I recommend removing support for 'latest' and forcing clients to request
a specific version (or accept the default).


I think 'latest' is needed for integration testing. Otherwise you have 
to update your tests each time a new version is introduced.




Allowing clients to request the 'latest' microversion guarantees
undefined (and likely broken) behavior* in every situation where a
client talks to a server that is newer than it.

Every client can only understand past and present API implementations,
not future implementations. Transmitting 'latest' implies an assumption
that the future is not so different from the present. This assumption
about future behavior is precisely what we don't want clients to make,
because it prevents forward progress. One of the main reasons
microversions are a valuable feature is that they allow forward
progress by letting us make major changes without breaking old clients.

If clients are allowed to assume that nothing will change too much in
the future (which is what asking for 'latest' implies), then the server
will be right back in the situation it was trying to get out of -- it
can never change any API in a way that might break old clients.

I can think of no situation where transmitting 'latest' is better than
transmitting the highest version that existed at the time the client
was written.

-Ben Swartzlander

* Undefined/broken behavior unless the server restricts itself to never
making any backward-compatibility-breaking change of any kind.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] --detailed-description for OpenStack items

2015-08-28 Thread Tim Bell


 -Original Message-
 From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
 Sent: 28 August 2015 02:29
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] --detailed-description for OpenStack items
 
 
 
 On 8/27/2015 12:23 PM, Tim Bell wrote:
  Some projects, such as Cinder, include a detailed description option
  where you can include an arbitrary string with a volume to remind the
  admins what the volume is used for.
 
  Has anyone looked at doing something similar for Nova for instances
  and Glance for images ?
 
  In many cases, the names get heavily overloaded with information.
 
  Tim
 
 
 
 
 __
 
   OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 The nova instances table already has a display_description column:
 
 http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/models.py#n287
 
 Couldn't that just be used?  It doesn't look like the nova boot command in
 the CLI exposes it though.  Seems like an easy enough add.
 
 Although the server create API would have to be changed since today it just
 sets the description to be the same as the name:
 
 http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/servers.py#n589
 
 That could be enabled with a microversion change in mitaka though.
 

This would be great. I had not checked the schema, only the CLI.

Should I submit a bug report (or is there a way of filing an enhancement 
request)? A display function would also be needed in Horizon for the novice 
users (who are the ones asking for this the most).

Tim
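
To illustrate the microversion gating Matt describes, here is a
hypothetical sketch; the gating version (2.19) and the field handling
are illustrative assumptions, not Nova's actual implementation:

```python
def effective_description(server_body, client_version, gate=(2, 19)):
    # Current behaviour: the server create API copies the name into the
    # description. A gating microversion could start honouring an
    # explicit client-supplied description instead.
    name = server_body["name"]
    if client_version >= gate:
        # new behaviour: honour an explicit description, default to name
        return server_body.get("description", name)
    # old behaviour: description always mirrors the name
    return name

# Old clients keep the old behaviour; opted-in clients get the new field.
assert effective_description({"name": "vm1", "description": "db node"},
                             (2, 1)) == "vm1"
assert effective_description({"name": "vm1", "description": "db node"},
                             (2, 19)) == "db node"
```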

 --
 
 Thanks,
 
 Matt Riedemann
 
 
 __
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-
 requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] upcoming glanceclient release

2015-08-28 Thread Flavio Percoco

On 27/08/15 15:32 -0400, Nikhil Komawar wrote:


As a part of our continued effort to make v2 the primary API and get
people to consume it without confusion, we are planning to move ahead
with the client release (the release will set the default API version
to 2). No major or minor objections have been raised here.

An issue regarding the possible impact of this release due to the major
version bump was raised during the morning meeting; however, the client
release should follow semver semantics and indicate the same. A
corresponding review for the release notes exists and should merge
before the release. This medium of communication seems sufficient; it
follows the prescription for necessary communication. I can't find a
definition of the necessary and sufficient media for communicating this
information, so I will take what we usually follow.

There are a few bugs [1] that could be considered as part of this
release but do not seem to be blockers. In order to accommodate the
deadlines of the release milestones and the impact of next week's
releases on other projects, we can continue to fix bugs and release
them as part of 1.x.x releases sooner rather than later, as time and
resources permit. Also, the high-priority ones can be part of the
stable/* backports if needed, but their descriptions suggest only shell
impact, so there isn't a strong enough reason.

So, we need to move ahead with this release for Liberty.


+1

We've been making small steps towards this for a couple of cycles and
I'm happy we're finally switching the default version on the client
library.

That being said, I believe our client library needs a lot more work,
but this release should put us in a better position to do that.

For folks consuming glanceclient, here's what you need to know:

If you're using glanceclient from your software - that is, you're
using the library and not the CLI - there's nothing you need to do. If
you're using the library, I'm assuming you're creating a client
instance using [0] or by instantiating the specific versioned client
class. Both of these cases require you to specify an API version.

However, if you're using the CLI and you want to stick with v1, then
you'll need to update your scripts and make sure they pass
`--os-image-api-version 1`. Conversely, if your scripts already pass
`--os-image-api-version 2`, you can safely leave that argument as is.

As a good practice, for now, I'd recommend specifying the argument
regardless of the version.
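
A trivial way to make that practice mechanical in scripts that shell
out to the CLI. This helper is illustrative only, not part of
glanceclient; the flag name is glanceclient's version selector as
discussed above:

```python
def glance_cmd(subcommand, api_version):
    # Always pass the version flag explicitly so a change in the
    # client's default (1 -> 2, as in this release) cannot silently
    # change what a script does.
    return "glance --os-image-api-version %d %s" % (api_version, subcommand)

print(glance_cmd("image-list", api_version=1))
# glance --os-image-api-version 1 image-list
```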

For other changes, please review the release notes [1] (as soon as they
are there), or you can read [2].

[0] 
https://git.openstack.org/cgit/openstack/python-glanceclient/tree/glanceclient/client.py?h=stable/kilo#n21
[1] http://docs.openstack.org/developer/python-glanceclient/#release-notes
[2] https://review.openstack.org/#/c/217591/




[1]
https://bugs.launchpad.net/python-glanceclient/+bugs?field.tag=1.0.0-potential


Thanks Stuart for tagging these bugs, and everyone for raising great
concerns for and against the release [0]. And thanks Erno for pushing
this out.

[0] 
http://eavesdrop.openstack.org/meetings/glance/2015/glance.2015-08-27-14.00.log.html

Flavio



On 8/25/15 12:15 PM, Nikhil Komawar wrote:

Hi,

We are planning to cut a client release this Thursday by 1500 UTC or so.
If there are any reviews that you absolutely need and that are unlikely
to break the client in the near future, please ping me (nikhil_k) or
jokke_ on IRC in #openstack-glance.

This will most likely be our final client release for Liberty.



--

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco




Re: [openstack-dev] [neutron][L3][dvr][fwaas] FWaaS with DVR

2015-08-28 Thread Germy Lure
Hi all,

I have two points.
a. For the problem in this thread, my suggestion is to introduce new
concepts to replace the existing firewall and security groups (SG).
Perhaps you have noticed the overlap between firewalls and SGs; it is
troublesome for users to choose between them.
So the new concepts are an edge firewall for N/S traffic and a
distributed firewall for E/W traffic. The former is similar to the
existing firewall, but without E/W control, and is deployed on the nodes
that connect to the external world. The latter controls E/W traffic such
as subnet to subnet, VM to VM, and subnet to VM, and will be deployed on
compute nodes.

We can attach firewall rules to a VM port implicitly, especially when
DVR is disabled. I think it's difficult for a user to do that explicitly
when there are hundreds of VMs.

b. For problems like this one: from recent mailing list traffic, we can
see many problems introduced by DVR, such as VPNaaS, floating IPs, and
FWaaS co-existing with DVR, etc. So, Stackers, I don't know what the
community's standard or exit criteria are for releasing a feature, but
can we add some provisions, or something else, to avoid conflicts
between features?

Forgive my poor English
BR,
Germy

On Thu, Aug 27, 2015 at 11:44 PM, Mickey Spiegel emspi...@us.ibm.com
wrote:

 Bump

 The FWaaS team would really like some feedback from the DVR side.

 Mickey

 -Mickey Spiegel/San Jose/IBM wrote: -
 To: openstack-dev@lists.openstack.org
 From: Mickey Spiegel/San Jose/IBM
 Date: 08/19/2015 09:45AM
 Subject: [fwaas][dvr] FWaaS with DVR

 Currently, FWaaS behaves differently with DVR, applying to only
 north/south traffic, whereas FWaaS on routers in network nodes applies to
 both north/south and east/west traffic. There is a compatibility issue due
 to the asymmetric design of L3 forwarding in DVR, which breaks the
 connection tracking that FWaaS currently relies on.

 I started an etherpad where I hope the community can discuss the problem,
 collect multiple possible solutions, and eventually try to reach consensus
 about how to move forward:
 https://etherpad.openstack.org/p/FWaaS_with_DVR

 I listed every possible solution that I can think of as a starting point.
 I am somewhat new to OpenStack and FWaaS, so please correct anything that I
 might have misrepresented.

 Please add more possible solutions and comment on the possible solutions
 already listed.

 Mickey




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] info in paste will be removed?

2015-08-28 Thread Osanai, Hisashi

Folks,

I would like to know whether info on http://paste.openstack.org will be removed 
or not.
If it will be removed, I would also like to know under what conditions.

Thanks in advance,
Hisashi Osanai


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Branching strategy vs feature freeze

2015-08-28 Thread Dmitry Borodaenko

Thanks everyone for the feedback!

There weren't any comments about options (1) and (4); I'm interpreting
that as a consensus that we're not doing a future branch, and as a sign
that nobody wants to even think about CI for external forks (which I'm
sure will come back to haunt us, so don't count on not having to think
about it for long).

With the two extremes out of the way, and taking into account comments
from Thierry, Igor, Ruslan, and Mike, here's a first draft of how
exactly we can switch to the new model.

1) 7.0 hard code freeze -- September 3

Assuming that Earth remains in orbit and the Fuel 7.0 hard code freeze
doesn't slip, on September 3 we will create stable/7.0 branches for all
Fuel components and open the master branch for feature development. From
day 1, we will be targeting Liberty, so this time creating parallel CI
job sets (8.0 and 8.0-kilo) will be done as part of 7.0 HCF. As soon as
8.0 becomes consistently green, 8.0-kilo will be discarded.

One of the first things to do in Fuel 8.0 is finish the conversion of
fuel-library to librarian for integration of upstream Puppet OpenStack,
and conversion of MOS packaging to upstream rpm and deb packaging
projects. Starting late relative to OpenStack milestones will allow us
to use a relatively stable base for these conversions.

2) 8.0 feature freeze -- December 10 (approximate)

Since we already had a long feature freeze in 7.0, the start of the Fuel
8.0 release cycle inevitably remains coupled with MOS. Squeezing the
rest of the cycle in before the Liberty release on October 15 would make
it absurdly short; let's not do that. So no need for a downstream branch
just yet.

What we can do instead is use the short feature freeze in 8.0 to start
working on the Mitaka-based Fuel 9.0 much earlier in the cycle, and gain
enough time to shift the 9.0 release schedule.

3) 8.0 soft code freeze -- December 24 (approximate)

Around December 24, we will create stable/8.0 branches, open the master
branch for feature development, and target it at Mitaka (by that time it
should be mitaka-1), following the same process as at 7.0 HCF: create
parallel CI job sets 9.0 and 9.0-liberty.

This will give us enough time to design and implement Fuel 9.0 features
by Mitaka feature freeze (based on Kilo schedule, around March 17), even
accounting for having to divide attention between:

 a) fixing High & Critical bugs in stable/8.0
 b) fixing Medium & lower bugs in master
 c) implementing 9.0 features in master
 d) Christmas and New Year

4) 8.0 hard code freeze -- February 4 (approximate)

After 8.0 HCF, features for 9.0 become the primary focus.

5) 9.0 feature freeze -- April 14 (approximate)

Some Fuel features may be blocked or regressed by OpenStack feature
commits [*]; we'll need at least 2 weeks after upstream FF to finalize
them and get them through review and CI, and, just for 9.0, 2 more weeks
for unforeseen risks, since we'll be doing it this way for the first time.

[*] With the conversion to upstream packaging and puppet code completed
   during 8.0 cycle, in 9.0 Fuel's bugfixing will depend on stability
   of these upstream projects.

   In Kilo, deb packaging took about a month to stabilize (April 30 to
   June 3), and puppet took another month (July 9). In Liberty,
   packaging and puppet are better aligned with OpenStack schedule and
   with each other, I think it's reasonable to expect that by Mitaka
   the lag for all 4 projects (deb, rpm, puppet, fuel) will be the same
   2 weeks after OpenStack release instead of 2 months.

6) 9.0 soft code freeze -- April 28 (approximate)

Same 2 weeks for dealing with the feature merge fest fallout on the
master branch before opening it for feature work for the next release.
Same stable branch and parallel stable/master CI dance as with 8.0,
except this time we should probably call the new branch stable/mitaka.

7) 9.0 hard code freeze -- May 26 (approximate)

And this is how we can get the first release candidate of
Mitaka-compatible Fuel only 4 weeks after Mitaka Release.

8) 9.0-based downstream branch

At some point after Fuel 9.0 SCF, a vendor can decide that their
distribution needs some bugfixes that were not accepted for
stable/mitaka in community Fuel, or even some features that missed the
feature freeze.

This is where a downstream branch in a repository hosted outside of
OpenStack Infra comes in. The considerations of pure-play contribution,
early community review, not spawning a proprietary fork, and not killing
puppies, dictate that they propose to community master before
backporting to downstream stable. Still, if someone decides that they
really hate puppies, Apache License provides enough rope to deal with a
fully closed and incrementally diverging fork.

This plan is too detailed and elaborate to work out exactly as laid out,
but I think it's achievable. Lets see if there's anything that we know
can't work the way I described, and what kinds of decisions we need to
make now or be prepared to make as we try to reconcile this plan 

Re: [openstack-dev] [Heat] convergence rally test results (so far)

2015-08-28 Thread Sergey Kraynev
Angus!

it's awesome! Thank you for the investigation.
I had a talk with the guys from the Sahara team and we decided to start
testing convergence with Sahara after the L release.
I suppose that Murano can also join this process.

Also, AFAIK the Sahara team plans to create functional tests with
heat-engine. We may add them as a non-voting job for our gate.
It would probably be good to have two different types of this job: with
convergence and with default Heat.

On 28 August 2015 at 04:35, Angus Salkeld asalk...@mirantis.com wrote:

 Hi

 I have been running some rally tests against convergence and our existing
 implementation to compare.

 So far I have done the following:

   1. defined a template with a resource group:
      https://github.com/asalkeld/convergence-rally/blob/master/templates/resource_group_test_resource.yaml.template
   2. the inner resource looks like this:
      https://github.com/asalkeld/convergence-rally/blob/master/templates/server_with_volume.yaml.template
      (it uses TestResource to attempt to be a reasonable simulation of
      a server+volume+floatingip)
   3. defined a rally job:
      https://github.com/asalkeld/convergence-rally/blob/master/increasing_resources.yaml
      that creates X resources, then updates to X*2, then deletes
   4. ran the above with/without convergence and with 2, 4, and 8
      heat-engines

 Here are the results compared:

 https://docs.google.com/spreadsheets/d/12kRtPsmZBl_y78aw684PTBg3op1ftUYsAEqXBtT800A/edit?usp=sharing


Results look pretty nice (especially for create) :)
The strange thing for me: why does update with 8 engines show worse
results than with 4 engines? (Maybe a mistake in the graph...?)





 Some notes on the results so far:

-  convergence with only 2 engines does suffer from RPC overload (it
gets message timeouts on larger templates). I wonder if this is the problem
in our convergence gate...

 Good spotting. If it's true, we should probably try changing the number
of engines... (not sure how the gate hardware would react to it).


- convergence does very well with a reasonable number of engines
running.
- delete is slightly slower on convergence


Also, about delete: maybe we can optimize it later, once the convergence
path gets more feedback.


 Still to test:

- the above, but measure memory usage
- many small templates (run concurrently)
- we need to ask projects using Heat to try with convergence (Murano,
TripleO, Magnum, Sahara, etc..)

 Any feedback welcome (suggestions on what else to test).

 -Angus

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Regards,
Sergey.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] subunit2html location on images changing

2015-08-28 Thread Chris Dent

On Thu, 27 Aug 2015, Chris Dent wrote:


On Wed, 26 Aug 2015, Matthew Treinish wrote:


http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/functions.sh#n571


Is 'process_testr_artifacts' going to already be in scope for the
hook script or will it be necessary to source functions.sh to be
sure? If so, where is it?


In case anyone else has been wondering about this:

In conversation with Matt and Sean it's become clear that the
functions in the linked functions.sh above are not available to the
hooks, so the best thing to do for now is to just change to use the
os-testr-env location of subunit2html:

   /usr/os-testr-env/bin/subunit2html

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][manila] latest microversion considered dangerous

2015-08-28 Thread Valeriy Ponomaryov
Dmitriy,

New tests that cover new functionality already know which API version they
require. So, even in testing, it is not needed. All other existing tests do
not require an API update.

So, I raise my hand for restricting 'latest'.

On Fri, Aug 28, 2015 at 10:20 AM, Dmitry Tantsur dtant...@redhat.com
wrote:

 On 08/27/2015 09:38 PM, Ben Swartzlander wrote:

 Manila recently implemented microversions, copying the implementation
 from Nova. I really like the feature! However, I noticed that it's legal
 for clients to transmit 'latest' instead of a real version number.

 THIS IS A TERRIBLE IDEA!

 I recommend removing support for 'latest' and forcing clients to request
 a specific version (or accept the default).


 I think 'latest' is needed for integration testing. Otherwise you have to
 update your tests each time a new version is introduced.



 Allowing clients to request the 'latest' microversion guarantees
 undefined (and likely broken) behavior* in every situation where a
 client talks to a server that is newer than it.

 Every client can only understand past and present API implementations,
 not future implementations. Transmitting 'latest' implies an assumption
 that the future is not so different from the present. This assumption
 about future behavior is precisely what we don't want clients to make,
 because it prevents forward progress. One of the main reasons
 microversions are a valuable feature is that they allow forward
 progress by letting us make major changes without breaking old clients.

 If clients are allowed to assume that nothing will change too much in
 the future (which is what asking for 'latest' implies), then the server
 will be right back in the situation it was trying to get out of -- it
 can never change any API in a way that might break old clients.

 I can think of no situation where transmitting 'latest' is better than
 transmitting the highest version that existed at the time the client
 was written.

 -Ben Swartzlander

 * Undefined/broken behavior unless the server restricts itself to never
 making any backward-compatibility-breaking change of any kind.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability zones

2015-08-28 Thread Dulko, Michal
 From: Ben Swartzlander [mailto:b...@swartzlander.org]
 Sent: Thursday, August 27, 2015 8:11 PM
 To: OpenStack Development Mailing List (not for usage questions)
 
 On 08/27/2015 10:43 AM, Ivan Kolodyazhny wrote:
 
 
   Hi,
 
    Looks like we need to be able to set an AZ per backend. What do you
  think about such an option?
 
 
 
 I dislike such an option.
 
 The whole premise behind an AZ is that it's a failure domain. The node
 running the cinder services is in exactly one such failure domain. If you 
 have 2
 backends in 2 different AZs, then the cinder services managing those
 backends should be running on nodes that are also in those AZs. If you do it
 any other way then you create a situation where a failure in one AZ causes
 loss of services in a different AZ, which is exactly what the AZ feature is 
 trying
 to avoid.
 
 If you do the correct thing and run cinder services on nodes in the AZs
 that they're managing, then you will never have a problem with the
 one-AZ-per-cinder.conf design we have today.
 
 -Ben

I disagree. You may have failure domains handled at a different level, for example 
using Ceph's own mechanisms. In such a case you want to provide the user with a 
single backend regardless of compute AZ partitioning, and to address such needs you 
would need to be able to set multiple AZs per backend.



Re: [openstack-dev] [Heat] convergence rally test results (so far)

2015-08-28 Thread Angus Salkeld
On Fri, Aug 28, 2015 at 6:35 PM Sergey Lukjanov slukja...@mirantis.com
wrote:

 Hi,

 great, it seems like migration to convergence could happen soon.

 How many times did you run each test case? Does the time change with the
 number of iterations? Are you planning to test parallel stack creation?


Given the test matrix convergence/non-convergence and 2,4,8 engines, I have
not done a lot of iterations - it's just time consuming. I might kill off
the 2-engine case to gain more iterations.
But from what I have observed the duration does not vary significantly.

I'll test smaller stacks with lots of iterations and with a high
concurrency. All this testing is currently on just one host so it is
somewhat limited. Hopefully this is at least giving a useful comparison
with these limitations.

-Angus



 Thanks.

 On Fri, Aug 28, 2015 at 10:17 AM, Sergey Kraynev skray...@mirantis.com
 wrote:

 Angus!

 it's Awesome!  Thank you for the investigation.
 I had a talk with guys from Sahara team and we decided to start testing
 convergence with Sahara after L release.
 I suppose, that Murano can also join to this process.

 Also AFAIK Sahara team plan to create functional tests with Heat-engine.
 We may add it as a non-voting job for our gate.
 Probably it will be good to have two different types of this job: with
 convergence and with default Heat.

 On 28 August 2015 at 04:35, Angus Salkeld asalk...@mirantis.com wrote:

 Hi

 I have been running some rally tests against convergence and our
 existing implementation to compare.

 So far I have done the following:

1. defined a template with a resource group

 https://github.com/asalkeld/convergence-rally/blob/master/templates/resource_group_test_resource.yaml.template
2. the inner resource looks like this:

 https://github.com/asalkeld/convergence-rally/blob/master/templates/server_with_volume.yaml.template
  (it
uses TestResource to attempt to be a reasonable simulation of a
server+volume+floatingip)
3. defined a rally job:

 https://github.com/asalkeld/convergence-rally/blob/master/increasing_resources.yaml
  that
creates X resources then updates to X*2 then deletes.
4. I then ran the above with/without convergence and with 2,4,8
heat-engines

 Here are the results compared:

 https://docs.google.com/spreadsheets/d/12kRtPsmZBl_y78aw684PTBg3op1ftUYsAEqXBtT800A/edit?usp=sharing


 Results look pretty nice (especially for create) :)
 The strange thing for me: why does update with 8 engines show worse results
 than with 4 engines? (maybe a mistake in the graph...?)





 Some notes on the results so far:

-  convergence with only 2 engines does suffer from RPC overload (it
gets message timeouts on larger templates). I wonder if this is the 
 problem
in our convergence gate...

 Good spotting. If it's true, probably we should try to change the number of
 engines... (not sure how the gate hardware would react to that).


- convergence does very well with a reasonable number of engines
running.
- delete is slightly slower on convergence


 Also, about delete - maybe we can optimize it later, once the convergence
 approach gets more feedback.


 Still to test:

- the above, but measure memory usage
- many small templates (run concurrently)
- we need to ask projects using Heat to try with convergence
(Murano, TripleO, Magnum, Sahara, etc..)

 Any feedback welcome (suggestions on what else to test).

 -Angus






 Regards,
 Sergey.







 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.



Re: [openstack-dev] [nova] testing for setting the admin password via the libvirt driver

2015-08-28 Thread Daniel P. Berrange
On Tue, Aug 25, 2015 at 09:14:33AM -0500, Matt Riedemann wrote:
 Support to change the admin password on an instance via the libvirt driver
 landed in liberty [1] but the hypervisor support matrix wasn't updated [2].
 There is a version restriction in the driver that it won't work unless
 you're using at least libvirt 1.2.16.
 
 We should be able to at least update the hypervisor support matrix that this
 is supported for libvirt with the version restriction.  markus_z actually
 pointed that out in the review of the change to add the support but it was
 ignored.

Yes, in that case, it'd be appropriate to update the support matrix and
add in a footnote against it mentioning the min required version.
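
The restriction discussed above - the feature only works with libvirt >= 1.2.16 - is the usual minimum-version guard pattern in the driver. A rough sketch of the idea (constant, function, and message names here are illustrative, not Nova's actual identifiers):

```python
# Illustrative minimum-version guard, similar in spirit to how the
# libvirt driver gates features on the host's libvirt version.
# Constant and function names are hypothetical, not Nova's.

MIN_LIBVIRT_SET_ADMIN_PASSWD = (1, 2, 16)

def has_min_version(host_version, minimum):
    """Compare (major, minor, micro) tuples element-wise."""
    return tuple(host_version) >= tuple(minimum)

def set_admin_password(host_version, instance, password):
    if not has_min_version(host_version, MIN_LIBVIRT_SET_ADMIN_PASSWD):
        raise NotImplementedError(
            "set_admin_password requires libvirt >= %d.%d.%d" %
            MIN_LIBVIRT_SET_ADMIN_PASSWD)
    return "password set on %s" % instance

# Ubuntu 14.04's libvirt 1.2.2 is too old...
try:
    set_admin_password((1, 2, 2), "vm-1", "s3cret")
except NotImplementedError as exc:
    print(exc)
# ...while 1.2.16 (or newer) passes the guard.
print(set_admin_password((1, 2, 16), "vm-1", "s3cret"))
```

This is also why CI coverage matters here: with 1.2.2 on the gate nodes, the guarded path is never exercised.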

 The other thing I was wondering about was testing.  The check/gate queue
 jobs with ubuntu 14.04 only have libvirt 1.2.2.
 
 There is the fedora 21 job that runs on the experimental queue and I've
 traditionally considered this a place to test out libvirt driver features
 that need something newer than 1.2.2, but that only goes up to libvirt
 1.2.9.3 [3].
 
 It looks like you have to get up to fedora 23 to be able to test this
 set-admin-password function [4].  In fact it looks like the only major
 distro out there right now that supports this new enough version of libvirt
 is fc23 [5].
 
 Does anyone fancy getting a f23 job setup in the experimental queue for
 nova?  It would be nice to actually be able to test the bleeding edge
 features that we put into the driver code.

F23 is not released yet, so it may have instability which will hamper running
gate jobs. The other alternative is to set up a stable Fedora release
like F22 and then enable the VirtPreview repository, which gives you a newer
set of the virt toolchain from F23/rawhide. This should be more stable
than running the entire F23/rawhide distro.

https://fedoraproject.org/wiki/Virtualization_Preview_Repository

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Heat] convergence rally test results (so far)

2015-08-28 Thread Sergey Lukjanov
Hi,

great, it seems like migration to convergence could happen soon.

How many times did you run each test case? Does the time change with the
number of iterations? Are you planning to test parallel stack creation?

Thanks.

On Fri, Aug 28, 2015 at 10:17 AM, Sergey Kraynev skray...@mirantis.com
wrote:

 Angus!

 it's Awesome!  Thank you for the investigation.
 I had a talk with guys from Sahara team and we decided to start testing
 convergence with Sahara after L release.
 I suppose, that Murano can also join to this process.

 Also AFAIK Sahara team plan to create functional tests with Heat-engine.
 We may add it as a non-voting job for our gate.
 Probably it will be good to have two different types of this job: with
 convergence and with default Heat.

 On 28 August 2015 at 04:35, Angus Salkeld asalk...@mirantis.com wrote:

 Hi

 I have been running some rally tests against convergence and our existing
 implementation to compare.

 So far I have done the following:

1. defined a template with a resource group

 https://github.com/asalkeld/convergence-rally/blob/master/templates/resource_group_test_resource.yaml.template
2. the inner resource looks like this:

 https://github.com/asalkeld/convergence-rally/blob/master/templates/server_with_volume.yaml.template
  (it
uses TestResource to attempt to be a reasonable simulation of a
server+volume+floatingip)
3. defined a rally job:

 https://github.com/asalkeld/convergence-rally/blob/master/increasing_resources.yaml
  that
creates X resources then updates to X*2 then deletes.
4. I then ran the above with/without convergence and with 2,4,8
heat-engines

 Here are the results compared:

 https://docs.google.com/spreadsheets/d/12kRtPsmZBl_y78aw684PTBg3op1ftUYsAEqXBtT800A/edit?usp=sharing


 Results look pretty nice (especially for create) :)
 The strange thing for me: why does update with 8 engines show worse results
 than with 4 engines? (maybe a mistake in the graph...?)





 Some notes on the results so far:

-  convergence with only 2 engines does suffer from RPC overload (it
gets message timeouts on larger templates). I wonder if this is the 
 problem
in our convergence gate...

 Good spotting. If it's true, probably we should try to change the number of
 engines... (not sure how the gate hardware would react to that).


- convergence does very well with a reasonable number of engines
running.
- delete is slightly slower on convergence


 Also, about delete - maybe we can optimize it later, once the convergence
 approach gets more feedback.


 Still to test:

- the above, but measure memory usage
- many small templates (run concurrently)
- we need to ask projects using Heat to try with convergence (Murano,
TripleO, Magnum, Sahara, etc..)

 Any feedback welcome (suggestions on what else to test).

 -Angus





 Regards,
 Sergey.







-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.


Re: [openstack-dev] [ironic] final release in Liberty cycle

2015-08-28 Thread Thierry Carrez
Ruby Loo wrote:
 Our first semver release, 4.0.0, was tagged this week but a few more
 things need to be ironed out still (hopefully there will be an
 announcement about that in the near future).
 
 What I wanted to mention is that according to the new process, there
 will be a final release of ironic that coincides with the Liberty
 coordinated release. The current plan is to cut a 4.1.0 release around
 Liberty RC1, which will become our stable/liberty branch. According to
 the schedule[1], that would most likely happen the week of September 21
 or thereabouts. We'll have a better idea as we get closer to the date.

Right. Basically 4.1.0 will be your release candidate and will serve
as the end-of-cycle release unless major issues are found that justify
a 4.1.1 being cut on stable/liberty branch.

 It isn't clear to me how ironic is affected by the DepFreeze[2] and the
 global requirements. Maybe someone who understands that part could
 explain. (And perhaps how the new ironic-lib fits into this freeze, or not.)

Due to the way stable/liberty will work, we softfreeze the master branch
of global requirements shortly after the liberty-3 date, and until all
projects with stable branches have cut stable/liberty (which happens
when they cut their first release candidate). At that point we cut a
stable/liberty branch for global requirements and master is unfrozen.

So for Ironic, that means new requirements (or requirement bumps)
should ideally be filed before the end of next week if you want them to be
part of 4.1.0 (or stable/liberty).

It's a soft freeze, so we'll still consider exceptions of course, but
that's the general rule.

-- 
Thierry Carrez (ttx)



[openstack-dev] [neutron] 40% failure on neutron python3.4 tests in the gate

2015-08-28 Thread Sean Dague
We're at an 18hr backup in the gate, which is really unusual given the
amount of decoupling. Even under our current load that means we're
seeing huge failure rates causing resets.

It appears one of the major culprits is the python34 tests in neutron,
which were over a 40% failure rate recently - http://goo.gl/9wCerK

That tends to lead to things like -
http://dl.dropbox.com/u/6514884/screenshot_249.png - which means a huge
amount of work has been reset. Right now 3 of 7 neutron patches in the
gate that are within the sliding window are in a failure state (they are
also the only current visible fails in the window).

Looking at one of the patches in question -
https://review.openstack.org/#/c/202207/ - shows it's been rechecked 3
times, and these failures were seen in earlier runs.

I do understand that people want to get their code merged, but
rechecking patches that are failing this much without going after the
root causes means everyone pays for it. This is blocking a lot of other
projects from landing code in a timely manner.

The functional tests seem to have a quite high failure rate as well from
spot checking. If the results of these tests are mostly going to be
ignored and rechecked, can we remove them from the gate definition on
neutron so they aren't damaging the overall flow of the gate?

Thanks,

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [murano] Cloud Foundry service broker question

2015-08-28 Thread Nikolay Starodubtsev
OK, I believe we can keep it in mind as a possible resolution. The problem is
that it would take us too long, so we can discuss it while we plan
Mitaka development.
However, it does not solve the current problem with the service broker API.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-08-27 23:25 GMT+03:00 Dmitry mey...@gmail.com:

 I would say to extend Murano with additional capabilities.
 Dependency management for composite applications is very important for
 modern development, so I think adding such use cases could be very
 beneficial for Murano.
 On Aug 27, 2015 2:53 PM, Nikolay Starodubtsev 
 nstarodubt...@mirantis.com wrote:

 Dmitry,
 Do I understand correctly that your recommendation is to change some
 Murano logic?



 Nikolay Starodubtsev

 Software Engineer

 Mirantis Inc.


 Skype: dark_harlequine1

 2015-08-24 23:31 GMT+03:00 Dmitry mey...@gmail.com:

 I think that you can model application dependencies in a way that allows
 multi-step provisioning and further maintenance of each component.
 The example of such modeling could be seen in OASIS TOSCA.
 On Aug 24, 2015 6:19 PM, Nikolay Starodubtsev 
 nstarodubt...@mirantis.com wrote:

 Hi all,
 Today Stan Lagun and I discussed the question of how we can provision a
 complex Murano app through Cloud Foundry.
 Here you can see logs from #murano related to this discussion:
 http://eavesdrop.openstack.org/irclogs/%23murano/%23murano.2015-08-24.log.html#t2015-08-24T09:53:01

 So, the only way we see now to provision apps which have dependencies
 is step-by-step provisioning, manually updating JSON files on each
 iteration. We appreciate any ideas.
 Here is the link for review:
 https://review.openstack.org/#/c/196820/





 Nikolay Starodubtsev

 Software Engineer

 Mirantis Inc.


 Skype: dark_harlequine1














Re: [openstack-dev] [murano] [dashboard] public package visibility in Package Definitions UX concern

2015-08-28 Thread Nikolay Starodubtsev
My vote is for #1. If I remember the problem right it's the best solution.
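
Option #1 from the quoted proposal below - hiding the visibility filter from non-admin users entirely - can be sketched like this (plain Python illustrating the decision; not Horizon's actual table/FilterAction API):

```python
# Sketch of option #1: only admins get the project/public/other
# filter; regular users see no filter at all and only their own
# tenant's packages.  Hypothetical helper functions, not Horizon code.

FILTER_CHOICES = ("project", "public", "other")

def filter_choices_for(user_is_admin):
    """Return the visibility-filter choices to render (empty = hidden)."""
    return FILTER_CHOICES if user_is_admin else ()

def visible_packages(packages, user_tenant, user_is_admin):
    """Admins see everything; plain users only their tenant's packages."""
    if user_is_admin:
        return packages
    return [p for p in packages if p["tenant"] == user_tenant]

packages = [
    {"name": "app-a", "tenant": "t1", "public": True},
    {"name": "app-b", "tenant": "t2", "public": True},
]
print(filter_choices_for(False))  # () -- filter hidden for non-admins
print([p["name"] for p in visible_packages(packages, "t1", False)])
```

With the filter simply absent for non-admins, there is no widget implying that foreign public packages should appear.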



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-08-20 13:16 GMT+03:00 Kirill Zaitsev kzait...@mirantis.com:

 On our latest irc meeting I raised a concern about public package
 visibility. Here’s the commit that caused my concerns
 https://review.openstack.org/#/c/213682/

 We currently have «catalog» and «package definitions» pages in our
 dashboard. The former contains packages that the user can add in his
 environment, and the latter contains packages the user can edit. This
 means, that admin user sees all the packages on package definitions page,
 while simple user can only see packages from his tenant.
 Lately we’ve added a filter, similar to the one the «Images» dashboard has,
 that separates packages into «project», «public» and «other» groups, to ease
 selection, but this unfortunately introduced some negative UX, because
 non-admin users now see the filter and expect all the public packages to be
 there.

 This can be solved in a couple of ways.
 1) Remove the filter for non-admin user, thus removing any concerns about
 public-packages. User can still sort the table by pressing on the public
 header.
 2) Renaming the filter to something like «my public» for non-admin
 3) Allowing user to see public packages from other tenants, but making all
 the edit options grey, although I’m not sure if it’s possible to do so for
 bulk operation checkboxes.
 4) Leave everything as is (ostrich algorithm), as we believe, that this is
 expected behaviour

 Personally I like #1 as it makes more sense to me and feels more
 consistent than other options.

 Ideas/opinions would be appreciated.

 --
 Kirill Zaitsev
 Murano team
 Software Engineer
 Mirantis, Inc





Re: [openstack-dev] [Blazar] Anyone interested?

2015-08-28 Thread Nikolay Starodubtsev
Hi Ildikó,
The problem with the Blazar project was that active contributors moved to
different OpenStack projects or left the community.
I want to be part of the 'resurrection' process. Also, some other guys might be
interested. I remember that I saw some email in the dev list.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-08-28 9:56 GMT+03:00 Ildikó Váncsa ildiko.van...@ericsson.com:

 Hi All,

 The resource reservation topic pops up from time to time on different forums to
 cover use cases in terms of both IT and NFV. The Blazar project was
 intended to address this need, but according to my knowledge due to earlier
 integration and other difficulties the work has been stopped.

 My question is that who would be interested in resurrecting the Blazar
 project and/or working on a reservation system in OpenStack?

 Thanks and Best Regards,
 Ildikó




Re: [openstack-dev] --detailed-description for OpenStack items

2015-08-28 Thread Flavio Percoco

On 27/08/15 18:35 +, Tim Bell wrote:

That could be done but we'd need to establish an agreed name so that Horizon or the CLIs, 
for example, could filter based on the description: "Give me all VMs with Ansys in the 
description."

If we use properties, a consistent approach would be needed so the higher level 
tooling could rely on it (and hide the implementation details). Currently, I 
don't think Horizon lets you set properties for an image or a VM.


mmh, I'm not sure about this either (whether horizon allows you to do
that or not) but I'd recommend using properties too.

Flavio
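
Tim's example query - all VMs with "Ansys" in the description - over an agreed property name could be sketched as plain filtering (the dicts below stand in for API results, and the "description" property name is the convention under discussion, not an established field):

```python
# Sketch of filtering servers by an agreed "description" metadata key.
# The server dicts are hypothetical stand-ins for Nova API results.

def vms_with_description(servers, needle):
    """Case-insensitive substring match on the description property."""
    needle = needle.lower()
    return [s["name"] for s in servers
            if needle in s.get("metadata", {}).get("description", "").lower()]

servers = [
    {"name": "vm-1", "metadata": {"description": "Ansys batch worker"}},
    {"name": "vm-2", "metadata": {"description": "web frontend"}},
    {"name": "vm-3", "metadata": {}},
]
print(vms_with_description(servers, "ansys"))  # -> ['vm-1']
```

The point of agreeing on the key name up front is exactly so that higher-level tooling can rely on it like this, regardless of which service stores the property.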



Tim


-Original Message-
From: Daniel Speichert [mailto:dan...@speichert.pl]
Sent: 27 August 2015 19:32
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] --detailed-description for OpenStack items

On 8/27/2015 13:23, Tim Bell wrote:




Some projects such as cinder include a detailed-description option
where you can include an arbitrary string with a volume to remind the
admins what the volume is used for.



Has anyone looked at doing something similar for Nova for instances
and Glance for images ?



In many cases, the names get heavily overloaded with information.



Tim




Wouldn't it be appropriate/simple to just specify a metadata entry like
description="this is what it's used for"?

Regards,
Daniel Speichert





--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Horizon] [Nova] [Cinder] Need to add selection of availability zone for new volume

2015-08-28 Thread Dulko, Michal
Hi,

If I recall correctly your Horizon-based solution won't be possible because of 
how Nova's code works internally - it just passes Nova's AZ to the Cinder API 
without allowing it to be overridden.

We're discussing this particular issue in another ML thread 
http://lists.openstack.org/pipermail/openstack-dev/2015-August/071732.html. I'm 
planning to create a BP and spec to sort out all Cinder AZ issues in Mitaka.

Apart from that there's a bug report[1] and a patch[2] aiming to temporarily 
fix it for the Liberty cycle.

[1] https://bugs.launchpad.net/cinder/+bug/1489575
[2] https://review.openstack.org/#/c/217857/
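
The shape of such a temporary fix is roughly: if the AZ that Nova passed through doesn't exist in Cinder, fall back to a configured default instead of failing the boot. A hedged sketch (option and function names are illustrative, not Cinder's actual code):

```python
# Sketch of an AZ fallback when creating a volume: if the AZ that
# Nova passed through doesn't exist in Cinder, fall back to a
# configured default rather than failing the whole boot request.
# Names are illustrative, not Cinder's actual implementation.

def resolve_volume_az(requested_az, cinder_azs, default_az,
                      allow_fallback=True):
    """Pick the AZ to create the volume in, or raise if no fallback."""
    if requested_az in cinder_azs:
        return requested_az
    if allow_fallback:
        return default_az
    raise ValueError("availability zone %r does not exist" % requested_az)

cinder_azs = {"nova"}  # Cinder only knows its single default AZ
# Nova's "az-2" is unknown to Cinder, so fall back to the default:
print(resolve_volume_az("az-2", cinder_azs, "nova"))  # -> nova
# A matching AZ is used as-is:
print(resolve_volume_az("nova", cinder_azs, "nova"))  # -> nova
```

With `allow_fallback=False` this degenerates to today's behavior, where the mismatch surfaces as a block-device failure.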

 -Original Message-
 From: Timur Nurlygayanov [mailto:tnurlygaya...@mirantis.com]
 Sent: Monday, August 17, 2015 2:19 PM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Horizon] [Nova] [Cinder] Need to add selection of
 availability zone for new volume
 
 Hi OpenStack dev team,
 
 
 we found issue [1] in Horizon (probably in the Nova API too) which blocks the
 ability to boot VMs with the option Instance Boot Source = "Boot from image
 (creates new volume)" when we have several Availability Zones in
 Nova and Cinder - it fails with the error "Failure prepping block device".
 
 
 Looks like it is an issue in the initial design of the "Boot from image (creates
 new volume)" feature: when we create a new volume we need to
 choose the Availability Zone for this volume or use some default value (which
 depends on the AZ configuration). At the same time, Nova AZs and Cinder AZs
 are different Availability Zones and we need to manage them separately.
 
 
 For now, when we use the "Boot from image (creates new volume)"
 feature, Nova tries to create the volume in the selected Nova Availability Zone,
 which may not be present in Cinder. As a result we see the error "Failure
 prepping block device".
 
 I think the Horizon UI should provide something like a drop-down list with the
 Cinder availability zones when the user wants to boot a VM with the "Boot
 from image (creates new volume)" option - we can prepare the fix for the existing
 Horizon UI (to support the many-AZs-for-Nova-&-Cinder use case in the Kilo and
 Liberty releases).
 
 
 Also, I know that the Horizon team is working on the new UI for the instance
 creation workflow, so we need to make sure that this will be supported in the
 new UI [2].
 
 
 Thank you!
 
 
 [1] https://bugs.launchpad.net/horizon/+bug/1485578
 [2] https://openstack.invisionapp.com/d/#/projects/2472307
 
 --
 
 
 
 Timur,
 Senior QA Engineer
 OpenStack Projects
 Mirantis Inc


[openstack-dev] [Neutron] [DVR] easyOVS -- Smart tool to use/debug Neutron/DVR

2015-08-28 Thread Baohua Yang
Hi, all

When using neutron (especially with DVR), I find it difficult to debug
problems with lots of ovs rules, complicated iptables rules, network
namespaces, routing tables, ...

So I created easyOVS (https://github.com/yeasy/easyOVS); in summary, it can


   - Format the output and use color to make it clear and easy to compare.
   - Associate the OpenStack information (e.g., vm ip) on the virtual port
   or rule
   - Query openvswitch, iptables, and namespace information in a smart way.
   - Check if the DVR configuration is correct.
   - Smart command completion, try tab everywhere.
   - Support running local system commands.
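
The colorized formatting in the first bullet can be illustrated with a tiny sketch (ANSI escape codes; purely hypothetical, not easyOVS's actual implementation):

```python
# Illustration of the colorized-output idea: wrap fields in ANSI
# escape codes so e.g. check results stand out when comparing output.
# A sketch only -- not easyOVS's actual code.

COLORS = {"red": "\033[31m", "green": "\033[32m", "reset": "\033[0m"}

def colorize(text, color):
    """Wrap text in the given ANSI color, resetting afterwards."""
    return "%s%s%s" % (COLORS[color], text, COLORS["reset"])

def format_port(name, ip, ok=True):
    """One aligned line per port, with a colored pass/fail marker."""
    status = colorize("OK", "green") if ok else colorize("FAIL", "red")
    return "%-16s %-15s %s" % (name, ip, status)

print(format_port("qr-b0142af2-12", "10.0.0.1"))
print(format_port("fg-9e1c850d", "172.29.161.1", ok=False))
```
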

In the latest 0.5 version, it supports checking your DVR configuration and
running state; e.g., on a compute node, I run the 'dvr check' command, and it
will automatically check the configuration files, bridges, ports, network
namespaces, iptables rules, ... like

 No type given, guessing...compute node
=== Checking DVR on compute node ===
 Checking config files...
# Checking file = /etc/sysctl.conf...
# Checking file = /etc/neutron/neutron.conf...
# Checking file = /etc/neutron/plugins/ml2/ml2_conf.ini...
file /etc/neutron/plugins/ml2/ml2_conf.ini Not has [agent]
file /etc/neutron/plugins/ml2/ml2_conf.ini Not has l2_population = True
file /etc/neutron/plugins/ml2/ml2_conf.ini Not has
enable_distributed_routing = True
file /etc/neutron/plugins/ml2/ml2_conf.ini Not has arp_responder = True
# Checking file = /etc/neutron/l3_agent.ini...
 Checking config files has warnings

 Checking bridges...
# Existing bridges are br-tun, br-int, br-eno1, br-ex
# Vlan bridge is at br-tun, br-int, br-eno1, br-ex
 Checking bridges passed

 Checking vports ...
## Checking router port = qr-b0142af2-12
### Checking rfp port rfp-f046c591-7
Found associated floating ips : 172.29.161.127/32, 172.29.161.126/32
### Checking associated fpr port fpr-f046c591-7
### Check related fip_ns=fip-9e1c850d-e424-4379-8ebd-278ae995d5c3
Bridging in the same subnet
fg port is attached to br-ex
floating ip 172.29.161.127 match fg subnet
floating ip 172.29.161.126 match fg subnet
Checking chain rule number: neutron-postrouting-bottom...Passed
Checking chain rule number: OUTPUT...Passed
Checking chain rule number: neutron-l3-agent-snat...Passed
Checking chain rules: neutron-postrouting-bottom...Passed
Checking chain rules: PREROUTING...Passed
Checking chain rules: OUTPUT...Passed
Checking chain rules: POSTROUTING...Passed
Checking chain rules: POSTROUTING...Passed
Checking chain rules: neutron-l3-agent-POSTROUTING...Passed
Checking chain rules: neutron-l3-agent-PREROUTING...Passed
Checking chain rules: neutron-l3-agent-OUTPUT...Passed
DNAT for incoming: 172.29.161.127 -- 10.0.0.3 passed
Checking chain rules: neutron-l3-agent-float-snat...Passed
SNAT for outgoing: 10.0.0.3 -- 172.29.161.127 passed
Checking chain rules: neutron-l3-agent-OUTPUT...Passed
DNAT for incoming: 172.29.161.126 -- 10.0.0.216 passed
Checking chain rules: neutron-l3-agent-float-snat...Passed
SNAT for outgoing: 10.0.0.216 -- 172.29.161.126 passed
## Checking router port = qr-8c41bfc7-56
Checking passed already
 Checking vports passed


Welcome for any feedback, and welcome for any contribution!

I am trying to put this project into stackforge so that more people can use
and improve it. Any thoughts on whether that is suitable?

https://review.openstack.org/#/c/212396/

Thanks for any help or suggestion!


-- 
Best wishes!
Baohua


Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability zones

2015-08-28 Thread Duncan Thomas
Except your failure domain includes the cinder volume service, independent
of the resiliency of your backend, so if they're all on one node then you
don't really have availability zones.

I have historically strongly espoused the same view as Ben, though there
are lots of people who want fake availability zones... No strong use cases
though
On 28 Aug 2015 11:59, Dulko, Michal michal.du...@intel.com wrote:

  From: Ben Swartzlander [mailto:b...@swartzlander.org]
  Sent: Thursday, August 27, 2015 8:11 PM
  To: OpenStack Development Mailing List (not for usage questions)
 
  On 08/27/2015 10:43 AM, Ivan Kolodyazhny wrote:
 
 
Hi,
 
Looks like we need to be able to set AZ per backend. What do you
  think about such option?
 
 
 
  I dislike such an option.
 
  The whole premise behind an AZ is that it's a failure domain. The node
  running the cinder services is in exactly one such failure domain. If
 you have 2
  backends in 2 different AZs, then the cinder services managing those
  backends should be running on nodes that are also in those AZs. If you
 do it
  any other way then you create a situation where a failure in one AZ
 causes
  loss of services in a different AZ, which is exactly what the AZ feature
 is trying
  to avoid.
 
  If you do the correct thing and run cinder services on nodes in the AZs
 that
  they're managing then you will never have a problem with the one-AZ-per-
  cinder.conf design we have today.
 
  -Ben

 I disagree. You may have failure domains handled at a different level, like
 using Ceph's own mechanisms for that. In such a case you want to provide the
 user with a single backend regardless of compute AZ partitioning. To address
 such needs you would need to set multiple AZs per backend to make this
 achievable.




Re: [openstack-dev] [neutron] 40% failure on neutron python3.4 tests in the gate

2015-08-28 Thread Neil Jerram
On 28/08/15 13:39, Kevin Benton wrote:
 For the py34 failures, they seem to have started around the same time
 as a change was merged that adjusted the way they were run so I
 proposed a revert for that patch
 here: https://review.openstack.org/218244



Which leads on to https://review.openstack.org/#/c/217379/6.

Which is itself failing to merge for various dvsm-functional reasons,
including failure of test_restart_wsgi_on_sighup_multiple_workers [1]. 
There's a bug for that at
https://bugs.launchpad.net/neutron/+bug/1478190, but that doesn't show
any activity for the last few days.

[1]
http://logs.openstack.org/79/217379/6/gate/gate-neutron-dsvm-functional/2991b11/testr_results.html.gz

Regards,
Neil





Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability zones

2015-08-28 Thread Dulko, Michal
 From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
 Sent: Friday, August 28, 2015 2:31 PM
 
 Except your failure domain includes the cinder volume service, independent
 of the resiliency of your backend, so if they're all on one node then you don't
 really have availability zones.
 
 I have historically strongly espoused the same view as Ben, though there are
 lots of people who want fake availability zones... No strong use cases though

If you have a Ceph backend (actually I think this applies to any non-LVM 
backend), you normally run c-vol on your controller nodes in an active/passive 
manner. c-vol becomes more like a control plane service, and we don't provide 
AZs for the control plane. Nova doesn't do it either; AZs are only for compute 
nodes.

Given that Nova now assumes that Cinder has the same set of AZs, we should be 
able to create fake ones (or have a fallback option like in the patch provided 
by Ned).
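For context, the one-AZ-per-cinder.conf design discussed in this thread comes down to a couple of existing cinder.conf options; a minimal sketch (the AZ name is illustrative):

```ini
[DEFAULT]
# Each cinder.conf (i.e. each cinder-volume service) advertises exactly one AZ:
storage_availability_zone = az1
# AZ assumed when a create request does not specify one:
default_availability_zone = az1
```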


Re: [openstack-dev] [neutron] 40% failure on neutron python3.4 tests in the gate

2015-08-28 Thread Sean Dague
On 08/28/2015 09:22 AM, Assaf Muller wrote:
 
 
 On Fri, Aug 28, 2015 at 9:12 AM, Neil Jerram neil.jer...@metaswitch.com wrote:
 
 On 28/08/15 13:39, Kevin Benton wrote:
  For the py34 failures, they seem to have started around the same time
  as a change was merged that adjusted the way they were run so I
  proposed a revert for that patch
  here: https://review.openstack.org/218244
 
 
 
 Which leads on to https://review.openstack.org/#/c/217379/6.
 
 
 Armando reported the py34 Neutron gate issues a few hours after they
 started,
 and I pushed that fix a few hours after that. Sadly it's taking time to
 get that
 through the gate.

When issues like these arise, please bring them to the infra team in
#openstack-infra. They can promote fixes that unbreak things.

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] [Heat] Use block_device_mapping_v2 for swap?

2015-08-28 Thread marios
I am working with the OS::Nova::Server resource and looking at the tests
[1], it should be possible to just define 'swap_size' and get a swap
space created on the instance:

  NovaCompute:
type: OS::Nova::Server
properties:
  image:
{get_param: Image}
  ...
  block_device_mapping_v2:
- swap_size: 1

When trying this the first thing I hit is a validation code nit that is
already fixed @ [2] (I have slightly older heat) and I applied that fix.
However, when I try to deploy with a Flavor that has a 2MB swap, for
example, and with the above template, I still end up with a 2MB swap.

Am I right in my assumption that the above template is the equivalent of
specifying --swap on the nova boot cli (i.e. should this work?)? I am
working with the Ironic nova driver btw and when deploying using the
nova cli using --swap works; has anyone used/tested this property
recently? I'm not sure if this is worth filing a bug for yet.

thanks very much for reading! marios

[1]
https://github.com/openstack/heat/blob/a1819ff0696635c516d0eb1c59fa4f70cae27d65/heat/tests/nova/test_server.py#L2446
[2]
https://review.openstack.org/#/q/I2c538161d88a51022b91b584f16c1439848e7ada,n,z



[openstack-dev] [neutron][sfc] Neutron stadium distribution and/or packaging

2015-08-28 Thread Paul Carver
Has anyone written anything up about how Big Tent or Neutron Stadium 
projects are expected to be installed/distributed/packaged?


In particular, I'm wondering how we're supposed to handle changes to 
Neutron components. For the networking-sfc project we need to make 
additions to the API and corresponding additions to neutronclient as 
well as modifying the OvS agent to configure new flow table entries in OvS.


The code is in a separate Git repo as is expected of a Stadium project 
but it doesn't make sense that we would package altered copies of files 
that are deployed by the regular Neutron packages.


Should we be creating 99%+ of the functionality in filenames that don't 
conflict and then making changes to files in the Neutron and 
neutronclient repos to stitch together the 1% that adds our new 
functionality to the existing components? Or do we stage the code in the 
Stadium project's repo then subsequently request to merge it into the 
neutron/neutronclient repo? Or is there some other preferred way to 
integrate the added features?






Re: [openstack-dev] [neutron] 40% failure on neutron python3.4 tests in the gate

2015-08-28 Thread Sean Dague
On 08/28/2015 08:34 AM, Kevin Benton wrote:
 One of the patches that fixes one of the functional failures that has
 been hitting is here: https://review.openstack.org/#/c/217927/
 
 However, it failed in the DVR job on the 'test_router_rescheduling'
 test.[1] This failure is because the logic to skip when DVR is enabled
 is based on a check that will always return False.[2] I pushed a patch
 to tempest to fix that [3] so once that gets merged we should be able to
 get the one above merged.
 
 For the py34 failures, they seem to have started around the same time as
 a change was merged that adjusted the way they were run so I proposed a
 revert for that patch here: https://review.openstack.org/218244

That would be indicative of the fact that the tests aren't isolated, and
running them in parallel breaks things because the tests implicitly
depend on both order, and that everything before them actually ran.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [nova][manila] latest microversion considered dangerous

2015-08-28 Thread Alex Meade
I don't know if this is really a big problem. IMO, even with microversions
you shouldn't be implementing things that aren't backwards compatible
within the major version. I thought the benefit of microversions is to know
if a given feature exists within the major version you are using. I would
consider a breaking change to be a major version bump. If we only do a
microversion bump for a backwards incompatible change then we are just
using microversions as major versions.

-Alex

On Fri, Aug 28, 2015 at 3:45 AM, Dmitry Tantsur dtant...@redhat.com wrote:

 On 08/28/2015 09:34 AM, Valeriy Ponomaryov wrote:

 Dmitriy,

 New tests that cover new functionality already know which API version
 they require, so even in testing it is not needed. All other existing
 tests do not require API updates.


 Yeah, but you can't be sure that your change does not break the world
 until you merge it and start updating tests. Probably it's not that
 important for projects that have their integration tests in-tree, though.


 So, I raise my hand for restricting latest.

 On Fri, Aug 28, 2015 at 10:20 AM, Dmitry Tantsur dtant...@redhat.com wrote:

 On 08/27/2015 09:38 PM, Ben Swartzlander wrote:

 Manila recently implemented microversions, copying the
 implementation
 from Nova. I really like the feature! However I noticed that
 it's legal
 for clients to transmit latest instead of a real version number.

 THIS IS A TERRIBLE IDEA!

 I recommend removing support for latest and forcing clients to
 request
 a specific version (or accept the default).


 I think latest is needed for integration testing. Otherwise you
 have to update your tests each time a new version is introduced.



 Allowing clients to request the latest microversion guarantees
 undefined (and likely broken) behavior* in every situation where a
 client talks to a server that is newer than it.

 Every client can only understand past and present API
 implementation,
 not future implementations. Transmitting latest implies an
 assumption
 that the future is not so different from the present. This
 assumption
 about future behavior is precisely what we don't want clients to
 make,
 because it prevents forward progress. One of the main reasons
 microversions is a valuable feature is because it allows forward
 progress by letting us make major changes without breaking old
 clients.

 If clients are allowed to assume that nothing will change too
 much in
 the future (which is what asking for latest implies) then the
 server
 will be right back in the situation it was trying to get out of
 -- it
 can never change any API in a way that might break old clients.

 I can think of no situation where transmitting latest is
 better than
 transmitting the highest version that existed at the time the
 client was
 written.

 -Ben Swartzlander

 * Undefined/broken behavior unless the server restricts itself
 to never
 making any backward-compatiblity-breaking change of any kind.











 --
 Kind Regards
 Valeriy Ponomaryov
 www.mirantis.com
 vponomar...@mirantis.com







Re: [openstack-dev] [Blazar] Anyone interested?

2015-08-28 Thread Pierre Riteau
Hello,

The NSF-funded Chameleon project (https://www.chameleoncloud.org) uses Blazar 
to provide advance reservations of resources for running cloud computing 
experiments.

We would be interested in contributing as well.

Pierre Riteau

On 28 Aug 2015, at 07:56, Ildikó Váncsa ildiko.van...@ericsson.com wrote:

 Hi All,
 
 The resource reservation topic pops up from time to time on different forums to 
 cover use cases in terms of both IT and NFV. The Blazar project was intended 
 to address this need, but to my knowledge the work has stopped due to earlier 
 integration and other difficulties.
 
 My question is: who would be interested in resurrecting the Blazar 
 project and/or working on a reservation system in OpenStack?
 
 Thanks and Best Regards,
 Ildikó
 




Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability zones

2015-08-28 Thread Ned Rhudy (BLOOMBERG/ 731 LEX)
Our use case for fake AZs (and why I pushed 
https://review.openstack.org/#/c/217857/ to enable that sort of behavior) is 
what Michal outlined, namely that we use Ceph and do not need or want Cinder to 
add itself to the mix when we're dealing with our failure domains. We already 
handle that via our Ceph crush map, so Cinder doesn't need to worry about it. 
It should just throw volumes at the configured RBD pool for the requested 
backend and not concern itself with what's going on behind the scenes.

From: openstack-dev@lists.openstack.org 
Subject: Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability zones


Except your failure domain includes the cinder volume service, independent of 
the resiliency of your backend, so if they're all on one node then you don't 
really have availability zones.
I have historically strongly espoused the same view as Ben, though there are 
lots of people who want fake availability zones... No strong use cases though
On 28 Aug 2015 11:59, Dulko, Michal michal.du...@intel.com wrote:

 From: Ben Swartzlander [mailto:b...@swartzlander.org]
 Sent: Thursday, August 27, 2015 8:11 PM
 To: OpenStack Development Mailing List (not for usage questions)

 On 08/27/2015 10:43 AM, Ivan Kolodyazhny wrote:


   Hi,

   Looks like we need to be able to set AZ per backend. What do you
 think about such option?



 I dislike such an option.

 The whole premise behind an AZ is that it's a failure domain. The node
 running the cinder services is in exactly one such failure domain. If you 
 have 2
 backends in 2 different AZs, then the cinder services managing those
 backends should be running on nodes that are also in those AZs. If you do it
 any other way then you create a situation where a failure in one AZ causes
 loss of services in a different AZ, which is exactly what the AZ feature is 
 trying
 to avoid.

 If you do the correct thing and run cinder services on nodes in the AZs that
 they're managing then you will never have a problem with the one-AZ-per-
 cinder.conf design we have today.

 -Ben

I disagree. You may have failure domains handled at a different level, like using 
Ceph's own mechanisms for that. In such a case you want to provide the user with a 
single backend regardless of compute AZ partitioning. To address such needs you 
would need to set multiple AZs per backend to make this achievable.






Re: [openstack-dev] [neutron] 40% failure on neutron python3.4 tests in the gate

2015-08-28 Thread Kevin Benton
Why would that only impact py34 and not py27? Aren't the py27 tests run with
testtools?


On Fri, Aug 28, 2015 at 5:41 AM, Sean Dague s...@dague.net wrote:

 On 08/28/2015 08:34 AM, Kevin Benton wrote:
  One of the patches that fixes one of the functional failures that has
  been hitting is here: https://review.openstack.org/#/c/217927/
 
  However, it failed in the DVR job on the 'test_router_rescheduling'
  test.[1] This failure is because the logic to skip when DVR is enabled
  is based on a check that will always return False.[2] I pushed a patch
  to tempest to fix that [3] so once that gets merged we should be able to
  get the one above merged.
 
  For the py34 failures, they seem to have started around the same time as
  a change was merged that adjusted the way they were run so I proposed a
  revert for that patch here: https://review.openstack.org/218244

 That would be indicative of the fact that the tests aren't isolated, and
 running them in parallel breaks things because the tests implicitly
 depend on both order, and that everything before them actually ran.

 -Sean

 --
 Sean Dague
 http://dague.net





-- 
Kevin Benton


Re: [openstack-dev] [oslo] About logging-flexibility

2015-08-28 Thread Ihar Hrachyshka
 On 28 Aug 2015, at 14:16, Fujita, Daisuke fuzita.dais...@jp.fujitsu.com 
 wrote:
 
 Hi, Ihar and Dims
 
 Thank you for your reply.
 
 I uploaded a new patch set, which is a single patch for oslo.log.
 I'd like you to do a code review.
 https://review.openstack.org/#/c/218139/
 
 After this email, I'd like to add you to the reviewer list.
 
 
 Thank you for your cooperation.
 
 Best Regards,
 Daisuke Fujita
 

Thanks a lot. I believe such a tiny patch can actually go into Liberty, but I 
leave it to the oslo team to decide.
Ihar


Re: [openstack-dev] [neutron][sfc] Neutron stadium distribution and/or packaging

2015-08-28 Thread Ihar Hrachyshka
 On 28 Aug 2015, at 14:08, Paul Carver pcar...@paulcarver.us wrote:
 
 Has anyone written anything up about expectations for how Big Tent or 
 Neutron Stadium projects are expected to be installed/distributed/packaged?
 

Seems like your questions below are more about extensibility than e.g. 
packaging.

 In particular, I'm wondering how we're supposed to handle changes to Neutron 
 components. For the networking-sfc project we need to make additions to the 
 API and corresponding additions to neutronclient as well as modifying the OvS 
 agent to configure new flow table entries in OvS.
 
 The code is in a separate Git repo as is expected of a Stadium project but it 
 doesn't make sense that we would package altered copies of files that are 
 deployed by the regular Neutron packages.
 

Of course you should not ship your custom version of neutron with your 
sub-project. Instead, you should work with the neutron team to make sure you 
have everything needed to extend it without duplicating effort in your project.

 Should we be creating 99%+ of the functionality in filenames that don't 
 conflict and then making changes to files in the Neutron and neutronclient 
 repos to stitch together the 1% that adds our new functionality to the 
 existing components? Or do we stage the code in the Stadium project's repo 
 then subsequently request to merge it into the neutron/neutronclient repo? Or 
 is there some other preferred way to integrate the added features?
 

I presume that all sub-projects should use their own python namespace and not 
pollute neutron.* namespace. If that’s not the case for your sub-project, you 
should migrate to a new namespace asap.

If there is anything missing in neutron or neutronclient for you to integrate 
with, then you should work in those repositories to get the extension hooks or 
features you miss; after they land in neutron, you can simply use them from 
your sub-project. Of course this means some dependency on progress in the 
neutron repository when estimating feature plans for your sub-project.
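As a rough sketch of this approach (own namespace plus entry points, rather than patched neutron files), a sub-project's setup.cfg might look like the following; the package and module paths are hypothetical:

```ini
[files]
packages =
    networking_sfc

[entry_points]
# Neutron loads service plugins through this entry-point group, so the
# sub-project extends the API without shipping altered neutron files.
neutron.service_plugins =
    flow_classifier = networking_sfc.services.flowclassifier.plugin:FlowClassifierPlugin
```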

Ihar


Re: [openstack-dev] info in paste will be removed?

2015-08-28 Thread Jeremy Stanley
On 2015-08-28 07:06:03 + (+), Osanai, Hisashi wrote:
 I would like to know whether info in http://paste.openstack.org
 will be removed or not. If it will be removed, I would also like
 to know under what conditions.

We (the project infrastructure root sysadmins) don't expire/purge
the content on paste.openstack.org, though we have deleted individual
pastes on request when someone reports material that is abusive or
potentially illegal in many jurisdictions.
-- 
Jeremy Stanley



Re: [openstack-dev] [oslo] About logging-flexibility

2015-08-28 Thread Fujita, Daisuke
Hi, Ihar and Dims

Thank you for your reply.

I uploaded a new patch set, which is a single patch for oslo.log.
I'd like you to do a code review.
 https://review.openstack.org/#/c/218139/

After this email, I'd like to add you to the reviewer list.


Thank you for your cooperation.

Best Regards,
Daisuke Fujita


 -Original Message-
 From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
 Sent: Thursday, August 27, 2015 7:00 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [oslo] About logging-flexibility
 
 
 On 08/27/2015 11:56 AM, Davanum Srinivas wrote:
  Daisuke,
 
  It's very late for merging these patches for Liberty. Sorry, they
  will have to wait till M. We can talk more about it on next
  Monday's Oslo meeting. Please let us know and i'll add a topic
  there if you can make it.
 
 
 Not judging the cycle concern, I believe this should be a single patch
 for oslo.log of LOC ~ 100-150 lines with tests.
 
 Ihar
 



Re: [openstack-dev] [neutron] 40% failure on neutron python3.4 tests in the gate

2015-08-28 Thread Sean Dague
On 08/28/2015 08:50 AM, Kevin Benton wrote:
 Why would that only impact py34 and not py27? Aren't the py27 run with
 testtools?

py34 is only running some subset of tests, so there are a lot of ways
this can go weird.

It may be that the failing db tests assume some other tests with a db
setup step have run before them. In the py27 case there are enough tests
that do that setup that statistically one nearly always runs before the
ones that are problematic.

There are a couple of modes you can run testr in, like --isolated, which
will expose tests that are coupled to other tests running before them.
If you can reproduce a failure locally you can also use
--analyze-isolation to figure out which tests are coupled.

testr also reorders tests to attempt to be faster in aggregate. So run
order is different than it would be in testtools.run case.

In the testtools.run case all the tests are just run in discovery order.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [nova][manila] latest microversion considered dangerous

2015-08-28 Thread Sean Dague
On 08/28/2015 09:32 AM, Alex Meade wrote:
 I don't know if this is really a big problem. IMO, even with
 microversions you shouldn't be implementing things that aren't backwards
 compatible within the major version. I thought the benefit of
 microversions is to know if a given feature exists within the major
 version you are using. I would consider a breaking change to be a major
 version bump. If we only do a microversion bump for a backwards
 incompatible change then we are just using microversions as major versions.

In the Nova case, Microversions aren't semver. They are content
negotiation. Backwards incompatible only means something if time's arrow
only flows in one direction. But when connecting to a bunch of random
OpenStack clouds, there is no forced progression into the future.

While each service is welcome to enforce more compatibility for the sake
of their users, one should not assume that microversions are semver as a
base case.

I agree that 'latest' is basically only useful for testing. The
python-novaclient code requires a microversion be specified on the API
side, and on the CLI side negotiates to the highest version of the API
that it understands and that the server supports -
https://github.com/openstack/python-novaclient/blob/d27568eab50b10fc022719172bc15666f3cede0d/novaclient/__init__.py#L23
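That negotiate-down strategy (pin to the highest version both sides understand, never 'latest') can be sketched in a few lines; this is a simplified illustration, not novaclient's actual code:

```python
def negotiate(client_max, server_min, server_max):
    """Return the highest microversion both ends support, or None."""
    def key(v):  # "2.12" -> (2, 12), so versions compare numerically
        major, minor = v.split(".")
        return (int(major), int(minor))

    if key(client_max) < key(server_min):
        return None  # client predates everything the server offers
    return client_max if key(client_max) <= key(server_max) else server_max


print(negotiate("2.12", "2.1", "2.30"))   # newer server: pin to "2.12"
print(negotiate("2.12", "2.1", "2.7"))    # older server: fall back to "2.7"
print(negotiate("2.12", "2.20", "2.30"))  # no overlap: None
```

The client then sends the negotiated value in the microversion header, so a future server can never surprise it with behavior it was not written for.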

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [neutron] 40% failure on neutron python3.4 tests in the gate

2015-08-28 Thread Kevin Benton
One of the patches that fixes one of the functional failures that has been
hitting is here: https://review.openstack.org/#/c/217927/

However, it failed in the DVR job on the 'test_router_rescheduling'
test.[1] This failure is because the logic to skip when DVR is enabled is
based on a check that will always return False.[2] I pushed a patch to
tempest to fix that [3] so once that gets merged we should be able to get
the one above merged.

For the py34 failures, they seem to have started around the same time as a
change was merged that adjusted the way they were run so I proposed a
revert for that patch here: https://review.openstack.org/218244

1.
http://logs.openstack.org/27/217927/1/check/gate-tempest-dsvm-neutron-dvr/3361f9f/logs/testr_results.html.gz
2.
https://github.com/openstack/tempest/blob/8d827589e6589814e01089eb56b4d109274c781a/tempest/scenario/test_network_basic_ops.py#L662-L663
3. https://review.openstack.org/#/c/218242/

On Fri, Aug 28, 2015 at 4:09 AM, Sean Dague s...@dague.net wrote:

 We're at an 18hr backup in the gate, which is really unusual given the
 amount of decoupling. Even under our current load that means we're
 seeing huge failure rates causing resets.

 It appears one of the major culprits is the python34 tests in neutron,
 which were over a 40% failure rate recently - http://goo.gl/9wCerK

 That tends to lead to things like -
 http://dl.dropbox.com/u/6514884/screenshot_249.png - which means a huge
 amount of work has been reset. Right now 3 of 7 neutron patches in the
 gate that are within the sliding window are in a failure state (they are
 also the only current visible fails in the window).

 Looking at one of the patches in question -
 https://review.openstack.org/#/c/202207/ - shows it's been rechecked 3
 times, and these failures were seen in earlier runs.

 I do understand that people want to get their code merged, but
 rechecking patches that are failing this much without going after the
 root causes means everyone pays for it. This is blocking a lot of other
 projects from landing code in a timely manner.

 The functional tests seem to have a quite high failure rate as well from
 spot checking. If the results of these tests are mostly going to be
 ignored and rechecked, can we remove them from the gate definition on
 neutron so they aren't damaging the overall flow of the gate?

 Thanks,

 -Sean

 --
 Sean Dague
 http://dague.net





-- 
Kevin Benton


Re: [openstack-dev] [neutron][sfc] Neutron stadium distribution and/or packaging

2015-08-28 Thread Kyle Mestery
On Fri, Aug 28, 2015 at 8:07 AM, Ihar Hrachyshka ihrac...@redhat.com
wrote:

  On 28 Aug 2015, at 14:08, Paul Carver pcar...@paulcarver.us wrote:
 
  Has anyone written anything up about expectations for how Big Tent or
 Neutron Stadium projects are expected to be
 installed/distributed/packaged?
 

 Seems like your questions below are more about extensibility than e.g.
 packaging.


I agree, though I will say that your project is listed as
release:independent [1]. This means that networking-sfc will NOT release
when neutron and neutron-[fwaas, lbaas, vpnaas] release Liberty, but can
release whenever it desires. This would be when the code is complete and
the team has decided a release should be made. The process for handling
this release is documented here [2] (though wait for that to refresh based
on the review which merged here [3]).

[1] http://governance.openstack.org/reference/projects/neutron.html
[2]
http://docs.openstack.org/developer/neutron/devref/sub_project_guidelines.html#releases
[3] https://review.openstack.org/#/c/217723/


  In particular, I'm wondering how we're supposed to handle changes to
 Neutron components. For the networking-sfc project we need to make
 additions to the API and corresponding additions to neutronclient as well
 as modifying the OvS agent to configure new flow table entries in OvS.
 
  The code is in a separate Git repo as is expected of a Stadium project
 but it doesn't make sense that we would package altered copies of files
 that are deployed by the regular Neutron packages.
 

 Of course you should not ship your custom version of neutron with your
 sub-project. Instead, you should work with the neutron team to make sure you
 have all needed to extend it without duplicating efforts in your project.

  Should we be creating 99%+ of the functionality in filenames that don't
 conflict and then making changes to files in the Neutron and neutronclient
 repos to stitch together the 1% that adds our new functionality to the
 existing components? Or do we stage the code in the Stadium project's repo
 then subsequently request to merge it into the neutron/neutronclient repo?
 Or is there some other preferred way to integrate the added features?
 

 I presume that all sub-projects should use their own python namespace and
 not pollute neutron.* namespace. If that’s not the case for your
 sub-project, you should migrate to a new namespace asap.

 If there is anything missing in neutron or neutronclient for you to
 integrate with it, then you should work in those repositories to get the
 extension hooks or features you miss, and after it’s in neutron, you will
 merely utilise them from your sub-project. Of course it means some kind of
 dependency on the progress in neutron repository to be able to estimate
 feature plans in your sub-project.

 I'll second Ihar here. If you have dependencies in other projects, those
need to be worked in so they can be consumed by networking-sfc.


 Ihar


Re: [openstack-dev] [neutron] 40% failure on neutron python3.4 tests in the gate

2015-08-28 Thread Assaf Muller
On Fri, Aug 28, 2015 at 9:12 AM, Neil Jerram neil.jer...@metaswitch.com
wrote:

 On 28/08/15 13:39, Kevin Benton wrote:
  For the py34 failures, they seem to have started around the same time
  as a change was merged that adjusted the way they were run, so I
  proposed a revert for that patch
  here: https://review.openstack.org/218244
 
 

 Which leads on to https://review.openstack.org/#/c/217379/6.


Armando reported the py34 Neutron gate issues a few hours after they
started,
and I pushed that fix a few hours after that. Sadly it's taking time to get
that
through the gate.



 Which is itself failing to merge for various dvsm-functional reasons,
 including failure of test_restart_wsgi_on_sighup_multiple_workers [1].
 There's a bug for that at
 https://bugs.launchpad.net/neutron/+bug/1478190, but that doesn't show
 any activity for the last few days.

 [1]

 http://logs.openstack.org/79/217379/6/gate/gate-neutron-dsvm-functional/2991b11/testr_results.html.gz

 Regards,
 Neil





Re: [openstack-dev] [Glance] upcoming glanceclient release

2015-08-28 Thread stuart . mclaren


I've compiled a list of backwards incompatibilities where the new client
will impact (in some cases break) existing scripts:

https://wiki.openstack.org/wiki/Glance-v2-v1-client-compatability



On 27/08/15 15:32 -0400, Nikhil Komawar wrote:


As a part of our continued effort to make v2 the primary API and get
people to consume it without confusion, we are planning to move ahead
with the client release (the release would set the default version of
the API to 2). There haven't been any major or minor concerns raised here.

An issue regarding the possible impact of this release due to the major
version bump was raised during the morning meeting; however, the client
release should follow semver semantics and indicate the same. A
corresponding review for release notes exists that should merge before
the release. This medium of communication seems sufficient; it follows
the prescription for necessary communication. I can't find a definition
of the necessary and sufficient media for communicating this
information, so I will take what we usually follow.

There are a few bugs [1] that could be considered as part of this
release but do not seem to be blockers. In order to accommodate the
deadlines of the release milestones and impact of releases in the
upcoming week to other projects, we can continue to fix bugs and release
them as part of 1.x.x releases sooner rather than later, as time and
resources permit. Also, the high-priority ones can be part of the stable/*
backports if
needed but the description has only shell impact so there isn't a strong
enough reason.

So, we need to move ahead with this release for Liberty.


+1

We've been making small steps towards this for a couple of cycles and
I'm happy we're finally switching the default version on the client
library.

The above being said, I believe our client library needs a lot more
work but this release should set us in a better position to do that.

For folks consuming glanceclient, here's what you need to know:

If you're using glanceclient from your software - that is, you're
using the library and not the CLI - there's nothing you need to do. If
you're using the library, I'm assuming you're creating a client
instance using[0] or by instantiating the specific versioned client
class. Both of these cases require you to specify an API version to
use.

However, if you're using the CLI and you want to stick with the V1,
then you'll need to update your scripts and make sure they use
`--os-image-api 1`. Conversely, if your scripts are already using
`--os-image-api 2`, then you can simply drop that argument.

As a good practice, for now, I'd recommend specifying the argument
regardless of the version.

For other changes, please review the rel notes[1] (as soon as they are
there, or you can read[2])

[0] 
https://git.openstack.org/cgit/openstack/python-glanceclient/tree/glanceclient/client.py?h=stable/kilo#n21
[1] http://docs.openstack.org/developer/python-glanceclient/#release-notes
[2] https://review.openstack.org/#/c/217591/




[1]
https://bugs.launchpad.net/python-glanceclient/+bugs?field.tag=1.0.0-potential


Thanks Stuart for tagging these bugs and everyone for raising great
concerns for and against the release[0]. And thanks Erno for pushing this
out.

[0] 
http://eavesdrop.openstack.org/meetings/glance/2015/glance.2015-08-27-14.00.log.html

Flavio




[openstack-dev] [api] [wsme] [ceilometer] Replacing WSME with _____ ?

2015-08-28 Thread Chris Dent


This morning I kicked off a quick spec for replacing WSME in
Ceilometer with ... something:

https://review.openstack.org/#/c/218155/

This is because not only is WSME not that great, it also results in
controller code that is inscrutable.

The problem with the spec is that it doesn't know what to replace
WSME with.

So, for your Friday afternoon pleasure I invite anyone with an
opinion to hold forth on what framework they would choose. The spec
lists a few options but please feel free not to limit yourself to those.

If you just want to shoot the breeze please respond here. If you
have specific comments on the spec please respond there.

Thanks!

P.S: An option not listed, and one that may make perfect sense for
ceilometer (but perhaps not aodh), is to do nothing and consider the
v2 api legacy.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [Glance] upcoming glanceclient release

2015-08-28 Thread Flavio Percoco

On 28/08/15 15:05 +0100, stuart.mcla...@hp.com wrote:


I've compiled a list of backwards incompatibilities where the new client
will impact (in some cases break) existing scripts:

https://wiki.openstack.org/wiki/Glance-v2-v1-client-compatability


Awesome!





On 27/08/15 15:32 -0400, Nikhil Komawar wrote:


As a part of our continued effort to make v2 the primary API and get
people to consume it without confusion, we are planning to move ahead
with the client release (the release would set the default version of
the API to 2). There haven't been any major or minor concerns raised here.

An issue regarding the possible impact of this release due to the major
version bump was raised during the morning meeting; however, the client
release should follow semver semantics and indicate the same. A
corresponding review for release notes exists that should merge before
the release. This medium of communication seems sufficient; it follows
the prescription for necessary communication. I can't find a definition
of the necessary and sufficient media for communicating this
information, so I will take what we usually follow.

There are a few bugs [1] that could be considered as part of this
release but do not seem to be blockers. In order to accommodate the
deadlines of the release milestones and impact of releases in the
upcoming week to other projects, we can continue to fix bugs and release
them as part of 1.x.x releases sooner rather than later, as time and
resources permit. Also, the high-priority ones can be part of the stable/*
backports if
needed but the description has only shell impact so there isn't a strong
enough reason.

So, we need to move ahead with this release for Liberty.


+1

We've been making small steps towards this for a couple of cycles and
I'm happy we're finally switching the default version on the client
library.

The above being said, I believe our client library needs a lot more
work but this release should set us in a better position to do that.

For folks consuming glanceclient, here's what you need to know:

If you're using glanceclient from your software - that is, you're
using the library and not the CLI - there's nothing you need to do. If
you're using the library, I'm assuming you're creating a client
instance using[0] or by instantiating the specific versioned client
class. Both of these cases require you to specify an API version to
use.

However, if you're using the CLI and you want to stick with the V1,
then you'll need to update your scripts and make sure they use
`--os-image-api 1`. Conversely, if your scripts are already using
`--os-image-api 2`, then you can simply drop that argument.

As a good practice, for now, I'd recommend specifying the argument
regardless of the version.

For other changes, please review the rel notes[1] (as soon as they are
there, or you can read[2])

[0] 
https://git.openstack.org/cgit/openstack/python-glanceclient/tree/glanceclient/client.py?h=stable/kilo#n21
[1] http://docs.openstack.org/developer/python-glanceclient/#release-notes
[2] https://review.openstack.org/#/c/217591/




[1]
https://bugs.launchpad.net/python-glanceclient/+bugs?field.tag=1.0.0-potential


Thanks Stuart for tagging these bugs and everyone for raising great
concerns for and against the release[0]. And thanks Erno for pushing this
out.

[0] 
http://eavesdrop.openstack.org/meetings/glance/2015/glance.2015-08-27-14.00.log.html

Flavio




--
@flaper87
Flavio Percoco




Re: [openstack-dev] [neutron] 40% failure on neutron python3.4 tests in the gate

2015-08-28 Thread Assaf Muller
To recap, we had three issues impacting the gate queue:

1) The neutron functional job has had a high failure rate for a while now.
Since it's impacting the gate,
I've removed it from the gate queue but kept it in the Neutron check queue:
https://review.openstack.org/#/c/218302/

If you'd like to help, the list of bugs impacting the Neutron
functional job is linked in that patch.

2) A new Tempest scenario test was added that caused the DVR job failure
rate to skyrocket to over 50%.
It actually highlighted a legit bug with DVR and legacy routers. Kevin
proposed a patch that skips that test
entirely until we can resolve the bug in Neutron:
https://review.openstack.org/#/c/218242/ (currently it tries to skip the
test conditionally; the next PS will skip the test entirely).

3) The Neutron py34 job has been made unstable due to a recent change (By
me, yay) that made the tests
run with multiple workers. This highlighted an issue with the Neutron unit
testing infrastructure, which is fixed here:
https://review.openstack.org/#/c/217379/

With all three patches merged we should be good to go.

On Fri, Aug 28, 2015 at 9:37 AM, Sean Dague s...@dague.net wrote:

 On 08/28/2015 09:22 AM, Assaf Muller wrote:
 
 
  On Fri, Aug 28, 2015 at 9:12 AM, Neil Jerram neil.jer...@metaswitch.com
  mailto:neil.jer...@metaswitch.com wrote:
 
  On 28/08/15 13:39, Kevin Benton wrote:
   For the py34 failures, they seem to have started around the same
 time
   as a change was merged that adjusted the way they were run, so I
    proposed a revert for that patch
   here: https://review.openstack.org/218244
  
  
 
  Which leads on to https://review.openstack.org/#/c/217379/6.
 
 
  Armando reported the py34 Neutron gate issues a few hours after they
  started,
  and I pushed that fix a few hours after that. Sadly it's taking time to
  get that
  through the gate.

 When issues like these arise, please bring them to the infra team in
 #openstack-infra. They can promote fixes that unbreak things.

 -Sean

 --
 Sean Dague
 http://dague.net



Re: [openstack-dev] [neutron] 40% failure on neutron python3.4 tests in the gate

2015-08-28 Thread Sean Dague
On 08/28/2015 11:20 AM, Assaf Muller wrote:
 To recap, we had three issues impacting the gate queue:
 
 1) The neutron functional job has had a high failure rate for a while
 now. Since it's impacting the gate,
 I've removed it from the gate queue but kept it in the Neutron check queue:
 https://review.openstack.org/#/c/218302/
 
 If you'd like to help, the list of bugs impacting the Neutron
 functional job is linked in that patch.
 
 2) A new Tempest scenario test was added that caused the DVR job failure
 rate to skyrocket to over 50%.
 It actually highlighted a legit bug with DVR and legacy routers. Kevin
 proposed a patch that skips that test
 entirely until we can resolve the bug in Neutron:
 https://review.openstack.org/#/c/218242/ (currently it tries to skip the
 test conditionally; the next PS will skip the test entirely).
 
 3) The Neutron py34 job has been made unstable due to a recent change
 (By me, yay) that made the tests
 run with multiple workers. This highlighted an issue with the Neutron
 unit testing infrastructure, which is fixed here:
 https://review.openstack.org/#/c/217379/
 
 With all three patches merged we should be good to go.

Well, with all 3 of these we should be much better for sure. There are
probably additional issues causing intermittent failures which should be
looked at. These 3 are definitely masking anything else.

https://etherpad.openstack.org/p/gate-fire-2015-08-28 is a set of
patches to promote for things causing races in the gate (we've got a
cinder one was well). If other issues are known with fixes posted,
please feel free to add them with comments.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [api] [wsme] [ceilometer] Replacing WSME with _____ ?

2015-08-28 Thread John Trowbridge


On 08/28/2015 10:36 AM, Lucas Alvares Gomes wrote:
 Hi,
 
 If you just want to shoot the breeze please respond here. If you
  have specific comments on the spec please respond there.

 
 I have been thinking about doing it for Ironic as well so I'm looking
 for options. IMHO after using WSME I would think that one of the most
  important criteria we should start looking at is whether the project has
  a healthy, sizable, and active community around it. It's crucial to use
 libraries that are being maintained.
 
 So at the present moment the [micro]framework that comes to my mind -
 without any testing or prototype of any sort - is Flask.

I personally find Flask to be super nice to work with. It is easy to
visualize what the API looks like just from reading the code. It also
has good documentation and a fairly large community.
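For concreteness, here is a minimal sketch of what a Flask-based endpoint can
look like; the `/v2/meters` route, its `limit` parameter, and the handler body
are purely illustrative, not the real Ceilometer API:

```python
# Minimal Flask sketch of the kind of endpoint being discussed as a WSME
# replacement. Route and parameters are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route('/v2/meters', methods=['GET'])
def list_meters():
    # Input validation is explicit and local to the handler, which keeps
    # the controller code easy to read compared to WSME's type machinery.
    limit = request.args.get('limit', default=100, type=int)
    if limit < 1:
        return jsonify(error='limit must be positive'), 400
    return jsonify(meters=[], limit=limit)


if __name__ == '__main__':
    app.run()
```

One nice property is that the whole request/response cycle is testable with
`app.test_client()` without starting a server.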

 
 Cheers,
 Lucas
 


Re: [openstack-dev] [cinder][third-party] StorPool Cinder CI

2015-08-28 Thread Peter Penchev
On Fri, Aug 28, 2015 at 1:03 AM, Peter Penchev
openstack-...@storpool.com wrote:
 On Fri, Aug 28, 2015 at 12:22 AM, Asselin, Ramy ramy.asse...@hp.com wrote:
 Hi Peter,

 Your log files require downloads. Please fix it such that they can be viewed 
 directly [1]

 Hi, and thanks for the fast reply!  Yes, I'll try to change the
 webserver's configuration, although the snippet in the FAQ won''t help
 a lot, since it's a lighttpd server, not Apache.  I'll get back to you
 when I've figured something out.

OK, it took some twiddling with the lighttpd config, but it's done -
now files with a .gz extension are served uncompressed in a way that
makes the browser display them and not save them to disk.
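For anyone hitting the same issue, a snippet along these lines is one way to
do it in lighttpd; this is an untested sketch, and the URL pattern is an
assumption about the log layout:

```
# Illustrative lighttpd configuration: tell browsers that .gz log files
# are gzip-encoded plain text so they render inline instead of being
# saved to disk. Requires mod_setenv; the URL pattern is an assumption.
$HTTP["url"] =~ "\.txt\.gz$" {
    setenv.add-response-header = ( "Content-Encoding" => "gzip" )
    mimetype.assign = ( ".gz" => "text/plain" )
}
```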

About the rebasing over our local patches: I made the script display
the subject lines and the file lists of the commits on our local
branch (the ones that the source is being rebased onto).  Pay no
attention to the several commits to devstack; they are mostly
artifacts of our own infrastructure and the setup of the machines, and
in most cases they are no-ops.

About your question about 3129 and 217802/1 - well, to be fair, this
is not a Cinder patch, so it's kind of expected that you won't find it
in the Cinder commits :)  It's a Brick patch and it is indeed listed a
couple of lines down in the os-brick section :)

So, a couple of examples of our shiny new log setup (well, pretty much
the same as the old boring log setup, but oh well):

http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3166/
http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3167/
http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3168/

G'luck,
Peter



Re: [openstack-dev] [nova][manila] latest microversion considered dangerous

2015-08-28 Thread Joe Gordon
On Aug 28, 2015 6:49 AM, Sean Dague s...@dague.net wrote:

 On 08/28/2015 09:32 AM, Alex Meade wrote:
  I don't know if this is really a big problem. IMO, even with
  microversions you shouldn't be implementing things that aren't backwards
  compatible within the major version. I thought the benefit of
  microversions is to know if a given feature exists within the major
  version you are using. I would consider a breaking change to be a major
  version bump. If we only do a microversion bump for a backwards
  incompatible change then we are just using microversions as major
versions.

 In the Nova case, Microversions aren't semver. They are content
 negotiation. Backwards incompatible only means something if time's arrow
 only flows in one direction. But when connecting to a bunch of random
 OpenStack clouds, there is no forced progression into the future.

 While each service is welcome to enforce more compatibility for the sake
 of their users, one should not assume that microversions are semver as a
 base case.

 I agree that 'latest' is basically only useful for testing. The

Sounds like we need to update the docs for this.

 python-novaclient code requires a microversion be specified on the API
 side, and on the CLI side negotiates to the highest version of the API
 that it understands which is supported on the server -

https://github.com/openstack/python-novaclient/blob/d27568eab50b10fc022719172bc15666f3cede0d/novaclient/__init__.py#L23

Considering how unclear these two points appear to be, are they clearly
documented somewhere, so that as more projects embrace microversions, they
don't end up having the same discussion?
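For reference, the negotiate-down behaviour described above can be sketched in
a few lines; this is an illustration of the idea only, not novaclient's actual
code, and the version numbers in the comments are made up:

```python
# Sketch of client-side microversion negotiation: the client pins the
# highest version it understands and negotiates down to what the server
# supports, instead of requesting "latest".

def negotiate_version(client_max, server_min, server_max):
    """Return the microversion to use, or None if incompatible."""
    def parse(v):
        major, minor = v.split('.')
        return (int(major), int(minor))

    c_max = parse(client_max)
    s_min, s_max = parse(server_min), parse(server_max)
    if c_max < s_min:
        return None  # client is too old for this server
    chosen = min(c_max, s_max)  # never ask for more than we understand
    return '%d.%d' % chosen
```

With this scheme a client built against, say, 2.7 keeps working against both
older and newer servers, which is exactly the property that "latest" throws
away.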


 -Sean

 --
 Sean Dague
 http://dague.net



Re: [openstack-dev] [api] [wsme] [ceilometer] Replacing WSME with _____ ?

2015-08-28 Thread Lucas Alvares Gomes
Hi,

 If you just want to shoot the breeze please respond here. If you
 have specific comments on the spec please respond there.


I have been thinking about doing it for Ironic as well so I'm looking
for options. IMHO after using WSME I would think that one of the most
important criteria we should start looking at is whether the project has a
healthy, sizable, and active community around it. It's crucial to use
libraries that are being maintained.

So at the present moment the [micro]framework that comes to my mind -
without any testing or prototype of any sort - is Flask.

Cheers,
Lucas



Re: [openstack-dev] [nova] contextlib.nested and Python3 failing

2015-08-28 Thread Brant Knudson
On Wed, Aug 19, 2015 at 6:51 PM, Sylvain Bauza sba...@redhat.com wrote:

 Hi,

 I was writing some tests so I added a contextlib.nested to a checked
 TestCase [1]. Unfortunately, contextlib.nested is no longer available in
 Python3 and there is no clear solution on how to provide a compatible
 import for both python2 and python3:
  - either providing a python3 compatible behaviour by using
 contextlib.ExitStack but that class is not available in Python 2
  - or provide contextlib2 for python2 (and thus adding it to the
 requirements)

 That sounds really disruptive and blocking as we are close to the
 FeatureFreeze. Many other users of contextlib.nested are not impacted by
 the job because it excludes all of them but since the test I'm changing is
 part of the existing validated tests, that leaves Jenkins -1'ing my change.

 Of course, a 3rd solution would consist of excluding my updated test from
 the python3 check but I can hear others yelling at that :-)

 Ideas appreciated.

 -Sylvain

 [1]
 https://review.openstack.org/#/c/199205/18/nova/tests/unit/scheduler/test_rpcapi.py,cm



Mock provides a context that patches multiple things so that no nesting is
needed: http://www.voidspace.org.uk/python/mock/patch.html#patch-multiple

oslotest provides fixtures for mock, so you don't need a context:
http://docs.openstack.org/developer/oslotest/api.html#module-oslotest.mockpatch
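A small sketch of the ExitStack route (stdlib in Python 3; the contextlib2
backport provides it on Python 2), combined with mock patches; the Compute
class and the patched methods are invented here for illustration:

```python
# Replacing contextlib.nested(patch(...), patch(...)) with a single
# ExitStack that enters each patch context. Compute is a stand-in class.
import contextlib
from unittest import mock


class Compute(object):
    def start(self):
        return 'started'

    def stop(self):
        return 'stopped'


def patched_calls():
    # Equivalent of the old nested() form: all patches share one scope
    # and are unwound in reverse order when the block exits.
    with contextlib.ExitStack() as stack:
        m_start = stack.enter_context(
            mock.patch.object(Compute, 'start', return_value='fake-start'))
        m_stop = stack.enter_context(
            mock.patch.object(Compute, 'stop', return_value='fake-stop'))
        c = Compute()
        return c.start(), c.stop(), m_start.called, m_stop.called
```

The stack also avoids nested()'s known problem of leaking resources when a
later context manager raises during entry.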

- Brant


Re: [openstack-dev] [api] [wsme] [ceilometer] Replacing WSME with _____ ?

2015-08-28 Thread Julien Danjou
On Fri, Aug 28 2015, Chris Dent wrote:

 This morning I kicked off a quick spec for replacing WSME in
 Ceilometer with ... something:

 https://review.openstack.org/#/c/218155/

 This is because not only is WSME not that great, it also results in
 controller code that is inscrutable.

 The problem with the spec is that it doesn't know what to replace
 WSME with.

 So, for your Friday afternoon pleasure I invite anyone with an
 opinion to hold forth on what framework they would choose. The spec
  lists a few options but please feel free not to limit yourself to those.

 If you just want to shoot the breeze please respond here. If you
  have specific comments on the spec please respond there.

For Gnocchi we've been relying on voluptuous¹ for data validation, and
Pecan as the rest of the framework – like what's used in Ceilometer and
consorts.

I find it a pretty good option, more Pythonic than JSON Schema – which
has its pros and cons too.

What I'm not happy with is actually Pecan, as I find the routing system
far too complex in the end. I think I'd prefer to go with something
like Flask finally.

 P.S: An option not listed, and one that may make perfect sense for
 ceilometer (but perhaps not aodh), is to do nothing and consider the
 v2 api legacy.

This is going to happen in a few cycles I hope for Ceilometer.

¹  https://pypi.python.org/pypi/voluptuous

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info




Re: [openstack-dev] [Glance] upcoming glanceclient release

2015-08-28 Thread stuart . mclaren




I've compiled a list of backwards incompatibilities where the new client
will impact (in some cases break) existing scripts:

https://wiki.openstack.org/wiki/Glance-v2-v1-client-compatability



Awesome!


To be honest there's a little more red there than I'd like.

Of the 72 commands I tried, the new client failed to even parse the input in 36 
cases.



Re: [openstack-dev] [api] [wsme] [ceilometer] Replacing WSME with _____ ?

2015-08-28 Thread Dmitry Tantsur

On 08/28/2015 04:36 PM, Lucas Alvares Gomes wrote:

Hi,


If you just want to shoot the breeze please respond here. If you
have specific comments on the spec please respond there.



I have been thinking about doing it for Ironic as well so I'm looking
for options. IMHO after using WSME I would think that one of the most
important criteria we should start looking at is whether the project has a
healthy, sizable, and active community around it. It's crucial to use
libraries that are being maintained.

So at the present moment the [micro]framework that comes to my mind -
without any testing or prototype of any sort - is Flask.


We're using Flask in inspector. We have had a nice experience, but note that 
inspector does not have a very complex API :)




Cheers,
Lucas



Re: [openstack-dev] Tracing a request (NOVA)

2015-08-28 Thread Vedsar Kushwaha
"I just want to understand as to how the request goes from the api-call to
the nova-api and so on after that."

To answer the "so on after that" part, in addition to Josh's answer, you can
also look into
http://ilearnstack.com/2013/04/26/request-flow-for-provisioning-instance-in-openstack/

Now to answer the first part, how the request goes from the api-call to the
nova-api: you can look into http://developer.openstack.org/api-ref.html,
particularly http://developer.openstack.org/api-ref-compute-v2.1.html


On Sat, Aug 29, 2015 at 8:39 AM, Joshua Harlow harlo...@outlook.com wrote:

 I made the following some time ago,

 https://wiki.openstack.org/wiki/RunInstanceWorkflows

 https://wiki.openstack.org/w/images/a/a9/Curr-run-instance.png

  That may be useful for you (it may also not be that up to date),

 Cheers,

 Josh

 Dhvanan Shah wrote:

 Hi,

 I'm trying to trace a request made for an instance and looking at the
 flow in the code.
 I'm just trying to understand better how the request goes from the
 dashboard to the nova-api , to the other internal components of nova and
 to the scheduler and back with a suitable host and launching of the
 instance.

  I just want to understand how the request goes from the api-call
  to the nova-api and so on after that.
 I have understood the nova-scheduler and in that, the filter_scheduler
 receives something called request_spec that is the specifications of the
 request that is made, and I want to see where this comes from. I was not
 very successful in reverse engineering this.

 I could use some help as I want to implement a scheduling algorithm of
 my own but for that I need to understand how and where the requests come
 in and how the flow works.

  If someone could guide me as to where I can find help or point in some
 direction then it would be of great help.
 --
 Dhvanan Shah





-- 
Vedsar Kushwaha
SDE@Amazon Development Center
Past - Indian Institute of Science


Re: [openstack-dev] [Keystone][Glance] keystonemiddleware multiple keystone endpoints

2015-08-28 Thread joehuang
Hello, Jamie,

I hope I am wrong :) 

One comment for your patch.

Using the region name to filter the endpoint for token validation may not
work if no-catalog is configured on the keystone server. From the option
documentation:

include_service_catalog = True (BoolOpt) (Optional) Indicate whether to set
the X-Service-Catalog header. If False, middleware will not ask for the
service catalog on token validation and will not set the X-Service-Catalog
header.


Best Regards
Chaoyi Huang ( Joe Huang )


-Original Message-
From: Jamie Lennox [mailto:jamielen...@redhat.com] 
Sent: Tuesday, August 25, 2015 3:38 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Keystone][Glance] keystonemiddleware  multiple 
keystone endpoints



- Original Message -
 From: Hans Feldt hans.fe...@ericsson.com
 To: openstack-dev@lists.openstack.org
 Sent: Thursday, August 20, 2015 10:40:28 PM
 Subject: [openstack-dev] [Keystone][Glance] keystonemiddleware  multiple 
 keystone endpoints
 
 How do you configure/use keystonemiddleware for a specific identity 
 endpoint among several?
 
 In an OPNFV multi region prototype I have keystone endpoints per 
 region. I would like keystonemiddleware (in context of glance-api) to 
 use the local keystone for performing user token validation. Instead 
 keystonemiddleware seems to use the first listed keystone endpoint in 
 the service catalog (which could be wrong/non-optimal in most 
 regions).
 
 I found this closed, related bug:
 https://bugs.launchpad.net/python-keystoneclient/+bug/1147530

Hey, 

There's two points to this. 

* If you are using an auth plugin then you're right it will just pick the first 
endpoint. You can look at project specific endpoints[1] so that there is only 
one keystone endpoint returned for the services project. I've also just added a 
review for this feature[2].
* If you're not using an auth plugin (so the admin_X options) then keystone 
will always use the endpoint that is configured in the options (identity_uri).
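For the second case, a rough sketch of pinning the middleware to the local keystone (option names from the 2015-era auth_token middleware; the hostname and credentials below are placeholders, so check your release's documentation):

```ini
[keystone_authtoken]
# Pin token validation to this region's keystone rather than
# whatever endpoint appears first in the service catalog.
identity_uri = http://keystone.region-one.example.com:35357
admin_user = glance
admin_password = secret
admin_tenant_name = services
```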

Hope that helps,

Jamie


[1] 
https://github.com/openstack/keystone-specs/blob/master/specs/juno/endpoint-group-filter.rst
[2] https://review.openstack.org/#/c/216579

 Thanks,
 Hans
 
 __
  OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Pecan and Liberty-3

2015-08-28 Thread Brandon Logan

I'm also going to be working on this and pushing one or more patches so
it can load service plugins with extensions.  Testing with neutron lbaas
has yielded no success so far.

On Fri, 2015-08-28 at 16:25 -0700, Kevin Benton wrote:
 This weekend or early next week I will be pushing a couple of more
 patches to deal with some of the big TODOs (e.g. bulk). Then we can
 rename it and see if we can review the merge.
 
 
 I don't intend to have it fully replace our built-in WSGI solution in
 Liberty. It's too late in the cycle to make that drastic of a switch.
 I just want to have it in the main tree and have the option of trying
 it out in Liberty.
 
 On Fri, Aug 28, 2015 at 4:11 PM, Salvatore Orlando
 salv.orla...@gmail.com wrote:
I'll leave it to Kevin's more informed judgment to comment on
whether it is appropriate to merge:
 
 
 [1] is a list of patches still under review on the feature
 branch. Some of them fix issues (like executing API actions),
 or implement TODOs
 
 
 This is the current list of TODOs:
 salvatore@ubuntu:/opt/stack/neutron$ find ./neutron/newapi/
 -name \*.py | xargs grep -n TODO
 ./neutron/newapi/hooks/context.py:50:#
 TODO(kevinbenton): is_admin logic
 ./neutron/newapi/hooks/notifier.py:22:# TODO(kevinbenton):
 implement
 ./neutron/newapi/hooks/member_action.py:28:#
 TODO(salv-orlando): This hook must go. Handling actions like
 this is
 ./neutron/newapi/hooks/quota_enforcement.py:33:#
 TODO(salv-orlando): This hook must go when adaptin the pecan
 code to
 ./neutron/newapi/hooks/attribute_population.py:59:
  # TODO(kevinbenton): the parent_id logic currently in base.py
 ./neutron/newapi/hooks/ownership_validation.py:34:#
 TODO(salvatore-orlando): consider whether this check can be
 folded
 ./neutron/newapi/app.py:40:#TODO(kevinbenton): error
 templates
 ./neutron/newapi/controllers/root.py:150:#
 TODO(kevinbenton): allow fields after policy enforced fields
 present
 ./neutron/newapi/controllers/root.py:160:#
 TODO(kevinbenton): bulk!
 ./neutron/newapi/controllers/root.py:190:#
 TODO(kevinbenton): bulk?
 ./neutron/newapi/controllers/root.py:197:#
 TODO(kevinbenton): bulk?
 
 
In my opinion the pecan API now is working-ish; however, we
know it is not yet 100% functionally equivalent, and most
importantly we don't know how well it works. So far a few corners
have been cut when it comes to testing.
Even if it works, it is therefore only probably usable.
 Unfortunately I don't know what are the criteria the core team
 evaluates for merging it back (and I'm sure that for this
 release at least the home grown WSGI won't be replaced).
 
 
 Salvatore
 
 
 [1] https://review.openstack.org/#/q/status:open
 +project:openstack/neutron+branch:feature/pecan,n,z
 
 
 On 28 August 2015 at 22:51, Kyle Mestery mest...@mestery.com
 wrote:
 
 Folks:
 
 
 Kevin wants to merge the pecan stuff, and I agree with
 him. I'm on vacation next week during Liberty-3, so
 Armando, Carl and Doug are running the show while I'm
 out. I would guess that if Kevin thinks it's ok to
 merge it in before Liberty-3, I'd go with his opinion
 and let it happen. If not, it can get an FFE and we
 can do it post Liberty-3.
 
 
 I'm sending this to the broader openstack-dev list so
 that everyone can be aware of this plan, and so that
 Ihar can help collapse things back next week with Doug
 on this.
 
 
 Thanks!
 
 Kyle
 
 
 
 
 __
 OpenStack Development Mailing List (not for usage
 questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

Re: [openstack-dev] [Neutron] Targeting Logging API for SG and FW rules feature to L-3 milestone

2015-08-28 Thread Germy Lure
Hi Cao,

I have reviewed the specification linked above. Thank you for introducing
such an interesting and important feature. But as I commented inline, I
think it still needs some further work, such as: how do those logs get
stored? For admin and tenant, I think it's different. And what is the
performance impact: if tenant A turns on logging, will tenant B on the same
host be impacted?

Many thanks,
Germy

On Fri, Aug 21, 2015 at 6:04 PM, hoan...@vn.fujitsu.com 
hoan...@vn.fujitsu.com wrote:

 Good day,

  The specification and source codes will definitely reviewing/filing in
 next week.
  #link
  http://eavesdrop.openstack.org/meetings/networking_fwaas/2015/network
  ing_fwaas.2015-08-19-23.59.log.html
 
  No - I did not say definitely - nowhere in that IRC log was that word
 used.

 I'm sorry.  Yes, that should be probably.

 --
 Best regards,

 Cao Xuan Hoang
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] versioned objects changes

2015-08-28 Thread Adrian Otto
We are going to merge this work. I understand and respect Hongbin's position, 
but I respectfully disagree. When we are presented with ways to implement low 
overhead best practices like versioned objects, we will. It's not that hard to 
bump the version of an object when you change it. I like having systemic 
enforcement of that.

On the subject of review 211057: if you submit a review to remove comments, 
one that is purely stylistic in nature, then you are inviting a discussion of style 
with our reviewers, and you deserve to make that patch stylistically perfect.

If that patch had actual code in it that made Magnum better, and several 
reviewers voted against the format of the comments, that would be stupid, and I 
would +2 it in spite of any -1 votes as long as it meets our rules for 
submission (like it must have a bug number).

Finally, meaningful -1 votes are valuable, and should not be viewed as a waste 
of effort. That's what we do as a team to help each other continually improve, 
and to make Magnum something we can all be proud of. With all that said, if you 
only have a stylistic comment, that should be a -0 vote with a comment, not a 
-1. If you are making stylistic and material comments together, that's fine, 
use a -1 vote.

Thanks,

Adrian

On Aug 28, 2015, at 5:21 PM, Davanum Srinivas 
dava...@gmail.com wrote:

Hongbin,

We are hearing the best advice available from the folks who started the 
library, evangelized it across nova, ironic, heat, neutron etc.

If we can spend so much time and energy (*FOUR* -1's on a review which just 
changes some commented lines - https://review.openstack.org/#/c/211057/) then 
we can and should clearly do better in things that really matter in the long 
run.

If we get into the rhythm of doing the right things and figuring out the steps 
needed right from the get go, it will pay off in the future.

My 2 cents.

Thanks,
Dims

PS: Note that I used "we" wearing my magnum core hat and not the o.vo/oslo core 
hat :)

On Fri, Aug 28, 2015 at 6:52 PM, Dan Smith 
d...@danplanet.com wrote:
 If you want my inexperienced opinion, a young project is the perfect
 time to start this.

^--- This ---^

 I understand that something like [2] will cause a test to fail when you
 make a major change to a versioned object. But you *want* that. It helps
 reviewers more easily catch contributors to say You need to update the
 version, because the hash changed. The sooner you start using versioned
 objects in the way they are designed, the smaller the upfront cost, and
 it will also be a major savings later on if something like [1] pops up.

...and the way it will be the least overhead is if it's part of the
culture of contributors and reviewers. It's infinitely harder to take
the culture shift after everyone is used to not having to think about
upgrades, not to mention the technical recovery Ryan mentioned.

It's not my call for Magnum, but long-term thinking definitely pays off
in this particular area.
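The hash-guard check referenced as [2] can be sketched in plain Python. This is a stdlib-only illustration of the idea, not the actual oslo.versionedobjects test machinery, and the object name and fields are invented:

```python
import hashlib

def object_fingerprint(fields):
    """Stable hash over an object's field names and types."""
    canonical = ",".join(f"{n}:{t}" for n, t in sorted(fields.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

# Hash recorded when version 1.0 of the object was reviewed
# (a frozen copy of the fields as they were at that time).
RECORDED = {"Bay": ("1.0", object_fingerprint({"name": "String",
                                               "node_count": "Integer"}))}

def check_versions(current):
    """Fail when an object's fields changed but its version did not."""
    for name, (version, fields) in current.items():
        rec_version, rec_hash = RECORDED[name]
        if version == rec_version and object_fingerprint(fields) != rec_hash:
            raise AssertionError(
                f"{name} changed: bump VERSION and re-record its hash")

# The unchanged object passes the guard:
check_versions({"Bay": ("1.0", {"name": "String", "node_count": "Integer"})})
```

Adding a field without bumping the version then fails the test, which is exactly the reviewer prompt described above.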

--Dan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [tempest] CI - integration job status

2015-08-28 Thread Emilien Macchi
On Fri, Aug 28, 2015 at 2:42 PM, Emilien Macchi emil...@redhat.com wrote:

 So this week we managed to iterate to have more components part of
 Puppet OpenStack Integration CI.
 Everything is work in progress but let me share the status:

 * one single Puppet run of scenario001.pp is enough to deploy
 OpenStack (MySQL, RabbitMQ, Keystone WSGI, Nova, Glance, Neutron
 (ML2-OVS)) - a second Puppet run shows that the manifest is idempotent :-)
 * tempest is run at the end (identity, image and compute tests) -
 some failures on scenarios and some tests, but ~90% success.


In fact Matthew advised us to run 'smoke' since it's a suite of tests that
are enough to validate our OpenStack cloud is running. It runs some
important API tests and 2 scenarios that validate the full workflow (spawn
a VM, ssh and ping outside, etc).

And now it's 100% :-)

* Results are visible in https://review.openstack.org/#/c/217352/ (see
 gate-puppet-openstack-integration-dsvm-centos7 logs for details)

 Next steps:
 * during the Puppet OpenStack midcycle next week, Paul Belanger and I
 will make progress together on this work, any help is highly welcome.
 * While I'm working on single node, Paul is focusing on multi node job
 with Zuul v3 - though I'll let him give status if needed over this thread.
 * Optimize the Tempest run - we need to select what to test (scenarios, etc.)
 so the job is effective and we don't waste time testing the world.
 Big kudos to Matthew Treinish for his help, his input is really useful
 for us.

 Blockers:
 Well... to make it work I had to use Depends-On on a number of patches.
 Please review them if we want to make progress:

 Use zuul-cloner for tempest
 https://review.openstack.org/#/c/217242/

 allow to optionally git clone tempest
 https://review.openstack.org/#/c/216841/

 glance_id_setter: execute after creating Glance image
 https://review.openstack.org/#/c/216432/

 Bad configuration for glance/neutron setters
 https://review.openstack.org/#/c/174638/

 Make sure neutron network is created before Tempest_neutron_net_id_setter
 https://review.openstack.org/#/c/218398/

 Make sure Glance_image is executed after Keystone_endpoint
 https://review.openstack.org/#/c/216488/

 Make sure Nova_admin_tenant_id_setter is executed after Keystone_endpoint
 https://review.openstack.org/#/c/216950/

 Fix 'shared' parameter check in neutron_network provider
 https://review.openstack.org/#/c/204152/

 scenario001: deploy & test glance
 https://review.openstack.org/#/c/216418/

 scenario001: deploy RabbitMQ
 https://review.openstack.org/#/c/216828/

 scenario001: deploy neutron
 https://review.openstack.org/#/c/216831/

 scenario001: deploy nova
 https://review.openstack.org/#/c/216938/

 Run tempest with compute tests
 https://review.openstack.org/#/c/217352/




Also https://review.openstack.org/218474


 In advance, thanks a lot for your reviews, any feedback is welcome!
 --
 Emilien Macchi


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] multiple cascade services

2015-08-28 Thread joehuang
Hi,

I think you may have some misunderstanding of the PoC design (the proxy node 
only listens to the RPC for the compute-node/cinder-volume/L2/L3 agents…).


1)  The cascading layer, including the proxy nodes, is assumed to run in 
VMs rather than on physical servers (though you can do the latter). Even in the 
CJK (China, Japan, Korea) intercloud, the cascading layer, including the API, 
message bus, DB and proxy nodes, runs in VMs.



2)  For proxy nodes running in VMs, it's not strange for multiple proxy 
nodes to run on one physical server. And if the load of one proxy node 
increases, it's easy to move the VM from one physical server to another; this is 
quite mature technology and easy to monitor and deal with. Most 
virtualization platforms also support hot scale-up of a single virtual machine.



3)  In some scenarios, ZooKeeper is already used to manage the 
proxy node role and membership, and a backup node will take over the 
responsibility of a failed node.


So I do not see that the “fake node” mode brings extra benefit. On the other 
hand, the “fake node” adds additional complexity:

1) The complexity of the code in the cascade service, to implement the RPC to 
the scheduler and the RPC to the compute node/cinder volume.

2) How to judge the load of a “fake node”. If all “fake nodes” run 
flatly (no dedicated process or thread, just a symbol) in the same process, then 
how can you judge the load of a “fake node”? By message count? But the message 
count does not imply the load; load is usually measured through CPU 
utilization or memory use. So how do you calculate the load for each “fake node” 
and then decide which nodes to move to another physical server? And how do you 
manage these “fake nodes” in a ZooKeeper-like cluster ware? If you want 
fake nodes to run in different process or thread spaces, then you need to manage 
the “fake node” to process/thread relationship.

I admit that proposal 3 is much more complex to make work for flexible load 
balancing. We have to record a relative stamp for each message in the 
queue, pick the message from the message bus, put it into a per-site task 
queue in the DB, and then execute the tasks in order.

As described above, proposal 2 does not bring extra benefit, and if we 
don't want to strive for the third direction, we'd better fall back to 
proposal 1.

Best Regards
Chaoyi Huang ( Joe Huang )

From: e...@gampel.co.il [mailto:e...@gampel.co.il] On Behalf Of Eran Gampel
Sent: Thursday, August 27, 2015 7:07 PM
To: joehuang; Irena Berezovsky; Eshed Gal-Or; Ayal Baron; OpenStack Development 
Mailing List (not for usage questions); caizhiyuan (A); Saggi Mizrahi; Orran 
Krieger; Gal Sagie; Orran Krieger; Zhipeng Huang
Subject: Re: [openstack-dev][tricircle] multiple cascade services

Hi,
Please see my comments inline
BR,
Eran

Hello,

As we discussed in yesterday's meeting, the point of contention is how to scale 
out cascade services.


1)  In the PoC, one proxy node will only forward to one bottom OpenStack. The 
proxy node is added to a corresponding AZ, and multiple proxy nodes for one 
bottom OpenStack are feasible by adding more proxy nodes to this AZ; the 
proxy nodes are then scheduled as usual.



Is this perfect? No. Because a VM's host attribute is bound to a specific 
proxy node, these multiple proxy nodes can't work in cluster mode, 
and each proxy node has to be backed up by one slave node.



[Eran] I agree with this point - In the PoC you had a limitation of single 
active proxy per bottom site.  In addition, each proxy could only support a 
Single bottom site by-design.



2)  The fake node introduced in the cascade service.

Because a fanout RPC call for the Neutron API is assumed, multiple fake nodes 
for one bottom OpenStack are not allowed.



[Eran] In fact, this is not a limitation in the current design.  We could have 
multiple fake nodes to handle the same bottom site, but only 1 that is 
Active.  If this Active node becomes unavailable, one of the other Passive 
nodes can take over with some leader-election or any other known design pattern 
(it's an implementation decision).
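The active/passive takeover pattern described here can be sketched with a stand-in for a ZooKeeper-style ephemeral lock. This is a toy illustration only; a real deployment would use ZooKeeper (or similar) for the election, and all names below are hypothetical:

```python
import threading

class LeaderElector:
    """Toy stand-in for a ZooKeeper ephemeral-lock election."""

    def __init__(self):
        self._lock = threading.Lock()
        self.leader = None

    def try_acquire(self, node_id):
        # Only one node can hold the lock; it becomes the Active one.
        if self._lock.acquire(blocking=False):
            self.leader = node_id
            return True
        return False

    def release(self, node_id):
        # Simulates the Active node failing or disconnecting.
        if self.leader == node_id:
            self.leader = None
            self._lock.release()

elector = LeaderElector()
assert elector.try_acquire("fake-node-1")      # node 1 becomes Active
assert not elector.try_acquire("fake-node-2")  # node 2 stays Passive
elector.release("fake-node-1")                 # Active node goes away
assert elector.try_acquire("fake-node-2")      # Passive node takes over
```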

And because the traffic to one bottom OpenStack is unpredictable, and moving 
these fake nodes dynamically among cascade services is very complicated, 
we can't deploy multiple fake nodes in one cascade service.



[Eran] I'm not sure I follow you on this point... as we see it, there are 3 
places where load is an issue (and potential bottleneck):

1. API + message queue + database

2. Cascading Service itself (dependency builder, communication service, DAL)

3. Task execution



I think you were concerned about #2, which in our design must be a 
single-active per bottom site (to maintain task order of execution).

In our opinion, the heaviest part is actually #3 (task execution), which is 
delegated to a separate execution path (Mistral workflow or otherwise).

In case we have one Cascading Service 

Re: [openstack-dev] [neutron][sfc] Neutron stadium distribution and/or packaging

2015-08-28 Thread Paul Carver
It's possible that I've misunderstood Big Tent/Stadium, but I thought 
we were talking about enhancements to Neutron, not separate unrelated 
projects.


We have several efforts focused on adding capabilities to Neutron. This 
isn't about polluting the Neutron namespace but rather about adding 
capabilities that Neutron currently is missing.


My concern is that we need to add to the Neutron API, the Neutron CLI, 
and enhance the capabilities of the OvS agent. I'm under the impression 
that the Neutron Stadium allows us to do this, but I'm fuzzy on the 
implementation details.


Is the Neutron Stadium expected to allow additions to the Neutron API, 
the Neutron client, and the Neutron components such as ML2 and the OvS 
agent?



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] versioned objects changes

2015-08-28 Thread Egor Guz
Adrian, agree with your points. But I think we should discuss it during the 
next team meeting and address/answer all concerns which team members may have. 
Grzegorz, can you join?

—
Egor

From: Adrian Otto adrian.o...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday, August 28, 2015 at 18:51
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] versioned objects changes

We are going to merge this work. I understand and respect Hongbin's position, 
but I respectfully disagree. When we are presented with ways to implement low 
overhead best practices like versioned objects, we will. It's not that hard to 
bump the version of an object when you change it. I like having systemic 
enforcement of that.

On the subject of review 211057: if you submit a review to remove comments, 
one that is purely stylistic in nature, then you are inviting a discussion of style 
with our reviewers, and you deserve to make that patch stylistically perfect.

If that patch had actual code in it that made Magnum better, and several 
reviewers voted against the format of the comments, that would be stupid, and I 
would +2 it in spite of any -1 votes as long as it meets our rules for 
submission (like it must have a bug number).

Finally, meaningful -1 votes are valuable, and should not be viewed as a waste 
of effort. That's what we do as a team to help each other continually improve, 
and to make Magnum something we can all be proud of. With all that said, if you 
only have a stylistic comment, that should be a -0 vote with a comment, not a 
-1. If you are making stylistic and material comments together, that's fine, 
use a -1 vote.

Thanks,

Adrian

On Aug 28, 2015, at 5:21 PM, Davanum Srinivas 
dava...@gmail.com wrote:

Hongbin,

We are hearing the best advice available from the folks who started the 
library, evangelized it across nova, ironic, heat, neutron etc.

If we can spend so much time and energy (*FOUR* -1's on a review which just 
changes some commented lines - https://review.openstack.org/#/c/211057/) then 
we can and should clearly do better in things that really matter in the long 
run.

If we get into the rhythm of doing the right things and figuring out the steps 
needed right from the get go, it will pay off in the future.

My 2 cents.

Thanks,
Dims

PS: Note that I used "we" wearing my magnum core hat and not the o.vo/oslo core 
hat :)

On Fri, Aug 28, 2015 at 6:52 PM, Dan Smith 
d...@danplanet.com wrote:
 If you want my inexperienced opinion, a young project is the perfect
 time to start this.

^--- This ---^

 I understand that something like [2] will cause a test to fail when you
 make a major change to a versioned object. But you *want* that. It helps
 reviewers more easily catch contributors to say You need to update the
 version, because the hash changed. The sooner you start using versioned
 objects in the way they are designed, the smaller the upfront cost, and
 it will also be a major savings later on if something like [1] pops up.

...and the way it will be the least overhead is if it's part of the
culture of contributors and reviewers. It's infinitely harder to take
the culture shift after everyone is used to not having to think about
upgrades, not to mention the technical recovery Ryan mentioned.

It's not my call for Magnum, but long-term thinking definitely pays off
in this particular area.

--Dan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tracing a request (NOVA)

2015-08-28 Thread Joshua Harlow

I made the following some time ago,

https://wiki.openstack.org/wiki/RunInstanceWorkflows

https://wiki.openstack.org/w/images/a/a9/Curr-run-instance.png

That may be useful for u, (it may also not be that up to date),

Cheers,

Josh

Dhvanan Shah wrote:

Hi,

I'm trying to trace a request made for an instance and looking at the
flow in the code.
I'm just trying to understand better how the request goes from the
dashboard to the nova-api , to the other internal components of nova and
to the scheduler and back with a suitable host and launching of the
instance.

I just want to understand how the request goes from the API call
to the nova-api and onward after that.
I have understood the nova-scheduler; in it, the filter_scheduler
receives something called request_spec, which is the specification of the
request that was made, and I want to see where this comes from. I was not
very successful in reverse engineering this.

I could use some help as I want to implement a scheduling algorithm of
my own but for that I need to understand how and where the requests come
in and how the flow works.

If someone could guide me as to where I can find help, or point me in some
direction, it would be of great help.
--
Dhvanan Shah

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova migration policy

2015-08-28 Thread Brian Elliott
In an effort to clarify expectations around good practices in writing schema 
and data migrations in nova with respect to live upgrades, I’ve added some 
extra bits to the live upgrade devref.  Please check it out and add your 
thoughts:

https://review.openstack.org/#/c/218362/ 
https://review.openstack.org/#/c/218362/

Thanks,
Brian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [wsme] [ceilometer] Replacing WSME with _____ ?

2015-08-28 Thread Julien Danjou
On Fri, Aug 28 2015, Jay Pipes wrote:

 voluptuous may be more Pythonic, as Julien mentioned, but the problem is you
 can't expose the validation schema to the end user via any standard document
 format (like JSONSchema). Using the jsonschema library along with standard
 JSONSchema documents allows the API to publish its expected request and
 response schemas to the end user, allowing, for example, a client library to
 pull the schema documents and utilize a JSONSchema parsing/validation library
 locally to pre-validate data before ever sending it over the wire.

That's a good point. I think we took a look at some point at generating
JSON Schema from voluptuous, but we didn't continue since we were not
sure there was a use case. Though I imagine that might be possible if
somebody asks at some point.

(Or we could also rewrite the few schemas we have in JSON Schema, since
they're not tied to the WSGI framework anyway.)
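To make the published-schema idea concrete: once the validation rules are a plain JSONSchema document, a client can fetch it and pre-validate before sending anything over the wire. Here's a stdlib-only sketch handling just the `type` and `required` keywords; a real client would use the jsonschema library, and the schema and fields below are invented:

```python
# Schema a service could publish at, say, GET /schemas/alarm
ALARM_SCHEMA = {
    "type": "object",
    "required": ["name", "threshold"],
    "properties": {
        "name": {"type": "string"},
        "threshold": {"type": "number"},
    },
}

_TYPES = {"object": dict, "string": str, "number": (int, float)}

def validate(doc, schema):
    """Client-side pre-validation of 'type' and 'required' only."""
    if not isinstance(doc, _TYPES[schema["type"]]):
        return ["document is not an object"]
    errors = []
    for field in schema.get("required", []):
        if field not in doc:
            errors.append(f"missing required field: {field}")
    for field, sub in schema.get("properties", {}).items():
        if field in doc and not isinstance(doc[field], _TYPES[sub["type"]]):
            errors.append(f"{field} has wrong type")
    return errors

assert validate({"name": "cpu_high", "threshold": 0.9}, ALARM_SCHEMA) == []
assert validate({"name": "cpu_high"}, ALARM_SCHEMA) == [
    "missing required field: threshold"]
```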

-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [wsme] [ceilometer] Replacing WSME with _____ ?

2015-08-28 Thread Jason Myers
I enjoy using validictory for JSON Schema validation in Python: 
https://pypi.python.org/pypi/validictory.

Sent from my iPhone

 On Aug 28, 2015, at 11:29 AM, Jay Pipes jaypi...@gmail.com wrote:
 
 On 08/28/2015 07:22 AM, Chris Dent wrote:
 
 This morning I kicked off a quick spec for replacing WSME in
 Ceilometer with ... something:
 
 https://review.openstack.org/#/c/218155/
 
 This is because not only is WSME not that great, it also results in
 controller code that is inscrutable.
 
 The problem with the spec is that it doesn't know what to replace
 WSME with.
 
 So, for your Friday afternoon pleasure I invite anyone with an
 opinion to hold forth on what framework they would choose. The spec
 lists a few options but please feel free to not limit yourself to those.
 
 If you just want to shoot the breeze please respond here. If you
 have specific comments on the spec please response there.
 
 I'm not going to get into another discussion about what WSGI/routing 
 framework to use (go Falcon! ;) ). But, since you are asking specifically 
 about *validation* of request input, I'd like to suggest just using plain ol' 
 JSONSchema, and exposing the JSONSchema documents in a GET 
 /schemas/{object_type} resource endpoint.
 
 voluptuous may be more Pythonic, as Julien mentioned, but the problem is you 
 can't expose the validation schema to the end user via any standard document 
 format (like JSONSchema). Using the jsonschema library along with standard 
 JSONSchema documents allows the API to publish its expected request and 
 response schemas to the end user, allowing, for example, a client library to 
 pull the schema documents and utilize a JSONSchema parsing/validation library 
 locally to pre-validate data before ever sending it over the wire.
 
 Best,
 -jay
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][versionedobjects][ceilometer] explain the benefits of ceilometer+versionedobjects

2015-08-28 Thread Roman Dobosz
On Thu, 27 Aug 2015 15:37:24 -0400
gord at live.ca (gord chung) wrote:

 polling agent -> topic queue -> notification agent -> topic queue 
 -> collector (direct connection to db)
 or
 OpenStack service -> topic queue -> notification agent -> topic 
 queue -> collector (direct connection to db)
 or
 from Aodh/alarming pov:
 ceilometer-api (direct connection to db) -> http -> alarm evaluator 
 -> rpc -> alarm notifier -> http -> [Heat/other]
 
 based on the above workflows, is there a good place for adoption of 
 versionedobjects? and if so, what is the benefit? most of us are keen on 
 adopting consistent design practices but none of us can honestly 
 determine why versionedobjects would be beneficial to Ceilometer. if 
 someone could explain it to us like we are 5 -- it's probably best to 
 explain everything/anything like i'm 5 -- that would help immensely on 
 moving this work forward.

Hi Gordon,

The first thing that comes to my mind is database schema changes -
this is the area that OVO is aiming at. Even though you don't need to
change the schema today, it might happen in the future.

So imagine we have new versions of the schema for the events, alarms or
samples in ceilometer introduced in Mitaka release while you have all
your ceilo services on Liberty release. To upgrade ceilometer you'll
have to stop all services to avoid data corruption. With
versionedobjects you can do this one by one without disrupting
telemetry jobs.

The other thing, maybe not so obvious, is to put versionedobject layer
between application and the MongoDB driver, so that all of the schema
changes will be automatically handled on ovo, and also serialization
might also be done on such layer.

Hope that clears your doubts.

-- 
Cheers,
Roman Dobosz



Re: [openstack-dev] [oslo][versionedobjects][ceilometer] explain the benefits of ceilometer+versionedobjects

2015-08-28 Thread Dan Smith
 there was a little skepticism because it was originally sold as magic,
 but reading the slides from Vancouver[1], it is not magic.

I think I specifically said they're not magic in my slides. Not sure
who sold you them as magic, but you should leave them a
less-than-five-stars review.

 Ceilometer functions mainly on queue-based IPC. most of the
 communication is async transferring of json payloads where callback is
 not required. the basic workflows are:

This is specifically something versionedobjects should help with. The
remotable RPC method calls on an object are something that nova uses
heavily, but other projects don't use at all.

 polling agent -> topic queue -> notification agent -> topic queue
 -> collector (direct connection to db)

What happens if any of these components are running different versions
of the ceilometer code at one point? During an upgrade, you presumably
don't want to have to take all of these things down at once, and so the
notification agent might get an object from the polling agent that
is older or newer than it expects. More specifically, maybe the
collector is writing to older schema and gets a newer object from the
front of the queue with data it can't store. If you're getting
versionedobjects instead of raw json, you at least have an indication
that this is happening. If you get an older object, you might choose to
do something specific for the fields that are now in the DB schema, but
aren't in the object you received.
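The version-tagging behaviour described above can be sketched without the real oslo.versionedobjects API — the class name, field set, and backlevel logic below are invented purely for illustration:

```python
# Minimal sketch of the versioned-object idea (NOT the real
# oslo.versionedobjects API): every serialized payload carries the
# schema version it was built with, so a newer reader can detect an
# older sender and backlevel gracefully instead of crashing.

class Sample:
    VERSION = "1.1"  # pretend 1.1 added the 'unit' field

    def __init__(self, name, volume, unit=None):
        self.name = name
        self.volume = volume
        self.unit = unit

    def to_primitive(self):
        """Serialize with an explicit version tag."""
        return {"version": self.VERSION,
                "data": {"name": self.name,
                         "volume": self.volume,
                         "unit": self.unit}}

    @classmethod
    def from_primitive(cls, primitive):
        version = primitive["version"]
        data = dict(primitive["data"])
        if version == "1.0":
            # Older sender: the field didn't exist yet, so fill a
            # default instead of failing on the missing key.
            data.setdefault("unit", None)
        return cls(**data)


# A collector on 1.1 receiving a payload from a 1.0 polling agent:
old_payload = {"version": "1.0", "data": {"name": "cpu", "volume": 0.5}}
sample = Sample.from_primitive(old_payload)
print(sample.unit)  # None -- degraded gracefully, not a KeyError
```

With raw JSON the collector has no way to even know the payload predates the 'unit' field; the version tag is what makes the degradation deliberate.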

 OpenStack service -> topic queue -> notification agent -> topic
 queue -> collector (direct connection to db)

This is a good one. If Nova was sending notifications as objects, then
the notification agent would get a version with each notification,
knowing specifically when the notification is newer than it supports,
instead of us just changing things (on purpose or by accident) and you
breaking.

From the storage in the DB perspective, I'm not sure what your
persistence looks like. However, we've been storing _some_ things in our
DB as serialized objects. That means that if we pull something out in a
year, after which time things in the actual object implementation have
changed, then we have an indication of what version it was stored in,
and presumably can apply a process to update it (or handle the
differences) at load time. I'm not sure if that's useful for ceilometer,
but it is definitely useful for nova, where we can avoid converting
everything in the database every time we add/change a field in something
-- a process that is very critical to avoid in our goals for improving
the upgrade experience for operators.

So, I dunno if ceilometer needs to adopt versionedobjects for anything.
It seems like it would apply to the cases you describe above, but if
not, there's no reason to use it just because others are. Nova will
(when I stop putting it off) start sending notifications as serialized
and versioned objects at some point, but you may choose to just unwrap
it and treat it as a json blob beyond the handler, if that's what is
determined as the best course.

--Dan



Re: [openstack-dev] [oslo][versionedobjects][ceilometer] explain the benefits of ceilometer+versionedobjects

2015-08-28 Thread gord chung



On 28/08/15 12:18 PM, Roman Dobosz wrote:

So imagine we have new versions of the schema for the events, alarms or
samples in ceilometer introduced in Mitaka release while you have all
your ceilo services on Liberty release. To upgrade ceilometer you'll
have to stop all services to avoid data corruption. With
versionedobjects you can do this one by one without disrupting
telemetry jobs.
are versions checked for every single message? has anyone considered the 
overhead of validating each message? since ceilometer is queue-based, we 
could technically just publish to a new queue when schema changes... and 
the consuming services will listen to the queue it knows of.


ie. our notification service changes schema so it will now publish to a 
v2 queue, the existing collector service consumes the v1 queue until 
done at which point you can upgrade it and it will listen to v2 queue.


this way there is no need to validate/convert anything and you can still 
take services down one at a time. this support doesn't exist currently 
(i just randomly thought of it) but assuming there's no flaw in my idea 
(which there may be) isn't this more efficient?
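The per-schema-version queue idea above can be sketched with plain in-process queues (topic names and payloads here are made up; a real deployment would use oslo.messaging topics):

```python
# Sketch of routing messages to version-suffixed queues so old and new
# consumers can coexist during an upgrade: producers publish to the
# queue matching the schema they emit, consumers only read the schema
# version they understand, and no per-message validation is needed.
from collections import defaultdict
from queue import Queue

queues = defaultdict(Queue)  # topic name -> queue

def publish(base_topic, schema_version, payload):
    """Producers pick the queue matching the schema they emit."""
    queues["%s.v%d" % (base_topic, schema_version)].put(payload)

def consume(base_topic, schema_version):
    """Consumers only ever read the schema version they understand."""
    return queues["%s.v%d" % (base_topic, schema_version)].get_nowait()

# Upgraded notification agent publishes v2; the not-yet-upgraded
# collector keeps draining v1 until it is empty, then is upgraded
# and switches to the v2 queue.
publish("metering", 1, {"counter": "cpu"})
publish("metering", 2, {"counter": "cpu", "unit": "ns"})

print(consume("metering", 1))  # {'counter': 'cpu'}
print(consume("metering", 2)["unit"])  # ns
```

The flaw to watch for is queue proliferation and the window where both queues must be drained, but the basic routing really is this simple.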


The other thing, maybe not so obvious, is to put versionedobject layer
between application and the MongoDB driver, so that all of the schema
changes will be automatically handled on ovo, and also serialization
might also be done on such layer.


i don't quite understand this, is this a mongodb-specific solution? 
admittedly i can imagine the schemaless design of mongo causing issues, 
but currently we're trying to avoid wasting resources on the existing 
mongodb solution as we attempt to move to the new api. if it's just a 
generic db solution, i'd be interested to apply it to future designs.


cheers,

--
gord




Re: [openstack-dev] [openstack][nova] Streamlining of config options in nova

2015-08-28 Thread Davanum Srinivas
Markus,

C) +1 to file a spec early, so we can discuss in Tokyo if needed.

Thanks,
dims

On Fri, Aug 28, 2015 at 11:16 AM, Markus Zoeller mzoel...@de.ibm.com
wrote:

 Markus Zoeller/Germany/IBM@IBMDE wrote on 08/19/2015 02:15:55 PM:

  From: Markus Zoeller/Germany/IBM@IBMDE
  To: OpenStack Development Mailing List \(not for usage questions\)
  openstack-dev@lists.openstack.org
  Date: 08/19/2015 02:31 PM
  Subject: Re: [openstack-dev] [openstack][nova] Streamlining of config
  options in nova
 
  Markus Zoeller/Germany/IBM@IBMDE wrote on 08/17/2015 09:37:09 AM:
 
   From: Markus Zoeller/Germany/IBM@IBMDE
   To: OpenStack Development Mailing List \(not for usage questions\)
   openstack-dev@lists.openstack.org
   Date: 08/17/2015 09:48 AM
   Subject: Re: [openstack-dev] [openstack][nova] Streamlining of config
   options in nova
  
   Michael Still mi...@stillhq.com wrote on 08/12/2015 10:08:26 PM:
  
From: Michael Still mi...@stillhq.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: 08/12/2015 10:14 PM
Subject: Re: [openstack-dev] [openstack][nova] Streamlining of
 config
options in nova
[...]
   
Do we see https://review.openstack.org/#/c/205154/ as a reasonable
example of such centralization? If not, what needs to change there
 to
make it an example of that centralization? I see value in having a
worked example people can follow before we attempt a large number of

these moves.
[...]
Michael
  

 For the sake of completeness:
 A) An example of the centralization of the config options which addresses
the issues Marian mentioned in the beginning of this thread:
https://review.openstack.org/#/c/214581/4
 Module nova/virt/vmwareapi/imagecache.py is a good example of how it
 should look in the end.
 B) A failed (and painful) attempt to replace the global CONF with an
object, which was brought up by danpb:
https://review.openstack.org/#/c/218319/2
 C) Enhancing oslo.config to provide more structure and information,
which was brought up by myself [1][2]

 TODO
 
 1) I can create the blueprint to drive A), any veto?
 2) I'll discuss C) with the oslo folks
 3) I lack a good solution for B). Let's talk at the next summit about it

 References
 --
 [1]
 https://blueprints.launchpad.net/oslo.config/+spec/option-interdependencies
 [2] https://blueprints.launchpad.net/oslo.config/+spec/help-text-markup







-- 
Davanum Srinivas :: https://twitter.com/dims


Re: [openstack-dev] [Glance] upcoming glanceclient release

2015-08-28 Thread Jordan Pittier
On Fri, Aug 28, 2015 at 5:12 PM, stuart.mcla...@hp.com wrote:



 I've compiled a list of backwards incompatibilities where the new client
 will impact (in some cases break) existing scripts:

 https://wiki.openstack.org/wiki/Glance-v2-v1-client-compatability


 Awesome!


 To be honest there's a little more red there than I'd like.

 Of the 72 commands I tried, the new client failed to even parse the input
 in 36 cases.

 Yep, I am not involved in Glance development but as a user this looks bad.
And I didn't know Glance v2 lost that many features (is-public,
all-tenants, list by name)...




[openstack-dev] [nova][bugs] more specific tags?

2015-08-28 Thread Markus Zoeller
This is a proposal to enhance the list of official tags for our bugs
in Launchpad. During the tagging process over the last weeks it seemed
to me that some of the tags are too coarse-grained. Would you see a
benefit in enhancing the official list with more fine-grained tags?

Additionally I would like to enhance the tags with use-case-oriented
tags like spawn and snapshot (like we already have with 
live-migration) to signal which features are impacted by a bug,
which could be beneficial for:
* release notes
* use case centered SMEs through the components/layers

refine component oriented tags:
* network: nova-network + neutron + pci-passthrough
* volumes: block-device-mapping + multipath + fibre-channel + attachment

add new bug-theme tags:
* numa
* availability_zones

add use case centered tags:
* spawn
* resize
* rebuild
* snapshot
* swap-disk

add non-functional tags:
* performance
* gate-failure
* upgrades

add release cycle tags:
* in-stable-juno
* in-stable-kilo

Would you see this as beneficial? If yes, which tags would you like to
have additionally? If no, what did I miss or overlook?

Regards,
Markus Zoeller (markus_z)




Re: [openstack-dev] [openstack][nova] Streamlining of config options in nova

2015-08-28 Thread Markus Zoeller
Markus Zoeller/Germany/IBM@IBMDE wrote on 08/19/2015 02:15:55 PM:

 From: Markus Zoeller/Germany/IBM@IBMDE
 To: OpenStack Development Mailing List \(not for usage questions\) 
 openstack-dev@lists.openstack.org
 Date: 08/19/2015 02:31 PM
 Subject: Re: [openstack-dev] [openstack][nova] Streamlining of config 
 options in nova
 
 Markus Zoeller/Germany/IBM@IBMDE wrote on 08/17/2015 09:37:09 AM:
 
  From: Markus Zoeller/Germany/IBM@IBMDE
  To: OpenStack Development Mailing List \(not for usage questions\) 
  openstack-dev@lists.openstack.org
  Date: 08/17/2015 09:48 AM
  Subject: Re: [openstack-dev] [openstack][nova] Streamlining of config 
  options in nova
  
  Michael Still mi...@stillhq.com wrote on 08/12/2015 10:08:26 PM:
  
   From: Michael Still mi...@stillhq.com
   To: OpenStack Development Mailing List (not for usage questions) 
   openstack-dev@lists.openstack.org
   Date: 08/12/2015 10:14 PM
   Subject: Re: [openstack-dev] [openstack][nova] Streamlining of 
config 
   options in nova
   [...]
   
   Do we see https://review.openstack.org/#/c/205154/ as a reasonable 
   example of such centralization? If not, what needs to change there 
to 
   make it an example of that centralization? I see value in having a 
   worked example people can follow before we attempt a large number of 

   these moves.
   [...]
   Michael
  

For the sake of completeness:
A) An example of the centralization of the config options which addresses
   the issues Marian mentioned in the beginning of this thread:
   https://review.openstack.org/#/c/214581/4
   Module nova/virt/vmwareapi/imagecache.py is a good example of how it 
   should look in the end.
B) A failed (and painful) attempt to replace the global CONF with an
   object, which was brought up by danpb: 
   https://review.openstack.org/#/c/218319/2
C) Enhancing oslo.config to provide more structure and information,
   which was brought up by myself [1][2]
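The centralization in A) boils down to one pattern: all options for a component live in a single module with a discovery hook, instead of being scattered next to their first use. A rough sketch (mimicking the shape of oslo.config with a plain class — real code would use cfg.BoolOpt/cfg.IntOpt and CONF.register_opts(); option names are taken from the imagecache example but the helper class is invented):

```python
# Centralized-options pattern: one module owns every option for a
# driver and exposes list_opts() so tooling (sample-config generators,
# doc builders) can discover them without importing the driver itself.

class Opt:
    def __init__(self, name, default, help):
        self.name, self.default, self.help = name, default, help

# nova/conf/imagecache.py -- the single home for these options
imagecache_opts = [
    Opt("remove_unused_base_images", True,
        "Whether to periodically delete unused base images."),
    Opt("remove_unused_original_minimum_age_seconds", 24 * 3600,
        "Unused original images are removed after this many seconds."),
]

def list_opts():
    """Discovery hook: (group name, options) pairs."""
    return [("imagecache", imagecache_opts)]

# Consumers import the central module instead of redefining options:
defaults = {o.name: o.default
            for group, opts in list_opts()
            for o in opts}
print(defaults["remove_unused_base_images"])  # True
```

The win is that help text, defaults, and deprecation state have exactly one home, which is most of what the thread's complaints about scattered options come down to.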

TODO

1) I can create the blueprint to drive A), any veto?
2) I'll discuss C) with the oslo folks
3) I lack a good solution for B). Let's talk at the next summit about it

References
--
[1] 
https://blueprints.launchpad.net/oslo.config/+spec/option-interdependencies
[2] https://blueprints.launchpad.net/oslo.config/+spec/help-text-markup 





Re: [openstack-dev] [fuel] Branching strategy vs feature freeze

2015-08-28 Thread Dmitry Borodaenko
On Fri, Aug 28, 2015 at 1:13 AM Igor Marnat imar...@mirantis.com wrote:

 Dmitry,
 I don't have yet enough context to discuss Fuel 9.0 release but I have
 a question about 8.0.

 You mentioned that the start of Fuel 8.0 release cycle inevitably
 remains coupled with MOS. Does it mean that we still consider
 decoupling for 8.0, just later in the cycle, or we are going to do it
 for 9.0?


In theory we could decouple the later milestones of the 8.0 release cycle,
but as I explained it doesn't make much sense to change anything before the
8.0 SCF: trying to make the Fuel 8.0 cycle shorter simply won't work, and
making it longer isn't useful.

Short FF in 8.0 allows us to start 9.0 cycle earlier, so with 9.0 we still
have options.

-- 
Dmitry Borodaenko


Re: [openstack-dev] [api] [wsme] [ceilometer] Replacing WSME with _____ ?

2015-08-28 Thread Jay Pipes

On 08/28/2015 07:22 AM, Chris Dent wrote:


This morning I kicked off a quick spec for replacing WSME in
Ceilometer with ... something:

 https://review.openstack.org/#/c/218155/

This is because not only is WSME not that great, it also results in
controller code that is inscrutable.

The problem with the spec is that it doesn't know what to replace
WSME with.

So, for your Friday afternoon pleasure I invite anyone with an
opinion to hold forth on what framework they would choose. The spec
lists a few options but please feel free not to limit yourself to those.

If you just want to shoot the breeze please respond here. If you
have specific comments on the spec please respond there.


I'm not going to get into another discussion about what WSGI/routing 
framework to use (go Falcon! ;) ). But, since you are asking 
specifically about *validation* of request input, I'd like to suggest 
just using plain ol' JSONSchema, and exposing the JSONSchema documents 
in a GET /schemas/{object_type} resource endpoint.


voluptuous may be more Pythonic, as Julien mentioned, but the problem is 
you can't expose the validation schema to the end user via any standard 
document format (like JSONSchema). Using the jsonschema library along 
with standard JSONSchema documents allows the API to publish its 
expected request and response schemas to the end user, allowing, for 
example, a client library to pull the schema documents and utilize a 
JSONSchema parsing/validation library locally to pre-validate data 
before ever sending it over the wire.
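The workflow described above — server publishes a JSONSchema document at /schemas/{object_type}, client pre-validates locally before hitting the wire — can be sketched with a toy validator (a real client would use the jsonschema library; the "alarm" schema and field names are invented):

```python
# Toy subset of JSONSchema validation, just enough to show the flow:
# the client fetches the schema the API publishes and checks a request
# body against it locally, catching errors before any HTTP round trip.

SCHEMAS = {  # what the server would expose at GET /schemas/{object_type}
    "alarm": {
        "type": "object",
        "required": ["name", "threshold"],
        "properties": {
            "name": {"type": "string"},
            "threshold": {"type": "number"},
        },
    }
}

TYPES = {"string": str, "number": (int, float), "object": dict}

def validate(instance, schema):
    """Check only the 'required' and 'properties' type constraints."""
    if not isinstance(instance, TYPES[schema["type"]]):
        return ["expected %s" % schema["type"]]
    errors = []
    for key in schema.get("required", []):
        if key not in instance:
            errors.append("'%s' is a required property" % key)
    for key, sub in schema.get("properties", {}).items():
        if key in instance and not isinstance(instance[key], TYPES[sub["type"]]):
            errors.append("'%s' is not of type %s" % (key, sub["type"]))
    return errors

schema = SCHEMAS["alarm"]          # pretend this came over HTTP
print(validate({"name": "cpu_high", "threshold": 0.9}, schema))  # []
print(validate({"name": "cpu_high"}, schema))
# ["'threshold' is a required property"]
```

The point is that the schema document itself is the contract: voluptuous can validate server-side just as well, but it has no standard serialization the client could fetch and reuse like this.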


Best,
-jay



Re: [openstack-dev] [api] [wsme] [ceilometer] Replacing WSME with _____ ?

2015-08-28 Thread Monty Taylor

On 08/28/2015 08:32 AM, Julien Danjou wrote:

On Fri, Aug 28 2015, Chris Dent wrote:


This morning I kicked off a quick spec for replacing WSME in
Ceilometer with ... something:

 https://review.openstack.org/#/c/218155/

This is because not only is WSME not that great, it also results in
controller code that is inscrutable.

The problem with the spec is that it doesn't know what to replace
WSME with.

So, for your Friday afternoon pleasure I invite anyone with an
opinion to hold forth on what framework they would choose. The spec
lists a few options but please feel free not to limit yourself to those.

If you just want to shoot the breeze please respond here. If you
have specific comments on the spec please respond there.


For Gnocchi we've been relying on voluptuous¹ for data validation, and
Pecan as the rest of the framework – like what's used in Ceilometer and
consorts.

I find it a pretty good option, more Pythonic than JSON Schema – which
has its pros and cons too.

What I'm not happy with is actually Pecan, as I find the routing system
way too complex in the end. I think I'd prefer to go with something
like Flask finally.


P.S: An option not listed, and one that may make perfect sense for
ceilometer (but perhaps not aodh), is to do nothing and consider the
v2 api legacy.


This is going to happen in a few cycles I hope for Ceilometer.

¹  https://pypi.python.org/pypi/voluptuous


We use voluptuous in Infra for data validation and have been very 
pleased with it.





Re: [openstack-dev] [api] [wsme] [ceilometer] Replacing WSME with _____ ?

2015-08-28 Thread Joshua Harlow

Monty Taylor wrote:

On 08/28/2015 08:32 AM, Julien Danjou wrote:

On Fri, Aug 28 2015, Chris Dent wrote:


This morning I kicked off a quick spec for replacing WSME in
Ceilometer with ... something:

https://review.openstack.org/#/c/218155/

This is because not only is WSME not that great, it also results in
controller code that is inscrutable.

The problem with the spec is that it doesn't know what to replace
WSME with.

So, for your Friday afternoon pleasure I invite anyone with an
opinion to hold forth on what framework they would choose. The spec
lists a few options but please feel free not to limit yourself to those.

If you just want to shoot the breeze please respond here. If you
have specific comments on the spec please respond there.


For Gnocchi we've been relying on voluptuous¹ for data validation, and
Pecan as the rest of the framework – like what's used in Ceilometer and
consorts.

I find it a pretty good option, more Pythonic than JSON Schema – which
has its pros and cons too.

What I'm not happy with is actually Pecan, as I find the routing system
way too complex in the end. I think I'd prefer to go with something
like Flask finally.


P.S: An option not listed, and one that may make perfect sense for
ceilometer (but perhaps not aodh), is to do nothing and consider the
v2 api legacy.


This is going to happen in a few cycles I hope for Ceilometer.

¹ https://pypi.python.org/pypi/voluptuous


We use voluptuous in Infra for data validation and have been very
pleased with it.


Out of curiosity, how are you using voluptuous (and/or getting it 
installed?), when I tried to propose that for the global requirements 
list/repo I got shot down (since json-schema does similar 
things);


Review @ https://review.openstack.org/#/c/131920/








[openstack-dev] [puppet] [tempest] CI - integration job status

2015-08-28 Thread Emilien Macchi
So this week we managed to iterate and get more components into the
Puppet OpenStack Integration CI.
Everything is work in progress but let me share the status:

* one single Puppet run of scenario001.pp is enough to deploy
OpenStack (MySQL, RabbitMQ, Keystone WSGI, Nova, Glance, Neutron
(ML2-OVS)) - a second Puppet run shows that the manifest is idempotent :-)
* tempest is running at the end (identity, image and compute tests) -
some failures on scenarios and some tests, but ~90% success.
* Results are visible in https://review.openstack.org/#/c/217352/ (see
gate-puppet-openstack-integration-dsvm-centos7 logs for details)

Next steps:
* during the Puppet OpenStack midcycle next week, Paul Belanger and I
will make progress together on this work, any help is highly welcome.
* While I'm working on single node, Paul is focusing on multi node job
with Zuul v3 - though I'll let him give status if needed over this thread.
* Optimize the Tempest run - we need to select what to test (scenarios, etc)
so the job is effective and we don't spend time uselessly testing the world.
Big kudos to Matthew Treinish for his help, his input is really useful
for us.

Blockers:
Well... to make it work I had to use Depends-On on a number of patches.
Please review them if we want to make progress:

Use zuul-cloner for tempest
https://review.openstack.org/#/c/217242/

allow to optionally git clone tempest
https://review.openstack.org/#/c/216841/

glance_id_setter: execute after creating Glance image
https://review.openstack.org/#/c/216432/

Bad configuration for glance/neutron setters
https://review.openstack.org/#/c/174638/

Make sure neutron network is created before Tempest_neutron_net_id_setter
https://review.openstack.org/#/c/218398/

Make sure Glance_image is executed after Keystone_endpoint
https://review.openstack.org/#/c/216488/

Make sure Nova_admin_tenant_id_setter is executed after Keystone_endpoint
https://review.openstack.org/#/c/216950/

Fix 'shared' parameter check in neutron_network provider
https://review.openstack.org/#/c/204152/

scenario001: deploy & test glance
https://review.openstack.org/#/c/216418/

scenario001: deploy RabbitMQ
https://review.openstack.org/#/c/216828/

scenario001: deploy neutron
https://review.openstack.org/#/c/216831/

scenario001: deploy nova
https://review.openstack.org/#/c/216938/

Run tempest with compute tests
https://review.openstack.org/#/c/217352/


In advance, thanks a lot for your reviews, any feedback is welcome!
--
Emilien Macchi





Re: [openstack-dev] [oslo][versionedobjects][ceilometer] explain the benefits of ceilometer+versionedobjects

2015-08-28 Thread gord chung



On 28/08/15 12:49 PM, Dan Smith wrote:

there was a little skepticism because it was originally sold as magic,
but reading the slides from Vancouver[1], it is not magic.

I think I specifically said they're not magic in my slides. Not sure
who sold you them as magic, but you should leave them a
less-than-five-stars review.


i like how your slides leveled it for us :)



Ceilometer functions mainly on queue-based IPC. most of the
communication is async transferring of json payloads where callback is
not required. the basic workflows are:

This is specifically something versionedobjects should help with. The
remotable RPC method calls on an object are something that nova uses
heavily, but other projects don't use at all.


polling agent -> topic queue -> notification agent -> topic queue
-> collector (direct connection to db)

What happens if any of these components are running different versions
of the ceilometer code at one point? During an upgrade, you presumably
don't want to have to take all of these things down at once, and so the
notification agent might get an object from the polling agent that
is older or newer than it expects. More specifically, maybe the
collector is writing to older schema and gets a newer object from the
front of the queue with data it can't store. If you're getting
versionedobjects instead of raw json, you at least have an indication
that this is happening. If you get an older object, you might choose to
do something specific for the fields that are now in the DB schema, but
aren't in the object you received.


OpenStack service -> topic queue -> notification agent -> topic
queue -> collector (direct connection to db)

This is a good one. If Nova was sending notifications as objects, then
the notification agent would get a version with each notification,
knowing specifically when the notification is newer than it supports,
instead of us just changing things (on purpose or by accident) and you
breaking.

 From the storage in the DB perspective, I'm not sure what your
persistence looks like. However, we've been storing _some_ things in our
DB as serialized objects. That means that if we pull something out in a
year, after which time things in the actual object implementation have
changed, then we have an indication of what version it was stored in,
and presumably can apply a process to update it (or handle the
differences) at load time. I'm not sure if that's useful for ceilometer,
but it is definitely useful for nova, where we can avoid converting
everything in the database every time we add/change a field in something
-- a process that is very critical to avoid in our goals for improving
the upgrade experience for operators.


we store everything as primitives: floats, time, integer, etc... since 
we need to query on attributes. it seems like versionedobjects might not 
be useful to our db configuration currently.




So, I dunno if ceilometer needs to adopt versionedobjects for anything.
It seems like it would apply to the cases you describe above, but if
not, there's no reason to use it just because others are. Nova will
(when I stop putting it off) start sending notifications as serialized
and versioned objects at some point, but you may choose to just unwrap
it and treat it as a json blob beyond the handler, if that's what is
determined as the best course.


i'm really looking forward to this. i think the entire Ceilometer team 
is waiting for someone to contractualise messages. right now it's a crap 
shoot when we listen to messages from other services.


cheers,

--
gord




Re: [openstack-dev] [nova][manila] latest microversion considered dangerous

2015-08-28 Thread Matt Riedemann



On 8/28/2015 10:35 AM, Joe Gordon wrote:


On Aug 28, 2015 6:49 AM, Sean Dague s...@dague.net wrote:
 
  On 08/28/2015 09:32 AM, Alex Meade wrote:
   I don't know if this is really a big problem. IMO, even with
   microversions you shouldn't be implementing things that aren't
backwards
   compatible within the major version. I thought the benefit of
   microversions is to know if a given feature exists within the major
   version you are using. I would consider a breaking change to be a major
   version bump. If we only do a microversion bump for a backwards
   incompatible change then we are just using microversions as major
versions.
 
  In the Nova case, Microversions aren't semver. They are content
  negotiation. Backwards incompatible only means something if time's arrow
  only flows in one direction. But when connecting to a bunch of random
  OpenStack clouds, there is no forced progression into the future.
 
  While each service is welcome to enforce more compatibility for the sake
  of their users, one should not assume that microversions are semver as a
  base case.
 
  I agree that 'latest' is basically only useful for testing. The

Sounds like we need to update the docs for this.

  python-novaclient code requires a microversion be specified on the API
  side, and on the CLI side negotiates to the highest version of the API
  that it understands which is supported on the server -
 
https://github.com/openstack/python-novaclient/blob/d27568eab50b10fc022719172bc15666f3cede0d/novaclient/__init__.py#L23

Considering how unclear these two points appear to be, are they clearly
documented somewhere? So that as more projects embrace microversions,
they don't end up having the same discussion.


Yar: https://review.openstack.org/#/c/218403/
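The novaclient behaviour described above — negotiate to the highest microversion both sides understand and pin it, instead of sending "latest" — can be sketched roughly like this (the version values are examples, not the real supported ranges):

```python
# Rough sketch of client-side microversion negotiation: the client
# intersects the server's advertised range with the ceiling it was
# built against and pins the highest match, rather than asking for
# "latest" and hoping the future looks like the present.

def parse(v):
    major, minor = v.split(".")
    return int(major), int(minor)

def negotiate(client_max, server_min, server_max):
    """Return the version string to pin for this session, or None."""
    chosen = min(parse(client_max), parse(server_max))
    if chosen < parse(server_min):
        return None  # no overlap: the server is too new for us
    return "%d.%d" % chosen

# Client built against 2.12, talking to a 2.1-2.25 server: pin 2.12.
print(negotiate("2.12", "2.1", "2.25"))  # 2.12
# Client ahead of the server: fall back to the server's ceiling.
print(negotiate("2.30", "2.1", "2.25"))  # 2.25
```

Every request then carries the pinned version explicitly, so behaviour is defined no matter how old or new the cloud on the other end is.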



 
  -Sean
 
  --
  Sean Dague
  http://dague.net
 
 






--

Thanks,

Matt Riedemann




Re: [openstack-dev] [api] [wsme] [ceilometer] Replacing WSME with _____ ?

2015-08-28 Thread Monty Taylor

On 08/28/2015 08:32 AM, Julien Danjou wrote:

On Fri, Aug 28 2015, Chris Dent wrote:


This morning I kicked off a quick spec for replacing WSME in
Ceilometer with ... something:

 https://review.openstack.org/#/c/218155/

This is because not only is WSME not that great, it also results in
controller code that is inscrutable.

The problem with the spec is that it doesn't know what to replace
WSME with.

So, for your Friday afternoon pleasure I invite anyone with an
opinion to hold forth on what framework they would choose. The spec
lists a few options but please feel free to not limit yourself to those.

If you just want to shoot the breeze please respond here. If you
have specific comments on the spec please respond there.


For Gnocchi we've been relying on voluptuous¹ for data validation, and
Pecan as the rest of the framework – like what's used in Ceilometer and
consorts.

I find it a pretty good option, more Pythonic than JSON Schema – which
has its pros and cons too.

What I'm not happy with is actually Pecan, as I find the routing system
way too complex in the end. I think I'd prefer to go with something like
Flask eventually.
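For comparison, routing in Flask is explicit per-URL rather than object-dispatch; a toy GET endpoint (resource names made up, not any real OpenStack API), exercised through Flask's built-in test client:

```python
# Hedged sketch of Flask's decorator-based routing.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/v2/resources/<resource_id>', methods=['GET'])
def get_resource(resource_id):
    # A real service would look this up in its storage driver.
    return jsonify({'id': resource_id})

# Exercise the route without running a server.
client = app.test_client()
resp = client.get('/v2/resources/abc123')
print(resp.status_code)  # 200
```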


One more thing from my previous things - this exists:

https://github.com/rantav/flask-restful-swagger

The docs team has been working on getting a bunch of swagger stuff 
going. This might be (needs real investigation) a nice way of 
complementing that work - and is also looking for adoption. So one could 
imagine a future where we took that direction and OpenStack offered to 
adopt the module.


Just a thought - my opinion should be considered useless.


P.S.: An option not listed, and one that may make perfect sense for
ceilometer (but perhaps not aodh), is to do nothing and consider the
v2 api legacy.


This is going to happen in a few cycles I hope for Ceilometer.

¹  https://pypi.python.org/pypi/voluptuous









[openstack-dev] [Gnocchi] Added Ilya Tyaptin as core reviewer

2015-08-28 Thread Julien Danjou
Hi fellows,

Ilya did a few good contributions to Gnocchi, especially around the
InfluxDB driver, so I'm glad to add him to the list of core reviewers.

Welcome aboard.

Cheers,
-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info




Re: [openstack-dev] [neutron] 40% failure on neutron python3.4 tests in the gate

2015-08-28 Thread Salvatore Orlando
On 28 August 2015 at 16:57, Sean Dague s...@dague.net wrote:

 On 08/28/2015 11:20 AM, Assaf Muller wrote:
  To recap, we had three issues impacting the gate queue:
 
  1) The neutron functional job has had a high failure rate for a while
  now. Since it's impacting the gate,
  I've removed it from the gate queue but kept it in the Neutron check
 queue:
  https://review.openstack.org/#/c/218302/
 
  If you'd like to help, the list of bugs impacting the Neutron
  functional job is linked in that patch.
 
  2) A new Tempest scenario test was added that caused the DVR job failure
  rate to skyrocket to over 50%.
  It actually highlighted a legit bug with DVR and legacy routers. Kevin
  proposed a patch that skips that test
  entirely until we can resolve the bug in Neutron:
  https://review.openstack.org/#/c/218242/ (Currently it tries to skip the
  test conditionally, the next PS will skip the test entirely).
 
  3) The Neutron py34 job has been made unstable due to a recent change
  (By me, yay) that made the tests
  run with multiple workers. This highlighted an issue with the Neutron
  unit testing infrastructure, which is fixed here:
  https://review.openstack.org/#/c/217379/
 
  With all three patches merged we should be good to go.

 Well, with all 3 of these we should be much better for sure. There are
 probably additional issues causing intermittent failures which should be
 looked at. These 3 are definitely masking anything else.


Sadly, since the issues are independent, it is very likely for one of the
patches to fail Jenkins tests because of one of the other two issues.
If the situation persists, would it be crazy to consider switching
neutron-py34 and neutron-functional to non-voting until these patches merge?
Neutron cores might abstain from approving patches (unless trivial or
documentation) while these jobs are non-voting.



 https://etherpad.openstack.org/p/gate-fire-2015-08-28 is a set of
 patches to promote for things causing races in the gate (we've got a
 cinder one as well). If other issues are known with fixes posted,
 please feel free to add them with comments.





 -Sean

 --
 Sean Dague
 http://dague.net




Re: [openstack-dev] --detailed-description for OpenStack items

2015-08-28 Thread Matt Riedemann



On 8/28/2015 2:38 AM, Tim Bell wrote:




-Original Message-
From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
Sent: 28 August 2015 02:29
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] --detailed-description for OpenStack items



On 8/27/2015 12:23 PM, Tim Bell wrote:

Some project such as cinder include a detailed description option
where you can include an arbitrary string with a volume to remind the
admins what the volume is used for.

Has anyone looked at doing something similar for Nova for instances
and Glance for images ?

In many cases, the names get heavily overloaded with information.

Tim








The nova instances table already has a display_description column:

http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/models.py#n287

Couldn't that just be used?  It doesn't look like the nova boot command in
the CLI exposes it though.  Seems like an easy enough add.

Although the server create API would have to be changed since today it just
sets the description to be the same as the name:

http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/servers.py#n589

That could be enabled with a microversion change in mitaka though.



This would be great. I had not checked the schema, only the CLI.

Should I submit a bug report (or is there a way of doing an enhancement 
request)? A display function would also be needed in Horizon for the novice 
users (who are the ones asking for this the most).

Tim


--

Thanks,

Matt Riedemann







Tim, sounds like a perfect case for writing a backlog spec that one of 
the developers in Nova can then pick up for Mitaka:


http://git.openstack.org/cgit/openstack/nova-specs/tree/README.rst#n57

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Fuel] How much do we rely on dnsmasq?

2015-08-28 Thread Andrey Danin
My biggest concern is about 4-to-6 translations. I would prefer to avoid them
for as long as possible.

From my point of view pure IPv6 is easier to implement than dual stack. I
see it like this: when you install the Fuel node you decide once and forever
which IP version you go with. We will have two implementations of Fuel - v4
and v6 - and then will try to merge them together.

Another option may be if PXE, management, and storage networks are still
v4, but OSt API endpoints and Neutron are configured to use IPv6. But some
problems may hide there.


On Fri, Aug 28, 2015 at 9:07 PM, Sean M. Collins s...@coreitpro.com wrote:

 On Thu, Aug 27, 2015 at 08:27:24PM EDT, Andrey Danin wrote:
  Hi, Sean,
 
  Dnsmasq is managed by Cobbler. Cobbler may also manage isc-dhcpd + BIND
  [0].

 Great - thanks for the link.

  So, switching from dnsmasq requires 2 more services to be installed. I
  think it's not a big deal to update the Cobbler container. The most work
 will
  be in adding ipv6 support into everything: fuelmenu, Nailgun/UI, a lot of
  Puppet modules, especially L23network module, OSTF. Also, it doubles QA
  efforts.

 Thanks - I agree there is a lot of places we'll have to cover.

  Other questions come up. Do we want to support dual stack too? When will
  a user choose an IP version: once during master node installation, or
  will it be allowed to switch over at any moment?

 I think probably in the first iteration it'll be dualstack, since we'll
 mostly just be working on enabling IPv6 in all the components. Stretch
 goal will be for Fuel to not require IPv4 at all so that in the future
 we can deploy it in IPv6 only environments.

 --
 Sean M. Collins





-- 
Andrey Danin
ada...@mirantis.com
skype: gcon.monolake


Re: [openstack-dev] [oslo][versionedobjects][ceilometer] explain the benefits of ceilometer+versionedobjects

2015-08-28 Thread gord chung
i should start by saying i re-read my subject line and it arguably comes 
off aggressive -- i should probably have dropped 'explain' :)


On 28/08/15 01:47 PM, Alec Hothan (ahothan) wrote:


On 8/28/15, 10:07 AM, gord chung g...@live.ca wrote:



On 28/08/15 12:18 PM, Roman Dobosz wrote:

So imagine we have new versions of the schema for the events, alarms or
samples in ceilometer introduced in Mitaka release while you have all
your ceilo services on Liberty release. To upgrade ceilometer you'll
have to stop all services to avoid data corruption. With
versionedobjects you can do this one by one without disrupting
telemetry jobs.

are versions checked for every single message? has anyone considered the
overhead to validating each message? since ceilometer is queue based, we
could technically just publish to a new queue when schema changes... and
the consuming services will listen to the queue it knows of.

ie. our notification service changes schema so it will now publish to a
v2 queue, the existing collector service consumes the v1 queue until
done at which point you can upgrade it and it will listen to v2 queue.

this way there is no need to validate/convert anything and you can still
take services down one at a time. this support doesn't exist currently
(i just randomly thought of it) but assuming there's no flaw in my idea
(which there may be) isn't this more efficient?
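The idea can be modeled with in-memory queues (queue names and payloads invented) to show why no per-message version check is needed:

```python
# Toy model of the queue-per-schema-version idea: a producer writes to the
# queue named for the schema it emits; each consumer drains only the version
# it understands, so messages never need inline version validation.
import collections

queues = collections.defaultdict(collections.deque)

def publish(version, payload):
    queues['notifications.v%d' % version].append(payload)

def consume(version):
    q = queues['notifications.v%d' % version]
    return q.popleft() if q else None

publish(1, {'counter_name': 'cpu'})        # old notification agent
publish(2, {'name': 'cpu', 'unit': 'ns'})  # upgraded agent, new schema

print(consume(1))  # old collector drains v1 until empty...
print(consume(1))  # -> None: v1 exhausted, safe to upgrade the collector
print(consume(2))  # ...then the upgraded collector switches to v2
```

The tricky part Alec raises below is visible even here: someone has to decide when a versioned queue is drained and can be retired.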

If high performance is a concern for ceilometer (and it should) then maybe
there might be better options than JSON?
JSON is great for many applications but can be inappropriate for other
demanding applications.
There are other popular open source encoding options that yield much more
compact wire payload, more efficient encoding/decoding and handle
versioning to a reasonable extent.


i should clarify. we let oslo.messaging serialise our dictionary how it 
does... i believe it's JSON. i'd be interested to switch it to something 
more efficient. maybe it's time we revive the msgpack patch [1] or are 
there better alternatives? (hoping i didn't just unleash a storm of 
'this is better' replies)
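For reference, a rough size comparison between JSON and MessagePack for a sample-shaped payload (field names and values invented); msgpack is what the patch in [1] proposed:

```python
# Compare wire sizes of the same payload in JSON vs MessagePack.
# Requires the msgpack-python package; the payload is made up.
import json

import msgpack

sample = {'counter_name': 'cpu_util', 'counter_volume': 0.25,
          'resource_id': 'f2aa01', 'timestamp': '2015-08-28T12:00:00'}

as_json = json.dumps(sample).encode('utf-8')
as_msgpack = msgpack.packb(sample)

print(len(as_json), len(as_msgpack))  # msgpack comes out smaller
assert msgpack.unpackb(as_msgpack, raw=False) == sample  # lossless round-trip
```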




Queue based versioning might be less runtime overhead per message but at
the expense of a potentially complex queue version management (which can
become tricky if you have more than 2 versions).
I think Neutron was considering to use versioned queues as well for its
rolling upgrade (along with versioned objects) and I already pointed out
that managing the queues could be tricky.

In general, trying to provide a versioning framework that allows to do
arbitrary changes between versions is quite difficult (and often bound to
fail).

yeah, so that's what a lot of the devs are debating about right now. 
performance is our key driver so if we do something we think/know will 
negatively impact performance, it better bring a whole lot more of 
something else. if queue based versioning offers comparable 
functionalities, i'd personally be more interested to explore that route 
first. is there a thread/patch/log that we could read to see what 
Neutron discovered when they looked into it?


[1] https://review.openstack.org/#/c/151301/

--
gord




Re: [openstack-dev] [magnum] versioned objects changes

2015-08-28 Thread Hongbin Lu
Ryan,

Thanks for sharing your input. Looking through your response, I couldn't 
find the reasoning for why a young project is the perfect time to enforce a 
strict object versioning rule. I think a young project often starts with a 
static (or infrequently changing) version until it reaches a certain level 
of maturity, doesn't it? As a core reviewer of Magnum, I observe that the 
project is under fast development and objects change from time to time. It 
is very heavy to do all the work of strictly enforcing versions (bump the 
version number, document the changed fields, re-generate the hashes, 
implement the compatibility check, etc.). Instead, I would prefer to let all 
objects stay at a beta version until some future time when the team decides 
to start bumping them.

Best regards,
Hongbin

From: Ryan Rossiter [mailto:rlros...@linux.vnet.ibm.com]
Sent: August-27-15 2:41 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] versioned objects changes

If you want my inexperienced opinion, a young project is the perfect time to 
start this. Nova has had a bunch of problems with versioned objects that don't 
get realized until the next release (because that's the point in time at which 
grenade (or worse, operators) catch this). At that point, you then need to hack 
things around and backport them in order to get them working in the old branch. 
[1] is an excellent example of Nova having to backport a fix to an object 
because we weren't using strict object testing.

I don't feel that this should be adding overhead to contributors and reviewers. 
With [2], this test absolutely helps both contributors and reviewers. Yes, it 
requires fixing things when a change happens to an object. Learning how to 
update object hashes after such a change is extremely easy, and I hope my 
updated comment there makes it even easier (also be aware I am new to 
OpenStack and Nova as of about 2 months ago, so this stuff was new to me too not 
very long ago).

I understand that something like [2] will cause a test to fail when you make a 
major change to a versioned object. But you *want* that. It helps reviewers 
more easily catch contributors and say "You need to update the version, because 
the hash changed." The sooner you start using versioned objects in the way they 
are designed, the smaller the upfront cost, and it will also be a major savings 
later on if something like [1] pops up.

[1]: https://bugs.launchpad.net/nova/+bug/1474074
[2]: https://review.openstack.org/#/c/217342/
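A toy version of the guard in [2] (object name, fields, and hashing scheme all invented here — oslo.versionedobjects computes its fingerprints differently):

```python
# Fingerprint each object's field list and compare against a frozen table,
# so any schema change forces an explicit, reviewed version bump.
import hashlib

def object_fingerprint(fields):
    canonical = ','.join(sorted(fields))
    return hashlib.sha256(canonical.encode('utf-8')).hexdigest()[:12]

# Frozen expectations, updated deliberately whenever an object changes.
EXPECTED = {'Bay': ('1.0', object_fingerprint(['uuid', 'name', 'node_count']))}

def check(name, version, fields):
    exp_version, exp_hash = EXPECTED[name]
    assert (version, object_fingerprint(fields)) == (exp_version, exp_hash), (
        '%s changed: bump the version and update the expected hash' % name)

check('Bay', '1.0', ['uuid', 'name', 'node_count'])  # passes
# Adding a field without bumping the version would trip the assertion:
# check('Bay', '1.0', ['uuid', 'name', 'node_count', 'master_count'])
```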
On 8/27/2015 9:46 AM, Hongbin Lu wrote:
-1 from me.

IMHO, the rolling upgrade feature makes sense for a mature project (like Nova), 
but not for a young project like Magnum. It incurs overheads for contributors  
reviewers to check the object compatibility in each patch. As you mentioned, 
the key benefit of this feature is supporting different version of magnum 
components running at the same time (i.e. running magnum-api 1.0 with 
magnum-conductor 1.1). I don't think supporting this advanced use case is a 
must at the current stage.

However, I don't mean to against merging patches of this feature. I just 
disagree to enforce the rule of object version change in the near future.

Best regards,
Hongbin

From: Grasza, Grzegorz [mailto:grzegorz.gra...@intel.com]
Sent: August-26-15 4:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] versioned objects changes

Hi,

I noticed that right now, when we make changes (adding/removing fields) in 
https://github.com/openstack/magnum/tree/master/magnum/objects , we don't 
change object versions.

The idea of objects is that each change in their fields should be versioned, 
documentation about the change should also be written in a comment inside the 
object and the obj_make_compatible method should be implemented or updated. See 
an example here:
https://github.com/openstack/nova/commit/ad6051bb5c2b62a0de6708cd2d7ac1e3cfd8f1d3#diff-7c6fefb09f0e1b446141d4c8f1ac5458L27
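Stripped to its essence, the pattern looks like this (field names invented, not Magnum's actual objects, and without the real oslo.versionedobjects base class):

```python
# Hedged sketch of the obj_make_compatible pattern: a 1.1 object strips
# the field added in 1.1 before handing data to a 1.0 peer.

class Bay(object):
    # Version history, kept as a comment on the object per the convention:
    # 1.0 - initial version
    # 1.1 - added 'master_count'
    VERSION = '1.1'

    def __init__(self, **fields):
        self.fields = fields

    def obj_make_compatible(self, target_version):
        if target_version == '1.0':
            # Drop the field a 1.0 service would not understand.
            self.fields.pop('master_count', None)

bay = Bay(uuid='abc', node_count=3, master_count=1)
bay.obj_make_compatible('1.0')
print(sorted(bay.fields))  # ['node_count', 'uuid'] - safe for 1.0 services
```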

The question is, do you think magnum should support rolling upgrades from next 
release or maybe it's still too early?

If yes, I think core reviewers should start checking for these incompatible 
changes.

To clarify, rolling upgrades means support for running magnum services at 
different versions at the same time.
In Nova, there is an RPC call in the conductor to backport objects, which is 
called when older code gets an object it doesn't understand. This patch does 
this in Magnum: https://review.openstack.org/#/c/184791/ .

I can report bugs and propose patches with version changes for this release, to 
get the effort started.

In Mitaka, when Grenade gets multi-node support, it can be used to add CI tests 
for rolling upgrades in Magnum.


/ Greg






[openstack-dev] [tc][app-catalog] Application for inclusion in big tent

2015-08-28 Thread Christopher Aedo
Hello! We put together our details and submitted a review for adding
the Community App Catalog project to the OpenStack governance projects
list [1].  We are looking forward to continuing to grow the catalog in
cooperation with the other projects, and building this showcase of all
the things that can be done with an OpenStack environment.

-Christopher

[1] https://review.openstack.org/#/c/217957/



Re: [openstack-dev] [magnum] versioned objects changes

2015-08-28 Thread Dan Smith
 If you want my inexperienced opinion, a young project is the perfect
 time to start this.

^--- This ---^

 I understand that something like [2] will cause a test to fail when you
 make a major change to a versioned object. But you *want* that. It helps
 reviewers more easily catch contributors to say You need to update the
 version, because the hash changed. The sooner you start using versioned
 objects in the way they are designed, the smaller the upfront cost, and
 it will also be a major savings later on if something like [1] pops up.

...and the way it will be the least overhead is if it's part of the
culture of contributors and reviewers. It's infinitely harder to take
the culture shift after everyone is used to not having to think about
upgrades, not to mention the technical recovery Ryan mentioned.

It's not my call for Magnum, but long-term thinking definitely pays off
in this particular area.

--Dan




Re: [openstack-dev] info in paste will be removed?

2015-08-28 Thread Osanai, Hisashi

On Friday, August 28, 2015 8:49 PM, Jeremy Stanley wrote:

 We (the project infrastructure root sysadmins) don't expire/purge
 the content on paste.openstack.org, though have deleted individual
 pastes on request if someone reports material which is abusive or
 potentially illegal in many jurisdictions.

Thanks for the quick response. This behavior is what I wanted to have :-)

Thanks again!
Hisashi Osanai



Re: [openstack-dev] [neutron] Pecan and Liberty-3

2015-08-28 Thread Salvatore Orlando
I'll leave it to Kevin's more informed judgment to comment on whether it is
appropriate to merge:

[1] is a list of patches still under review on the feature branch. Some of
them fix issues (like executing API actions), or implement TODOs

This is the current list of TODOs:
salvatore@ubuntu:/opt/stack/neutron$ find ./neutron/newapi/ -name \*.py |
xargs grep -n TODO
./neutron/newapi/hooks/context.py:50:# TODO(kevinbenton): is_admin
logic
./neutron/newapi/hooks/notifier.py:22:# TODO(kevinbenton): implement
./neutron/newapi/hooks/member_action.py:28:# TODO(salv-orlando):
This hook must go. Handling actions like this is
./neutron/newapi/hooks/quota_enforcement.py:33:#
TODO(salv-orlando): This hook must go when adaptin the pecan code to
./neutron/newapi/hooks/attribute_population.py:59:#
TODO(kevinbenton): the parent_id logic currently in base.py
./neutron/newapi/hooks/ownership_validation.py:34:#
TODO(salvatore-orlando): consider whether this check can be folded
./neutron/newapi/app.py:40:#TODO(kevinbenton): error templates
./neutron/newapi/controllers/root.py:150:# TODO(kevinbenton): allow
fields after policy enforced fields present
./neutron/newapi/controllers/root.py:160:# TODO(kevinbenton): bulk!
./neutron/newapi/controllers/root.py:190:# TODO(kevinbenton): bulk?
./neutron/newapi/controllers/root.py:197:# TODO(kevinbenton): bulk?

In my opinion the pecan API now is working-ish; however, we know it is not
yet 100% functionally equivalent, and most importantly we don't know how it
works: so far a few corners have been cut when it comes to testing.
Even if it works, it is therefore only arguably usable. Unfortunately I don't
know what criteria the core team evaluates for merging it back (and
I'm sure that for this release at least the home-grown WSGI won't be
replaced).

Salvatore

[1]
https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:feature/pecan,n,z

On 28 August 2015 at 22:51, Kyle Mestery mest...@mestery.com wrote:

 Folks:

 Kevin wants to merge the pecan stuff, and I agree with him. I'm on
 vacation next week during Liberty-3, so Armando, Carl and Doug are running
 the show while I'm out. I would guess that if Kevin thinks it's ok to merge
 it in before Liberty-3, I'd go with his opinion and let it happen. If not,
 it can get an FFE and we can do it post Liberty-3.

 I'm sending this to the broader openstack-dev list so that everyone can be
 aware of this plan, and so that Ihar can help collapse things back next
 week with Doug on this.

 Thanks!
 Kyle





Re: [openstack-dev] [api] [wsme] [ceilometer] Replacing WSME with _____ ?

2015-08-28 Thread Morgan Fainberg
It seems like Flask has a reasonable amount of support and there is a good
ecosystem around it but that aside (as Jay said)... I definitely support
exposing the schema to the end user; making it easier for the end user to
validate input / model outputs for their integration with OpenStack
services is an important feature of whatever is selected.

I admit I am intrigued by the swagger things (and I expect to be spending
some serious time considering it) that Monty linked (especially relating to
Flask, but there is a bias as Keystone is moving to flask).

--Morgan

On Fri, Aug 28, 2015 at 1:26 PM, michael mccune m...@redhat.com wrote:

 On 08/28/2015 10:36 AM, Lucas Alvares Gomes wrote:

 So at the present moment the [micro]framework that comes to my mind -
 without any testing or prototype of any sort - is Flask.


 just wanted to add on here, sahara is using flask.

 mike






Re: [openstack-dev] [neutron] Pecan and Liberty-3

2015-08-28 Thread Kevin Benton
This weekend or early next week I will be pushing a couple of more patches
to deal with some of the big TODOs (e.g. bulk). Then we can rename it and
see if we can review the merge.

I don't intend to have it fully replace our built-in WSGI solution in
Liberty. It's too late in the cycle to make that drastic of a switch. I
just want to have it in the main tree and have the option of trying it out
in Liberty.

On Fri, Aug 28, 2015 at 4:11 PM, Salvatore Orlando salv.orla...@gmail.com
wrote:

  I'll leave it to Kevin's more informed judgment to comment on whether it is
 appropriate to merge:

 [1] is a list of patches still under review on the feature branch. Some of
 them fix issues (like executing API actions), or implement TODOs

 This is the current list of TODOs:
 salvatore@ubuntu:/opt/stack/neutron$ find ./neutron/newapi/ -name \*.py |
 xargs grep -n TODO
 ./neutron/newapi/hooks/context.py:50:# TODO(kevinbenton): is_admin
 logic
 ./neutron/newapi/hooks/notifier.py:22:# TODO(kevinbenton): implement
 ./neutron/newapi/hooks/member_action.py:28:# TODO(salv-orlando):
 This hook must go. Handling actions like this is
 ./neutron/newapi/hooks/quota_enforcement.py:33:#
 TODO(salv-orlando): This hook must go when adaptin the pecan code to
 ./neutron/newapi/hooks/attribute_population.py:59:#
 TODO(kevinbenton): the parent_id logic currently in base.py
 ./neutron/newapi/hooks/ownership_validation.py:34:#
 TODO(salvatore-orlando): consider whether this check can be folded
 ./neutron/newapi/app.py:40:#TODO(kevinbenton): error templates
 ./neutron/newapi/controllers/root.py:150:# TODO(kevinbenton):
 allow fields after policy enforced fields present
 ./neutron/newapi/controllers/root.py:160:# TODO(kevinbenton): bulk!
 ./neutron/newapi/controllers/root.py:190:# TODO(kevinbenton): bulk?
 ./neutron/newapi/controllers/root.py:197:# TODO(kevinbenton): bulk?

 In my opinion the pecan API now is working-ish; however we know it is
 not yet 100% functionally equivalent; but most importantly we don't know
  how it works. So far a few corners have been cut when it comes to testing.
 Even if it works it is therefore probably usable. Unfortunately I don't
 know what are the criteria the core team evaluates for merging it back (and
 I'm sure that for this release at least the home grown WSGI won't be
 replaced).

 Salvatore

 [1]
 https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:feature/pecan,n,z

 On 28 August 2015 at 22:51, Kyle Mestery mest...@mestery.com wrote:

 Folks:

 Kevin wants to merge the pecan stuff, and I agree with him. I'm on
 vacation next week during Liberty-3, so Armando, Carl and Doug are running
 the show while I'm out. I would guess that if Kevin thinks it's ok to merge
 it in before Liberty-3, I'd go with his opinion and let it happen. If not,
 it can get an FFE and we can do it post Liberty-3.

 I'm sending this to the broader openstack-dev list so that everyone can
 be aware of this plan, and so that Ihar can help collapse things back next
 week with Doug on this.

 Thanks!
 Kyle








-- 
Kevin Benton


Re: [openstack-dev] [api] [wsme] [ceilometer] Replacing WSME with _____ ?

2015-08-28 Thread Everett Toews
On Aug 28, 2015, at 6:10 PM, Morgan Fainberg morgan.fainb...@gmail.com wrote:

 It seems like Flask has a reasonable amount of support and there is a good 
 ecosystem around it but that aside (as Jay said)... I definitely support 
 exposing the schema to the end user; making it easier for the end user to 
 validate input / model outputs for their integration with OpenStack services 
 is an important feature of whatever is selected. 
 
 I admit I am intrigued by the swagger things (and I expect to be spending 
 some serious time considering it) that Monty linked (especially relating to 
 Flask, but there is a bias as Keystone is moving to flask).

With respect to Swagger, we have a guideline [1] in flight from Anne that 
involves it for API docs.

Everett

[1] https://review.openstack.org/#/c/214817/


[openstack-dev] Tracing a request (NOVA)

2015-08-28 Thread Dhvanan Shah
Hi,

I'm trying to trace a request made for an instance and looking at the flow
in the code.
I'm trying to better understand how the request goes from the dashboard to
nova-api, then to the other internal components of Nova and the scheduler,
and back with a suitable host and the launching of the instance.

I just want to understand how the request goes from the API call to
nova-api and so on after that.
I have understood nova-scheduler, and within it the filter_scheduler
receives something called request_spec, which holds the specifications of
the request being made; I want to see where this comes from. I was not
very successful in reverse engineering this.

I could use some help, as I want to implement a scheduling algorithm of my
own, but for that I need to understand how and where the requests come in
and how the flow works.

If someone could guide me to where I can find help or point me in some
direction, it would be of great help.
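As a rough orientation, the filter-and-weigh shape that request_spec feeds into looks like the standalone toy below (Nova's real implementation lives under nova/scheduler/; the names here only mirror its structure):

```python
# Toy filter scheduler: request_spec describes the instance being booted,
# each filter prunes unsuitable hosts, and a weigher picks among survivors.
# Host data and the RAM-only filter are invented for illustration.

class RamFilter(object):
    def host_passes(self, host_state, request_spec):
        return host_state['free_ram_mb'] >= request_spec['memory_mb']

hosts = [{'name': 'node1', 'free_ram_mb': 512},
         {'name': 'node2', 'free_ram_mb': 8192}]
request_spec = {'memory_mb': 2048}  # distilled from the boot API call

filters = [RamFilter()]
candidates = [h for h in hosts
              if all(f.host_passes(h, request_spec) for f in filters)]
# Weigh survivors: here, most free RAM wins.
best = max(candidates, key=lambda h: h['free_ram_mb'])
print(best['name'])  # -> node2
```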

-- 
Dhvanan Shah


[openstack-dev] [neutron] Core Reviewer participation in the weekly meeting

2015-08-28 Thread Kyle Mestery
I'm sending out this email to encourage Neutron core reviewers to attend
and participate in the weekly Neutron team meeting [1]. Attendance from
core reviewers has been very low for a while now, and this is the one time
each week (or bi-weekly if you attend only one of the rotating meetings) I
have to share information which is important to the entire team. Especially
as the core reviewer team has grown, I feel this is very important for
everyone to join to keep apprised of not only things in Neutron but also of
things cross-project. I highly encourage everyone to make an effort to
attend one of the meetings (we rotate to accommodate timezones).

I should also note that as part of our devref for what it means to be a
core reviewer, we call out attendance at this weekly meeting [2]. I
encourage everyone to review this if you have further concerns about what
being a Neutron core reviewer means.

Thanks,
Kyle

[1] https://wiki.openstack.org/wiki/Network/Meetings
[2]
http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#neutron-core-reviewer-membership-expectations


  1   2   >