[openstack-dev] [stable] New to project stable maintenance, question on requirements changes

2015-02-24 Thread Trevor McKay
Hi folks,

I've just joined the stable maintenance team for Sahara.

We have this review here, from OpenStack proposal bot:

https://review.openstack.org/158775/

Since it came from the proposal bot, there's no justification in the
commit message and no cherry pick.

I didn't see this case covered as one of the strict set in

https://wiki.openstack.org/wiki/StableBranch

Do we trust the proposal bot? How do I know I should trust it? On
master, I assume if there is a mistake it will soon be rectified, but
stable ...  Do we have a doc that talks about stable maintenance and
requirements changes?  Should we?

Am I being paranoid? :)

Thanks,

Trevor 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] novaclient functional test guidelines

2015-02-24 Thread melanie witt
On Feb 24, 2015, at 9:47, Sean Dague s...@dague.net wrote:

 I'm happy if there are other theories about how we do these things;
 this being the first functional test in the python-novaclient tree that
 creates and destroys real resources, there isn't an established pattern
 yet. But I think doing all CLI calls in CLI tests is actually really
 cumbersome, especially in the amount of output parsing code needed if
 you are going to set up any complicated resource structure.

I think I'm in agreement with the pattern you describe.

I imagine having a set of functional tests for the API that don't do any CLI 
calls at all. With that we test that the API works properly. Then have a 
separate set for the CLI, which only calls the CLI for the command being 
tested, with everything else needed to set up and tear down the test done via 
API calls. The rationale is that because the entire API functionality is 
tested separately, we can safely use it for setup/teardown, isolating the CLI 
test to the command being tested and avoiding side effects from other CLI 
commands.

But I suppose one could make the same argument for using the CLI everywhere 
(if all the commands are tested, they can also be trusted not to introduce 
side effects). I tend to favor using the API because it's the most bare-bones 
setup/teardown we could use. At the same time I understand the idea of 
performing an entire test using the CLI, as a way of replicating the 
experience a real user might have with the CLI, from start to end. I don't 
feel strongly either way.

For the --poll stuff, I agree the API should have it and the CLI should use 
it. And the with-poll and without-poll paths should be tested separately, for 
both the API and the CLI.

melanie (melwitt)








Re: [openstack-dev] [stable] New to project stable maintenance, question on requirements changes

2015-02-24 Thread Sean Dague
On 02/24/2015 02:34 PM, Trevor McKay wrote:
 Hi folks,
 
 I've just joined the stable maintenance team for Sahara.
 
 We have this review here, from OpenStack proposal bot:
 
 https://review.openstack.org/158775/
 
 Since it came from the proposal bot, there's no justification in the
 commit message and no cherry pick.
 
 I didn't see this case covered as one of the strict set in
 
 https://wiki.openstack.org/wiki/StableBranch
 
 Do we trust the proposal bot? How do I know I should trust it? On
 master, I assume if there
 is a mistake it will soon be rectified, but stable ...  Do we have a doc
 that talks about stable maintenance
 and requirements changes?  Should we?
 
 Am I being paranoid? :)

Slightly, but that's probably good.

Requirements proposal bot changes had to first be approved on the
corresponding requirements stable branch, so that should be both safe
and mandatory to go in.

I agree that it would be nicer to have more justification in there.
There is the beginning of a patch up to do something a bit better here
- https://review.openstack.org/#/c/145932/ - though it could use
improvement.

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] [Fuel] Tag bugs with features from current release

2015-02-24 Thread Dmitry Borodaenko
During bugs triage and release notes preparation we often need to find
out whether a bug is a regression introduced by new code in the
current release, or may have been present in previous releases. In the
latter case, depending on its severity, it may need to be reflected in
the release notes, and its fix backported to previous release series.

I propose creating official tags from the names of all blueprints
targeted at the current release, and using these tags to label all
related regression bugs.

Thoughts, objections?

-- 
Dmitry Borodaenko



Re: [openstack-dev] [stable][all] Revisiting the 6 month release cycle

2015-02-24 Thread Joe Gordon
On Tue, Feb 24, 2015 at 2:38 AM, Thierry Carrez thie...@openstack.org
wrote:

 Joe Gordon wrote:
  [...]
  I think a lot of the frustration with our current cadence comes out of
  the big stop everything (development, planning etc.), and stabilize the
  release process. Which in turn isn't really making things more stable.

 I guess I have a different position. I think the release candidate
 period is the only thing that makes your code drops actually usable.
 It's the only moment in the cycle where integrators test. It's the only
 moment in the cycle where developers work on bugs they did not file
 themselves, but focus on a project-wide priority list of release
 blockers. If you remove that period, then nobody will ever work on
 release blockers that do not directly affect them. Shorten that period
 to one week, and no integrator will have the time to run a proper QA
 program to detect those release blockers.


I still think the 3-week RC cycle needs to happen; the difference is it
would be done by stable maintenance. And I agree, the RC period is one
of the few moments where developers work on bugs they did not file
themselves. So I am not sure how this would actually work. Perhaps the
answer is that we have deeper issues if we don't want to fix bugs until
the last minute.




 I understand that from your developer perspective it's annoying to have
 to work on project-wide priorities rather than your own, and therefore
 you'd like to get rid of those -- but the resulting drop in quality is
 just something we can't afford.

  So I propose we keep the 6 month release cycle, but change the
  development cycle from a 6 month one with 3 intermediate milestones to a
  6 week one with a milestone at the end of it.
 
  What this actually means:
 
   * Stop approving blueprints for specific stable releases; instead just
     approve them and target them to milestones.
     * Milestones stop being Kilo-1, Kilo-2, Kilo-3 etc. and just
       become 1, 2, 3, 4, 5, 6, 7, 8, 9 etc.
     * If something misses what was previously known as Kilo-3, it has
       to wait a week for milestone 4.
   * Development focuses on milestones only: a 6 week cycle with, say, 1
     week of stabilization to finish things up before each milestone.
   * The process of cutting a stable branch (planning, making the branch,
     doing release candidates, testing etc.) is done by a dedicated
     stable branch team, based on a specific milestone.
   * Goal: change the default development planning mode to ignore stable
     branches, and allow us to think in terms of the number of milestones
     needed, not whether something will make the stable branch or not.

 I don't think that would solve any of the issues you mentioned:
  Current issues
    * 3 weeks of feature freeze for all projects at the end of each cycle
      (3 out of the 26 weeks feature-blocked)

 So you'll have 3 x 1 week of feature freeze for all projects, instead of
 1 x 3 weeks. That will be less efficient (integrators need a 1-week
 feature freeze period to actually start testing a non-moving target),
 more painful (have to organize it 3 times instead of 1), and likely
 inefficient (it generally takes more than one week to find critical
 bugs, develop the fix, and get it reviewed). And in the end, it's still
 3 out of the 26 weeks feature-blocked.


As said before, I don't envision integrators consuming every milestone,
just the standard 3-week RC cycle for stable branch candidates.



    * 3 weeks of release candidates. Once a candidate is cut, development
      is open for the next release. While this is good in theory, not
      much work actually starts on the next release.

 That is not really what I observe. People start landing their features
 in the master branch starting the day after RC1. I actually observe the
 opposite: too many people switching to master development, and not
 enough people working on RC2+ bugs.


Unfortunately I think we are both right. Too many people move on and
don't work on RC2 bugs, but development still slows down.



    * some projects have non-priority feature freezes at Milestone 2
      (so 9 out of 26 weeks restricted in those projects)

 That was their own choice. I for one was really surprised that they
 would freeze earlier -- I think 3 weeks is the right balance.

    * vicious development circle
        o vicious circle:
            + big push to land lots of features right before the
              release

 I think you'll have exactly the same push before the stable milestone
 (or the one that will be adopted by $DISTRO).


I am hoping the push would be smaller, but I don't think we can remove it
completely.



+ a lot of effort is spent getting the release ready
+ after the release people are a little burnt out and take it
  easy until the next summit

 Not convinced the burn out will be less significant with 4 releases
 instead of one every 6 months. Arguably it 

Re: [openstack-dev] [stable] New to project stable maintenance, question on requirements changes

2015-02-24 Thread Trevor McKay
Sean, 

 thanks!  I feel better already. I'll check out the review.

Trevor

On Tue, 2015-02-24 at 14:39 -0500, Sean Dague wrote:
 On 02/24/2015 02:34 PM, Trevor McKay wrote:
  Hi folks,
  
  I've just joined the stable maintenance team for Sahara.
  
  We have this review here, from OpenStack proposal bot:
  
  https://review.openstack.org/158775/
  
  Since it came from the proposal bot, there's no justification in the
  commit message and no cherry pick.
  
  I didn't see this case covered as one of the strict set in
  
  https://wiki.openstack.org/wiki/StableBranch
  
  Do we trust the proposal bot? How do I know I should trust it? On
  master, I assume if there
  is a mistake it will soon be rectified, but stable ...  Do we have a doc
  that talks about stable maintenance
  and requirements changes?  Should we?
  
  Am I being paranoid? :)
 
 Slightly, but that's probably good.
 
  Requirements proposal bot changes had to first be approved on the
  corresponding requirements stable branch, so that should be both safe
  and mandatory to go in.
  
  I agree that it would be nicer to have more justification in there.
  There is the beginning of a patch up to do something a bit better here
  - https://review.openstack.org/#/c/145932/ - though it could use
  improvement.
 
   -Sean
 





Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-24 Thread Sean Dague
On 02/24/2015 03:21 PM, Joe Gordon wrote:
 
 
 On Tue, Feb 24, 2015 at 6:57 AM, Daniel P. Berrange berra...@redhat.com wrote:
 
 On Tue, Feb 24, 2015 at 08:50:45AM -0500, Sean Dague wrote:
  On 02/24/2015 07:48 AM, Russell Bryant wrote:
   On 02/24/2015 12:54 PM, Daniel P. Berrange wrote:
   On Tue, Feb 24, 2015 at 11:48:29AM +, Chris Dent wrote:
   On Tue, 24 Feb 2015, Daniel P. Berrange wrote:
  
    need to do more work. If this is so, then I don't think this is a
    blocker, it is just a sign that the project needs to focus on
    providing more resources to the teams impacted in that way.
   
    What are the mechanisms whereby the project provides more
    resources to teams?
   
    The technical committee and / or foundation board can highlight
    the need for investment of resources in critical areas of the
    project, to either the community members or vendors involved. As
    an example, this was done successfully recently to increase
    involvement in maintaining the EC2 API support.  There are plenty
    of vendors involved in OpenStack which have the ability to target
    resources, if they can learn where those resources are best spent.
   
    Indeed ... and if horizontal teams are the ones hit the most by the
    extra work, each project should help with that burden.  For example,
    projects may need to take their responsibility for documentation
    more seriously and require documentation with features (content at
    least, not necessarily integration into the proper documentation
    deliverables) instead of assuming it magically gets written later.
 
   Right, and I think this actually hits at the most important part of
   the discussion. The question of:
  
   1) what would we need to do to make different release cadences viable?
   2) are those good things to do regardless of release cadence?
  
   The horizontal teams really can't function at different cadences. It
   completely breaks any flow and planning, and turns them even further
   into firefighting, because now everyone has crunch time at different
   times, and the horizontal efforts are required to just play catch up.
   I know what that future looks like: the horizontal teams dry up
   because no one wants that job.
  
   Ok, so that being said, what we'd need to do is have horizontal teams
   move to more of a self-supporting model. So that the relevant content
   for a project (docs, full stack tests, requirements, etc) all lives
   within that project itself, and isn't centrally synchronized.
   Installation of projects needs to be fully isolated from each other
   so that upgrading project A can be done independent of project B, as
   their release cadences might all be disparate. Basically, every
   OpenStack project needs to reabsorb the cross project efforts they've
   externalized.
  
   Then if project A decided to move off the coupled release, its impact
   on the rest would be minimal. These are robust components that stand
   on their own, and work well with robust other components.
  
   Which... is basically the point of the big tent / new project
   governance model. Decompose OpenStack from a giant blob of goo into
   robust elements that are more loosely coupled (so independently
   robust, and robust in their interaction with others). Move the
   horizontal teams into infrastructure vs. content roles; have projects
   own more of this content themselves.
  
   But it is a long hard process. Devstack external plugins were
   implemented to support this kind of model, but having walked a bunch
   of different teams through this (at all skill levels) there ends up
   being a lot of work to get this right, and a lot of rethinking by
   teams that assumed their interaction with full stack testing was
   something they'd get to contribute once and have someone else
   maintain (instead of something they now need to keep a dedicated
   watchful eye on).
  
   The number of full stack configurations immediately goes beyond
   anywhere near testable, so it requires more robust project testing to
   ensure every exposed interface is more robust (i.e. the testing
   pyramids from https://review.openstack.org/#/c/150653/).
  
   And, I think the answer to #2 is: yes, this just makes it all better.
  
   So, honestly, I'm massively supportive of the end game. I've been
   carving out the bits of this I can for the last six months. But I
   think the way we get there is to actually get the refactoring of the
   horizontal efforts first.
  
 I pretty much fully agree that refactoring the horizontal efforts to
 distribute responsibility across the 

Re: [openstack-dev] [nova] novaclient functional test guidelines

2015-02-24 Thread Ed Leafe
On Feb 24, 2015, at 2:30 PM, Sean Dague s...@dague.net wrote:

 Right, I think to some degree novaclient is legacy code, and we should
 focus on specific regressions and bugs without doing too much code change.
 
 The future should be more focused on openstacksdk and openstackclient.

IMO, openstackclient has an impossible task: taking the varied (and flawed) 
CLI clients and uniting them under a single CLI interface. It is better to 
cleanly separate the API wrapping into a Python library, and keep the CLI 
completely separate.


-- Ed Leafe









[openstack-dev] [oslo] Graduating oslo.reports: Request to review clean copy

2015-02-24 Thread Solly Ross
Hello All,

I've finally had some time to finish up the graduation work for oslo.reports 
(previously openstack.common.report), and it should be ready for review by 
the Oslo team. The only thing I was unclear about was the "sync required 
tools from oslo-incubator" part: oslo.reports does not use any modules from 
oslo-incubator, and it is unclear what constitutes an appropriate script.

Best Regards,
Solly Ross



Re: [openstack-dev] python-ceilometerclient 1.0.13 broke the gate

2015-02-24 Thread Joe Gordon
We saw the same issue elsewhere, and Doug had a great explanation of how
it broke semver:

https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg46533.html

On Tue, Feb 24, 2015 at 12:11 PM, Matt Riedemann mrie...@linux.vnet.ibm.com
 wrote:

 https://bugs.launchpad.net/python-ceilometerclient/+bug/1425262

 mtreinish adjusted the cap on stable/icehouse here:

 https://review.openstack.org/#/c/158842/

 jogo now has a change to explicitly pin all clients in stable/icehouse to
 the version currently gated on:

 https://review.openstack.org/#/c/158846/
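For readers following along, the difference between capping and pinning looks roughly like this in a stable requirements file; the version numbers below are illustrative only, not the actual content of those reviews:

```diff
-python-ceilometerclient>=1.0.6,<1.0.13
+python-ceilometerclient==1.0.12
```

A cap still admits any future release below the ceiling; an explicit pin locks the branch to exactly the version the gate has tested.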

 --

 Thanks,

 Matt Riedemann




Re: [openstack-dev] [nova] novaclient functional test guidelines

2015-02-24 Thread Sean Dague
On 02/24/2015 03:28 PM, Ed Leafe wrote:
 On Feb 24, 2015, at 1:49 PM, Sean Dague s...@dague.net wrote:
 
 IMHO the CLI should have an option to return raw JSON back instead of
 pretty tabled results as well.

 Um... isn't that just the API calls?

 I'm not sure creating a 3rd functional surface is really the answer
 here, because we still need to actually test the CLI / pretty table output.
 
 The python-openstacksdk project was originally envisioned to wrap the API 
 calls and return usable Python objects. The nova client CLI (or any other 
 CLI, for that matter) would then just provide the command line input parsing 
 and output presentation. It's been a while since I was involved with that 
 project, but it seems that decoupling the command line interface from the 
 Python API wrapper would make testing much, much easier.

Right, I think to some degree novaclient is legacy code, and we should
focus on specific regressions and bugs without doing too much code change.

The future should be more focused on openstacksdk and openstackclient.

-Sean

-- 
Sean Dague
http://dague.net





[openstack-dev] client library release versions

2015-02-24 Thread Robert Collins
Hi, in the cross project meeting a small but important thing came up.

Most (all?) of our client libraries run with semver: x.y.z version
numbers. http://semver.org/ and
http://docs.openstack.org/developer/pbr/semver.html

However we're seeing recent releases that bump .z inappropriately.

This makes the job of folks writing version constraints harder :(.

*most* of our releases should be an increment of .y - so 1.2.0, 1.3.0
etc. The only time a .z increase is expected is for
backwards-compatible bug fixes. [1]

In particular, changing a dependency version is probably never a .z
increase, except - perhaps - when the dependency itself only changed
.z, and so on transitively.

Adding or removing a dependency really can't ever be a .z increase.

We're nearly finished on the pbr support to help automate the decision
making process, but the rule of thumb - expect to do .y increases - is
probably good enough for a while yet.

-Rob

[1]: The special case is for projects that have not yet committed to a
public API - 0.x.y versions. Don't do that. Commit to a public API :)
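As a rough sketch of the rule of thumb above (this is not pbr's actual algorithm, just the thread's guidance expressed as code): features and dependency changes bump .y, only backwards-compatible bug fixes bump .z, and breaking changes bump .x.

```python
# Sketch only; real tooling (pbr) derives this from commit metadata.
def next_version(current, change):
    """current: 'x.y.z' string; change: one of 'breaking', 'feature',
    'dep-added', 'dep-removed', 'dep-bumped', 'bugfix'."""
    x, y, z = (int(p) for p in current.split("."))
    if change == "breaking":
        # Backwards-incompatible change: major bump, reset minor/patch.
        return "%d.0.0" % (x + 1)
    if change == "bugfix":
        # Only backwards-compatible bug fixes may bump .z.
        return "%d.%d.%d" % (x, y, z + 1)
    # Everything else -- new features, adding/removing/bumping
    # dependencies -- is at least a .y increase.
    return "%d.%d.0" % (x, y + 1)
```

For example, next_version("1.2.3", "dep-added") gives "1.3.0", while next_version("1.2.3", "bugfix") gives "1.2.4".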

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-24 Thread Bharat Kumar

Ran the job manually on rax VM, provided by Jeremy. (Thank you Jeremy).

After running 971 test cases the VM was inaccessible for 569 ticks, then 
continued... (Look at the console.log [1])

And also have a look at dstat log. [2]

The summary is:
==
Totals
==
Ran: 1125 tests in 5835. sec.
 - Passed: 960
 - Skipped: 88
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 77
Sum of execute time for each test: 13603.6755 sec.


[1] https://etherpad.openstack.org/p/rax_console.txt
[2] https://etherpad.openstack.org/p/rax_dstat.log

On 02/24/2015 07:03 PM, Deepak Shetty wrote:
FWIW, we tried to run our job in a rax provider VM (provided by ianw 
from his personal account) and ran the tempest tests twice, but the OOM 
did not recur. Of the 2 runs, one used the same PYTHONHASHSEED as one of 
the failed runs; still no OOM.


Jeremy graciously agreed to provide us 2 VMs, one each from the rax and 
hpcloud providers, to see if the provider platform has anything to do 
with it.

So we plan to run again with the VMs given by Jeremy, after which I will 
send the next update here.

thanx,
deepak


On Tue, Feb 24, 2015 at 4:50 AM, Jeremy Stanley fu...@yuggoth.org wrote:


Due to an image setup bug (I have a fix proposed currently), I was
able to rerun this on a VM in HPCloud with 30GB memory and it
completed in about an hour with a couple of tempest tests failing.
Logs at: http://fungi.yuggoth.org/tmp/logs3.tar

Rerunning again on another 8GB Rackspace VM with the job timeout
increased to 5 hours, I was able to recreate the network
connectivity issues exhibited previously. The job itself seems to
have run for roughly 3 hours while failing 15 tests, and the worker
was mostly unreachable for a while at the end (I don't know exactly
how long) until around the time it completed. The OOM condition is
present this time too according to the logs, occurring right near
the end of the job. Collected logs are available at:
http://fungi.yuggoth.org/tmp/logs4.tar

Given the comparison between these two runs, I suspect this is
either caused by memory constraints or block device I/O performance
differences (or perhaps an unhappy combination of the two).
Hopefully a close review of the logs will indicate which.
--
Jeremy Stanley







--
Warm Regards,
Bharat Kumar Kobagana
Software Engineer
OpenStack Storage – RedHat India
Mobile - +91 9949278005



Re: [openstack-dev] [nova] novaclient functional test guidelines

2015-02-24 Thread Sean Dague
On 02/24/2015 01:47 PM, Joe Gordon wrote:
 
 
 On Tue, Feb 24, 2015 at 9:47 AM, Sean Dague s...@dague.net wrote:
 
 Towards the end of merging the regression test for the nova
 volume-attach bug - https://review.openstack.org/#/c/157959/ there was a
 discussion around what style the functional tests should take.
 Especially as that had a mix of CLI and API calls in it.
 
 
 
 Thanks for starting this thread.  Once we reach general agreement let's
 put this in an in-tree README for record keeping.

Absolutely.

 Here are my thoughts for why that test ended up that way:
 
 1) All resource setup that is table stakes for the test should be done
 via the API, regardless of whether it's a CLI or API test.
 
 The reason for this is that structured data is returned, which removes
 one possible error in the tests by parsing incorrectly. The API objects
 returned also include things like .delete methods in most cases, which
 means that addCleanup is a little more clean.
 
 
 IMHO the CLI should have an option to return raw JSON back instead of
 pretty tabled results as well.

Um... isn't that just the API calls?

I'm not sure creating a 3rd functional surface is really the answer
here, because we still need to actually test the CLI / pretty table output.

 2) Main logic should touch whichever interface you are trying to test.
 This was demonstrating a CLI regression, so the volume-attach call was
 important to be done over the CLI.
 
 
 Now... here's where theory runs into issues.
 
 #1 - nova boot is table stakes. Under the above guidelines it should be
 called via the API. However --poll is a CLI construct, and using it
 saved writing a custom wait loop here. We should implement that custom
 wait loop down the road and make this an API call.
 
 
 https://github.com/openstack/python-novaclient/blob/master/novaclient/tests/functional/test_instances.py#L116
 
 
 This issue stems from a real shortcoming in the python client. So this
 isn't really an issue with your theory, just an issue with novaclient.

Sure, though, if we support the async form in the API we need to test
it. So while adding poll support is a good UX add, it doesn't actually
mean we get to not test the async versions. It just adds another feature
set we need to test.
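For illustration, the custom wait loop mentioned above could look something like this sketch; `get_status` is a hypothetical callable standing in for a real API call (e.g. fetching the server and reading its status), and the injectable `clock`/`sleep` hooks just make the loop testable:

```python
import time


def wait_for_status(get_status, wanted="ACTIVE",
                    timeout=300, interval=1.0,
                    clock=time.monotonic, sleep=time.sleep):
    """Poll get_status() until it returns `wanted`, the resource goes to
    ERROR, or `timeout` seconds elapse."""
    deadline = clock() + timeout
    while clock() < deadline:
        status = get_status()
        if status == wanted:
            return status
        if status == "ERROR":
            # Fail fast instead of waiting out the full timeout.
            raise RuntimeError("resource went to ERROR while waiting")
        sleep(interval)
    raise TimeoutError("resource never reached %s" % wanted)
```

A test can drive it with a canned sequence of statuses and a no-op sleep, so the polling logic itself gets covered without a live cloud.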

 #2 - the volume create command is table stakes. It should be an API
 call. However, it can't be because the service catalog redirection only
 works at the CLI layer. This is actually also the crux of bug #1423695.
 The same reason the completion cache code failed is the reason we can't
 use the API for that.
 
 
 https://github.com/openstack/python-novaclient/blob/master/novaclient/tests/functional/test_instances.py#L129
 
 
 Issues like this are why I wrote the read only nova CLI tests in the
 first place. Unit testing the python API is doable, but unit testing the
 CLI is a little bit more tricky. So I expect issues like this to come up
 over and over again.

So this is actually an issue with the API code. We have a chunk of code
that's directly callable by consumers, but if they call it, it will error.

 #3 - the cleanup of the volume should have been an API call. See the
 reason for #2.
 
 
 https://github.com/openstack/python-novaclient/blob/master/novaclient/tests/functional/test_instances.py#L131
 
 #4 - the cleanup of the attachment should be an addCleanup via the API.
 See reason for #2 why it's not.
 
 
 https://github.com/openstack/python-novaclient/blob/master/novaclient/tests/functional/test_instances.py#L155
 
 
 I'm happy if there are other theories about how we do these things;
 this being the first functional test in the python-novaclient tree that
 creates and destroys real resources, there isn't an established pattern
 yet. But I think doing all CLI calls in CLI tests is actually really
 cumbersome, especially in the amount of output parsing code needed if
 you are going to set up any complicated resource structure.
 
 
 Here is an alternate theory:
 
 We should have both python API and CLI functional tests. But they should
 be kept separate.
 
 This is to help us make sure both the CLI and python API are actually
 usable interfaces. As the exercise above has shown, they both have
 really major shortcomings.  I think having in-tree functional testing
 covering both the CLI and python API will make it easier for us to
 review new client features in terms of usability.
 
 Here is a very rough proof of concept patch showing the same
 tests: 
 https://review.openstack.org/#/c/157974/2/novaclient/tests/functional/test_volumes.py,cm
 
 No matter how we define this functional testing model, I think it's clear
 novaclient needs a decent amount of work before it can really be usable.

Agreed, but I think we should focus on testing the code we actually have
to prevent regressions. I think functional changes to put polling
throughout are nice UX, but 

Re: [openstack-dev] [stable][all] Revisiting the 6 month release cycle

2015-02-24 Thread Joe Gordon
On Tue, Feb 24, 2015 at 2:59 AM, Daniel P. Berrange berra...@redhat.com
wrote:

 On Mon, Feb 23, 2015 at 04:14:36PM -0800, Joe Gordon wrote:
  Was:
 
 http://lists.openstack.org/pipermail/openstack-dev/2015-February/057578.html
 
  There has been frustration with our current 6 month development cadence.
  This is an attempt to explain those frustrations and propose a very rough
  outline of a possible alternative.
 
 
  Currently we follow a 6 month release cadence, with 3 intermediate
  milestones (6 weeks apart), as explained here:
  https://wiki.openstack.org/wiki/Kilo_Release_Schedule
 
 
  Current issues
 
   - 3 weeks of feature freeze for all projects at the end of each cycle
     (3 out of the 26 weeks feature-blocked)
   - 3 weeks of release candidates. Once a candidate is cut, development
     is open for the next release. While this is good in theory, not much
     work actually starts on the next release.
   - some projects have non-priority feature freezes at Milestone 2
     (so 9 out of 26 weeks restricted in those projects)
   - vicious development circle:
     - big push to land lots of features right before the release
     - a lot of effort is spent getting the release ready
     - after the release people are a little burnt out and take it easy
       until the next summit
     - Blueprints have to be re-discussed and re-approved for the next
       cycle
     - makes it hard to land blueprints early in the cycle, causing the
       bug rush at the end of the cycle; see step 1
     - Makes it hard to plan things across a release
     - This actually destabilizes things right as we go into the
       stabilization period (we actually have great data on this too)
     - Means postponing blueprints that miss a deadline several months
 
 
  Requirements for a new model
 
 - Keep 6 month release cadence. Not everyone is willing to deploy from
 trunk
 - Keep stable branches for at least 6 months. In order to test
 upgrades
 from the last stable branch, we need a stable branch to test against
 - Keep supporting continuous deployment. Some people really want to
 deploy from trunk
 
 
  What We can drop
 
 - While we need to keep releasing a stable branch every 6 months, we
 don't have to do all of our development planning around it. We can
 treat it
 as just another milestone
 
 
  I think a lot of the frustration with our current cadence comes out of
 the
  big stop everything (development, planning etc.), and stabilize the
 release
  process. Which in turn isn't really making things more stable. So I
 propose
  we keep the 6 month release cycle, but change the development cycle from
 a
  6 month one with 3 intermediate milestones to a 6 week one with a
 milestone
  at the end of it.

 You're solving some issues around developer experience by letting
 developers
 iterate on a faster cycle which is something I agree with, but by keeping
 the 6 month release cycle I think you're missing the bigger opportunity
 here.
 Namely, the chance to get the features to the users faster, which is
 ultimately
 the reason why contributors currently push us so hard towards the end of
 the
 release. I think we have to be more ambitious here and actually make the
 release
 cycle itself faster, putting it on a 2 month cycle. More detail about why
 I think
 this is needed is here:


 http://lists.openstack.org/pipermail/openstack-dev/2015-February/057614.html


Nothing like having two concurrent threads on the same thing with very
similar subjects.

[openstack-dev] [all] Re-evaluating the suitability of the 6 month release
cycle

vs

[openstack-dev][stable][all] Revisiting the 6 month release cycle

I'll respond to your proposal, on the other thread.



  What this actually means:
 
 - Stop approving blueprints for specific stable releases, instead just
 approve them and target them to milestones.
- Milestones stop being Kilo-1, Kilo-2, Kilo-3 etc. and just
become 1,2,3,4,5,6,7,8,9 etc.
- If something misses what was previously known as Kilo-3, it only has to
wait for what is now milestone 4.
 - Development focuses on milestones only. So 6 week cycle with say 1
 week of stabilization, finish things up before each milestone
 - Process of cutting a stable branch (planning, making the branch,
 doing
 release candidates, testing etc.) is done by a dedicated stable branch
 team. And it should be done based on a specific milestone
 - Goal: Change the default development planning mode to ignore stable
 branches, and allow us to think of things in terms of the number of
 milestones needed, not whether something will make the stable branch or not
 
 
  Gotchas, questions etc:
 
 - Some developers will still care about getting a feature into a
 specific stable release, so we may still get a small rush for the
 milestone
 before each 

Re: [openstack-dev] [nova] novaclient functional test guidelines

2015-02-24 Thread Ed Leafe
On Feb 24, 2015, at 1:49 PM, Sean Dague s...@dague.net wrote:

 IMHO the CLI should have an option to returned raw JSON back instead of
 pretty tabled results as well.
 
 Um... isn't that just the API calls?
 
 I'm not sure creating a 3rd functional surface is really the answer
 here, because we still need to actually test the CLI / pretty table output.

The python-openstacksdk project was originally envisioned to wrap the API calls 
and return usable Python objects. The nova client CLI (or any other CLI, for 
that matter) would then just provide the command line input parsing and output 
presentation. It's been a while since I was involved with that project, but it 
seems that decoupling the command line interface from the Python API wrapper 
would make testing much, much easier.
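
The split Ed describes could look roughly like this -- a minimal sketch with all names invented for illustration; this is not the actual python-openstacksdk or novaclient code:

```python
import argparse
import json
from dataclasses import dataclass


@dataclass
class Server:
    id: str
    name: str
    status: str


def list_servers():
    # API layer: returns usable Python objects (stubbed data here,
    # where the real SDK would call the REST API).
    return [Server("1", "web", "ACTIVE"), Server("2", "db", "BUILD")]


def main(argv=None):
    # CLI layer: only argument parsing and output presentation.
    parser = argparse.ArgumentParser(prog="cloud")
    parser.add_argument("--json", action="store_true",
                        help="emit raw JSON instead of a pretty table")
    args = parser.parse_args(argv)
    servers = list_servers()
    if args.json:
        print(json.dumps([vars(s) for s in servers]))
    else:
        for s in servers:
            print(f"{s.id}  {s.name}  {s.status}")
```

With this shape, functional tests can exercise list_servers() directly and keep CLI tests focused on parsing and formatting -- and Sean's "raw JSON output" option falls out almost for free.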


-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Infra cloud: infra running a cloud for nodepool

2015-02-24 Thread James E. Blair
A group of folks from HP is interested in starting an effort to run a
cloud as part of the Infrastructure program with the purpose of
providing resources to nodepool for OpenStack testing.  HP is supplying
two racks of machines, and we will operate each as an independent cloud.
I think this is a really good idea, and will do a lot for OpenStack.

Here's what we would get out of it:

1) More test resources.  The primary goal of this cloud will be to
provide more instances to nodepool.  This would extend our pool to
include a third provider meaning that we are more resilient to service
disruptions, and increase our aggregate capacity meaning we can perform
more testing more quickly.  It's hard to say for certain until we have
something spun up that we can benchmark, but we are hoping for somewhere
between an additional 50% to 100% of our current capacity.

2) Closing the loop between OpenStack developers and ops.  This cloud
will be deployed as often as we are able (perhaps daily, perhaps less
often, depending on technology) meaning that if it is not behaving in a
way developers like, they can fix it fairly quickly.

3) A fully open deployment.  The infra team already runs a large
logstash and elasticsearch system for finding issues in devstack runs.
We will deploy the same technology (and perhaps more) to make sure that
anyone who wants to inspect the operational logs from the running
production cloud is able to do so.  We can even run the same
elastic-recheck queries to see if known bugs are visible in production.
The cloud will be deployed using the same tools and processes as the
rest of the project infrastructure, meaning anyone can edit the modules
that deploy the cloud to make changes.

How is this different from the TripleO cloud?

The primary goal of the TripleO cloud is to provide test infrastructure
so that the TripleO project can run tests that require real hardware and
complex environments.  The primary purpose of the infra cloud will be to
run a production service that will stand alongside other cloud providers
to supply virtual machines to nodepool.

What about the infra team's aversion to real hardware?

It's true that all of our current resources are virtual, and this would
be adding the first real, bare-metal, machines to the infra project.
However, there are a number of reasons I feel we're ready to take that
step now:

* This cloud will stand alongside two others to provide resources to
  nodepool.  If it completely fails, infra will continue to operate; so
  we don't need to be overly concerned with uptime and being on-call,
  etc.

* The deployment and operation of the cloud will use the same technology
  and processes as the infra project currently uses, so there should be
  minimal barriers for existing team members.

* A bunch of new people will be joining the team to help with this.  We
  expect them to become fully integrated with the rest of infra, so that
  they are able to help out in other areas and the whole team expands
  its collective capacity and expertise.

If this works well, it may become a template for incorporating other
hardware contributions into the system.

Next steps:

We've started the process of identifying the steps to make this happen,
as well as answering some deployment questions (specifics about
technology, topology, etc).  There is a StoryBoard story for the effort:

  https://storyboard.openstack.org/#!/story/2000175

And some notes that we took at a recent meeting to bootstrap the effort:

  https://etherpad.openstack.org/p/InfraCloudBootcamp

I think one of the next steps is to actually write all that up and push
it up as a change to the system-config documentation.  Once we're
certain we agree on all of that, it should be safe to divide up many of
the remaining tasks.

-Jim



Re: [openstack-dev] [stable][all] Revisiting the 6 month release cycle

2015-02-24 Thread Joe Gordon
On Tue, Feb 24, 2015 at 10:00 AM, Johannes Erdfelt johan...@erdfelt.com
wrote:

 On Tue, Feb 24, 2015, Thierry Carrez thie...@openstack.org wrote:
  Agree on the pain of maintaining milestone plans though, which is why I
  propose we get rid of most of it in Liberty. That will actually be
  discussed at the cross-project meeting today:
 
 
 https://wiki.openstack.org/wiki/Release_Cycle_Management/Liberty_Tracking

 I'm happy to see this.


++



 Assignees may target their blueprint to a future milestone, as an
 indication of when they intend to land it (not mandatory)

 That seems useless to me. I have no control over when things land. I can
 only control when my code is put up for review.

 Recently, I have spent a lot more time waiting on reviews than I have
 spent writing the actual code.


I think this is a symptom of a much deeper problem. And adjusting the
release cadence won't make a big impact on this.



 JE





Re: [openstack-dev] [stable][all] Revisiting the 6 month release cycle

2015-02-24 Thread Ed Leafe
On Feb 24, 2015, at 1:50 PM, Joe Gordon joe.gord...@gmail.com wrote:

 I think the release candidate
 period is the only thing that makes your code drops actually usable.
 It's the only moment in the cycle where integrators test. It's the only
 moment in the cycle where developers work on bugs they did not file
 themselves, but focus on a project-wide priority list of release
 blockers. If you remove that period, then nobody will ever work on
 release blockers that do not directly affect them. Shorten that period
 to one week, and no integrator will have the time to run a proper QA
 program to detect those release blockers.
 
 I still think the 3 week RC candidate cycle needs to happen, the difference 
 is it would be done by stable maintenance. And I agree, the RC candidate 
 period is one of the few moments where developers work on bugs they did not 
 file themselves. So I am not sure how this would actually work.  Perhaps the 
 answer is we have deeper issues if we don't want to fix bugs until the last 
 minute.

I like the notion that there isn't an overall release that all development is 
tied to, but that at some more or less regular interval, a new stable release 
is cut, and intense integration testing is done on that, with bug fixes as 
needed. But how to get developers who are intent on coding their new features 
to hold off and work on fixing bugs identified by the stable testing is a big 
question mark to me.


-- Ed Leafe









[openstack-dev] python-ceilometerclient 1.0.13 broke the gate

2015-02-24 Thread Matt Riedemann

https://bugs.launchpad.net/python-ceilometerclient/+bug/1425262

mtreinish adjusted the cap on stable/icehouse here:

https://review.openstack.org/#/c/158842/

jogo now has a change to explicitly pin all clients in stable/icehouse 
to the version currently gated on:


https://review.openstack.org/#/c/158846/
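
Both fixes rely on pip's version specifiers: an upper bound such as <1.0.13 keeps the gate on the last known-good release. A minimal illustration of the comparison a cap performs -- only 1.0.13 comes from the bug; the >=1.0.6 lower bound and the helper itself are invented for the example, not real pip code:

```python
def parse(version):
    """Parse a simple 'X.Y.Z' version string into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))


def satisfies(version, lower, upper):
    """True if lower <= version < upper -- the shape of a requirement
    line like 'python-ceilometerclient>=1.0.6,<1.0.13'."""
    return parse(lower) <= parse(version) < parse(upper)


assert satisfies("1.0.12", "1.0.6", "1.0.13")      # last good release: allowed
assert not satisfies("1.0.13", "1.0.6", "1.0.13")  # breaking release: excluded
```

Pinning exactly (==) as in the second review is the stricter variant: the gate then only ever tests the one version it has already validated.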

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-24 Thread Joe Gordon
On Tue, Feb 24, 2015 at 6:57 AM, Daniel P. Berrange berra...@redhat.com
wrote:

 On Tue, Feb 24, 2015 at 08:50:45AM -0500, Sean Dague wrote:
  On 02/24/2015 07:48 AM, Russell Bryant wrote:
   On 02/24/2015 12:54 PM, Daniel P. Berrange wrote:
   On Tue, Feb 24, 2015 at 11:48:29AM +, Chris Dent wrote:
   On Tue, 24 Feb 2015, Daniel P. Berrange wrote:
  
   need to do more work. If this is so, then I don't think this is a
 blocker,
   it is just a sign that the project needs to focus on providing more
 resources
   to the teams impacted in that way.
  
   What are the mechanisms whereby the project provides more resources
   to teams?
  
   The technical committee and / or foundation board can highlight the
 need
   for investment of resources in critical areas of the project, to
 either
   the community members or vendors involved. As an example, this was
 done
   successfully recently to increase involvement in maintaining the EC2
   API support.  There are plenty of vendors involved in OpenStack which
   have the ability to target resources, if they can learn where those
   resources are best spent.
  
   Indeed ... and if horizontal teams are the ones hit the most by the
   extra work, each project should help with that burden.  For example,
   projects may need to take their responsibility for documentation more
   seriously and require documentation with features (content at least,
 not
   necessarily integration into the proper documentation deliverables)
   instead of assuming it magically gets written later.
 
  Right, and I think this actually hits at the most important part of the
  discussion. The question of:
 
  1) what would we need to do to make different release cadences viable?
  2) are those good things to do regardless of release cadence?
 
  The horizontal teams really can't function at different cadences. It
  completely breaks any flow and planning at turns them even further into
  firefighting because now everyone has crunch time at different times,
  and the horizontal efforts are required to just play catch up. I know
  what that future looks like, the horizontal teams dry up because no one
  wants that job.
 
  Ok, so that being said, what we'd need to do is have horizontal teams
  move to more of a self supporting model. So that the relevant content
  for a project (docs, full stack tests, requirements, etc) all live
  within that project itself, and aren't centrally synchronized.
  Installation of projects needs to be fully isolated from each other so
  that upgrading project A can be done independent of project B, as their
  release cadences might all be disparate. Basically, every OpenStack
  project needs to reabsorb the cross project efforts they've externalized.
 
  Then if project A decided to move off the coupled release, it's impact
  to the rest would be minimal. These are robust components that stand on
  their own, and work well with robust other components.
 
  Which... is basically the point of the big tent / new project governance
  model. Decompose OpenStack from a giant blob of goo into Robust elements
  that are more loosely coupled (so independently robust, and robust in
  their interaction with others). Move the horizontal teams into
  infrastructure vs. content roles, have projects own more of this content
  themselves.
 
  But it is a long hard process. Devstack external plugins were implemented
  to support this kind of model, but having walked a bunch of different
  teams through this (at all skill levels) there ends up being a lot of
  work to get this right, and a lot of rethinking by teams that assumed
  their interaction with full stack testing is something they'd get to
  contribute once and have someone else maintain (instead of something
  they now need dedicated watchful eye on).
 
  The number of full stack configurations immediately grows beyond anything
  we can feasibly test, so it requires more robust project testing to ensure
  every exposed interface is more robust (i.e. the testing in pyramids
  from https://review.openstack.org/#/c/150653/).
 
  And, I think the answer to #2 is: yes, this just makes it all better.
 
  So, honestly, I'm massively supportive of the end game. I've been
  carving out the bits of this I can for the last six months. But I think
  the way we get there is to actually get the refactoring of the
  horizontal efforts first.

 I pretty much fully agree that refactoring the horizontal efforts to
 distribute responsbility across the individual projects is the way
 forward. I don't think it needs to be a pre-requisite step for changing
 the release cycle. We can do both in parallel if we put our minds to
 it.

 My biggest fear is that we just keep debating problems and alternatives,
 and remain too afraid of theoretical problems to actually take the
 risk of effecting meaningful improvements in the operation of the project.


I tend to agree with you on this.  I don't think completing the refactoring
of the horizontal 

Re: [openstack-dev] [nova] Outcome of the nova FFE meeting for Kilo

2015-02-24 Thread Daniel P. Berrange
On Mon, Feb 23, 2015 at 03:24:01PM -0500, Jay Pipes wrote:
 Here's another thought: is the big-bang integrated 6-month fixed release
 cycle useful any more? Can we talk about using more of a moving train model
 that doesn't have these long freeze cycles? At least for some of the
 projects, I think that would ease some of the minds of folks who are
 dismayed at having to wait yet another 6-9 months to see their code in the
 Nova release.

I entirely agree - the 6 month cycle creates massive pain for contributors
who think openstack is supposed to be a fast moving agile project. I have
co-incidentally just proposed we switch to a 2 month cycle, since we pretty
much have everything in place to achieve that with little fuss, since we
already do 3 milestone releases during that 6 month cycle.

http://lists.openstack.org/pipermail/openstack-dev/2015-February/057614.html

Let's continue the discussion in the separate thread, as many people probably
ignore this thread due to its [nova] tag and subject.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova] Outcome of the nova FFE meeting for Kilo

2015-02-24 Thread Duncan Thomas
Agreed. It causes two problems:

1) 9 month delays in getting code into a release
2) Some projects consider something to be breakable, from a back
compatibility point of view, until it has made a formal release, which
means anybody cutting releases from anything other than final/stable is
facing the possibility of tenant facing API breakage. The attitude to this
seems to differ between projects and indeed PTLs within the same project,
but is quite worrying for distributors who want to release something more
cutting edge than final/stable.

Is there any evidence that our long freeze significantly increases
stability or indeed testing? Or do people just start working on their
features for the next release?

On 23 February 2015 at 22:45, Dan Smith d...@danplanet.com wrote:

  Seriously, what is the point of 6-month releases again? We are a
  free-form open source set of projects, with a lot of intelligent
  engineers. Why are we stuck using an outdated release model?

 I've been wondering this myself for quite a while now. I'm really
 interested to hear what things would look like in a no-release model.
 I'm sure it would be initially met with a lot of resistance, but I think
 that in the end, it makes more sense to move to that sort of model and
 let vendors/deployers more flexibly decide when to roll out new stuff
 based on what has changed and what they value.

 --Dan






-- 
Duncan Thomas


Re: [openstack-dev] [Murano] [QA] Automated tests for all Murano applications

2015-02-24 Thread Timur Nurlygayanov
Hi Boris,

this idea is about JSON files for each Murano application, which can be
used in Murano functional tests or in Rally tests, with Rally jobs for each
commit to the Murano engine and Murano applications repositories.

I like the idea to configure Rally jobs for Murano repositories and then
add JSON files for all Murano applications.

Thank you!
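
As an illustration of what such a per-application file might contain -- the field names below are hypothetical, since the blueprint deliberately leaves the schema open for discussion:

```python
import json

# Hypothetical per-application deployment descriptor, shown as the
# Python dict a test harness would get after loading the JSON file.
# None of these field names are part of any real Murano schema.
DESCRIPTOR = {
    "application": "io.murano.apps.apache.ApacheHttpServer",
    "environment": {"flavor": "m1.small", "image": "ubuntu-14.04"},
    "expected_status": "ready",
}


def validate(descriptor):
    """Check a descriptor carries the fields a test harness would need
    before attempting a deployment."""
    required = {"application", "environment", "expected_status"}
    missing = required - descriptor.keys()
    if missing:
        raise ValueError("descriptor missing fields: %s" % sorted(missing))
    return True


# Round-trip through JSON, as a gate job reading the file would.
assert validate(json.loads(json.dumps(DESCRIPTOR)))
```

A CI job could then iterate over every such file in an application repository, deploy each application, and compare the resulting status against expected_status.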

On Tue, Feb 24, 2015 at 1:31 PM, Boris Pavlovic bo...@pavlovic.me wrote:

 Timur,

 We are on the very last steps of implementing the base for benchmarking Murano
 in Rally, plus we are implementing benchmarks that can test exactly this.

 Maybe it makes sense just to use it in gates and avoid implementing
 anything else?
   1.  https://review.openstack.org/#/c/137650/
   2. https://review.openstack.org/#/c/137661/

 Best regards,
 Boris Pavlovic

 On Tue, Feb 24, 2015 at 12:18 PM, Timur Nurlygayanov 
 tnurlygaya...@mirantis.com wrote:

 Hi all,

 I have an idea about the automated testing of all Murano applications
 with the automated tests and continuous integration processes,
 here are the links:

 blueprint:
 https://blueprints.launchpad.net/murano-applications/+spec/deployment-json-file-for-each-application
 and etherpad: https://etherpad.openstack.org/p/auto-test-murano-app

 The implementation of the idea is simple and very useful for Murano
 applications testing.

 Please, feel free to comment, edit etherpad and blueprint.
 And, of course, I will be happy if someone will implement it :)

 Thank you!

 --

 Timur,
 Senior QA Engineer
 OpenStack Projects
 Mirantis Inc








-- 

Timur,
Senior QA Engineer
OpenStack Projects
Mirantis Inc


Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-24 Thread Russell Bryant
On 02/24/2015 01:28 PM, Kashyap Chamarthy wrote:
 On Tue, Feb 24, 2015 at 11:54:31AM +, Daniel P. Berrange wrote:
 On Tue, Feb 24, 2015 at 11:48:29AM +, Chris Dent wrote:
 On Tue, 24 Feb 2015, Daniel P. Berrange wrote:

 need to do more work. If this is so, then I don't think this is a blocker,
 it is just a sign that the project needs to focus on providing more 
 resources
 to the teams impacted in that way.

 What are the mechanisms whereby the project provides more resources
 to teams?
 
 Along with the below, if push comes to shove, OpenStack Foundation could
 probably try a milder variant (obviously, not all activities can be
 categorized as 'critical path') of the Linux Foundation's Core
 Infrastructure Initiative[1] to fund certain project
 activities in need.

The OpenStack Foundation effectively already does this.  In particular,
the Foundation is helping fund critical horizontal efforts like release
management, infrastructure, and community management.

-- 
Russell Bryant



Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-24 Thread Kashyap Chamarthy
On Tue, Feb 24, 2015 at 11:54:31AM +, Daniel P. Berrange wrote:
 On Tue, Feb 24, 2015 at 11:48:29AM +, Chris Dent wrote:
  On Tue, 24 Feb 2015, Daniel P. Berrange wrote:
  
  need to do more work. If this is so, then I don't think this is a blocker,
  it is just a sign that the project needs to focus on providing more 
  resources
  to the teams impacted in that way.
  
  What are the mechanisms whereby the project provides more resources
  to teams?

Along with the below, if push comes to shove, OpenStack Foundation could
probably try a milder variant (obviously, not all activities can be
categorized as 'critical path') of the Linux Foundation's Core
Infrastructure Initiative[1] to fund certain project
activities in need.
 
 The technical committee and / or foundation board can highlight the
 need for investment of resources in critical areas of the project, to
 either the community members or vendors involved. As an example, this
 was done successfully recently to increase involvement in maintaining
 the EC2 API support.  There are plenty of vendors involved in
 OpenStack which have the ability to target resources, if they can
 learn where those resources are best spent.
 

[1] http://www.linuxfoundation.org/programs/core-infrastructure-initiative

-- 
/kashyap



Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-24 Thread Flavio Percoco

On 24/02/15 11:02 +, Daniel P. Berrange wrote:

On Tue, Feb 24, 2015 at 10:44:57AM +, Chris Dent wrote:

On Tue, 24 Feb 2015, Daniel P. Berrange wrote:

I was writing this mail for the past few days, but the nova thread
today prompted me to finish it off & send it :-)

Thanks for doing this. I think you're probably right that the current
release cycle has many negative impacts on the development process and
deserve at least some critical thinking if not outright changing.

Thanks especially for listing some expected questions (with answers).

One additional question I'd like to see answered in some depth is:
Why have unified release schedules? That is, why should Nova and Glance
or anything else release on the same schedule as any other
OpenStack-related project?

Disentangling the release cycles might lead to stronger encapsulation
of, and stronger contracts between, the projects. It might also lead to
a total mess.


For peripheral projects I don't think co-ordinate release cycle is needed,
but for the core projects I think it is helpful in general to have some
co-ordination of releases. It allows marketing to more effectively promote
the new project releases, it helps when landing features that span across
projects to know they'll be available to users at the same time in general
and minimize burden on devs & users to remember many different dates. It
is hard enough remembering the dates for our coordinated release cycle,
let alone dates for 30 different project cycles. IME predictable dates is
a really important & useful thing to have from a planning POV. This is why
I suggested, we do a 2 month cycle, with a strict date of 1st of the month
in Feb, Apr, Jun, Aug, Oct, Dec.
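
The fixed even-month scheme proposed here is simple enough to compute mechanically; a small sketch:

```python
import datetime

# Proposed 2-month cadence: releases on the 1st of every even month.
RELEASE_MONTHS = (2, 4, 6, 8, 10, 12)  # Feb, Apr, Jun, Aug, Oct, Dec


def next_release(after):
    """Return the first release date strictly after `after`, under the
    proposed 1st-of-even-month cadence."""
    for year in (after.year, after.year + 1):
        for month in RELEASE_MONTHS:
            candidate = datetime.date(year, month, 1)
            if candidate > after:
                return candidate


# From the date of this thread, the next release would be 2015-04-01.
print(next_release(datetime.date(2015, 2, 24)))
```

The point is predictability: no one needs to consult a release schedule wiki to know when the next release lands.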


To this I'd also add that bug fixing is way easier when you have
aligned releases for projects that are expected to be deployed
together. It's easier to know what the impact of a change/bug is
throughout the infrastructure.

Flavio



Regards,
Daniel
--
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



--
@flaper87
Flavio Percoco




Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-24 Thread Daniel P. Berrange
On Tue, Feb 24, 2015 at 11:48:29AM +, Chris Dent wrote:
 On Tue, 24 Feb 2015, Daniel P. Berrange wrote:
 
 need to do more work. If this is so, then I don't think this is a blocker,
 it is just a sign that the project needs to focus on providing more resources
 to the teams impacted in that way.
 
 What are the mechanisms whereby the project provides more resources
 to teams?

The technical committee and / or foundation board can highlight the need
for investment of resources in critical areas of the project, to either
the community members or vendors involved. As an example, this was done
successfully recently to increase involvement in maintaining the EC2
API support.  There are plenty of vendors involved in OpenStack which
have the ability to target resources, if they can learn where those
resources are best spent.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-24 Thread Chris Dent

On Tue, 24 Feb 2015, Daniel P. Berrange wrote:


I was writing this mail for the past few days, but the nova thread
today prompted me to finish it off & send it :-)


Thanks for doing this. I think you're probably right that the current
release cycle has many negative impacts on the development process and
deserve at least some critical thinking if not outright changing.

Thanks especially for listing some expected questions (with answers).

One additional question I'd like to see answered in some depth is:
Why have unified release schedules? That is, why should Nova and Glance
or anything else release on the same schedule as any other
OpenStack-related project?

Disentangling the release cycles might lead to stronger encapsulation
of, and stronger contracts between, the projects. It might also lead to
a total mess.

Thanks.
--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-24 Thread Chris Dent

On Tue, 24 Feb 2015, Daniel P. Berrange wrote:


need to do more work. If this is so, then I don't think this is a blocker,
it is just a sign that the project needs to focus on providing more resources
to the teams impacted in that way.


What are the mechanisms whereby the project provides more resources
to teams?

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [stable][all] Revisiting the 6 month release cycle

2015-02-24 Thread Daniel P. Berrange
On Mon, Feb 23, 2015 at 04:14:36PM -0800, Joe Gordon wrote:
 Was:
 http://lists.openstack.org/pipermail/openstack-dev/2015-February/057578.html
 
 There has been frustration with our current 6 month development cadence.
 This is an attempt to explain those frustrations and propose a very rough
 outline of a possible alternative.
 
 
 Currently we follow a 6 month release cadence, with 3 intermediate
 milestones (6 weeks apart), as explained here:
 https://wiki.openstack.org/wiki/Kilo_Release_Schedule
 
 
 Current issues
 
- 3 weeks of feature freeze for all projects at the end of each cycle (3
out of the 26 weeks feature blocked)
- 3 weeks of release candidates. Once a candidate is cut development is
open for next release. While this is good in theory, not much work actually
starts on next release.
- some projects have non-priority feature freezes at Milestone 2 (so
9 out of 26 weeks restricted in those projects)
- vicious development circle:
  - big push right to land lots of features right before the release
  - a lot of effort is spent getting the release ready
  - after the release people are a little burnt out and take it easy
  until the next summit
  - Blueprints have to be re-discussed and re-approved for the next
  cycle
   - makes it hard to land blueprints early in the cycle, causing the
   bug rush at the end of the cycle, see step 1
   - Makes it hard to plan things across a release
   - This actually destabilizes things right as we go into the
   stabilization period (We actually have great data on this too)
   - Means postponing blueprints that miss a deadline several months
 
 
 Requirements for a new model
 
- Keep 6 month release cadence. Not everyone is willing to deploy from
trunk
- Keep stable branches for at least 6 months. In order to test upgrades
from the last stable branch, we need a stable branch to test against
- Keep supporting continuous deployment. Some people really want to
deploy from trunk
 
 
 What We can drop
 
- While we need to keep releasing a stable branch every 6 months, we
don't have to do all of our development planning around it. We can treat it
as just another milestone
 
 
 I think a lot of the frustration with our current cadence comes out of the
 big stop everything (development, planning etc.), and stabilize the release
 process. Which in turn isn't really making things more stable. So I propose
 we keep the 6 month release cycle, but change the development cycle from a
 6 month one with 3 intermediate milestones to a 6 week one with a milestone
 at the end of it.

You're solving some issues around developer experience by letting developers
iterate on a faster cycle which is something I agree with, but by keeping
the 6 month release cycle I think you're missing the bigger opportunity here.
Namely, the chance to get the features to the users faster, which is ultimately
the reason why contributors currently push us so hard towards the end of the
release. I think we have to be more ambitious here and actually make the release
cycle itself faster, putting it on a 2 month cycle. More detail about why I think
this is needed is here:

  http://lists.openstack.org/pipermail/openstack-dev/2015-February/057614.html

 What this actually means:
 
- Stop approving blueprints for specific stable releases, instead just
approve them and target them to milestones.
   - Milestones stop being called Kilo-1, Kilo-2, Kilo-3 etc. and just
   become 1, 2, 3, 4, 5, 6, 7, 8, 9 etc.
   - If something misses what was previously known as Kilo-3, it has to
   wait a week for milestone 4.
- Development focuses on milestones only. So 6 week cycle with say 1
week of stabilization, finish things up before each milestone
- Process of cutting a stable branch (planning, making the branch, doing
release candidates, testing etc.) is done by a dedicated stable branch
team. And it should be done based on a specific milestone
- Goal: Change the default development planning mode to ignore stable
branches, and allow us to think of things in terms of the number of
milestones needed, not whether it will make the stable branch
 
 
 Gotchas, questions etc:
 
- Some developers will still care about getting a feature into a
specific stable release, so we may still get a small rush for the milestone
before each stable branch
- Requires significantly more work from the stable maintenance team

I think we can shorten the release cycle to 2 months without impacting the
stable team to any great extent. We simply don't have to provide stable branches
for every single release - compare with Linux, only a subset of major releases
get stable branches & that generally works pretty well.

- Naming the stable branches becomes less fun, as we refer to the stable
branches less
  

[openstack-dev] [cinder] Resuming of workflows/tasks

2015-02-24 Thread Dulko, Michal
Hi all,

I was working on a spec[1] and prototype[2] to make Cinder able to resume 
workflows in case of server or service failure. The problem of lost requests and 
resources left in unresolved states in case of failure was signaled at the 
Paris Summit[3].

What I was able to prototype was resuming running tasks locally after service 
restart using the persistence API provided by TaskFlow. However, the core team 
agreed that we should aim at resuming workflows globally, even by other service 
instances (which I think is a good decision).

There are a few major problems blocking this approach:

1. Need of a distributed lock to avoid the same task being resumed by two instances 
of a service. Do we need tooz to do that or is there any other solution?
2. Are we going to step away from using TaskFlow? Such an idea came up at the 
mid-cycle meetup; what's the status of it? Without TaskFlow's persistence, 
implementing task resumption would be a lot more difficult.
3. In case of the cinder-api service we're unable to monitor its state using 
the servicegroup API. Do we have alternatives here to decide whether a particular 
workflow being processed by cinder-api is abandoned?

As this topic is deferred to the Liberty release, I want to start the discussion 
here, to be continued at the summit.

[1] https://review.openstack.org/#/c/147879/
[2] https://review.openstack.org/#/c/152200/
[3] https://etherpad.openstack.org/p/kilo-crossproject-ha-integration



Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-24 Thread Russell Bryant
On 02/24/2015 12:54 PM, Daniel P. Berrange wrote:
 On Tue, Feb 24, 2015 at 11:48:29AM +, Chris Dent wrote:
 On Tue, 24 Feb 2015, Daniel P. Berrange wrote:

 need to do more work. If this is so, then I don't think this is a blocker,
 it is just a sign that the project needs to focus on providing more 
 resources
 to the teams impacted in that way.

 What are the mechanisms whereby the project provides more resources
 to teams?
 
 The technical committee and / or foundation board can highlight the need
 for investment of resources in critical areas of the project, to either
 the community members or vendors involved. As an example, this was done
 successfully recently to increase involvement in maintaining the EC2
 API support.  There are plenty of vendors involved in OpenStack which
 have the ability to target resources, if they can learn where those
 resources are best spent.

Indeed ... and if horizontal teams are the ones hit the most by the
extra work, each project should help with that burden.  For example,
projects may need to take their responsibility for documentation more
seriously and require documentation with features (content at least, not
necessarily integration into the proper documentation deliverables)
instead of assuming it magically gets written later.

-- 
Russell Bryant



Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-24 Thread Daniel P. Berrange
On Tue, Feb 24, 2015 at 08:50:45AM -0500, Sean Dague wrote:
 On 02/24/2015 07:48 AM, Russell Bryant wrote:
  On 02/24/2015 12:54 PM, Daniel P. Berrange wrote:
  On Tue, Feb 24, 2015 at 11:48:29AM +, Chris Dent wrote:
  On Tue, 24 Feb 2015, Daniel P. Berrange wrote:
 
  need to do more work. If this is so, then I don't think this is a 
  blocker,
  it is just a sign that the project needs to focus on providing more 
  resources
  to the teams impacted in that way.
 
  What are the mechanisms whereby the project provides more resources
  to teams?
 
  The technical committee and / or foundation board can highlight the need
  for investment of resources in critical areas of the project, to either
  the community members or vendors involved. As an example, this was done
  successfully recently to increase involvement in maintaining the EC2
  API support.  There are plenty of vendors involved in OpenStack which
  have the ability to target resources, if they can learn where those
  resources are best spent.
  
  Indeed ... and if horizontal teams are the ones hit the most by the
  extra work, each project should help with that burden.  For example,
  projects may need to take their responsibility for documentation more
  seriously and require documentation with features (content at least, not
  necessarily integration into the proper documentation deliverables)
  instead of assuming it magically gets written later.
 
 Right, and I think this actually hits at the most important part of the
 discussion. The question of:
 
 1) what would we need to do to make different release cadences viable?
 2) are those good things to do regardless of release cadence?
 
 The horizontal teams really can't function at different cadences. It
 completely breaks any flow and planning and turns them even further into
 firefighting because now everyone has crunch time at different times,
 and the horizontal efforts are required to just play catch up. I know
 what that future looks like, the horizontal teams dry up because no one
 wants that job.
 
 Ok, so that being said, what we'd need to do is have horizontal teams
 move to more of a self supporting model. So that the relevant content
 for a project (docs, full stack tests, requirements, etc) all live
 within that project itself, and aren't centrally synchronized.
 Installation of projects needs to be fully isolated from each other so
 that upgrading project A can be done independent of project B, as their
 release cadences might all be disparate. Basically, every OpenStack
 project needs to reabsorb the cross-project efforts they've externalized.
 
 Then if project A decided to move off the coupled release, it's impact
 to the rest would be minimal. These are robust components that stand on
 their own, and work well with robust other components.
 
 Which... is basically the point of the big tent / new project governance
 model. Decompose OpenStack from a giant blob of goo into Robust elements
 that are more loosely coupled (so independently robust, and robust in
 their interaction with others). Move the horizontal teams into
 infrastructure vs. content roles, have projects own more of this content
 themselves.
 
 But it is a long hard process. Devstack external plugins were implemented
 to support this kind of model, but having walked a bunch of different
 teams through this (at all skill levels) there ends up being a lot of
 work to get this right, and a lot of rethinking by teams that assumed
 their interaction with full stack testing is something they'd get to
 contribute once and have someone else maintain (instead of something
 they now need dedicated watchful eye on).
 
 The number of full stack configurations immediately goes beyond anywhere
 near testable, so it requires more robust project testing to ensure
 every exposed interface is more robust (i.e. the testing in pyramids
 from https://review.openstack.org/#/c/150653/).
 
 And, I think the answer to #2 is: yes, this just makes it all better.
 
 So, honestly, I'm massively supportive of the end game. I've been
 carving out the bits of this I can for the last six months. But I think
 the way we get there is to actually get the refactoring of the
 horizontal efforts first.

I pretty much fully agree that refactoring the horizontal efforts to
distribute responsibility across the individual projects is the way
forward. I don't think it needs to be a pre-requisite step for changing
the release cycle. We can do both in parallel if we put our minds to
it.

My biggest fear is that we just keeping debating problems and alternatives,
but continue to be too afraid of theoretical problems, to actually take the
risk of effecting meaningful improvements in the operation of the project.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ 

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-24 Thread Doug Hellmann


On Mon, Feb 23, 2015, at 06:31 PM, Joe Gordon wrote:
 On Mon, Feb 23, 2015 at 11:04 AM, Doug Hellmann d...@doughellmann.com
 wrote:
 
 
 
  On Mon, Feb 23, 2015, at 12:26 PM, Joe Gordon wrote:
   On Mon, Feb 23, 2015 at 8:49 AM, Ihar Hrachyshka ihrac...@redhat.com
   wrote:
  
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
   
On 02/20/2015 07:16 PM, Joshua Harlow wrote:
 Sean Dague wrote:
 On 02/20/2015 12:26 AM, Adam Gandelman wrote:
 Its more than just the naming.  In the original proposal,
 requirements.txt is the compiled list of all pinned deps
 (direct and transitive), while
 requirements.in reflects what people
 will actually use.  Whatever is in requirements.txt affects the
 egg's requires.txt. Instead, we can keep requirements.txt
 unchanged and have it still be the canonical list of
 dependencies, while
 reqiurements.out/requirements.gate/requirements.whatever is an
 upstream utility we produce and use to keep things sane on our
 slaves.

 Maybe all we need is:

 * update the existing post-merge job on the requirements repo
 to produce a requirements.txt (as it does now) as well the
 compiled version.

 * modify devstack in some way with a toggle to have it process
 dependencies from the compiled version when necessary

 I'm not sure how the second bit jives with the existing
 devstack installation code, specifically with the libraries
 from git-or-master but we can probably add something to warm
 the system with dependencies from the compiled version prior to
 calling pip/setup.py/etc

 It sounds like you are suggesting we take the tool we use to
 ensure that all of OpenStack is installable together in a unified
 way, and change it's installation so that it doesn't do that any
 more.

 Which I'm fine with.

 But if we are doing that we should just whole hog give up on the
 idea that OpenStack can be run all together in a single
 environment, and just double down on the devstack venv work
 instead.

 It'd be interesting to see what a distribution (canonical,
 redhat...) would think about this movement. I know yahoo! has been
 looking into it for similar reasons (but we are more flexible than
 I think a packager such as canonical/redhat/debian/... would/could
 be). With a move to venv's that seems like it would just offload
 the work to find the set of dependencies that work together (in a
 single-install) to packagers instead.

 Is that ok/desired at this point?

   
Honestly, I failed to track all the different proposals. Just saying
from packager perspective: we absolutely rely on requirements.txt not
being a list of hardcoded values from pip freeze, but presents us a
reasonable freedom in choosing versions we want to run in packaged
products.
   
   
   in short the current proposal for stable branches is:
  
   keep requirements.txt as is, except maybe put some upper bounds on the
   requirements.
  
   Add requirements.gate to specify the *exact* versions we are gating
   against
   (this would be a full list including all transitive dependencies).
 
  The gate syncs requirements into projects before installing them. Would
  we change the sync script for the gate to work from the
  requirements.gate file, or keep it pulling from requirements.txt?
 
 
 We would only add requirements.gate for stable branches (because we don't
want to cap/pin things on master). So I think the answer is the sync script
 should work for both.  I am not sure on the exact mechanics of how this
 would work. Whoever ends up driving this bit of work (I think Adam G),
 will
 have to sort out the details.

OK. I think it's probably worth a spec, then, so we can think it through
before starting work. Maybe in the cross-project specs repo, to avoid
having to create one just for requirements? Or we could modify the
README or something, but the specs repo seems more visible.

Doug
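The cap-versus-pin distinction being debated in this thread can be shown with a tiny sketch. This is illustrative only — real requirement lines can carry environment markers and compound specifiers that this ignores, and `requirements.gate` is just the name proposed above:

```python
def pin_to_cap(requirement):
    """Turn an exact pin like 'foo==1.2.3' (what a gate freeze produces)
    into an upper-bound cap 'foo<=1.2.3' (what packagers can live with),
    leaving anything that is not a plain pin untouched."""
    name, sep, version = requirement.partition("==")
    return name + "<=" + version if sep else requirement

# Pins constrain to one exact version; caps leave downstream freedom.
assert pin_to_cap("oslo.config==1.6.0") == "oslo.config<=1.6.0"
assert pin_to_cap("pbr>=0.6,<1.0") == "pbr>=0.6,<1.0"   # already a range
```

This is the packager concern Ihar raises: a gate file of exact pins pins everyone, while caps on requirements.txt only bound the versions.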

 
 
  Doug
 
  
  
That's why I asked before we should have caps and not pins.
   
/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v1
   
iQEcBAEBAgAGBQJU61oJAAoJEC5aWaUY1u57T7cIALySnlpLV0tjrsTH2gZxskH+
zY+L6E/DukFNZsWxB2XSaOuVdVaP3Oj4eYCZ2iL8OoxLrBotiOYyRFH29f9vjNSX
h++dErBr0SwIeUtcnEjbk9re6fNP6y5Hqhk1Ac+NSxwL75KlS3bgKnJAhLA08MVB
5xkGRR7xl2cuYf9eylPlQaAy9rXPCyyRdxZs6mNjZ2vlY6hZx/w/Y7V28R/V4gO4
qsvMg6Kv+3urDTRuJdEsV6HbN/cXr2+o543Unzq7gcPpDYXRFTLkpCRV2k8mnmA1
pO9W10F1FCQZiBnLk0c6OypFz9rQmKxpwlNUN5MTMF15Et6DOxGBxMcfr7TpRaQ=
=WHOH
-END PGP SIGNATURE-
   
   

Re: [openstack-dev] [nova] bp serial-ports *partly* implemented?

2015-02-24 Thread Markus Zoeller
Sahid Orentino Ferdjaoui sahid.ferdja...@redhat.com wrote on 02/23/2015 
11:13:12 AM:

 From: Sahid Orentino Ferdjaoui sahid.ferdja...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 02/23/2015 11:17 AM
 Subject: Re: [openstack-dev] [nova] bp serial-ports *partly* 
implemented?
 
 On Fri, Feb 20, 2015 at 06:03:46PM +0100, Markus Zoeller wrote:
  It seems to me that the blueprint serial-ports[1] didn't implement
  everything which was described in its spec. If one of you could have a 

  look at the following examples and help me to understand if these 
  observations are right/wrong that would be great.
  
  Example 1:
  The flavor provides the extra_spec hw:serial_port_count and the 
image
  the property hw_serial_port_count. This is used to decide how many
  serial devices (with different ports) should be defined for an 
instance.
  But the libvirt driver returns always only the *first* defined port 
  (see method get_serial_console [2]). I didn't find anything in the 
  code which uses the other defined ports.
 
 The method you are referencing [2] is used to return the first
 well-bound and not-connected port in the domain.

Is that the intention behind the code ``mode='bind'`` in said method?
In my test I created an instance with 2 ports with the default cirros
image with a flavor which has the hw:serial_port_count=2 property. 
The domain XML has this snippet:
<serial type="tcp">
  <source host="127.0.0.1" mode="bind" service="10000"/>
</serial>
<serial type="tcp">
  <source host="127.0.0.1" mode="bind" service="10001"/>
</serial>
My expectation was to be able to connect to the same instance via both 
ports at the same time. But the second connection is blocked as long 
as the first connection is established. A debug trace in the code shows 
that both times the first port is returned. IOW I was not able to create
a scenario where the *second* port was returned and that confuses me
a little. Any thoughts about this?
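The selection behaviour observed above — every lookup returning the first bound-but-unconnected port — can be mimicked with a small sketch. This is pure illustration, not the actual nova/libvirt code, and the port numbers are assumptions based on the domain XML in this message:

```python
def first_free_port(serial_devices):
    """Return the first serial device that is in bind mode and has no
    client attached; raise if every port is taken."""
    for dev in serial_devices:
        if dev["mode"] == "bind" and not dev["connected"]:
            return dev["port"]
    raise LookupError("no free serial port")

devices = [
    {"port": 10000, "mode": "bind", "connected": False},
    {"port": 10001, "mode": "bind", "connected": False},
]
# While nothing is attached, every lookup yields the *first* port, which
# matches the observation that the second port is never handed out:
assert first_free_port(devices) == 10000
# Only once a client occupies port 10000 would 10001 be returned:
devices[0]["connected"] = True
assert first_free_port(devices) == 10001
```

Under this model the second port is reachable only after the first is actually occupied, which would explain the debug trace showing the first port returned both times.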

 When defining the domain, '{hw_|hw:}serial_port_count' is well taken
 into account as you can see:
 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/
 driver.py#L3702
 
 (The method looks to have been refactored and include several parts
 not related to serial-console)

  Example 2:
  If a user is already connected, then reject the attempt of a 
second
  user to access the console, but have an API to forceably 
disconnect
  an existing session. This would be particularly important to cope
  with hung sessions where the client network went away before the
  console was cleanly closed. [1]
  I couldn't find the described API. If there is a hung session one 
cannot
  gracefully recover from that. This could lead to a bad UX in horizons
  serial console client implementation[3].
 
 This API is not implemented, I will see what I can do on that
 part. Thanks for this.

Sounds great, thanks for that! Please keep me in the loop when 
reviews or help with coding are needed.

  [1] nova bp serial-ports;
  
  https://github.com/openstack/nova-specs/blob/master/specs/juno/
 implemented/serial-ports.rst
  [2] libvirt driver; return only first port; 
  
  https://github.com/openstack/nova/blob/master/nova/virt/libvirt/
 driver.py#L2518
  [3] horizon bp serial-console; 
  https://blueprints.launchpad.net/horizon/+spec/serial-console
  
  
  
 
 
 




Re: [openstack-dev] [nova] bp serial-ports *partly* implemented?

2015-02-24 Thread Markus Zoeller
Tony Breeds t...@bakeyournoodle.com wrote on 02/21/2015 08:35:32 AM:

 From: Tony Breeds t...@bakeyournoodle.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 02/21/2015 08:41 AM
 Subject: Re: [openstack-dev] [nova] bp serial-ports *partly* 
implemented?
 
 On Fri, Feb 20, 2015 at 06:03:46PM +0100, Markus Zoeller wrote:
  It seems to me that the blueprint serial-ports[1] didn't implement
  everything which was described in its spec. If one of you could have a 

  look at the following examples and help me to understand if these 
  observations are right/wrong that would be great.
 
 Nope I think you're pretty much correct.  The implementation doesn't
 match the details in the spec.

Thanks Tony for your feedback. I'm still new to nova and it's sometimes
hard for me to grasp the intention of the code.

 Yours Tony.
 




Re: [openstack-dev] [infra] [cinder] CI via infra for the DRBD Cinder driver

2015-02-24 Thread James E. Blair
Anita Kuno ante...@anteaya.info writes:

 I'd like to make sure that if Infra is saying that CI jobs on drivers
 can gate that this is a substantial difference from what I have been
 saying for a year, specifically that they can't and won't.

For the purposes of this conversation, the distinction between drivers
and any other type of project is not important.  The only thing that can
not gate is a CI system other than the one we run.  That's not a change.

Again, most driver CI systems are third-party systems because they must
be for technical reasons (and therefore, they can not gate for that
reason).  If they can be run upstream, there's no reason they can't
gate.  And there are some instances where we would be strongly advised
to -- the default open-source driver for a system, for instance.  Most
of those happen to be in-tree at the moment.

-Jim



[openstack-dev] [Glance] [all] Vancouver Summit: Glance Friday Sprint

2015-02-24 Thread Nikhil Komawar
Hi,

For those who are interested and are planning to book travel soon, please do 
keep in mind that the Glance team is planning to have a half-day sprint on Friday of 
the week of the summit. Besides, we will have fishbowl and work sessions during 
the week.

Please feel free to reach out to me, if you've any questions, concerns or need 
more information.

Cheers
-Nikhil


Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-24 Thread Deepak Shetty
FWIW, we tried to run our job in a rax provider VM (provided by ianw from
his personal account)
and we ran the tempest tests twice, but the OOM did not re-create. Of the 2
runs, one used the same PYTHONHASHSEED as we had in one of the failed runs,
still no OOM.

Jeremy graciously agreed to provide us 2 VMs, one each from the rax and
hpcloud providers,
to see if the provider platform has anything to do with it.

So we plan to run again with the VMs given by Jeremy, after which I will
send the next update here.

thanx,
deepak


On Tue, Feb 24, 2015 at 4:50 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 Due to an image setup bug (I have a fix proposed currently), I was
 able to rerun this on a VM in HPCloud with 30GB memory and it
 completed in about an hour with a couple of tempest tests failing.
 Logs at: http://fungi.yuggoth.org/tmp/logs3.tar

 Rerunning again on another 8GB Rackspace VM with the job timeout
 increased to 5 hours, I was able to recreate the network
 connectivity issues exhibited previously. The job itself seems to
 have run for roughly 3 hours while failing 15 tests, and the worker
 was mostly unreachable for a while at the end (I don't know exactly
 how long) until around the time it completed. The OOM condition is
 present this time too according to the logs, occurring right near
 the end of the job. Collected logs are available at:
 http://fungi.yuggoth.org/tmp/logs4.tar

 Given the comparison between these two runs, I suspect this is
 either caused by memory constraints or block device I/O performance
 differences (or perhaps an unhappy combination of the two).
 Hopefully a close review of the logs will indicate which.
 --
 Jeremy Stanley




Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-24 Thread Daniel P. Berrange
On Fri, Feb 20, 2015 at 10:49:29AM -0800, Joe Gordon wrote:
 On Fri, Feb 20, 2015 at 7:29 AM, Deepak Shetty dpkshe...@gmail.com wrote:
 
  Hi Jeremy,
Couldn't find anything strong in the logs to back the reason for OOM.
  At the time OOM happens, mysqld and java processes have the most RAM hence
  OOM selects mysqld (4.7G) to be killed.
 
  From a glusterfs backend perspective, i haven't found anything suspicious,
  and we don't have the logs of glusterfs (which is typically in
  /var/log/glusterfs) so can't delve inside glusterfs too much :(
 
  BharatK (in CC) also tried to re-create the issue in local VM setup, but
  it hasn't yet!
 
  Having said that,* we do know* that we started seeing this issue after we
  enabled the nova-assisted-snapshot tests (by changing nova' s policy.json
  to enable non-admin to create hyp-assisted snaps). We think that enabling
  online snaps might have added to the number of tests and memory load &
  that's the only clue we have as of now!
 
 
 It looks like OOM killer hit while qemu was busy and during
 a ServerRescueTest. Maybe libvirt logs would be useful as well?
 
 And I don't see any tempest tests calling assisted-volume-snapshots
 
 Also this looks odd: Feb 19 18:47:16
 devstack-centos7-rax-iad-916633.slave.openstack.org libvirtd[3753]: missing
 __com.redhat_reason in disk io error event

So that specific error message is harmless - the __com.redhat_reason field
is nothing important from OpenStack's POV.

However, it is interesting that QEMU is seeing an I/O error in the first
place. This occurs when you have a grow-on-demand file, and the underlying
storage is full, so unable to allocate more blocks to cope with a guest
write. It can also occur if the underlying storage has a fatal I/O problem,
eg a dead sector in the hard disk, or some equivalent.

IOW, I'd not expect to see any I/O errors raised from OpenStack in a normal
scenario. So this is something to consider investigating.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] [TripleO] Midcycle Summary

2015-02-24 Thread James Slagle
Hi Everyone,

TripleO held a midcycle meetup from February 18th-20th in Seattle. Thanks to HP
for hosting the event! I wanted to send out a summary of what went on. We also
captured some notes on an etherpad[0].

The first order of business was that I volunteered to serve as PTL of TripleO
for the remainder of the Kilo cycle after Clint Byrum announced that he was
stepping down due to a change in focus. Thanks Clint for serving as PTL so far
throughout Kilo!

We moved on to talking about the state of TripleO in general. An immediate
topic of discussion was CI stability, especially as all of our jobs were
currently failing at the time. It appeared that most agreed that our actual CI
stability was pretty good overall and that most of the failures continue to be
caused by finding bugs in our own code and regressions in other projects that
end up breaking TripleO. There was a lot of agreement that the TripleO CI was
very useful and continues to find real breakages in OpenStack that are otherwise
missed.

We talked a bit about streamlining the CI jobs that are run by getting rid of
the undercloud jobs entirely or using the jenkins worker as the seed itself.

As it typically tends to do, the discussion around improving our CI drifted
into the topic of QuintupleO. Everyone seems to continue to agree that
QuintupleO would be really helpful to CI and development environments, but that
no one has time to work on it. The idea of skipping the Ironic PXE/iscsi
deployment process entirely and just nova boot'ing our instances as regular vm
images was brought up as a potential way to get QuintupleO off the ground
initially. You'd lose out on the coverage around Ironic, but it could still be
very valuable for testing all the other stuff such as large HA deployments
using Heat, template changes, devtest, etc.

We moved onto talking about diskimage-builder. Due to some shifts in focus,
there were some questions about any needed changes to the core team
of diskimage-builder. In the end, it was more or less decided that any such
changes would just be disruptive at this point, and that we could instead be
reactive to any changes that might be needed in the future.

There were lots of good ideas about how to improve functional testing of
diskimage-builder and giving it a proper testsuite outside of TripleO CI.
Functional and unit testing of the individual elements and hook scripts is also
desired. While there was half a session devoted to the unit testing aspect at
the Paris summit, we haven't yet made a huge amount of progress in this area,
but it sounds like that might soon change.

The tripleo-heat-templates was the next topic of discussion. With having
multiple implementations in tree, we agreed it was time to deprecate the
merge.py templates[1]. This will also free up some CI capacity for new jobs
after the removal of those templates.

We talked about backwards compatibility as well. The desire here was around
maintaining the ability to deploy stable versions of OpenStack for the
Overcloud with the TripleO tooling. Also, it was pointed out that the new
features that have been rolling out to the TripleO templates are for the
Overcloud only, so we're not breaking any ability to upgrade the Undercloud.

Dan Prince gave a detailed overview of the Puppet and TripleO integration
that's been ongoing since a little before Paris. A lot of progress has been
made very quickly and there is now a CI job in place exercising a deployment
via Puppet using the stackforge puppet modules. I don't think I need to go into
too much more detail here, because Dan already summarized it previously on
list[2].

The Puppet talk led into a discussion around the Heat breakpoints feature and
how that might be used to provide some aspect of workflow while doing a
deployment. There were some concerns raised that using breakpoints in that way
was odd, especially since they're not represented in the templates at all. In
the end, I think most agreed that there was an opportunity here to drive
further features in Heat to meet the use cases that are trying to be solved
around Overcloud deployments using breakpoints.

One theme that resurfaced a few times throughout the midcycle was ways that
TripleO could better define its interfaces to make different parts pluggable,
even if that's just documentation initially. Doing so would allow TripleO to
integrate more easily with existing solutions that are already in use.

Thanks again to everyone who was able to participate in the midcycle, and as
well to those who stayed home and did actual work...such as fixing CI.

For other folks who attended, feel free to add some details, fill in
any gaps, or
disagree with my recollection of events :-).

[0] https://etherpad.openstack.org/p/kilo-tripleo-midcycle-meetup
[1] https://review.openstack.org/#/c/158410/
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-February/056618.html


-- 
-- James Slagle
--


Re: [openstack-dev] [thirdpartyCI][cinder] Question about certification

2015-02-24 Thread Eduard Matei
Thanks Duncan.
It's commenting and it works (meaning it runs and it's able to detect
errors). Sometimes it gives a FAILURE but that's either due to the patch
(rare) or due to devstack (often).

Also logs are visible and included in the comment.

Eduard

On Tue, Feb 24, 2015 at 2:31 PM, Duncan Thomas duncan.tho...@gmail.com
wrote:

 If it is commenting (and not failing all the time) then you're fine.

 On 24 February 2015 at 12:47, Eduard Matei eduard.ma...@cloudfounders.com
  wrote:

 Hi,

 With the deadline for ThirdPartyCI coming we were wondering what should
 we do next to ensure our CI is validated.

 One of the requirements of having a driver accepted in the repo is to
 provide proof of a working CI.

 The question is: does the CI need voting rights (validated) , or just
 check/comment to be considered working? If it needs validation, how do we
 do that (I've asked a couple of mailing lists but either got no response or
 got redirected to IRC - which due to timezone differences i found mostly
 empty).

 Thanks,

 Eduard

 --

 *Eduard Biceri Matei, Senior Software Developer*
 www.cloudfounders.com
  | eduard.ma...@cloudfounders.com



 *CloudFounders, The Private Cloud Software Company*

 Disclaimer:
 This email and any files transmitted with it are confidential and intended 
 solely for the use of the individual or entity to whom they are addressed.
 If you are not the named addressee or an employee or agent responsible for 
 delivering this message to the named addressee, you are hereby notified that 
 you are not authorized to read, print, retain, copy or disseminate this 
 message or any part of it. If you have received this email in error we 
 request you to notify us by reply e-mail and to delete all electronic files 
 of the message. If you are not the intended recipient you are notified that 
 disclosing, copying, distributing or taking any action in reliance on the 
 contents of this information is strictly prohibited.
 E-mail transmission cannot be guaranteed to be secure or error free as 
 information could be intercepted, corrupted, lost, destroyed, arrive late or 
 incomplete, or contain viruses. The sender therefore does not accept 
 liability for any errors or omissions in the content of this message, and 
 shall have no liability for any loss or damage suffered by the user, which 
 arise as a result of e-mail transmission.






 --
 Duncan Thomas







Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-24 Thread Chris Dent

On Tue, 24 Feb 2015, Sean Dague wrote:


That also provides a very concrete answer to will people show up.
Because if they do, and we get this horizontal refactoring happening,
then we get to the point of being able to change release cadences
faster. If they don't, we remain with the existing system. Vs changing
the system and hoping someone is going to run in and backfill the breaks.


Isn't this the way of the world? People only put halon in the
machine room after the fire.

I agree that people showing up is a real concern, but I also think
that we shy away too much from the productive energy of stuff
breaking. It's the breakage that shows where stuff isn't good
enough.

[Flavio said]:

To this I'd also add that bug fixing is way easier when you have
aligned releases for projects that are expected to be deployed
together. It's easier to know what the impact of a change/bug is
throughout the infrastructure.


Can't this be interpreted as an excuse for making software which
does not have a low surface area and a good API?

(Note I'm taking a relatively unrealistic position for sake of
conversation.)

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [infra] [cinder] CI via infra for the DRBD Cinder driver

2015-02-24 Thread Anita Kuno
On 02/24/2015 04:40 AM, Duncan Thomas wrote:
 For cinder, we don't want to gate on drivers, and will be unhappy if that
 gets turned on, so the story hasn't changed, Anita.
 
 I think Jim was pointing out the technical possibility, which is a fine and
 valid thing to point out. Cinder core are likely to continue not to want
 it, for the foreseeable future.
Fair enough.

My concern is that by making it technically possible and permissible
from infra's side we are just opening up the projects to have to engage
in yet another game of whack-a-mole since driver testing ops can and
will try to turn it on.

Thanks,
Anita.

 
 On 24 February 2015 at 10:30, Anita Kuno ante...@anteaya.info wrote:
 
 On 02/23/2015 07:00 PM, James E. Blair wrote:
 Clark Boylan cboy...@sapwetik.org writes:

The other thing is that we don't have the zuul code to vote with
a different account deployed/merged yet. So initially you could run
your
job but it wouldn't vote against, say, cinder.
 The stack that adds the necessary zuul stuff ends with
 https://review.openstack.org/#/c/121528/

 I don't believe that we have any current plans to use that code in
 infra.  I don't believe that it makes sense for us to create multiple
 accounts in Gerrit for our single system.  It's quite a bit of overhead,
 and I don't believe it is necessary.

 To be clear, I think any policy that says that drivers must have
 third-party CI is an oversight.  I believe that it's fine to require
 them to have CI, but if the CI can be provided by infra, it should be.
 I believe this misunderstanding comes from the fact that most
 out-of-tree drivers require specialized hardware, or non-free software,
 that can not be run in our system.

 As mentioned elsewhere, we're generally quite happy to have any open
 source component tested in the upstream infrastructure.  Since this
 qualifies, I think the quickest and simplest way to proceed is to create
 a job that runs on the driver repo, and then create a non-voting version
 to run on the cinder repo.

 Additionally, if it proves stable, the Cinder developers could certainly
 choose to gate on this job as well.
 I'd like to make sure that if Infra is saying that CI jobs on drivers
 can gate that this is a substantial difference from what I have been
 saying for a year, specifically that they can't and won't.

 Every CI system/job author/operator that I have interacted with for the
 past year has been trying to figure out how to be in the gate. Since
 that implies a relationship in which development on the service can't
 proceed without a driver's say so (by passing the test in the gate) this
 is exactly the relationship I have been trying to ensure doesn't happen.
 By stating that driver jobs can gate on the service, this opens the
 flood gates of every CI operator increasing pressure to be in the gate
 as well (the pressure never stopped, it just took different forms).

 I disagree with the relationship where a service's development can only
 proceed at the speed of a driver (or driver's because where there is
 one, there are plenty more).

 Thanks,
 Anita.


 That's entirely up to them, but
 there's no policy or technical reason it can not.

 -Jim



 
 
 
 
 
 




Re: [openstack-dev] [stable][all] Revisiting the 6 month release cycle

2015-02-24 Thread Johannes Erdfelt
On Mon, Feb 23, 2015, Joe Gordon joe.gord...@gmail.com wrote:
 What this actually means:
 
- Stop approving blueprints for specific stable releases, instead just
approve them and target them to milestones.
   - Milestones stop becoming Kilo-1, Kilo-2, Kilo-3 etc. and just
   become 1,2,3,4,5,6,7,8,9 etc.
   - If something misses what was previously known as Kilo-3, it has to
   wait a week for milestone 4.
- Development focuses on milestones only. So 6 week cycle with say 1
week of stabilization, finish things up before each milestone

What is the motivation for having milestones at all?

At least in the Nova world, it seems like milestones mean nothing at
all. It's just something John Garbutt spends a lot of his time updating
that doesn't appear to provide any value to anyone.

JE




Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-24 Thread Kyle Mestery
On Tue, Feb 24, 2015 at 3:19 AM, Salvatore Orlando sorla...@nicira.com
wrote:

 On 24 February 2015 at 01:34, Kyle Mestery mest...@mestery.com wrote:

 Russel and I have already merged the initial ML2 skeleton driver [1].

 The thinking is that we can always revert to a non-ML2 driver if needed.


 If nothing else an authoritative decision on a design direction saves us
 the hassle of going through iterations and discussions.
 The integration through ML2 is definitely viable. My opinion however is
 that since OVN implements a full control plane, the control plane bits
 provided by ML2 are not necessary, and a plugin which provides only
 management layer capabilities might be the best solution. Note: this does
 not mean it has to be monolithic. We can still do L3 with a service plugin.
 However, since the same kind of approach has been adopted for ODL I guess
 this provides some sort of validation.


To be honest, after thinking about this last night, I'm now leaning towards
doing this as a full plugin. I don't really envision OVN running with other
plugins, as OVN is implementing its own control plane, as you say. So the
value of using ML2 is questionable.


 I'm not sure how useful using OVN with other drivers will be, and
 that was my initial concern with doing ML2 vs. full plugin. With the HW
 VTEP support in OVN+OVS, you can tie in physical devices this way. Anyways,
 this is where we're at for now. Comments welcome, of course.


 That was also kind of my point regarding the control plane bits provided
 by ML2 which OVN does not need. Still, the fact that we do not use a
 function does no harm.
 Also, I'm not sure if OVN needs a type manager at all. If not, we can
 always implement a no-op type manager, I guess.

 See above. I'd like to propose we move OVN to a full plugin instead of an
ML2 MechanismDriver.

Kyle


 Salvatore



 Thanks,
 Kyle

 [1] https://github.com/stackforge/networking-ovn

 On Mon, Feb 23, 2015 at 4:09 PM, Kevin Benton blak...@gmail.com wrote:

 I want to emphasize Salvatore's last two points a bit more. If you go
 with a monolithic plugin, you eliminate the possibility of heterogeneous
 deployments.

 One example of this that is common now is having the current OVS driver
 responsible for setting up the vswitch and then having a ToR driver (e.g.
 Big Switch, Arista, etc) responsible for setting up the fabric.
 Additionally, there is a separate L3 plugin (e.g. the reference one,
 Vyatta, etc) for providing routing.

 I suppose with an overlay it's easier to take the route that you don't
 want to be compatible with other networking stuff at the Neutron layer
 (e.g. integration with the 3rd parties is orchestrated somewhere else). In
 that case, the above scenario wouldn't make much sense to support, but it's
 worth keeping in mind.

 On Mon, Feb 23, 2015 at 10:28 AM, Salvatore Orlando sorla...@nicira.com
  wrote:

 I think there are a few factors which influence the ML2 driver vs
 monolithic plugin debate, and they mostly depend on OVN rather than
 Neutron.
 From a Neutron perspective both plugins and drivers (as long at they
 live in their own tree) will be supported in the foreseeable future. If a
 ML2 mech driver is not the best option for OVN that would be ok - I don't
 think the Neutron community advises development of an ML2 driver as the
 preferred way for integrating with new backends.

 The ML2 framework provides a long list of benefits that mechanism
 driver developer can leverage.
 Among those:
 - The ability of leveraging Type drivers which relieves driver
 developers from dealing with network segment allocation
 - Post-commit and (for most operations) pre-commit hooks for performing
 operation on the backend
 - The ability to leverage some of the features offered by Neutron's
 built-in control-plane such as L2-population
 - A flexible mechanism for enabling driver-specific API extensions
 - Promotes modular development and integration with higher-layer
 services, such as L3. For instance OVN could provide the L2 support for
 Neutron's built-in L3 control plane
 - The (potential afaict) ability of interacting with other mechanism
 driver such as those operating on physical appliances on the data center
 - add your benefit here

 In my opinion OVN developers should look at ML2 benefits, and check
 which ones apply to this specific platform. I'd say that if there are 1 or
 2 checks in the above list, maybe it would be the case to look at
 developing a ML2 mechanism driver, and perhaps a L3 service plugin.
 It is worth noting that ML2, thanks to its type and mechanism drivers,
 also provides some control plane capabilities. If those capabilities are
 however on OVN's roadmap it might be instead worth looking at a
 monolithic plugin, which can also be easily implemented by inheriting
 from neutron.db.db_base_plugin_v2.NeutronDbPluginV2, and then adding all
 the python mixins for the extensions the plugin needs to support.

 Salvatore


 On 23 February 2015 at 

[openstack-dev] [Fuel][Library][Fuel-CI] verify-fuel-library-python job enabled

2015-02-24 Thread Aleksandra Fedorova
Hi everyone,
we've enabled job verify-fuel-library-python (see [1]) to test Python
scripts in Fuel Library code.
It is triggered for all commits to master branch in stackforge/fuel-library

Master currently passes the test, see [2]. Please check and rebase
your patchsets for this job to work.

In case of unexpected failures contact devops team at #fuel-devops IRC chat.

[1] https://review.fuel-infra.org/#/c/3810/
[2] https://fuel-jenkins.mirantis.com/job/verify-fuel-library-python/23/

-- 
Aleksandra Fedorova
Fuel Devops Engineer
bookwar



Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-24 Thread Kevin Benton
OVN implementing its own control plane isn't a good reason to make it a
monolithic plugin. Many of the ML2 drivers are for technologies with their
own control plane.

Going with the monolithic plugin only makes sense if you are certain that
you never want interoperability with other technologies at the Neutron
level. Instead of ruling that out this early, why not make it as an ML2
driver and then change to a monolithic plugin if you run into some
fundamental issue with ML2?
On Feb 24, 2015 8:16 AM, Kyle Mestery mest...@mestery.com wrote:

 On Tue, Feb 24, 2015 at 3:19 AM, Salvatore Orlando sorla...@nicira.com
 wrote:

 On 24 February 2015 at 01:34, Kyle Mestery mest...@mestery.com wrote:

 Russel and I have already merged the initial ML2 skeleton driver [1].

 The thinking is that we can always revert to a non-ML2 driver if needed.


 If nothing else an authoritative decision on a design direction saves us
 the hassle of going through iterations and discussions.
 The integration through ML2 is definitely viable. My opinion however is
 that since OVN implements a full control plane, the control plane bits
 provided by ML2 are not necessary, and a plugin which provides only
 management layer capabilities might be the best solution. Note: this does
 not mean it has to be monolithic. We can still do L3 with a service plugin.
 However, since the same kind of approach has been adopted for ODL I guess
 this provides some sort of validation.


 To be honest, after thinking about this last night, I'm now leaning
 towards doing this as a full plugin. I don't really envision OVN running
 with other plugins, as OVN is implementing its own control plane, as you
 say. So the value of using ML2 is questionable.


 I'm not sure how useful using OVN with other drivers will be, and
 that was my initial concern with doing ML2 vs. full plugin. With the HW
 VTEP support in OVN+OVS, you can tie in physical devices this way. Anyways,
 this is where we're at for now. Comments welcome, of course.


 That was also kind of my point regarding the control plane bits provided
 by ML2 which OVN does not need. Still, the fact that we do not use a
 function does no harm.
 Also, I'm not sure if OVN needs a type manager at all. If not, we can
 always implement a no-op type manager, I guess.

 See above. I'd like to propose we move OVN to a full plugin instead of an
 ML2 MechanismDriver.

 Kyle


 Salvatore



 Thanks,
 Kyle

 [1] https://github.com/stackforge/networking-ovn

 On Mon, Feb 23, 2015 at 4:09 PM, Kevin Benton blak...@gmail.com wrote:

 I want to emphasize Salvatore's last two points a bit more. If you go
 with a monolithic plugin, you eliminate the possibility of heterogeneous
 deployments.

 One example of this that is common now is having the current OVS driver
 responsible for setting up the vswitch and then having a ToR driver (e.g.
 Big Switch, Arista, etc) responsible for setting up the fabric.
 Additionally, there is a separate L3 plugin (e.g. the reference one,
 Vyatta, etc) for providing routing.

 I suppose with an overlay it's easier to take the route that you don't
 want to be compatible with other networking stuff at the Neutron layer
 (e.g. integration with the 3rd parties is orchestrated somewhere else). In
 that case, the above scenario wouldn't make much sense to support, but it's
 worth keeping in mind.

 On Mon, Feb 23, 2015 at 10:28 AM, Salvatore Orlando 
 sorla...@nicira.com wrote:

 I think there are a few factors which influence the ML2 driver vs
 monolithic plugin debate, and they mostly depend on OVN rather than
 Neutron.
 From a Neutron perspective both plugins and drivers (as long at they
 live in their own tree) will be supported in the foreseeable future. If a
 ML2 mech driver is not the best option for OVN that would be ok - I don't
 think the Neutron community advises development of an ML2 driver as the
 preferred way for integrating with new backends.

 The ML2 framework provides a long list of benefits that mechanism
 driver developer can leverage.
 Among those:
 - The ability of leveraging Type drivers which relieves driver
 developers from dealing with network segment allocation
 - Post-commit and (for most operations) pre-commit hooks for
 performing operation on the backend
 - The ability to leverage some of the features offered by Neutron's
 built-in control-plane such as L2-population
 - A flexible mechanism for enabling driver-specific API extensions
 - Promotes modular development and integration with higher-layer
 services, such as L3. For instance OVN could provide the L2 support for
 Neutron's built-in L3 control plane
 - The (potential afaict) ability of interacting with other mechanism
 driver such as those operating on physical appliances on the data center
 - add your benefit here

 In my opinion OVN developers should look at ML2 benefits, and check
 which ones apply to this specific platform. I'd say that if there are 1 or
 2 checks in the above list, maybe it 

Re: [openstack-dev] [nova] novaclient functional test guidelines

2015-02-24 Thread Joe Gordon
On Tue, Feb 24, 2015 at 9:47 AM, Sean Dague s...@dague.net wrote:

 Towards the end of merging the regression test for the nova
 volume-attach bug - https://review.openstack.org/#/c/157959/ there was a
 discussion around what style the functional tests should take.
 Especially as that had a mix of CLI and API calls in it.



Thanks for starting this thread.  Once we reach general agreement lets put
this in a in tree README for record keeping.



 Here are my thoughts for why that test ended up that way:

 1) All resource setup that is table stakes for the test should be done
 via the API, regardless if it's a CLI or API test.

 The reason for this is that structured data is returned, which removes
 one possible error in the tests by parsing incorrectly. The API objects
 returned also include things like .delete methods in most cases, which
 means that addCleanup is a little more clean.


IMHO the CLI should have an option to return raw JSON instead of
pretty-printed tables as well.



 2) Main logic should touch which ever interface you are trying to test.
 This was demonstrating a CLI regression, so the volume-attach call was
 important to be done over the CLI.
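The split between the two points above can be sketched as a test-structure pattern. Everything below is illustrative rather than real novaclient code -- the `FakeAPI` and `run_cli` stubs stand in for an authenticated python-novaclient instance and a subprocess call to the `nova` command:

```python
import unittest


class FakeServer:
    """Stand-in for the object a python API call returns (has .delete())."""
    def __init__(self, name):
        self.name = name
        self.deleted = False

    def delete(self):
        self.deleted = True


class FakeAPI:
    """Stand-in for an authenticated python-novaclient instance."""
    def boot(self, name):
        return FakeServer(name)


def run_cli(command):
    """Stand-in for invoking the CLI under test via subprocess."""
    return "+--------+\n| result |\n+--------+\n| ok     |\n+--------+"


class VolumeAttachCLITest(unittest.TestCase):
    def setUp(self):
        self.api = FakeAPI()

    def test_attach(self):
        # 1) Table-stakes resource setup goes through the API: the
        #    returned object is structured data with a .delete() method,
        #    so cleanup needs no output parsing.
        server = self.api.boot("cli-test-server")
        self.addCleanup(server.delete)

        # 2) Only the behaviour under test goes through the CLI.
        output = run_cli("volume-attach cli-test-server vol-1")
        self.assertIn("ok", output)


suite = unittest.TestLoader().loadTestsFromTestCase(VolumeAttachCLITest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The point of the pattern is visible in `test_attach`: only one line exercises the CLI, and everything else works with structured objects.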


 Now... here's where theory runs into issues.

 #1 - nova boot is table stakes. Under the above guidelines it should be
 called via API. However --poll is a CLI construct and so saved a custom
 wait loop here. We should implement that custom wait loop down the road
 and make that an API call


 https://github.com/openstack/python-novaclient/blob/master/novaclient/tests/functional/test_instances.py#L116
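The custom wait loop mentioned here is small enough to sketch. `fetch` is any zero-argument callable returning an object with a `.status` attribute -- e.g. `lambda: client.servers.get(server.id)` in novaclient terms (illustrative, not the eventual implementation):

```python
import time


def wait_for_status(fetch, wanted="ACTIVE", timeout=60, interval=1.0):
    """Poll fetch() until the returned object reaches the wanted status.

    Unlike the CLI-only --poll flag, this is callable from API-based
    setup code. It fails fast on ERROR and refuses to poll forever.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        obj = fetch()
        if obj.status == wanted:
            return obj
        if obj.status == "ERROR":
            raise RuntimeError("resource went to ERROR while waiting")
        time.sleep(interval)
    raise TimeoutError("resource never reached status %s" % wanted)
```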


This issue stems from a real shortcoming in the python client. So this isn't
really an issue with your theory, just an issue with novaclient.




 #2 - the volume create command is table stakes. It should be an API
 call. However, it can't be because the service catalog redirection only
 works at the CLI layer. This is actually also the crux of bug #1423695.
 The same reason the completion cache code failed is the reason we can't
 use the API for that.


 https://github.com/openstack/python-novaclient/blob/master/novaclient/tests/functional/test_instances.py#L129


Issues like this are why I wrote the read-only nova CLI tests in the first
place. Unit testing the python API is doable, but unit testing the CLI is a
little bit more tricky. So I expect issues like this to come up over and
over again.




 #3 - the cleanup of the volume should have been API call. See reason for
 #2.


 https://github.com/openstack/python-novaclient/blob/master/novaclient/tests/functional/test_instances.py#L131

 #4 - the cleanup of the attachment should be an addCleanup via the API.
 See reason for #2 why it's not.


 https://github.com/openstack/python-novaclient/blob/master/novaclient/tests/functional/test_instances.py#L155


 I'm happy if there are other theories about how we do these things,
 being the first functional test in the python-novaclient tree that
 creates and destroys real resources, there isn't an established pattern
 yet. But I think doing all CLI calls in CLI tests is actually really
 cumbersome, especially in the amount of output parsing code needed if
 you are going to setup any complicated resource structure.
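To make the parsing overhead concrete: the CLI prints PrettyTable-style output, so a CLI-only test that needs a field back ends up carrying a helper like this (the sample table is made up for illustration):

```python
def parse_cli_table(output):
    """Turn a '+---+' bordered two-column CLI table into a dict.

    Every edge case (multi-line values, '|' inside a value, header
    variations) is a new way for the *test*, rather than the code
    under test, to break -- which is the argument for doing setup
    through the API instead.
    """
    result = {}
    for line in output.splitlines():
        if not line.startswith("|"):
            continue  # skip the +----+ border rows
        cells = [c.strip() for c in line.strip("|").split("|")]
        if len(cells) == 2 and cells[0] not in ("Property", ""):
            result[cells[0]] = cells[1]
    return result


sample = """\
+----------+-----------------+
| Property | Value           |
+----------+-----------------+
| id       | example-id-0001 |
| status   | ACTIVE          |
+----------+-----------------+"""

info = parse_cli_table(sample)
```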


Here is an alternate theory:

We should have both python API and CLI functional tests. But they should be
kept separate.

This is to help us make sure both the CLI and python API are actually
usable interfaces. As the exercise above has shown, they both have really
major shortcomings. I think having in-tree functional testing covering
both the CLI and python API will make it easier for us to review new client
features in terms of usability.

Here is a very rough proof of concept patch showing the same tests:
https://review.openstack.org/#/c/157974/2/novaclient/tests/functional/test_volumes.py,cm

No matter how we define this functional testing model, I think it's clear
novaclient needs a decent amount of work before it can really be usable.



 -Sean

 --
 Sean Dague
 http://dague.net



[openstack-dev] Kerberos in OpenStack

2015-02-24 Thread Sanket Lawangare
Hello  Everyone,

My name is Sanket Lawangare. I am a graduate student at The
University of Texas at San Antonio. For my Master's thesis I am working on
the Identity component of OpenStack. My research is to investigate external
authentication with Identity(keystone) using Kerberos.

Based on reading Jamie Lennox's blog posts on the Kerberos implementation in
OpenStack and my understanding of Kerberos, I have come up with a figure
explaining a possible interaction of the KDC with the OpenStack client, Keystone,
and the OpenStack services (Nova, Cinder, Swift...).

These are the Blogs -

http://www.jamielennox.net/blog/2015/02/12/step-by-step-kerberized-keystone/

http://www.jamielennox.net/blog/2013/10/22/keystone-token-binding/

I am trying to understand the working of Kerberos in OpenStack.

Please click this link to view the figure:
https://docs.google.com/drawings/d/1re0lNbiMDTbnkrqGMjLq6oNoBtR_GA0x7NWacf0Ulbs/edit?usp=sharing

P.S. - [The steps in this figure are self-explanatory; a basic
understanding of Kerberos is expected.]

Based on the figure, I had a couple of questions:


   1.

   Is Nova or other services registered with the KDC?



   1.

   What does keystone do with Kerberos ticket/credentials? Does Keystone
   authenticates the users and gives them direct access to other services such
   as Nova, Swift etc..



   1.

   After receiving the Ticket from the KDC does keystone embed some
   kerberos credential information in the token?



   1.

   What information does the service (e.g.Nova) see in the Ticket and the
   token (Does the token have some kerberos info or some customized info
   inside it?).


If you could share your insights and guide me on this, I would really
appreciate it. Thank you all for your time.

Regards,

Sanket Lawangare


Re: [openstack-dev] ECMP on Neutron virtual router

2015-02-24 Thread Carl Baldwin
It doesn't support this at this time.  There are no current plans to
make it work.  I'm curious to know how you would like for this to work
in your deployment.
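
For illustration, ECMP here would presumably mean two extraroute entries
with the same destination but different next-hops, via a router update like
the body below (the addresses are made up for illustration; the extraroute
`routes` attribute is real, but Neutron today does not install ECMP for
duplicate destinations):

```json
{
  "router": {
    "routes": [
      {"destination": "192.168.10.0/24", "nexthop": "10.0.0.11"},
      {"destination": "192.168.10.0/24", "nexthop": "10.0.0.12"}
    ]
  }
}
```

This would be sent as a `PUT /v2.0/routers/{router_id}` request.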

Carl

On Tue, Feb 24, 2015 at 11:32 AM, NAPIERALA, MARIA H mn1...@att.com wrote:
 Does Neutron router support ECMP across multiple static routes to the same
 destination network but with different next-hops?

 Maria






Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-24 Thread Salvatore Orlando
I think we're speculating a lot about what would be best for OVN, whereas we
should probably just expose the pros and cons of ML2 drivers vs a standalone
plugin (as I said earlier, a standalone plugin does not necessarily imply a
monolithic one*)

I reckon the job of the Neutron community is to provide a full picture to
OVN developers - so that they could make a call on the integration strategy
that best suits them.
On the other hand, if we're planning to commit to a model where ML2 is no
longer a plugin but the interface with the API layer, then any choice
which is not an ML2 driver does not make any sense. Personally I'm not sure
we ever want to do that, at least not in the near/medium term, but I'm one
voice and hardly representative of the developer/operator communities.

Salvatore


* In particular with the advanced service split out the term monolithic
simply does not mean anything anymore.

On 24 February 2015 at 17:48, Robert Kukura kuk...@noironetworks.com
wrote:

  Kyle, What happened to the long-term potential goal of ML2 driver APIs
 becoming neutron's core APIs? Do we really want to encourage new monolithic
 plugins?

 ML2 is not a control plane - it's really just an integration point for
 control planes. Although co-existence of multiple mechanism drivers is
 possible, and sometimes very useful, the single-driver case is fully
 supported. Even with hierarchical bindings, it's not really ML2 that
 controls what happens - it's the drivers within the framework. I don't think
 ML2 really limits what drivers can do, as long as a virtual network can be
 described as a set of static and possibly dynamic network segments. ML2 is
 intended to impose as few constraints on drivers as possible.

 My recommendation would be to implement an ML2 mechanism driver for OVN,
 along with any needed new type drivers or extension drivers. I believe this
 will result in a lot less new code to write and maintain.

 Also, keep in mind that even if multiple driver co-existence doesn't sound
 immediately useful, there are several potential use cases to consider. One
 is that it allows new technology to be introduced into an existing cloud
 alongside what previously existed. Migration from one ML2 driver to another
 may be a lot simpler (and/or flexible) than migration from one plugin to
 another. Another is that additional drivers can support special cases, such
 as bare metal, appliances, etc..

 -Bob


 On 2/24/15 11:11 AM, Kyle Mestery wrote:

  On Tue, Feb 24, 2015 at 3:19 AM, Salvatore Orlando sorla...@nicira.com
 wrote:

  On 24 February 2015 at 01:34, Kyle Mestery mest...@mestery.com wrote:

  Russel and I have already merged the initial ML2 skeleton driver [1].

   The thinking is that we can always revert to a non-ML2 driver if
 needed.


  If nothing else, an authoritative decision on a design direction saves
 us the hassle of going through iterations and discussions.
 The integration through ML2 is definitely viable. My opinion however is
 that since OVN implements a full control plane, the control plane bits
 provided by ML2 are not necessary, and a plugin which provides only
 management layer capabilities might be the best solution. Note: this does
 not mean it has to be monolithic. We can still do L3 with a service plugin.
  However, since the same kind of approach has been adopted for ODL I
 guess this provides some sort of validation.


 To be honest, after thinking about this last night, I'm now leaning
 towards doing this as a full plugin. I don't really envision OVN running
 with other plugins, as OVN is implementing it's own control plane, as you
 say. So the value of using ML2 is questionable.


I'm not sure how useful using OVN with other drivers will be,
 and that was my initial concern with doing ML2 vs. full plugin. With the HW
 VTEP support in OVN+OVS, you can tie in physical devices this way. Anyways,
 this is where we're at for now. Comments welcome, of course.


  That was also kind of my point regarding the control plane bits
 provided by ML2 which OVN does not need. Still, the fact that we do not use
 a function does no harm.
 Also I'm not sure if OVN needs a type manager at all. If not, we can
 always implement a no-op type manager, I guess.

See above. I'd like to propose we move OVN to a full plugin instead
 of an ML2 MechanismDriver.

  Kyle


   Salvatore



  Thanks,
  Kyle

 [1] https://github.com/stackforge/networking-ovn

 On Mon, Feb 23, 2015 at 4:09 PM, Kevin Benton blak...@gmail.com wrote:

 I want to emphasize Salvatore's last two points a bit more. If you go
 with a monolithic plugin, you eliminate the possibility of heterogeneous
 deployments.

  One example of this that is common now is having the current OVS
 driver responsible for setting up the vswitch and then having a ToR driver
 (e.g. Big Switch, Arista, etc) responsible for setting up the fabric.
 Additionally, there is a separate L3 plugin (e.g. the reference one,
 Vyatta, etc) for providing routing.

  I suppose with 

Re: [openstack-dev] [stable][all] Revisiting the 6 month release cycle

2015-02-24 Thread Thierry Carrez
Johannes Erdfelt wrote:
 On Mon, Feb 23, 2015, Joe Gordon joe.gord...@gmail.com wrote:
 What this actually means:

- Stop approving blueprints for specific stable releases, instead just
approve them and target them to milestones.
   - Milestones stop becoming Kilo-1, Kilo-2, Kilo-3 etc. and just
   become 1,2,3,4,5,6,7,8,9 etc.
   - If something misses what was previously known as Kilo-3, it has to
   wait a week for milestone 4.
- Development focuses on milestones only. So 6 week cycle with say 1
week of stabilization, finish things up before each milestone
 
 What is the motivation for having milestones at all?
 
 At least in the Nova world, it seems like milestones mean nothing at
 all. It's just something John Garbutt spends a lot of his time updating
 that doesn't appear to provide any value to anyone.

It has *some* value in cross-project coordination. It's a way to
describe common points in time. Saying it should be done by kilo-1 is
easier than using random dates that vary across projects.

Another value is to exercise the release automation more regularly.

Agree on the pain of maintaining milestone plans though, which is why I
propose we get rid of most of it in Liberty. That will actually be
discussed at the cross-project meeting today:

https://wiki.openstack.org/wiki/Release_Cycle_Management/Liberty_Tracking

Cheers,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [neutron] Prefix delegation using dibbler client

2015-02-24 Thread John Davidge (jodavidg)
Hello all,

We now have a work-in-progress patch up for review:

https://review.openstack.org/#/c/158697/


Feedback on our approach is much appreciated.

Many thanks,

John Davidge
OpenStack@Cisco




On 20/02/2015 18:28, Ihar Hrachyshka ihrac...@redhat.com wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Those are good news!

I commented on the pull request. Briefly, we can fetch from git, but
would prefer an official release.

Thanks,
/Ihar

On 02/19/2015 11:26 PM, Robert Li (baoli) wrote:
 Hi Kyle, Ihar,
 
 It looks promising to have our patch upstreamed. Please take a look
 at this pull request
 
https://github.com/tomaszmrugalski/dibbler/pull/26#issuecomment-75144912.
 Most importantly, Tomek asked if it’s sufficient to have the code
 up in his master branch. I guess you guys may be able to help
 answer that question since I’m not familiar with openstack release
 process.
 
 Cheers, Robert
 
 On 2/13/15, 12:16 PM, Kyle Mestery mest...@mestery.com
 mailto:mest...@mestery.com wrote:
 
 On Fri, Feb 13, 2015 at 10:57 AM, John Davidge (jodavidg)
 jodav...@cisco.com mailto:jodav...@cisco.com wrote:
 
 Hi Ihar,
 
 To answer your questions in order:
 
 1. Yes, you are understanding the intention correctly. Dibbler
 doesn't currently support client restart, as doing so causes all
 existing delegated prefixes to be released back to the PD server.
 All subnets belonging to the router would potentially receive a new
 cidr every time a subnet is added/removed.
 
 2. Option 2 cannot be implemented using the current version of
 dibbler, but it can be done using the version we have modified.
 Option 3 could possibly be done with the current version of
 dibbler, but with some major limitations - only one single router
 namespace would be supported.
 
 Once the dibbler changes linked below are reviewed and finalised we
 will only need to merge a single patch into the upstream dibbler
 repo. No further patches are anticipated.
 
 Yes, you are correct that dibbler is not needed unless prefix
 delegation is enabled by the deployer. It is intended as an
 optional feature that can be easily disabled (and probably will be
 by default). A test to check for the correct dibbler version would
 certainly be necessary.
 
 Testing in the gate will be an issue until the new version of
 dibbler is merged and packaged in the various distros. I'm not sure
 if there is a way to avoid this problem, unless we have devstack
 install from our updated repo while we wait.
 
 To me, this seems like a pretty huge problem. We can't expect
 distributions to package side-changes to upstream projects. The
 correct way to solve this problem is to work to get the changes
 required in the dependent packages upstream into those projects
 first (dibbler, in this case), and then propose the changes into
 Neutron to make use of those changes. I don't see how we can
 proceed with this work until the issues around dibbler has been
 resolved.
 
 
 John Davidge OpenStack@Cisco
 
 
 
 
 On 13/02/2015 16:01, Ihar Hrachyshka ihrac...@redhat.com
 mailto:ihrac...@redhat.com wrote:
 
 Thanks for the write-up! See inline.
 
 On 02/13/2015 04:34 PM, Robert Li (baoli) wrote:
 Hi,
 
 while trying to integrate dibbler client with neutron to support
 PD, we encountered a few issues with the dibbler client (and
 server). With a neutron router, we have the qg-xxx interface that
 is connected to the public network, on which a dhcp server is
 running on the delegating router. For each subnet with PD
 enabled, a router port will be created in the neutron router. As
 a result, a new PD request will be sent that asks for a prefix
 from the delegating router. Keep in mind that the subnet is added
 into the router dynamically.
 
 We thought about the following options:
 
 1. use a single dibbler client to support the above requirement.
 This means, the client should be able to accept new requests on
 the fly either through configuration reload or other interfaces.
 Unfortunately, dibbler client doesn't support it.
 
 Sorry for my ignorance on PD implementation (I will definitely look
 at it the next week), but what does this entry above mean? Do you
 want a single dibbler instance running per router serving all
 subnets plugged into it? And you want to get configuration updates
 when a new subnet is plugged in, or removed from the router?
 
 If that's the case, why not just restarting the client?
 
 2. start a dibbler client per subnet. All of the dibbler clients
 will be using the same outgoing interface (which is the qg-xxx
 interface). Unfortunately, dibbler client uses /etc/dibbler and
 /var/lib/dibbler for its state (in which it saves duid file, pid
 file, and other internal states). This means it can only support
 one client per network node. 3. run a single dibbler client that
 requests a smaller prefix (say /56) and splits it among the
 subnets with PD enabled (neutron subnet requires /64). Depending
 on the neutron router setup, this may result in significant waste
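
The prefix arithmetic behind option 3 can be sketched with Python's stdlib
ipaddress module (a sketch of the math only, not the actual neutron or
dibbler code):

```python
import ipaddress

# A delegated /56 contains 2**(64 - 56) == 256 possible /64 subnets,
# so one PD request could serve up to 256 neutron subnets.
delegated = ipaddress.ip_network("2001:db8:0:ff00::/56")
available = list(delegated.subnets(new_prefix=64))

assert len(available) == 256
# Hand the first /64 to the first PD-enabled subnet, and so on.
print(available[0])  # 2001:db8:0:ff00::/64
```

The waste mentioned above is visible here: any /64 not assigned to a
subnet sits unused in the delegated block.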

Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-24 Thread David Kranz

On 02/24/2015 09:37 AM, Chris Dent wrote:

On Tue, 24 Feb 2015, Sean Dague wrote:


That also provides a very concrete answer to will people show up.
Because if they do, and we get this horizontal refactoring happening,
then we get to the point of being able to change release cadences
faster. If they don't, we remain with the existing system. Vs changing
the system and hoping someone is going to run in and backfill the 
breaks.


Isn't this the way of the world? People only put halon in the
machine room after the fire.

I agree that people showing up is a real concern, but I also think
that we shy away too much from the productive energy of stuff
breaking. It's the breakage that shows where stuff isn't good
enough.

[Flavio said]:

To this I'd also add that bug fixing is way easier when you have
aligned releases for projects that are expected to be deployed
together. It's easier to know what the impact of a change/bug is
throughout the infrastructure.


Can't this be interpreted as an excuse for making software which
does not have a low surface area and a good API?

(Note I'm taking a relatively unrealistic position for sake of
conversation.)
I'm not so sure about that. IMO, much of this goes back to the question 
of whether OpenStack services are APIs or implementations. This was 
debated with much heat at the Diablo summit (Hi Jay). I frequently have 
conversations where there is an issue about release X vs Y when it is 
really about api versions. Even if we say that we are about 
implementations as well as apis, we can start to organize our processes 
and code as if we were just apis. If each service had a well-defined, 
versioned, discoverable, well-tested api, then projects could follow 
their own release schedule, relying on distros or integrators to put the 
pieces together and verify the quality of the whole stack to the users. 
Such entities could still collaborate on that task, and still identify 
longer release cycles, using stable branches. The upstream project 
could still test the latest released versions together. Some of these 
steps are now being taken to resolve gate issues and horizontal resource 
issues. Doing this would vastly increase agility but with some costs:


1. The upstream project would likely have to give up on the worthy goal 
of providing an actual deployable stack that could be used as an 
alternative to AWS, etc. That saddens me, but for various reasons, 
including that we do no scale/performance testing on the upstream code, 
we are not achieving that goal anyway. The big tent proposals are also a 
move away from that goal.


2. We would have to give up on incompatible api changes. But with the 
replacement of nova v3 with microversions we are already doing that. 
Massive adoption with release agility is simply incompatible with 
allowing incompatible api changes.


Most of this is just echoing what Jay said. I think this is the way any 
SOA would be designed. If we did this, and projects released frequently, 
 would there be a reason for anyone to be chasing master?


 -David




Re: [openstack-dev] [api] [glance] conclusion needed on functional API

2015-02-24 Thread michael mccune

On 02/24/2015 03:09 AM, Flavio Percoco wrote:

On 22/02/15 22:43 -0500, Jay Pipes wrote:

On 02/18/2015 06:37 PM, Brian Rosmaita wrote:

Thanks for your comment, Miguel.  Your suggestion is indeed very close
to the RESTful ideal.

However, I have a question for the entire API-WG.  Our (proposed)
mission is To improve the developer experience of API users by
converging the OpenStack API to a consistent and pragmatic RESTful
design. [1]  My question is: what is the sense of pragmatic in this
sentence?  I thought it meant that we advise the designers of OpenStack
APIs to adhere to RESTful design as much as possible, but allow them to
diverge where appropriate.  The proposed functional call to deactivate
an image seems to be an appropriate place to deviate from the ideal.
 Creating a task or action object so that the POST request will create
a new resource does not seem very pragmatic.  I believe that a necessary
component of encouraging OpenStack APIs to be consistent is to allow
some pragmatism.


Hi Brian,

I'm sure you're not surprised by my lack of enthusiasm for the
functional Glance API spec for activating/deactivating an image :)

As for the role of the API WG in this kind of thing, you're absolutely
correct that the goal of the WG is to improve the developer experience
of API users with a consistent and pragmatic RESTful design.

I feel the proposed `PUT /images/{image_id}/actions/deactivate` is
neither consistent (though to be fair, the things this would be
consistent with in the Nova API -- i.e. the os-actions API -- are
monstrosities IMHO) nor pragmatic.

This kind of thing, IMHO, is not something that belongs in the same
REST API as the other Glance image API calls. It's purely an
administrative thing and belongs in a separate API, and doesn't even
need to be RESTful. The glance-manage command would be more
appropriate, with direct calls to backend database systems to flip the
status to activate/deactivate.

If this functionality really does need to be in the main user RESTful
API, I believe it should follow the existing v2 Glance API's /tasks
resource model for consistency and design reasons.
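
For concreteness, the two request shapes being weighed might look like this
(the task type name and input schema in the second form are assumptions for
illustration, not an existing Glance task):

```
# The proposed functional call:
PUT /images/{image_id}/actions/deactivate

# A tasks-style alternative, consistent with the existing /v2/tasks resource:
POST /v2/tasks
Content-Type: application/json

{"type": "deactivate", "input": {"image_id": "{image_id}"}}
```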

That said, I'm only one little voice on the API WG. Happy to hear
other's views on this topic and go with the majority's view (after
arguing for my points of course ;)



many thanks to Jay and Miguel, i think you guys are spot on in terms of 
the most RESTful way to solve this issue. i voted to support the C 
option but, given the arguments, i can see the wisdom of D.


i think the idea of pragmatism and best practices really comes into 
intersection with this issue. it seems that the most ideal solution 
would be for glance to extend their tasks resource to allow this 
activate|deactivate functionality. but it seemed like this approach was 
non-ideal given the state of the project and the plans for future 
development.


this is partially why i felt a little weird voting on what we think 
glance should do. mainly because there are really good ideas about how 
this could be solved in the most ideal RESTful fashion. but, those 
suggestions might lead to a large amount of work that will only delay 
the requested feature.




I've been hacking on the task side of Glance lately and I believe
this could actually be implemented. Something we definitely need to
figure out before this happens is whether we can make some tasks run
in a serial engine while others run in a workers-based one, for
example.

I believe there are tasks we don't want to delegate to other nodes
because they don't do anything that is heavy compute-wise.

The benefit I see from doing this using tasks is that we don't
introduce yet-another-endpoint and it gives us more flexibility from a
maintenance perspective. It'd be a matter of adding a new task to
Glance and register it.

However, the CLI implementation for tasks is P.A.I.N.F.U.L and it
requires you to write JSON in the terminal. This can definitely be
improved, though.



this response from Flavio really drives home the point for me concerning 
our recommendations to projects, unintended consequences, and pragmatic 
convergence on best practices. there are many details that we do not 
have visibility on, and this just indicates that we need to be fully 
involved with the projects we advise (duh). we should definitely be 
creating guidelines that are the best practices and designs available, 
but i think we should also make some effort to help show others the path 
from where they are currently to where we think they should be. which, 
arguably, is what we are doing =)


i think this issue is a great example of a situation where we have ideas 
about a most RESTful solution that happens to intersect poorly with the 
current state of the project. it makes me wonder about how we will 
provide guidance for a project that wants to move towards a better API 
while making incremental steps.


in this case we are creating dissonance with respect to convergence. we 
are recommending a path 

Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-24 Thread John Griffith
On Tue, Feb 24, 2015 at 10:04 AM, David Kranz dkr...@redhat.com wrote:

 On 02/24/2015 09:37 AM, Chris Dent wrote:

 On Tue, 24 Feb 2015, Sean Dague wrote:

  That also provides a very concrete answer to will people show up.
 Because if they do, and we get this horizontal refactoring happening,
 then we get to the point of being able to change release cadences
 faster. If they don't, we remain with the existing system. Vs changing
 the system and hoping someone is going to run in and backfill the breaks.


 Isn't this the way of the world? People only put halon in the
 machine room after the fire.

 I agree that people showing up is a real concern, but I also think
 that we shy away too much from the productive energy of stuff
 breaking. It's the breakage that shows where stuff isn't good
 enough.

 [Flavio said]:

 To this I'd also add that bug fixing is way easier when you have
 aligned releases for projects that are expected to be deployed
 together. It's easier to know what the impact of a change/bug is
 throughout the infrastructure.


 Can't this be interpreted as an excuse for making software which
 does not have a low surface area and a good API?

 (Note I'm taking a relatively unrealistic position for sake of
 conversation.)

 I'm not so sure about that. IMO, much of this goes back to the question of
 whether OpenStack services are APIs or implementations. This was debated
 with much heat at the Diablo summit (Hi Jay). I frequently have
 conversations where there is an issue about release X vs Y when it is
 really about api versions. Even if we say that we are about implementations
 as well as apis, we can start to organize our processes and code as if we
 were just apis. If each service had a well-defined, versioned,
 discoverable, well-tested api, then projects could follow their own release
 schedule, relying on distros or integrators to put the pieces together and
 verify the quality of the whole stack to the users. Such entities could
 still collaborate on that task, and still identify longer release cycles,
 using stable branches. The upstream project could still test the latest
 released versions together. Some of these steps are now being taken to
 resolve gate issues and horizontal resource issues. Doing this would vastly
 increase agility but with some costs:

 1. The upstream project would likely have to give up on the worthy goal of
 providing an actual deployable stack that could be used as an alternative
 to AWS, etc. That saddens me, but for various reasons, including that we do
 no scale/performance testing on the upstream code, we are not achieving
 that goal anyway. The big tent proposals are also a move away from that
 goal.

 2. We would have to give up on incompatible api changes. But with the
 replacement of nova v3 with microversions we are already doing that.
 Massive adoption with release agility is simply incompatible with allowing
 incompatible api changes.

 Most of this is just echoing what Jay said. I think this is the way any
 SOA would be designed. If we did this, and projects released frequently,
 would there be a reason for anyone to be chasing master?

  -David






Seems like some of the proposals around frequency (increasing in
particular) just sort of move the bottlenecks around.  Honestly, I thought
we were already on a path with some of the ideas that Sean and David (and
indirectly Jay) proposed.  Get rid of the whole coordinated release
altogether.  I think there should still be some sort of tagging or
something at some interval that just says here's a point in time
collection that we call X.

Another proposal I think I've talked to some folks about is a true CI/Train
model.  Cut out some of the artificial milestone deadlines etc, just keep
rolling and what's ready at the release point is what's ready; you make a
cut of what's there at that time and roll on.  Basically eliminate the
feature freeze and other components and hopefully keep feature commit
distributed.  There are certainly all sorts of gotchas here, but I don't
think it's very interesting to most so I won't go into a bunch of theory on
it.

Regardless, I do think that no matter the direction we all seem to be of
the opinion that we need to move more towards projects being responsible
for more of the horizontal functions (as Sean put it).  Personally I think
this is a great direction for a number of reasons, and I also think that it
might force us to be better about our API's and requirements than we have
been in the past.

Re: [openstack-dev] [TripleO] Midcycle Summary

2015-02-24 Thread Ben Nemec
Thanks for the summary.  A couple of comments inline.

On 02/24/2015 08:48 AM, James Slagle wrote:
 Hi Everyone,
 
 TripleO held a midcycle meetup from February 18th-20th in Seattle. Thanks to 
 HP
 for hosting the event! I wanted to send out a summary of what went on. We also
 captured some notes on an etherpad[0].
 
 The first order of business was that I volunteered to serve as PTL of TripleO
 for the remainder of the Kilo cycle after Clint Byrum announced that he was
 stepping down due to a change in focus. Thanks Clint for serving as PTL so far
 throughout Kilo!
 
 We moved on to talking about the state of TripleO in general. An immediate
 topic of discussion was CI stability, especially as all of our jobs were
 currently failing at the time. It appeared that most agreed that our actual CI
 stability was pretty good overall and that most of the failures continue to be
 caused by finding bugs in our own code and regressions in other projects that
 end up breaking TripleO. There was a lot of agreement that the TripleO CI was
 very useful and continues to find real breakages in OpenStack that are 
 otherwise
 missed.
 
 We talked a bit about streamlining the CI jobs that are run by getting rid of
 the undercloud jobs entirely or using the jenkins worker as the seed itself.
 
 As it typically tends to do, the discussion around improving our CI drifted
 into the topic of QuintupleO. Everyone seems to continue to agree that
 QuintupleO would be really helpful to CI and development environments, but 
 that
 no one has time to work on it. The idea of skipping the Ironic PXE/iscsi
 deployment process entirely and just nova boot'ing our instances as regular vm
 images was brought up as a potential way to get QuintupleO off the ground
 initially. You'd lose out on the coverage around Ironic, but it could still be
 very valuable for testing all the other stuff such as large HA deployments
 using Heat, template changes, devtest, etc.

I should note that just nova booting doesn't get you all the way to a
working devtest.  You'll still have networking issues running Neutron
inside of an OpenStack instance.

Also, Devananda pinged me recently about the Ironic IPMI listener
service that would probably let us start using this in CI, even if it
requires patches that wouldn't be available on real public clouds.  As
you noted, I haven't had a chance to followup with him about it though. :-(

 
 We moved onto talking about diskimage-builder. Due to some shifts in focus,
 there were some questions about any needed changes to the core team
 of diskimage-builder. In the end, it was more or less decided that any such
 changes would just be disruptive at this point, and that we could instead be
 reactive to any changes that might be needed in the future.
 
 There were lots of good ideas about how to improve functional testing of
 diskimage-builder and giving it a proper testsuite outside of TripleO CI.
 Functional and unit testing of the individual elements and hook scripts is 
 also
 desired. While there was half a session devoted to the unit testing aspect at
 the Paris summit, we haven't yet made a huge amount of progress in this area,
 but it sounds like that might soon change.

It's worse than that.  The half session was actually in Atlanta. ;-)

Again, time.  I started implementing some test code and it's even being
used in dib already [1], but I just haven't had time to look into what
other functionality will be required for more complete test coverage.

[1]:
https://github.com/openstack/diskimage-builder/blob/master/diskimage_builder/tests/base.py

 

[openstack-dev] [nova] novaclient functional test guidelines

2015-02-24 Thread Sean Dague
Towards the end of merging the regression test for the nova
volume-attach bug (https://review.openstack.org/#/c/157959/) there was a
discussion around what style the functional tests should take,
especially as that test had a mix of CLI and API calls in it.

Here are my thoughts for why that test ended up that way:

1) All resource setup that is table stakes for the test should be done
via the API, regardless of whether it's a CLI or API test.

The reason for this is that structured data is returned, which removes
one possible source of error in the tests: incorrect output parsing. The
API objects returned also include things like .delete methods in most
cases, which makes addCleanup a little cleaner.
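A toy sketch of the contrast guideline #1 is driving at (the fake objects below are assumptions standing in for real novaclient API returns and CLI output; nothing here touches a real cloud):

```python
class FakeVolume:
    """Stand-in for the structured object a novaclient-style API call returns."""
    def __init__(self, vol_id, size):
        self.id = vol_id
        self.size = size
        self.deleted = False

    def delete(self):
        # Having .delete on the returned object makes cleanup trivial:
        #   self.addCleanup(volume.delete)
        self.deleted = True


def api_volume_create(size):
    # API style: a structured object comes back, no parsing required.
    return FakeVolume('vol-1', size)


def cli_volume_create(size):
    # CLI style: the same information, but rendered as text.
    vol = api_volume_create(size)
    return '| id   | %s |\n| size | %d |' % (vol.id, vol.size)


def parse_cli_field(output, field):
    # The extra parsing step (and extra failure mode) that doing resource
    # setup through the CLI forces on every test.
    for line in output.splitlines():
        cells = [c.strip() for c in line.strip('|').split('|')]
        if cells and cells[0] == field:
            return cells[1]
    raise ValueError('field %r not found in CLI output' % field)
```

Multiply the parsing helper by every resource a complicated test needs to set up, and the appeal of doing setup via the API becomes clear.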

2) Main logic should touch whichever interface you are trying to test.
This was demonstrating a CLI regression, so it was important that the
volume-attach call be done over the CLI.


Now... here's where theory runs into issues.

#1 - nova boot is table stakes. Under the above guidelines it should be
called via the API. However, --poll is a CLI construct, and using it
saved writing a custom wait loop here. We should implement that custom
wait loop down the road and make this an API call.

https://github.com/openstack/python-novaclient/blob/master/novaclient/tests/functional/test_instances.py#L116
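A minimal sketch of what that API-side wait loop might look like (the function and parameter names are made up; in a real test `get_status` would wrap something like `client.servers.get(server_id).status`):

```python
import time


def wait_for_status(get_status, target='ACTIVE', timeout=60, interval=1.0):
    """Poll get_status() until it returns target, replacing the CLI-only
    --poll flag with something callable from API-based setup code."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        if status == target:
            return status
        if status == 'ERROR':
            # Fail fast instead of burning the whole timeout.
            raise RuntimeError('resource went to ERROR while waiting')
        time.sleep(interval)
    raise TimeoutError('resource did not reach %s within %ss'
                       % (target, timeout))
```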

#2 - the volume create command is table stakes. It should be an API
call. However, it can't be because the service catalog redirection only
works at the CLI layer. This is actually also the crux of bug #1423695.
The completion cache code failed for the same reason that we can't
use the API for this.

https://github.com/openstack/python-novaclient/blob/master/novaclient/tests/functional/test_instances.py#L129

#3 - the cleanup of the volume should have been an API call. See the reason in #2.

https://github.com/openstack/python-novaclient/blob/master/novaclient/tests/functional/test_instances.py#L131

#4 - the cleanup of the attachment should be an addCleanup via the API.
See the reason in #2 for why it's not.

https://github.com/openstack/python-novaclient/blob/master/novaclient/tests/functional/test_instances.py#L155


I'm happy to hear other theories about how we should do these things;
as this is the first functional test in the python-novaclient tree that
creates and destroys real resources, there isn't an established pattern
yet. But I think doing all CLI calls in CLI tests is actually really
cumbersome, especially given the amount of output-parsing code needed if
you are going to set up any complicated resource structure.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Midcycle Summary

2015-02-24 Thread Jason Rist
On 02/24/2015 07:48 AM, James Slagle wrote:
 Hi Everyone,

 TripleO held a midcycle meetup from February 18th-20th in Seattle. Thanks to 
 HP
 for hosting the event! I wanted to send out a summary of what went on. We also
 captured some notes on an etherpad[0].

 The first order of business was that I volunteered to serve as PTL of TripleO
 for the remainder of the Kilo cycle after Clint Byrum announced that he was
 stepping down due to a change in focus. Thanks Clint for serving as PTL so far
 throughout Kilo!

 We moved on to talking about the state of TripleO in general. An immediate
 topic of discussion was CI stability, especially as all of our jobs were
 currently failing at the time. It appeared that most agreed that our actual CI
 stability was pretty good overall and that most of the failures continue to be
 caused by finding bugs in our own code and regressions in other projects that
 end up breaking TripleO. There was a lot of agreement that the TripleO CI was
 very useful and continues to find real breakages in OpenStack that are 
 otherwise
 missed.

 We talked a bit about streamlining the CI jobs that are run by getting rid of
 the undercloud jobs entirely or using the jenkins worker as the seed itself.

 As it typically tends to do, the discussion around improving our CI drifted
 into the topic of QuintupleO. Everyone seems to continue to agree that
 QuintupleO would be really helpful to CI and development environments, but 
 that
 no one has time to work on it. The idea of skipping the Ironic PXE/iscsi
 deployment process entirely and just nova boot'ing our instances as regular vm
 images was brought up as a potential way to get QuintupleO off the ground
 initially. You'd lose out on the coverage around Ironic, but it could still be
 very valuable for testing all the other stuff such as large HA deployments
 using Heat, template changes, devtest, etc.

 We moved onto talking about diskimage-builder. Due to some shifts in focus,
 there were some questions about any needed changes to the core team
 of diskimage-builder. In the end, it was more or less decided that any such
 changes would just be disruptive at this point, and that we could instead be
 reactive to any changes that might be needed in the future.

 There were lots of good ideas about how to improve functional testing of
 diskimage-builder and giving it a proper testsuite outside of TripleO CI.
 Functional and unit testing of the individual elements and hook scripts is 
 also
 desired. While there was half a session devoted to the unit testing aspect at
 the Paris summit, we haven't yet made a huge amount of progress in this area,
 but it sounds like that might soon change.

 The tripleo-heat-templates was the next topic of discussion. With having
 multiple implementations in tree, we agreed it was time to deprecate the
 merge.py templates[1]. This will also free up some CI capacity for new jobs
 after the removal of those templates.

 We talked about backwards compatibility as well. The desire here was around
 maintaining the ability to deploy stable versions of OpenStack for the
 Overcloud with the TripleO tooling. Also, it was pointed out that the new
 features that have been rolling out to the TripleO templates are for the
 Overcloud only, so we're not breaking any ability to upgrade the Undercloud.

 Dan Prince gave a detailed overview of the Puppet and TripleO integration
 that's been ongoing since a little before Paris. A lot of progress has been
 made very quickly and there is now a CI job in place exercising a deployment
 via Puppet using the stackforge puppet modules. I don't think I need to go 
 into
 too much more detail here, because Dan already summarized it previously on
 list[2].

 The Puppet talk led into a discussion around the Heat breakpoints feature and
 how that might be used to provide some aspect of workflow while doing a
 deployment. There were some concerns raised that using breakpoints in that way
 was odd, especially since they're not represented in the templates at all. In
 the end, I think most agreed that there was an opportunity here to drive
 further features in Heat to meet the use cases that are trying to be solved
 around Overcloud deployments using breakpoints.

 One theme that resurfaced a few times throughout the midcycle was ways that
 TripleO could better define its interfaces to make different parts pluggable,
 even if that's just documentation initially. Doing so would allow TripleO to
 integrate more easily with existing solutions that are already in use.

 Thanks again to everyone who was able to participate in the midcycle, and as
 well to those who stayed home and did actual work...such as fixing CI.

 For other folks who attended, feel free to add some details, fill in
 any gaps, or
 disagree with my recollection of events :-).

 [0] https://etherpad.openstack.org/p/kilo-tripleo-midcycle-meetup
 [1] https://review.openstack.org/#/c/158410/
 [2] 
 

Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-24 Thread Sean Dague
On 02/24/2015 12:33 PM, John Griffith wrote:
 
 
 On Tue, Feb 24, 2015 at 10:04 AM, David Kranz dkr...@redhat.com
 mailto:dkr...@redhat.com wrote:
 
 On 02/24/2015 09:37 AM, Chris Dent wrote:
 
 On Tue, 24 Feb 2015, Sean Dague wrote:
 
 That also provides a very concrete answer to will people
 show up.
 Because if they do, and we get this horizontal refactoring
 happening,
 then we get to the point of being able to change release
 cadences
 faster. If they don't, we remain with the existing system.
 Vs changing
 the system and hoping someone is going to run in and
 backfill the breaks.
 
 
 Isn't this the way of the world? People only put halon in the
 machine room after the fire.
 
 I agree that people showing up is a real concern, but I also think
 that we shy away too much from the productive energy of stuff
 breaking. It's the breakage that shows where stuff isn't good
 enough.
 
 [Flavio said]:
 
 To this I'd also add that bug fixing is way easier when you have
 aligned releases for projects that are expected to be deployed
 together. It's easier to know what the impact of a change/bug is
 throughout the infrastructure.
 
 
 Can't this be interpreted as an excuse for making software which
 does not have a low surface area and a good API?
 
 (Note I'm taking a relatively unrealistic position for sake of
 conversation.)
 
 I'm not so sure about that. IMO, much of this goes back to the
 question of whether OpenStack services are APIs or implementations.
 This was debated with much heat at the Diablo summit (Hi Jay). I
 frequently have conversations where there is an issue about release
 X vs Y when it is really about api versions. Even if we say that we
 are about implementations as well as apis, we can start to organize
 our processes and code as if we were just apis. If each service had
 a well-defined, versioned, discoverable, well-tested api, then
 projects could follow their own release schedule, relying on distros
 or integrators to put the pieces together and verify the quality of
 the whole stack to the users. Such entities could still collaborate
 on that task, and still identify longer release cycles, using
 stable branches. The upstream project could still test the latest
 released versions together. Some of these steps are now being taken
 to resolve gate issues and horizontal resource issues. Doing this
 would vastly increase agility but with some costs:
 
 1. The upstream project would likely have to give up on the worthy
 goal of providing an actual deployable stack that could be used as
 an alternative to AWS, etc. That saddens me, but for various
 reasons, including that we do no scale/performance testing on the
 upstream code, we are not achieving that goal anyway. The big tent
 proposals are also a move away from that goal.
 
 2. We would have to give up on incompatible api changes. But with
 the replacement of nova v3 with microversions we are already doing
 that. Massive adoption with release agility is simply incompatible
 with allowing incompatible api changes.
 
 Most of this is just echoing what Jay said. I think this is the way
 any SOA would be designed. If we did this, and projects released
 frequently, would there be a reason for any one to be chasing master?
 
  -David
 
 
 
 
 
 
 
 Seems like some of the proposals around frequency (increasing in
 particular) just sort of move the bottlenecks around.  Honestly, I
 thought we were already on a path with some of the ideas that Sean and
 David (and indirectly Jay) proposed.  Get rid of the whole coordinated
 release altogether.  I think there should still be some sort of tagging
 or something at some interval that just says here's a point-in-time
 collection that we call X.
 
 Another proposal I think I've talked to some folks about is a true
 CI/Train model.  Cut out some of the artificial milestone deadlines etc,
 just keep rolling and what's ready at the release point is what's ready;
 you make a cut of what's there at that time and roll on.  Basically
 eliminate the feature freeze and other components and hopefully keep
 feature commit distributed.  There are certainly all sorts of gotchas
 

Re: [openstack-dev] [stable][all] Revisiting the 6 month release cycle

2015-02-24 Thread Johannes Erdfelt
On Tue, Feb 24, 2015, Thierry Carrez thie...@openstack.org wrote:
 Agree on the pain of maintaining milestone plans though, which is why I
 propose we get rid of most of it in Liberty. That will actually be
 discussed at the cross-project meeting today:
 
 https://wiki.openstack.org/wiki/Release_Cycle_Management/Liberty_Tracking

I'm happy to see this.

Assignees may target their blueprint to a future milestone, as an
indication of when they intend to land it (not mandatory)

That seems useless to me. I have no control over when things land. I can
only control when my code is put up for review.

Recently, I have spent a lot more time waiting on reviews than I have
spent writing the actual code.

JE




[openstack-dev] ECMP on Neutron virtual router

2015-02-24 Thread NAPIERALA, MARIA H
Does Neutron router support ECMP across multiple static routes to the same 
destination network but with different next-hops?

Maria



Re: [openstack-dev] [thirdpartyCI][cinder] Question about certification

2015-02-24 Thread Mike Perez
On 12:47 Tue 24 Feb , Eduard Matei wrote:
 The question is: does the CI need voting rights (validated), or just
 check/comment to be considered working?

See:

https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#When_thirdparty_CI_voting_will_be_required.3F

-- 
Mike Perez



Re: [openstack-dev] ECMP on Neutron virtual router

2015-02-24 Thread Kevin Benton
I wonder if there is a way we can easily abuse the extra routes extension
to do this? Maybe two routes to the same network would imply ECMP.

If not, maybe this can fit into a larger refactoring for route management
(dynamic routing, etc).
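A sketch of how that interpretation of the extra routes extension could be detected (the route dicts follow the extension's `[{'destination': ..., 'nexthop': ...}]` shape; the grouping function itself is hypothetical, not existing Neutron code):

```python
from collections import defaultdict


def ecmp_groups(routes):
    """Group extra routes by destination; any destination listed more
    than once with different nexthops would imply an ECMP group."""
    nexthops = defaultdict(set)
    for route in routes:
        nexthops[route['destination']].add(route['nexthop'])
    # Only destinations with multiple distinct nexthops are ECMP candidates.
    return {dest: sorted(hops)
            for dest, hops in nexthops.items() if len(hops) > 1}
```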
On Feb 24, 2015 11:02 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 It doesn't support this at this time.  There are no current plans to
 make it work.  I'm curious to know how you would like for this to work
 in your deployment.

 Carl

 On Tue, Feb 24, 2015 at 11:32 AM, NAPIERALA, MARIA H mn1...@att.com
 wrote:
  Does Neutron router support ECMP across multiple static routes to the
 same
  destination network but with different next-hops?
 
  Maria
 
 
 
 




Re: [openstack-dev] [Nova][Tempest] Tempest will deny extra properties on Nova v2/v2.1 API

2015-02-24 Thread David Kranz

On 02/24/2015 06:55 AM, Ken'ichi Ohmichi wrote:

Hi Ghanshyam,

2015-02-24 20:28 GMT+09:00 GHANSHYAM MANN ghanshyamm...@gmail.com:

On Tue, Feb 24, 2015 at 6:48 PM, Ken'ichi Ohmichi ken1ohmi...@gmail.com
wrote:

Hi

The Nova team is developing the Nova v2.1 API + microversions in this
cycle, and the status of the Nova v2.1 API has been changed to CURRENT
from EXPERIMENTAL.
That means new API properties should be added via microversions, and the
v2/v2.1 API (*without* microversions) should return the same response
without any new properties.
Currently, Tempest allows extra properties in a Nova API response because
we thought Tempest should not block Nova API development.

However, I think Tempest needs to deny extra properties in
non-microversion test cases because we need to block accidental
changes to the v2/v2.1 API and encourage the use of microversions for
API changes.
https://review.openstack.org/#/c/156130/ is trying to do that, but I'd
like to get opinions before that.

If the above change is merged, we cannot use Tempest on OpenStack
environments which return provider-specific extra properties.


I think it will be nice to block additional properties.

Do you mean OpenStack environment with micro-versions enabled?
In those cases too, Tempest should run successfully, as it makes requests
against the v2 or v2.1 endpoint, not a microversion.

My previous words were unclear, sorry.
By the above OpenStack environment I mean an environment which has been
customized by a cloud service provider and returns responses that
include the provider's own properties.

During the microversions discussion, we considered provider-customized
APIs as part of the design. So I guess there are some environments that
return extra properties, and Tempest will reject them if the patch is
merged. I'd like to know whether that situation is acceptable or not,
given Tempest's purpose.
Ken'ichi, can you please provide a pointer to the referenced 
microversions discussion and/or summarize the conclusion?


The commit message is saying that returning extra values without a new 
microversion is an incompatible (disallowed) change. This was already 
true, unless creating a new extension, according to 
https://wiki.openstack.org/wiki/APIChangeGuidelines.


It seems to me that extra properties (unless a syntax marks them as
such) are either allowed or not. If not, Tempest should fail on them.
If service providers are allowed to add returned properties, and are not
required to use some special syntax to distinguish them, that is a bad
API. If Tempest can't tell the difference between a legitimate added
property and someone's misspelling of an optional property, I'm not sure
how we test for the unintentional-change case.
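The trade-off described here can be made concrete with a toy validator (this is a stand-in, not Tempest code; Tempest actually uses JSON Schema, where `additionalProperties: False` plays the role of `strict=True` below):

```python
def check_response(resp, schema, strict=True):
    """Validate a response dict against {property: required} entries.

    With strict=True, any property outside the schema fails -- catching
    accidental API changes and provider-added fields alike.  With
    strict=False, a misspelled optional property and a legitimately
    added one are indistinguishable: both just show up as extras.
    """
    missing = sorted(k for k, required in schema.items()
                     if required and k not in resp)
    extra = sorted(set(resp) - set(schema))
    if missing:
        return False, 'missing required: %s' % missing
    if strict and extra:
        return False, 'unexpected properties: %s' % extra
    return True, 'ok'
```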


 -David



Thanks
Ken Ohmichi






Re: [openstack-dev] [Fuel] tracking bugs superseded by blueprints

2015-02-24 Thread Aleksey Kasatkin
I think it is better to keep such bugs open. Please see
https://blueprints.launchpad.net/fuel/+spec/granular-network-functions .
There are some related bugs there: one is fixed, another is in progress,
and two are closed. If a bug is strictly tied to a blueprint (like
https://bugs.launchpad.net/fuel/+bug/1355764 for this BP), it can be closed
almost without doubt. But some bugs can be solved separately somehow or
have workarounds. Sometimes the scope of a BP changes (e.g. it is split
into several BPs) or its timeline changes, so bugs should not be lost
without care.



Aleksey Kasatkin


On Tue, Feb 24, 2015 at 12:01 AM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Bogdan,
 I think we should keep bugs open and not supersede them with blueprints.
 I see the following reasons for it.

 Often, we can find a workaround in order to fix the bug, even if the
 bug naturally seems to fall into some blueprint's scope. The problem is
 that when you close the bug, you don't even try to think about a
 workaround - and the project gets shipped with serious gaps from
 release to release.

 Another issue is that you lose real technical requirements for blueprints.
 If you keep bugs open and associated with blueprint, you will pass by bugs
 a few times before you deliver blueprint's functionality, and finally close
 bugs if code covers bug's case. At least, I'd like it to be so.

 Finally, QA and users will keep opening duplicates, as no one will be
 happy with Won't Fix. You can vote for a bug (by marking it as
 affecting you), but unfortunately you can't vote for a blueprint in
 Launchpad. This just keeps the door open for getting feedback.

 I don't really see what we gain by moving bugs into the Won't Fix state instead.

 Examples of bugs which I would certainly avoid putting into Won't fix:
 https://bugs.launchpad.net/bugs/1398817 - disable computes by default
 during scale up
 https://bugs.launchpad.net/fuel/+bug/1422856 - separate /var  /var/log
 on master node

 Thanks,

 On Wed, Feb 18, 2015 at 8:46 PM, Andrew Woodward xar...@gmail.com wrote:

 Bogdan,

 Yes, I think tracking the bugs like this would be beneficial. We should
 also link them from the BP so that the implementer can track them.
 Launchpad shows related blueprints at the bottom of the right column,
 under the subscribers, so we should probably also edit the description
 so that the data is easy to see.

 On Wed, Feb 18, 2015 at 8:12 AM, Bogdan Dobrelya bdobre...@mirantis.com
 wrote:

 Hello.
 There is an inconsistency in the triage process for Fuel bugs superseded
 by blueprints.
 The current approach is to set the Won't Fix status for such bugs,
 but there are some cases we should clarify [0], [1].

 I vote to not track superseded bugs separately and keep them as Won't
 Fix, but to update the status back to Confirmed in case a regression is
 discovered. And if we want to backport an improvement tracked by a
 blueprint (just for an exceptional case), let's assign milestones to
 the related bugs.

 If we want to change the triage rules, let's announce that so that
 people won't get confused.

 [0] https://bugs.launchpad.net/fuel/+bug/1383741
 [1] https://bugs.launchpad.net/fuel/+bug/1422856

 --
 Best regards,
 Bogdan Dobrelya,
 Skype #bogdando_at_yahoo.com
 Irc #bogdando








 --
 Andrew
 Mirantis
 Fuel community ambassador
 Ceph community





 --
 Mike Scherbakov
 #mihgen






Re: [openstack-dev] Kerberos in OpenStack

2015-02-24 Thread Tim Bell
You may also get some information from how we set up Kerberos at CERN at 
http://openstack-in-production.blogspot.fr/2014/10/kerberos-and-single-sign-on-with.html

From my understanding, the only connection is between Keystone and the KDC. A 
standard Keystone token is issued based on the Kerberos ticket, and the rest 
is the same as if a password had been supplied.

Tim

From: Sanket Lawangare [mailto:sanket.lawang...@gmail.com]
Sent: 24 February 2015 19:53
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Kerberos in OpenStack

Hello  Everyone,


My name is Sanket Lawangare. I am a graduate student studying at The University 
of Texas at San Antonio. For my Master’s Thesis I am working on the Identity 
component of OpenStack. My research is to investigate external authentication 
with Identity (Keystone) using Kerberos.


Based on reading Jamie Lennox's blogs on the Kerberos implementation in 
OpenStack and my understanding of Kerberos, I have come up with a figure 
explaining the possible interaction of the KDC with the OpenStack client, 
Keystone, and the OpenStack services (Nova, Cinder, Swift...).

These are the Blogs -

http://www.jamielennox.net/blog/2015/02/12/step-by-step-kerberized-keystone/

http://www.jamielennox.net/blog/2013/10/22/keystone-token-binding/

I am trying to understand the working of Kerberos in OpenStack.


Please click this link to view the figure: 
https://docs.google.com/drawings/d/1re0lNbiMDTbnkrqGMjLq6oNoBtR_GA0x7NWacf0Ulbs/edit?usp=sharing


P.S. - [The steps in this figure are self-explanatory; a basic understanding 
of Kerberos is expected]


Based on the figure I had a couple of questions:


1. Are Nova or other services registered with the KDC?


2. What does Keystone do with the Kerberos ticket/credentials? Does Keystone 
authenticate the users and give them direct access to other services such as 
Nova, Swift, etc.?


3. After receiving the ticket from the KDC, does Keystone embed some 
Kerberos credential information in the token?


4. What information does the service (e.g. Nova) see in the ticket and the 
token? (Does the token have some Kerberos info or some customized info inside 
it?)


If you could share your insights and guide me on this, I would really 
appreciate it. Thank you all for your time.


Regards,

Sanket Lawangare


Re: [openstack-dev] Kerberos in OpenStack

2015-02-24 Thread Adam Young

On 02/24/2015 01:53 PM, Sanket Lawangare wrote:

Hello  Everyone,

My name is Sanket Lawangare. I am a graduate Student studying at The 
University of Texas at San Antonio. For my Master’s Thesis I am 
working on the Identity component of OpenStack. My research is to 
investigate external authentication with Identity(keystone) using 
Kerberos.



Based on reading Jamie Lennox's blogs on the Kerberos implementation in 
OpenStack and my understanding of Kerberos I have come up with a 
figure explaining possible interaction of KDC with the OpenStack 
client, keystone and the OpenStack services(Nova, Cinder, Swift...).


These are the Blogs -

http://www.jamielennox.net/blog/2015/02/12/step-by-step-kerberized-keystone/

http://www.jamielennox.net/blog/2013/10/22/keystone-token-binding/

I am trying to understand the working of Kerberos in OpenStack.


Please click this link to view the figure: 
https://docs.google.com/drawings/d/1re0lNbiMDTbnkrqGMjLq6oNoBtR_GA0x7NWacf0Ulbs/edit?usp=sharing



P.S. - [The steps in this figure are self-explanatory; a basic 
understanding of Kerberos is expected]



Based on the figure I had a couple of questions:


1. Are Nova or other services registered with the KDC?

Not yet.  Kerberos is only used for Keystone at the moment, with work 
underway to make Horizon work with Keystone.  Since many of the services 
only run in Eventlet, not in HTTPD, Kerberos is hard to support.  
Ideally, yes, we would do Kerberos direct to Nova, and either use the 
token binding mechanism, or better yet, not even provide a token...but 
that is more work.






2. What does Keystone do with the Kerberos ticket/credentials? Does 
Keystone authenticate the users and give them direct access to 
other services such as Nova, Swift, etc.?


They are used for authentication, and then the Keystone server uses the 
principal to resolve the username and user id.  The rest of the data 
comes out of LDAP.




3. After receiving the ticket from the KDC, does Keystone embed some 
Kerberos credential information in the token?


No, it is mapped to the OpenStack user id and username.



4. What information does the service (e.g. Nova) see in the ticket and 
the token? (Does the token have some Kerberos info or some 
customized info inside it?)



No kerberos ticket goes to Nova.



If you could share your insights and guide me on this, I would 
really appreciate it. Thank you all for your time.





Let me know if you have more questions.  Really let me know if you want 
to help with the coding.
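The flow described above (Kerberos authenticates, Keystone merely maps the principal to an existing user, and the ordinary token flow takes over from there) can be sketched as a toy function. This is an illustration only: real Keystone receives the already-authenticated principal via REMOTE_USER from Apache's Kerberos module, and the user lookup goes to LDAP, which `user_directory` stands in for here.

```python
def map_principal_to_user(principal, user_directory):
    """Resolve a Kerberos principal like 'alice@EXAMPLE.COM' to an
    OpenStack user record; user_directory stands in for LDAP."""
    username, _, realm = principal.partition('@')
    if not realm:
        raise ValueError('expected user@REALM, got %r' % principal)
    user = user_directory.get(username)
    if user is None:
        raise LookupError('no OpenStack user for principal %r' % principal)
    # From here on the normal token flow applies: no Kerberos data is
    # embedded in the token, and services like Nova never see the ticket.
    return {'user_id': user['id'], 'username': username}
```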




Regards,

Sanket Lawangare







Re: [openstack-dev] [oslo] Graduating oslo.reports: Request to review clean copy

2015-02-24 Thread Doug Hellmann


On Tue, Feb 24, 2015, at 04:42 PM, Solly Ross wrote:
 Hello All,
 
 I've finally had some time to finish up the graduation work for
 oslo.reports (previously
 openstack.common.report), and it should be ready for review by the Oslo
 team.  The only thing
 that I was unclear about was the sync required tools from
 oslo-incubator part.
 oslo.reports does not use any modules from oslo-incubator, and it is
 unclear what
 constitutes an appropriate script.

Those scripts have mostly moved into oslotest, so they don't need to be
synced any more to be used. If you have all of your code, and it follows
the cookiecutter template, we can look at it and propose post-import
tweaks. What's the URL for the repository?

Doug


 
 Best Regards,
 Solly Ross
 



[openstack-dev] Name field marked mandatory on Horizon

2015-02-24 Thread Yamini Sardana
Hello all,

For creating Networks, Images, Volumes, etc., the name field is marked as 
mandatory in the Horizon UI, whereas in the corresponding create 
commands on the CLI it is an optional field (which is as per the API 
reference documents).

Why is this inconsistency?

Secondly, when we create a network/image/volume from the CLI without a 
name and view it on Horizon, Horizon shows the auto-generated ID in the 
name column, which is confusing. Can we not display a separate ID column 
and show the ID there, similar to what we do in the CLI 'show' 
commands?

I have already raised this bug on Horizon for the network create option 
( https://bugs.launchpad.net/horizon/+bug/1424595 ) and, if it is 
considered valid, I will raise bugs for the rest as well.

Please suggest.

Best Regards
Yamini Sardana
Tata Consultancy Services
Ground to 8th Floors, Building No. 1  2,
Skyview Corporate Park, Sector 74A,NH 8
Gurgaon - 122 004,Haryana
India
Ph:- +91 124 6213082
Mailto: yamini.sard...@tcs.com
Website: http://www.tcs.com

Experience certainty.   IT Services
Business Solutions
Consulting





Re: [openstack-dev] [infra] Infra cloud: infra running a cloud for nodepool

2015-02-24 Thread Asselin, Ramy
I think this is really neat. As a 3rd party ci operator managing a small 
nodepool cloud, leveraging #3 would be really great!

Ramy

-Original Message-
From: James E. Blair [mailto:cor...@inaugust.com] 
Sent: Tuesday, February 24, 2015 1:19 PM
To: openstack-in...@lists.openstack.org
Cc: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [infra] Infra cloud: infra running a cloud for nodepool

A group of folks from HP is interested in starting an effort to run a cloud as 
part of the Infrastructure program with the purpose of providing resources to 
nodepool for OpenStack testing.  HP is supplying two racks of machines, and we 
will operate each as an independent cloud.
I think this is a really good idea, and will do a lot for OpenStack.

Here's what we would get out of it:

1) More test resources.  The primary goal of this cloud will be to provide more 
instances to nodepool.  This would extend our pool to include a third provider 
meaning that we are more resilient to service disruptions, and increase our 
aggregate capacity meaning we can perform more testing more quickly.  It's hard 
to say for certain until we have something spun up that we can benchmark, but 
we are hoping for somewhere between an additional 50% to 100% of our current 
capacity.

2) Closing the loop between OpenStack developers and ops.  This cloud will be 
deployed as often as we are able (perhaps daily, perhaps less often, depending 
on technology) meaning that if it is not behaving in a way developers like, 
they can fix it fairly quickly.

3) A fully open deployment.  The infra team already runs a large logstash and 
elasticsearch system for finding issues in devstack runs.
We will deploy the same technology (and perhaps more) to make sure that anyone 
who wants to inspect the operational logs from the running production cloud is 
able to do so.  We can even run the same elastic-recheck queries to see if 
known bugs are visible in production.
The cloud will be deployed using the same tools and processes as the rest of 
the project infrastructure, meaning anyone can edit the modules that deploy the 
cloud to make changes.

How is this different from the TripleO cloud?

The primary goal of the TripleO cloud is to provide test infrastructure so that 
the TripleO project can run tests that require real hardware and complex 
environments.  The primary purpose of the infra cloud will be to run a 
production service that will stand alongside other cloud providers to supply 
virtual machines to nodepool.

What about the infra team's aversion to real hardware?

It's true that all of our current resources are virtual, and this would be 
adding the first real, bare-metal machines to the infra project.
However, there are a number of reasons I feel we're ready to take that step now:

* This cloud will stand alongside two others to provide resources to
  nodepool.  If it completely fails, infra will continue to operate; so
  we don't need to be overly concerned with uptime and being on-call,
  etc.

* The deployment and operation of the cloud will use the same technology
  and processes as the infra project currently uses, so there should be
  minimal barriers for existing team members.

* A bunch of new people will be joining the team to help with this.  We
  expect them to become fully integrated with the rest of infra, so that
  they are able to help out in other areas and the whole team expands
  its collective capacity and expertise.

If this works well, it may become a template for incorporating other hardware 
contributions into the system.

Next steps:

We've started the process of identifying the steps to make this happen, as well 
as answering some deployment questions (specifics about technology, topology, 
etc).  There is a StoryBoard story for the effort:

  https://storyboard.openstack.org/#!/story/2000175

And some notes that we took at a recent meeting to bootstrap the effort:

  https://etherpad.openstack.org/p/InfraCloudBootcamp

I think one of the next steps is to actually write all that up and push it up 
as a change to the system-config documentation.  Once we're certain we agree on 
all of that, it should be safe to divide up many of the remaining tasks.

-Jim



Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-24 Thread Mark Atwood
On Tue, Feb 24, 2015, at 04:28, Kashyap Chamarthy wrote:
 
 Along with the below, if push comes to shove, OpenStack Foundation could
 probably try a milder variant (obviously, not all activities can be
 categorized as 'critical path') of Linux Foundation's Critical
 Infrastructure Protection Initiative[1] to fund certain project
 activities in need.

Speaking as a person who sits on the LF CII board meetings,
and helps turn the crank on that particular sausage mill,
no, we really don't want to go down that path at this point in
time.

-- 
Mark Atwood, Director of Open Source Engagement, HP



Re: [openstack-dev] [Nova][Tempest] Tempest will deny extra properties on Nova v2/v2.1 API

2015-02-24 Thread Kenichi Oomichi
Hi David,

 -Original Message-
 From: David Kranz [mailto:dkr...@redhat.com]
 Sent: Wednesday, February 25, 2015 4:19 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova][Tempest] Tempest will deny extra 
 properties on Nova v2/v2.1 API
 
  2015-02-24 20:28 GMT+09:00 GHANSHYAM MANN ghanshyamm...@gmail.com:
  On Tue, Feb 24, 2015 at 6:48 PM, Ken'ichi Ohmichi ken1ohmi...@gmail.com
  wrote:
  Hi
 
  Nova team is developing Nova v2.1 API + microversions in this cycle,
  and the status of Nova v2.1 API has been changed to CURRENT from
  EXPERIMENTAL.
  That said new API properties should be added via microversions, and
  v2/v2.1 API(*without* microversions) should return the same response
  without any new properties.
  Now Tempest allows extra properties of a Nova API response because we
  thought Tempest should not block Nova API development.
 
  However, I think Tempest needs to deny extra properties in
  non-microversions test cases because we need to block accidental
  changes of v2/v2.1 API and encourage to use microversions for API
  changes.
  https://review.openstack.org/#/c/156130/ is trying to do that, but I'd
  like to get opinions before that.
 
  If the above change is merged, we can not use Tempest on OpenStack
  environments which provide the original properties.
 
  I think that will be nice to block additional properties.
 
  Do you mean OpenStack environment with micro-versions enabled?
  In those cases too tempest should run successfully as it requests on V2 or
  V2.1 endpoint not on microversion.
  My previous words were unclear, sorry.
  The above OpenStack environment means the environment which is
  customized by a cloud service provider and it returns a response which
  includes the provider original properties.
 
  On microversions discussion, we considered the customized API by
  a cloud service provider for the design. Then I guess there are some
  environments return extra properties and Tempest will deny them if
  the patch is merged. I'd like to know the situation is acceptable or not
  as Tempest purpose.

 Ken'ichi, can you please provide a pointer to the referenced
 microversions discussion and/or summarize the conclusion?

OK, Christopher Yeoh mentioned future microversions in
http://lists.openstack.org/pipermail/openstack-dev/2015-February/057390.html

Now there are two API versions in Nova: v2 and v2.1.
v2 is implemented with the old framework and we will remove it in the long term.
v2.1 is implemented with the new framework and behaves the same as v2,
providing full v2 compatibility.
Microversions will be v2.2, v2.3, ... and with them we can change API behaviors
in both backwards-compatible and incompatible ways.
Clients need to specify the preferred microversion in the request header if
they want to use a microversioned API.
If they don't specify it, Nova works at the default API version, which is v2.1.
Tempest doesn't specify it now, so Tempest should check the behavior which
is the same as v2 in its default API (v2.1) tests.
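As a hedged sketch of the client side of this, assuming the Kilo-era request header name `X-OpenStack-Nova-API-Version` (the token and version values below are placeholders, not real credentials):

```python
# Sketch of how a client opts in to a Nova microversion.  Assumption:
# the Kilo-era request header is "X-OpenStack-Nova-API-Version"; the
# token value below is a placeholder, not a real credential.

def build_headers(token, microversion=None):
    """Build request headers for a Nova API call.

    Without the microversion header, Nova serves the default API
    version (v2.1, behaving the same as v2).
    """
    headers = {
        "X-Auth-Token": token,
        "Accept": "application/json",
    }
    if microversion:
        # Opting in to e.g. "2.3" unlocks microversioned behavior.
        headers["X-OpenStack-Nova-API-Version"] = microversion
    return headers

# Default request (what Tempest sends today): plain v2.1 behavior,
# so no extra properties should appear in the response.
print("X-OpenStack-Nova-API-Version" in build_headers("dummy-token"))  # False

# Microversioned request: new properties may legitimately appear.
print(build_headers("dummy-token", "2.3")["X-OpenStack-Nova-API-Version"])  # 2.3
```

This is why checking the default (headerless) responses strictly does not block API evolution: new behavior is reached by opting in.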

As Chris said in the above mail, no changes should be accepted to the old v2 API
code; that means backwards-compatible changes also should not be applied
to v2 and v2.1.

 The commit message is saying that returning extra values without a new
 microversion is an incompatible (disallowed) change.
 This was already true, unless creating a new extension, according to
 https://wiki.openstack.org/wiki/APIChangeGuidelines.

Yeah, a nice point.
We added a new dummy extension when adding new extra properties for the
above guideline, and extension numbers increased. We can use microversions
instead of adding more dummy extensions.

 Seems to me that extra properties (unless using a syntax marking them as
 such), are either allowed or not. If not, tempest should fail on them.
 If service providers are allowed to add returned properties, and not
 required to use some special syntax to distinguish them, that is a bad
 api. If tempest can't tell the difference between a legitimate added
 property and some one misspelling while returning an optional property,
 I'm not sure how we test for the unintentional change case.

We had discussed a *vendor* flag at the Kilo summit, and we decided to drop
the flag [1]. So we don't have a standard way to know the difference.
I feel Tempest can now deny extra properties as part of upstream development.

[1]: https://etherpad.openstack.org/p/kilo-nova-microversions
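For illustration, the strict checking under discussion is what JSON-Schema's `additionalProperties: false` gives Tempest's response validation. A minimal pure-Python sketch of the same idea follows; the allowed-key set is invented for the example, not Tempest's real server schema:

```python
# Pure-Python sketch of the "additionalProperties": false behavior that
# Tempest's JSON-Schema response checks would enforce.  The allowed-key
# set is illustrative only, not Tempest's actual server schema.

ALLOWED_SERVER_KEYS = {"id", "name", "status"}

def extra_properties(response, allowed=ALLOWED_SERVER_KEYS):
    """Return the set of unexpected keys in an API response.

    An empty result means the response matches the documented API.
    Anything else -- a provider-added extension or a misspelled
    optional property -- would make a strict check fail; without a
    vendor-marking syntax the two cases are indistinguishable.
    """
    return set(response) - allowed

ok = {"id": "abc", "name": "vm1", "status": "ACTIVE"}
bad = {"id": "abc", "name": "vm1", "vendor:foo": "bar"}
print(extra_properties(ok))   # set()
print(extra_properties(bad))  # {'vendor:foo'}
```

This is the crux of David's point: with no marking syntax for provider additions, denying everything unexpected is the only way to catch unintentional changes.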


Thanks
Ken Ohmichi




[openstack-dev] [python-novaclient] [python-client] Queries regarding how to run test cases of python-client in juno release

2015-02-24 Thread Rattenpal Amandeep
 Hi 

I am unable to find the script to run the test cases of the python clients in the 
Juno release.
The novaclient files are shipped into dist-packages, but there is no script to run 
the test cases.
Please help me to come out of this problem.

Thanks,
Regards 
Amandeep Rattenpal
Asst. System Engineer,
Mail to: rattenpal.amand...@tcs.com 
Web site: www.tcs.com 




Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-24 Thread Amit Kumar Saha (amisaha)
Hi,

I am new to OpenStack (and am particularly interested in networking). I am 
getting a bit confused by this discussion. Aren’t there already a few 
monolithic plugins (that is what I could understand from reading the Networking 
chapter of the OpenStack Cloud Administrator Guide, Table 7.3, Available 
networking plug-ins)? So how do we have interoperability between those (or do 
we not intend to)?

BTW, it is funny that the acronym ML can also be used for “monolithic” ☺

Regards,
Amit Saha
Cisco, Bangalore



From: Sukhdev Kapur [mailto:sukhdevka...@gmail.com]
Sent: Wednesday, February 25, 2015 6:04 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

Folks,

A great discussion. I am not an expert at OVN, hence I want to ask a question. The 
answer may make a case that it should probably be an ML2 driver as opposed to a 
monolithic plugin.

Say a customer wants to deploy an OVN-based solution and use HW devices from one 
vendor for L2 and L3 (e.g. Arista or Cisco), and wants to use another vendor for 
services (e.g. F5 or A10) - how can that be supported?

If OVN goes in as an ML2 driver, I can then run ML2 and a service plugin to achieve 
the above solution. With a monolithic plugin, don't I have an issue?

regards..
-Sukhdev


On Tue, Feb 24, 2015 at 8:58 AM, Salvatore Orlando sorla...@nicira.com wrote:
I think we're speculating a lot about what would be best for OVN whereas we 
should probably just expose pro and cons of ML2 drivers vs standalone plugin 
(as I said earlier on indeed it does not necessarily imply monolithic *)

I reckon the job of the Neutron community is to provide a full picture to OVN 
developers - so that they could make a call on the integration strategy that 
best suits them.
On the other hand, if we're planning to commit to a model where ML2 is no 
longer a plugin but the interface with the API layer, then any choice which is 
not an ML2 driver does not make any sense. Personally I'm not sure we ever want 
to do that, at least not in the near/medium term, but I'm one and hardly 
representative of the developer/operator communities.

Salvatore


* In particular, with the advanced services split out, the term monolithic simply 
does not mean anything anymore.

On 24 February 2015 at 17:48, Robert Kukura kuk...@noironetworks.com wrote:
Kyle, What happened to the long-term potential goal of ML2 driver APIs becoming 
neutron's core APIs? Do we really want to encourage new monolithic plugins?

ML2 is not a control plane - it's really just an integration point for control 
planes. Although co-existence of multiple mechanism drivers is possible, and 
sometimes very useful, the single-driver case is fully supported. Even with 
hierarchical bindings, it's not really ML2 that controls what happens - it's the 
drivers within the framework. I don't think ML2 really limits what drivers can 
do, as long as a virtual network can be described as a set of static and 
possibly dynamic network segments. ML2 is intended to impose as few constraints 
on drivers as possible.

My recommendation would be to implement an ML2 mechanism driver for OVN, along 
with any needed new type drivers or extension drivers. I believe this will 
result in a lot less new code to write and maintain.

Also, keep in mind that even if multiple driver co-existence doesn't sound 
immediately useful, there are several potential use cases to consider. One is 
that it allows new technology to be introduced into an existing cloud alongside 
what previously existed. Migration from one ML2 driver to another may be a lot 
simpler (and/or flexible) than migration from one plugin to another. Another is 
that additional drivers can support special cases, such as bare metal, 
appliances, etc..
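To make the driver-versus-plugin trade-off concrete, here is a self-contained sketch of the mechanism-driver pattern Bob describes. Real drivers subclass neutron.plugins.ml2.driver_api.MechanismDriver; the stub base class and the OVN driver below are illustrative stand-ins under that assumption, not actual Neutron or OVN code.

```python
# Self-contained sketch of the ML2 mechanism-driver pattern: ML2 does
# the API/DB work and calls every registered driver at pre- and
# post-commit points.  Real drivers subclass
# neutron.plugins.ml2.driver_api.MechanismDriver; this stub base class
# and the OVN driver are illustrative stand-ins only.

class MechanismDriver(object):
    """Stub of (a subset of) the ML2 driver interface."""

    def initialize(self):
        pass

    def create_network_precommit(self, context):
        """Validate inside the DB transaction; raise to abort it."""
        pass

    def create_network_postcommit(self, context):
        """Push state to the backend after the DB commit."""
        pass


class OVNMechanismDriver(MechanismDriver):
    """Hypothetical driver that records networks in its backend."""

    def __init__(self):
        self.backend_networks = []

    def create_network_postcommit(self, context):
        # A real driver would write to the OVN northbound DB here.
        self.backend_networks.append(context["network"]["id"])


# ML2 itself iterates over all registered drivers, which is what lets
# multiple control planes co-exist (or a single driver run alone):
drivers = [OVNMechanismDriver()]
ctx = {"network": {"id": "net-1"}}
for d in drivers:
    d.create_network_precommit(ctx)
for d in drivers:
    d.create_network_postcommit(ctx)
print(drivers[0].backend_networks)  # ['net-1']
```

The point of the pattern is that the framework owns the API and database transaction while each driver owns only its backend integration, which is why a single-driver deployment costs little extra code.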

-Bob

On 2/24/15 11:11 AM, Kyle Mestery wrote:
On Tue, Feb 24, 2015 at 3:19 AM, Salvatore Orlando sorla...@nicira.com wrote:
On 24 February 2015 at 01:34, Kyle Mestery mest...@mestery.com wrote:
Russell and I have already merged the initial ML2 skeleton driver [1].
The thinking is that we can always revert to a non-ML2 driver if needed.

If nothing else an authoritative decision on a design direction saves us the 
hassle of going through iterations and discussions.
The integration through ML2 is definitely viable. My opinion however is that 
since OVN implements a full control plane, the control plane bits provided by 
ML2 are not necessary, and a plugin which provides only management layer 
capabilities might be the best solution. Note: this does not mean it has to be 
monolithic. We can still do L3 with a service plugin.
However, since the same kind of approach has been adopted for ODL I guess this 
provides some sort of validation.

To be honest, after thinking about this last night, I'm now leaning towards 
doing this as a full plugin. I don't really envision OVN 

Re: [openstack-dev] [Neutron] (RE: Change in openstack/neutron-specs[master]: Introducing Tap-as-a-Service)

2015-02-24 Thread Kyle Mestery
There is a -2 (from me). And this was done from the auto-abandon script
which I try to run once a month.

As Kevin said, the suggestion multiple times was to do a StackForge project
for this work, that's the best way forward here.

On Tue, Feb 24, 2015 at 5:01 PM, CARVER, PAUL pc2...@att.com wrote:

 Maybe I'm misreading review.o.o, but I don't see the -2. There was a -2
 from Salvatore Orlando with the comment The -2 on this patch is only to
 deter further comments and a link to 140292, but 140292 has a comment from
 Kyle saying it's been abandoned in favor of going back to 96149. Are we in
 a loop here?

 We're moving forward internally with proprietary mechanisms for attaching
 analyzers but it sure would be nice if there were a standard API. Anybody
  who thinks switches don't need SPAN/mirror ports has probably never worked
 in Operations on a real production network where SLAs were taken seriously
 and enforced.

 I know there's been a lot of heated discussion around this spec for a
 variety of reasons, but there isn't an enterprise class hardware switch on
 the market that doesn't support SPAN/mirror. Lack of this capability is a
 glaring omission in Neutron that keeps Operations type folks opposed to
 using it because it causes them to lose visibility that they've had for
 ages. We're getting a lot of pressure to continue deploying hardware
 analyzers and/or deploy non-OpenStack mechanisms for implementing
 tap/SPAN/mirror capability when I'd much rather integrate the analyzers
 into OpenStack.


 -Original Message-
 From: Kyle Mestery (Code Review) [mailto:rev...@openstack.org]
 Sent: Tuesday, February 24, 2015 17:37
 To: vinay yadhav
 Cc: CARVER, PAUL; Marios Andreou; Sumit Naiksatam; Anil Rao; Carlos
 Gonçalves; YAMAMOTO Takashi; Ryan Moats; Pino de Candia; Isaku Yamahata;
 Tomoe Sugihara; Stephen Wong; Kanzhe Jiang; Bao Wang; Bob Melander;
 Salvatore Orlando; Armando Migliaccio; Mohammad Banikazemi; mark mcclain;
 Henry Gessau; Adrian Hoban; Hareesh Puthalath; Subrahmanyam Ongole; Fawad
 Khaliq; Baohua Yang; Maruti Kamat; Stefano Maffulli 'reed'; Akihiro Motoki;
 ijw-ubuntu; Stephen Gordon; Rudrajit Tapadar; Alan Kavanagh; Zoltán Lajos
 Kis
 Subject: Change in openstack/neutron-specs[master]: Introducing
 Tap-as-a-Service

 Kyle Mestery has abandoned this change.

 Change subject: Introducing Tap-as-a-Service
 ..


 Abandoned

 This review is more than 4 weeks without comment and currently blocked by a core
 reviewer with a -2. We are abandoning this for now. Feel free to reactivate
 the review by pressing the restore button and contacting the reviewer with
 the -2 on this review to ensure you address their concerns.

 --
 To view, visit https://review.openstack.org/96149
 To unsubscribe, visit https://review.openstack.org/settings

 Gerrit-MessageType: abandon
 Gerrit-Change-Id: I087d9d2a802ea39c02259f17d2b8c4e2f6d8d714
 Gerrit-PatchSet: 8
 Gerrit-Project: openstack/neutron-specs
 Gerrit-Branch: master
 Gerrit-Owner: vinay yadhav vinay.yad...@ericsson.com
 Gerrit-Reviewer: Adrian Hoban adrian.ho...@intel.com
 Gerrit-Reviewer: Akihiro Motoki amot...@gmail.com
 Gerrit-Reviewer: Alan Kavanagh alan.kavan...@ericsson.com
 Gerrit-Reviewer: Anil Rao arao...@gmail.com
 Gerrit-Reviewer: Armando Migliaccio arma...@gmail.com
 Gerrit-Reviewer: Bao Wang baowan...@yahoo.com
 Gerrit-Reviewer: Baohua Yang bao...@linux.vnet.ibm.com
 Gerrit-Reviewer: Bob Melander bob.melan...@gmail.com
 Gerrit-Reviewer: Carlos Gonçalves m...@cgoncalves.pt
 Gerrit-Reviewer: Fawad Khaliq fa...@plumgrid.com
 Gerrit-Reviewer: Hareesh Puthalath hareesh.puthal...@gmail.com
 Gerrit-Reviewer: Henry Gessau ges...@cisco.com
 Gerrit-Reviewer: Isaku Yamahata yamahata.rev...@gmail.com
 Gerrit-Reviewer: Jenkins
 Gerrit-Reviewer: Kanzhe Jiang kan...@gmail.com
 Gerrit-Reviewer: Kyle Mestery mest...@mestery.com
 Gerrit-Reviewer: Marios Andreou mar...@redhat.com
 Gerrit-Reviewer: Maruti Kamat maruti.ka...@hp.com
 Gerrit-Reviewer: Mohammad Banikazemi m...@us.ibm.com
 Gerrit-Reviewer: Paul Carver pcar...@att.com
 Gerrit-Reviewer: Pino de Candia gdecan...@midokura.com
 Gerrit-Reviewer: Rudrajit Tapadar rudrajit.tapa...@gmail.com
 Gerrit-Reviewer: Ryan Moats rmo...@us.ibm.com
 Gerrit-Reviewer: Salvatore Orlando salv.orla...@gmail.com
 Gerrit-Reviewer: Stefano Maffulli 'reed' stef...@openstack.org
 Gerrit-Reviewer: Stephen Gordon sgor...@redhat.com
 Gerrit-Reviewer: Stephen Wong stephen.kf.w...@gmail.com
 Gerrit-Reviewer: Subrahmanyam Ongole song...@oneconvergence.com
 Gerrit-Reviewer: Sumit Naiksatam sumitnaiksa...@gmail.com
 Gerrit-Reviewer: Tomoe Sugihara to...@midokura.com
 Gerrit-Reviewer: Welcome, new contributor!
 Gerrit-Reviewer: YAMAMOTO Takashi yamam...@valinux.co.jp
 Gerrit-Reviewer: Zoltán Lajos Kis zoltan.lajos@ericsson.com
 Gerrit-Reviewer: ijw-ubuntu iawe...@cisco.com
 Gerrit-Reviewer: mark mcclain m...@mcclain.xyz
 Gerrit-Reviewer: vinay yadhav 

Re: [openstack-dev] [oslo] Graduating oslo.reports: Request to review clean copy

2015-02-24 Thread Solly Ross
 
 Those scripts have mostly moved into oslotest, so they don't need to be
 synced any more to be used. If you have all of your code, and it follows
 the cookiecutter template, we can look at it and propose post-import
 tweaks. What's the URL for the repository?

Heh, whoops.  I should probably have included that.  It's at
https://github.com/directxman12/oslo.reports

Thanks!

Best Regards,
Solly



Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2015-02-24 Thread henry hly
So are we talking about using a script to eliminate unnecessary new vif types?

Then, a little confusion: why is this BP [1] postponed to L, while
this BP [2] is merged in K?

[1]  https://review.openstack.org/#/c/146914/
[2]  https://review.openstack.org/#/c/148805/

In fact [2] can be replaced by [1] with a customized vrouter script, with no
need for a totally new vif type introduced in the K cycle.

On Thu, Feb 19, 2015 at 3:42 AM, Brent Eagles beag...@redhat.com wrote:
 Hi,

 On 18/02/2015 1:53 PM, Maxime Leroy wrote:
 Hi Brent,

 snip/

 Thanks for your help on this feature. I have just created a channel
 irc: #vif-plug-script-support to speak about it.
 I think it will help to synchronize effort on vif_plug_script
 development. Anyone is welcome on this channel!

 Cheers,
 Maxime

 Thanks Maxime. I've made some updates to the etherpad.
 (https://etherpad.openstack.org/p/nova_vif_plug_script_spec)
 I'm going to start some proof of concept work this evening. If I get
 anything worth reading, I'll put it up as a WIP/Draft review. Whatever
 state it is in I will be pushing up bits and pieces to github.

 https://github.com/beagles/neutron_hacking vif-plug-script
 https://github.com/beagles/nova vif-plug-script

 Cheers,

 Brent





[openstack-dev] [Neutron] (RE: Change in openstack/neutron-specs[master]: Introducing Tap-as-a-Service)

2015-02-24 Thread CARVER, PAUL
Maybe I'm misreading review.o.o, but I don't see the -2. There was a -2 from 
Salvatore Orlando with the comment The -2 on this patch is only to deter 
further comments and a link to 140292, but 140292 has a comment from Kyle 
saying it's been abandoned in favor of going back to 96149. Are we in a loop 
here?

We're moving forward internally with proprietary mechanisms for attaching 
analyzers but it sure would be nice if there were a standard API. Anybody who 
thinks switches don't need SPAN/mirror ports has probably never worked in 
Operations on a real production network where SLAs were taken seriously and 
enforced.

I know there's been a lot of heated discussion around this spec for a variety 
of reasons, but there isn't an enterprise class hardware switch on the market 
that doesn't support SPAN/mirror. Lack of this capability is a glaring omission 
in Neutron that keeps Operations type folks opposed to using it because it 
causes them to lose visibility that they've had for ages. We're getting a lot 
of pressure to continue deploying hardware analyzers and/or deploy 
non-OpenStack mechanisms for implementing tap/SPAN/mirror capability when I'd 
much rather integrate the analyzers into OpenStack.


-Original Message-
From: Kyle Mestery (Code Review) [mailto:rev...@openstack.org] 
Sent: Tuesday, February 24, 2015 17:37
To: vinay yadhav
Cc: CARVER, PAUL; Marios Andreou; Sumit Naiksatam; Anil Rao; Carlos Gonçalves; 
YAMAMOTO Takashi; Ryan Moats; Pino de Candia; Isaku Yamahata; Tomoe Sugihara; 
Stephen Wong; Kanzhe Jiang; Bao Wang; Bob Melander; Salvatore Orlando; Armando 
Migliaccio; Mohammad Banikazemi; mark mcclain; Henry Gessau; Adrian Hoban; 
Hareesh Puthalath; Subrahmanyam Ongole; Fawad Khaliq; Baohua Yang; Maruti 
Kamat; Stefano Maffulli 'reed'; Akihiro Motoki; ijw-ubuntu; Stephen Gordon; 
Rudrajit Tapadar; Alan Kavanagh; Zoltán Lajos Kis
Subject: Change in openstack/neutron-specs[master]: Introducing Tap-as-a-Service

Kyle Mestery has abandoned this change.

Change subject: Introducing Tap-as-a-Service
..


Abandoned

This review is more than 4 weeks without comment and currently blocked by a core 
reviewer with a -2. We are abandoning this for now. Feel free to reactivate the 
review by pressing the restore button and contacting the reviewer with the -2 
on this review to ensure you address their concerns.



Re: [openstack-dev] [all][oslo] Dealing with database connection sharing issues

2015-02-24 Thread Robert Collins
On 23 February 2015 at 13:54, Michael Bayer mba...@redhat.com wrote:

 Correct me if I'm wrong but the register_after_fork seems to apply only to
 the higher level Process abstraction.   If someone calls os.fork(), as is
 the case now, there's no hook to use.

 Hence the solution I have in place right now, which is that Oslo.db *can*
 detect a fork and adapt at the most basic level by checking for os.getpid()
 and recreating the connection, no need for anyone to call engine.dispose()
 anywhere. But that approach has been rejected.  Because the caller of the
 library should be aware they're doing this.

 If we can all read the whole thread here each time and be on the same page
 about what is acceptable and what's not, that would help.

I've read the whole thread :).

I don't agree with the rejection you received :(.

Here are my principles in the design:
 - oslo.db is meant to be a good [but opinionated] general purpose db
library: it is by and for OpenStack, but it can only assume as givens
those things which are guaranteed the same for all OpenStack projects,
and which we can guarantee we don't want to change in future.
Everything else it needs to do the usual thing of offering interfaces
and extension points where its behaviour can be modified.
 - failing closed is usually much much better than failing open. Other
libraries and app code may do things oslo.db doesn't expect, and
oslo.db failing in a hard to debug fashion is a huge timewaste for
everyone involved.
 - faults should be trapped as close to the moment that it happened as
possible. That is, at the first sign.
 - correctness is more important than aesthetics: ugly but doing the
right thing is better than looking nice but breaking.
 - where we want to improve things in a program in a way thats
incompatible, we should consider a deprecation period.


Concretely, I think we should do the following:
 - in oslo.db today, detect the fork and reopen the connection (so the
user's code works); and log a DEBUG/TRACE level message that this is a
deprecated pattern and will be removed.
 - follow that up with patches to all the projects to prevent this
happening at all
 - wait until we're no longer doing security fixes to any branch with
the pre-fixed code
 - at the next major release of oslo.db, change it from deprecated to
hard failure

That gives a graceful migration path and ensures safety.

As to the potential for someone to deliberately:
 - open an oslo.db connection
 - fork
 - expect it to work

I say phoooey. Pre-forking patterns don't need this (the parent won't use
the connection before work is handed off to the child). Privilege dropping
patterns could potentially use this, but they are rare enough that
they can explicitly close the connection and make a new one after the
fork. In general anything related to fork is going to break and one
should re-establish things after forking. The exceptions are
sufficiently rare that I think we can defer adding apis to support
them (e.g. a way to say 'ok, refresh your cache of the pid now') until
someone actually wants that.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-24 Thread Jeremy Stanley
On 2015-02-25 01:02:07 +0530 (+0530), Bharat Kumar wrote:
[...]
 After running 971 test cases VM inaccessible for 569 ticks
[...]

Glad you're able to reproduce it. For the record that is running
their 8GB performance flavor with a CentOS 7 PVHVM base image. The
steps to recreate are http://paste.openstack.org/show/181303/ as
discussed in IRC (for the sake of others following along). I've held
a similar worker in HPCloud (15.126.235.20) which is a 30GB flavor
artificially limited to 8GB through a kernel boot parameter.
Hopefully following the same steps there will help either confirm
the issue isn't specific to running in one particular service
provider, or will yield some useful difference which could help
highlight the cause.

Either way, once 104.239.136.99 and 15.126.235.20 are no longer
needed, please let one of the infrastructure root admins know to
delete them.
-- 
Jeremy Stanley



Re: [openstack-dev] ECMP on Neutron virtual router

2015-02-24 Thread henry hly
On Wed, Feb 25, 2015 at 3:11 AM, Kevin Benton blak...@gmail.com wrote:
 I wonder if there is a way we can easily abuse the extra routes extension to
 do this? Maybe two routes to the same network would imply ECMP.


It's a good idea, and we deploy a system with a similar concept (via extra
routes) through a tiny patch on the existing neutron L3 plugin and agent code.
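One way such a tiny patch could render duplicate-destination extra routes as ECMP is sketched below (illustrative only; the real L3 agent applies routes differently, and the route dict shape is an assumption):

```python
from collections import defaultdict

def build_route_commands(extra_routes):
    """Collapse extra routes sharing a destination into one ECMP route."""
    by_dest = defaultdict(list)
    for route in extra_routes:
        by_dest[route['destination']].append(route['nexthop'])
    commands = []
    for dest, hops in sorted(by_dest.items()):
        if len(hops) == 1:
            commands.append("ip route replace %s via %s" % (dest, hops[0]))
        else:
            # Two or more extra routes to the same network imply ECMP.
            nexthops = " ".join("nexthop via %s" % h for h in hops)
            commands.append("ip route replace %s %s" % (dest, nexthops))
    return commands
```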

 If not, maybe this can fit into a larger refactoring for route management
 (dynamic routing, etc).

 On Feb 24, 2015 11:02 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 It doesn't support this at this time.  There are no current plans to
 make it work.  I'm curious to know how you would like for this to work
 in your deployment.

 Carl

 On Tue, Feb 24, 2015 at 11:32 AM, NAPIERALA, MARIA H mn1...@att.com
 wrote:
  Does Neutron router support ECMP across multiple static routes to the
  same
  destination network but with different next-hops?
 
  Maria
 
 
 


Re: [openstack-dev] [Neutron] db-level locks, non-blocking algorithms, active/active DB clusters and IPAM

2015-02-24 Thread Robert Collins
On 24 February 2015 at 01:07, Salvatore Orlando sorla...@nicira.com wrote:
 Lazy-Stacker summary:
...
 In the medium term, there are a few things we might consider for Neutron's
 built-in IPAM.
 1) Move the allocation logic out of the driver, thus making IPAM an
 independent service. The API workers will then communicate with the IPAM
 service through a message bus, where IP allocation requests will be
 naturally serialized
 2) Use 3-party software as dogpile, zookeeper but even memcached to
 implement distributed coordination. I have nothing against it, and I reckon
 Neutron can only benefit for it (in case you're considering of arguing that
 it does not scale, please also provide solid arguments to support your
 claim!). Nevertheless, I do believe API request processing should proceed
 undisturbed as much as possible. If processing an API requests requires
 distributed coordination among several components then it probably means
 that an asynchronous paradigm is more suitable for that API request.

So data is great. It sounds like as long as we have an appropriate
retry decorator in place, write locks are better here, at least
for up to 30 threads. But can we trust the data?

One thing I'm not clear on is the SQL statement count.  You say 100
queries for A-1 with a time on Galera of 0.06*1.2=0.072 seconds per
allocation? So is that 2 queries over 50 allocations over 20 threads?

I'm not clear on what the request parameter in the test json files
does, and AFAICT your threads each do one request each. As such I
suspect that you may be seeing less concurrency - and thus contention
- than real-world setups where APIs are deployed to run worker
processes in separate processes and requests are coming in
willy-nilly. The size of each algorithm's workload is so small that it's
feasible to imagine a thread completing before the GIL's bytecode-count
check triggers (see
https://docs.python.org/2/library/sys.html#sys.setcheckinterval), and
the GIL's lack of fairness would exacerbate that.

If I may suggest:
 - use multiprocessing or some other worker-pool approach rather than threads
 - or set setcheckinterval down low (e.g. to 20 or something)
 - do multiple units of work (in separate transactions) within each
worker, aim for e.g. 10 seconds of work or some such.
 - log with enough detail that we can report on the actual concurrency
achieved. E.g. log the time in us when each transaction starts and
finishes, then we can assess how many concurrent requests were
actually running.
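A harness along those lines might look like this (do_allocation() is a placeholder standing in for the IPAM transaction under test, not real Neutron code):

```python
import multiprocessing
import time

def do_allocation():
    time.sleep(0.001)  # placeholder for one IP-allocation transaction

def worker(results, duration):
    """Run units of work until the deadline, recording per-unit timings."""
    timings = []
    deadline = time.time() + duration
    while time.time() < deadline:
        start_us = int(time.time() * 1e6)
        do_allocation()
        end_us = int(time.time() * 1e6)
        timings.append((start_us, end_us))
    results.put(timings)

def run(num_workers=4, duration=10.0):
    # Separate processes sidestep the GIL fairness issues described above.
    results = multiprocessing.Queue()
    procs = [multiprocessing.Process(target=worker, args=(results, duration))
             for _ in range(num_workers)]
    for p in procs:
        p.start()
    # Drain before joining so a full queue cannot block child exit.
    all_timings = [results.get() for _ in procs]
    for p in procs:
        p.join()
    return all_timings
```

The per-transaction start/finish microsecond pairs are exactly what's needed to report the actual concurrency achieved afterwards.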

If the results are still the same - great, full steam ahead. If not,
well lets revisit :)

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



[openstack-dev] [manuals] Training guide issue

2015-02-24 Thread Ajay Kalambur (akalambu)
Hi
I am trying to get started with OpenStack commits and wanted to start by
fixing some documentation bugs. I assigned myself 3 bugs which seem to be in
the same file/area:

https://bugs.launchpad.net/openstack-training-guides/+bug/1380153
https://bugs.launchpad.net/openstack-training-guides/+bug/1380155
https://bugs.launchpad.net/openstack-training-guides/+bug/1380156


The file seems to be located under the openstack-manuals repository, since I
found the XML file there. But the bugs are filed under OpenStack Training
Guides, which seems to be a different git repo where this file is not present.

Can someone help me understand what's going on here?
Ajay



Re: [openstack-dev] [OpenStack-Infra] [infra] Infra cloud: infra running a cloud for nodepool

2015-02-24 Thread Chmouel Boudjnah
cor...@inaugust.com (James E. Blair) writes:

 A group of folks from HP is interested in starting an effort to run a
 cloud as part of the Infrastructure program with the purpose of
 providing resources to nodepool for OpenStack testing.  HP is supplying
 two racks of machines, and we will operate each as an independent cloud.
 I think this is a really good idea, and will do a lot for OpenStack.

 Here's what we would get out of it:

Pretty cool! Thanks to HP for providing this. If that's possible (with
HP, and if infra wants to allow it), it would be nice to allow a dev
to log in to the failing VM for investigation.

Cheers,
Chmouel



Re: [openstack-dev] client library release versions

2015-02-24 Thread Robert Collins
On 25 February 2015 at 11:18, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:



 I was hoping to do a sqlalchemy-migrate release this week so I'm interested
 in not screwing that up. :)

 The current release is 0.9.4 and there was one change to requirements.txt,
 cee9136, since then, so if I'm reading this correctly the next version for
 sqlalchemy-migrate should really be 0.10.0.

So as it's a 0.* project today, the convention we usually follow is to
right-shift the versions: X.Y.Z -> 0.X.Y, because it's not stable.

So, a Y change in such a project would give you 0.9.5. And we say this
is OK because only other no-public-API projects should be depending on
it, and we're expecting to break the API lots and need a way to signal
that (an X equivalent) without first committing to a public API.
We're perhaps not following that rule all that well (in
requirements.txt constraints...)
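The right-shift rule of thumb can be written out as a toy helper (this merely encodes the convention for clarity; it is not pbr's actual implementation):

```python
def next_version(current, change):
    """Pick the next semver for a change: 'api-break', 'feature',
    'dep-change' (treated like a feature), or 'bugfix'."""
    major, minor, patch = (int(p) for p in current.split('.'))
    if major == 0:
        # Unstable 0.X.Y projects right-shift: an API break bumps the
        # middle number, everything else lands in the last slot.
        if change == 'api-break':
            return "0.%d.0" % (minor + 1)
        return "0.%d.%d" % (minor, patch + 1)
    if change == 'api-break':
        return "%d.0.0" % (major + 1)
    if change in ('feature', 'dep-change'):
        return "%d.%d.0" % (major, minor + 1)
    return "%d.%d.%d" % (major, minor, patch + 1)
```

Under this reading, sqlalchemy-migrate's requirements change takes 0.9.4 to 0.9.5 while it stays a 0.* project, and a dependency change in a 1.x project bumps the minor number.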

 Regarding public API and 1.x.y, I don't think there is really anything
 holding sqlalchemy-migrate back from that, it's hella old so we should
 probably be 1.0.0 by now.

+1.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



[openstack-dev] [Fuel] Bug statuses definition in Fuel

2015-02-24 Thread Mike Scherbakov
Hi Dmitry,
thanks for extending the info on what is different in Fuel for the
Confirmed & Fix Released statuses [1].

It is pretty hard to go through the history of the page now, but I think I
like the original OpenStack Importance description in [2] better than the
Fuel-specific one [3]:

 - Critical = can't deploy anything and there's no trivial workaround; data
 loss; or security vulnerability

- High = specific hardware, configurations, or components are unusable and
 there's no workaround; or everything is broken but there's a workaround

So, "can't deploy anything" is Critical. If you can deploy the cloud, but it
doesn't work afterwards - that is not Critical anymore with the current
description. I do not think it is merely High that you can't open the
Horizon page after deployment.

Why don't we stick to the original OpenStack criteria?

 - Critical if the bug prevents a key feature from working properly
 (regression) for all users (or without a simple workaround) or result in
 data loss
 - High if the bug prevents a key feature from working properly for some
 users (or with a workaround)



[1]
https://wiki.openstack.org/w/index.php?title=Fuel%2FHow_to_contributediff=73079oldid=72329
[2]
https://wiki.openstack.org/wiki/BugTriage#Task_2:_Prioritize_confirmed_bugs_.28bug_supervisors.29

[3]
https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Confirm_and_triage_bugs

On Thu, Feb 5, 2015 at 8:22 PM, Dmitry Mescheryakov 
dmescherya...@mirantis.com wrote:

 Guys,

 I was looking for a page where bug statuses in out LP projects are
 described and found none. Mike suggested to add this to How To Contribute
 page and so I did. Please take a look at the section [1], just to make sure
 that we are on the same page. The status descriptions are located in the
 second from the top list.

 Thanks,

 Dmitry

 [1]
 https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Confirm_and_triage_bugs




-- 
Mike Scherbakov
#mihgen


Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-24 Thread Robert Collins
On 25 February 2015 at 13:13, Jeremy Stanley fu...@yuggoth.org wrote:
 On 2015-02-24 11:27:05 + (+), Daniel P. Berrange wrote:
 [...]
 It would be reasonable for the vulnerability team to take the decision
 that they'll support fixes for master, and any branches that the stable
 team decide to support.
 [...]

 Well, it's worth noting that the VMT doesn't even support (i.e.
 issue advisories for bugs in) master branches now, the exception
 being branchless projects where the bug appears in master prior to
 an existing release tag.

 But I think Thierry's earlier point is that as soon as you start
 marking _some_ releases as special (supported by VMT, stable maint,
 docs, translators...) then those become your new actual releases and
 the other interim releases become your milestones, and we're back to
 the current model again.

I don't think that's true actually. We'd still have a major smoothing
effect on work, which means lower peaks at release time and shallower
troughs at 'review direction' time and so on.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] client library release versions

2015-02-24 Thread Matt Riedemann



On 2/24/2015 3:34 PM, Robert Collins wrote:

Hi, in the cross project meeting a small but important thing came up.

Most (all?) of our client libraries run with semver: x.y.z version
numbers. http://semver.org/ and
http://docs.openstack.org/developer/pbr/semver.html

However we're seeing recent releases that are bumping .z inappropriately.

This makes the job of folk writing version constraints harder :(.

*most* of our releases should be an increment of .y - so 1.2.0, 1.3.0
etc. The only time a .z increase is expected is for
backwards-compatible bug fixes. [1]

In particular, changing a dependency version is probably never a .z
increase, except - perhaps - when the dependency itself only changed
.z, and so on transitively.

Adding or removing a dependency really can't ever be a .z increase.

We're nearly finished on the pbr support to help automate the decision
making process, but the rule of thumb - expect to do .y increases - is
probably good enough for a while yet.

-Rob

[1]: The special case is for projects that have not yet committed to a
public API - 0.x.y versions. Don't do that. Commit to a public API :)



I was hoping to do a sqlalchemy-migrate release this week so I'm 
interested in not screwing that up. :)


The current release is 0.9.4 and there was one change to 
requirements.txt, cee9136, since then, so if I'm reading this correctly 
the next version for sqlalchemy-migrate should really be 0.10.0.


Regarding public API and 1.x.y, I don't think there is really anything 
holding sqlalchemy-migrate back from that, it's hella old so we should 
probably be 1.0.0 by now.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova] novaclient functional test guidelines

2015-02-24 Thread Joe Gordon
On Tue, Feb 24, 2015 at 12:30 PM, Sean Dague s...@dague.net wrote:

 On 02/24/2015 03:28 PM, Ed Leafe wrote:
  On Feb 24, 2015, at 1:49 PM, Sean Dague s...@dague.net wrote:
 
  IMHO the CLI should have an option to returned raw JSON back instead of
  pretty tabled results as well.
 
  Um... isn't that just the API calls?
 
  I'm not sure creating a 3rd functional surface is really the answer
  here, because we still need to actually test the CLI / pretty table
 output.
 
  The python-openstacksdk project was originally envisioned to wrap the
 API calls and return usable Python objects. The nova client CLI (or any
 other CLI, for that matter) would then just provide the command line input
 parsing and output presentation. It's been a while since I was involved
 with that project, but it seems that decoupling the command line interface
 from the Python API wrapper would make testing much, much easier.

 Right, I think to some degree novaclient is legacy code, and we should
 focus on specific regressions and bugs without doing too much code change.

 The future should be more focussed on openstacksdk and openstackclient.


While I don't disagree with this, it seems like we have been waiting for
openstackclient and openstacksdk for a while now, do we even have a
timeline for moving off of novaclient?



 -Sean

 --
 Sean Dague
 http://dague.net




Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-24 Thread Jeremy Stanley
On 2015-02-24 11:27:05 + (+), Daniel P. Berrange wrote:
[...]
 It would be reasonable for the vulnerability team to take the decision
 that they'll support fixes for master, and any branches that the stable
 team decide to support.
[...]

Well, it's worth noting that the VMT doesn't even support (i.e.
issue advisories for bugs in) master branches now, the exception
being branchless projects where the bug appears in master prior to
an existing release tag.

But I think Thierry's earlier point is that as soon as you start
marking _some_ releases as special (supported by VMT, stable maint,
docs, translators...) then those become your new actual releases and
the other interim releases become your milestones, and we're back to
the current model again.
-- 
Jeremy Stanley



Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-24 Thread Sukhdev Kapur
Folks,

A great discussion. I am no expert at OVN, hence I want to ask a question.
The answer may make a case that it should probably be an ML2 driver as
opposed to a monolithic plugin.

Say a customer wants to deploy an OVN based solution and use HW devices from
one vendor for L2 and L3 (e.g. Arista or Cisco), and want to use another
vendor for services (e.g. F5 or A10) - how can that be supported?

If OVN goes in as an ML2 driver, I can then run ML2 and a service plugin to
achieve the above solution. With a monolithic plugin, don't I have an issue?

regards..
-Sukhdev


On Tue, Feb 24, 2015 at 8:58 AM, Salvatore Orlando sorla...@nicira.com
wrote:

 I think we're speculating a lot about what would be best for OVN whereas
  we should probably just expose pros and cons of ML2 drivers vs standalone
 plugin (as I said earlier on indeed it does not necessarily imply
 monolithic *)

 I reckon the job of the Neutron community is to provide a full picture to
 OVN developers - so that they could make a call on the integration strategy
 that best suits them.
 On the other hand, if we're planning to commit to a model where ML2 is no
 longer a plugin but the interface to the API layer, then any choice
 which is not an ML2 driver does not make any sense. Personally I'm not sure
 we ever want to do that, at least not in the near/medium term, but I'm one
 and hardly representative of the developer/operator communities.

 Salvatore


 * In particular with the advanced service split out the term monolithic
 simply does not mean anything anymore.

 On 24 February 2015 at 17:48, Robert Kukura kuk...@noironetworks.com
 wrote:

  Kyle, What happened to the long-term potential goal of ML2 driver APIs
 becoming neutron's core APIs? Do we really want to encourage new monolithic
 plugins?

 ML2 is not a control plane - it's really just an integration point for
 control planes. Although co-existence of multiple mechanism drivers is
 possible, and sometimes very useful, the single-driver case is fully
 supported. Even with hierarchical bindings, its not really ML2 that
 controls what happens - its the drivers within the framework. I don't think
 ML2 really limits what drivers can do, as long as a virtual network can be
 described as a set of static and possibly dynamic network segments. ML2 is
 intended to impose as few constraints on drivers as possible.

 My recommendation would be to implement an ML2 mechanism driver for OVN,
 along with any needed new type drivers or extension drivers. I believe this
 will result in a lot less new code to write and maintain.
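To give a sense of how small such a driver can start out, here is a self-contained sketch (the real base class lives in neutron.plugins.ml2.driver_api, and everything OVN-specific here is an assumption for illustration):

```python
import abc

class MechanismDriver(abc.ABC):
    """Local stand-in for neutron.plugins.ml2.driver_api.MechanismDriver,
    included so the sketch runs on its own; a real driver would subclass
    the neutron class directly."""

    @abc.abstractmethod
    def initialize(self):
        pass

    def create_network_postcommit(self, context):
        pass

    def create_port_postcommit(self, context):
        pass

class OVNMechanismDriver(MechanismDriver):
    def initialize(self):
        # A real driver would open its OVN northbound DB connection here.
        self.logical_switches = {}

    def create_network_postcommit(self, context):
        # Mirror the Neutron network into an OVN logical switch. In real
        # ML2, context is a NetworkContext and the data is context.current;
        # a plain dict is used here to keep the sketch runnable.
        net = context['network']
        self.logical_switches[net['id']] = net['name']
```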

 Also, keep in mind that even if multiple driver co-existence doesn't
 sound immediately useful, there are several potential use cases to
 consider. One is that it allows new technology to be introduced into an
 existing cloud alongside what previously existed. Migration from one ML2
 driver to another may be a lot simpler (and/or flexible) than migration
 from one plugin to another. Another is that additional drivers can support
 special cases, such as bare metal, appliances, etc..

 -Bob


 On 2/24/15 11:11 AM, Kyle Mestery wrote:

  On Tue, Feb 24, 2015 at 3:19 AM, Salvatore Orlando sorla...@nicira.com
 wrote:

  On 24 February 2015 at 01:34, Kyle Mestery mest...@mestery.com wrote:

  Russel and I have already merged the initial ML2 skeleton driver [1].

   The thinking is that we can always revert to a non-ML2 driver if
 needed.


  If nothing else an authoritative decision on a design direction saves
 us the hassle of going through iterations and discussions.
 The integration through ML2 is definitely viable. My opinion however is
 that since OVN implements a full control plane, the control plane bits
 provided by ML2 are not necessary, and a plugin which provides only
 management layer capabilities might be the best solution. Note: this does
 not mean it has to be monolithic. We can still do L3 with a service plugin.
  However, since the same kind of approach has been adopted for ODL I
 guess this provides some sort of validation.


 To be honest, after thinking about this last night, I'm now leaning
 towards doing this as a full plugin. I don't really envision OVN running
 with other plugins, as OVN is implementing it's own control plane, as you
 say. So the value of using ML2 is questionable.


I'm not sure how useful using OVN with other drivers will be,
 and that was my initial concern with doing ML2 vs. full plugin. With the HW
 VTEP support in OVN+OVS, you can tie in physical devices this way. Anyways,
 this is where we're at for now. Comments welcome, of course.


  That was also kind of my point regarding the control plane bits
 provided by ML2 which OVN does not need. Still, the fact that we do not use
 a function does not do any harm.
 Also I'm not sure if OVN needs a type manager at all. If not, we can
 always implement a no-op type manager, I guess.

See above. I'd like to propose we move OVN to a full plugin instead
 of an ML2 MechanismDriver.

  

Re: [openstack-dev] Kerberos in OpenStack

2015-02-24 Thread Sanket Lawangare
Thanks a lot for taking the time to reply, Tim. Will let you know
if I have any further questions.

On Tue, Feb 24, 2015 at 1:22 PM, Tim Bell tim.b...@cern.ch wrote:

  You may also get some information from how we set up Kerberos at CERN at
 http://openstack-in-production.blogspot.fr/2014/10/kerberos-and-single-sign-on-with.html



 From my understanding, the only connection is between Keystone and the KDC.
 There is a standard Keystone token issued based off the Kerberos ticket, and
 the rest is the same as if a password had been supplied.



 Tim



 *From:* Sanket Lawangare [mailto:sanket.lawang...@gmail.com]
 *Sent:* 24 February 2015 19:53
 *To:* openstack-dev@lists.openstack.org
 *Subject:* [openstack-dev] Kerberos in OpenStack



 Hello  Everyone,



 My name is Sanket Lawangare. I am a graduate Student studying at The
 University of Texas at San Antonio.* For my Master’s Thesis I am working
 on the Identity component of OpenStack. My research is to investigate
 external authentication with Identity (Keystone) using Kerberos.*



 Based on reading Jamie Lennox's blogs on the Kerberos implementation in
 OpenStack and my understanding of Kerberos, I have come up with a figure
 explaining possible interaction of KDC with the OpenStack client, keystone
 and the OpenStack services(Nova, Cinder, Swift...).

 These are the Blogs -


 http://www.jamielennox.net/blog/2015/02/12/step-by-step-kerberized-keystone/

 http://www.jamielennox.net/blog/2013/10/22/keystone-token-binding/

 I am trying to understand the working of Kerberos in OpenStack.



 Please click this link to view the figure:
 https://docs.google.com/drawings/d/1re0lNbiMDTbnkrqGMjLq6oNoBtR_GA0x7NWacf0Ulbs/edit?usp=sharing



 P.S. - [The steps in this figure are self-explanatory; a basic
 understanding of Kerberos is expected.]



 Based on the figure, I had a couple of questions:



 1. Are Nova or other services registered with the KDC?



 2. What does Keystone do with the Kerberos ticket/credentials? Does
 Keystone authenticate the users and give them direct access to other
 services such as Nova, Swift, etc.?



 3. After receiving the ticket from the KDC, does Keystone embed some
 Kerberos credential information in the token?



 4. What information does the service (e.g. Nova) see in the ticket and
 the token (does the token have some Kerberos info or some customized info
 inside it)?



 If you could share your insights and guide me on this, I would really
 appreciate it. Thank you all for your time.



 Regards,

 Sanket Lawangare



Re: [openstack-dev] [nova] novaclient functional test guidelines

2015-02-24 Thread Joe Gordon
On Tue, Feb 24, 2015 at 1:18 PM, melanie witt melwi...@gmail.com wrote:

 On Feb 24, 2015, at 9:47, Sean Dague s...@dague.net wrote:

  I'm happy if there are other theories about how we do these things,
  being the first functional test in the python-novaclient tree that
  creates and destroys real resources, there isn't an established pattern
  yet. But I think doing all CLI calls in CLI tests is actually really
  cumbersome, especially in the amount of output parsing code needed if
  you are going to setup any complicated resource structure.

 I think I'm in agreement with the pattern you describe.

 I imagine having a set of functional tests for the API, that don't do any
 CLI calls at all. With that we test that the API works properly. Then have
 a separate set for the CLI, which only calls CLI for the command being
 tested, everything else to set up and tear down the test done by API calls.
 This would be done with the rationale that because the entire API
 functionality is tested separately, we can safely use it for setup/teardown
 with the intent to isolate the CLI test to the command being tested and
 avoid introducing side effects from the CLI commands.

 But I suppose one could make the same argument for using CLI everywhere
 (if they are all tested, they can also be trusted not to introduce side
 effects). I tend to favor using the API because it's the most bare bones
 setup/teardown we could use. At the same time I understand the idea of
 performing an entire test using the CLI, as a way of replicating the
 experience a real user might have using the CLI, from start to end. I don't
 think I feel strongly either way.


 I guess it's time to revisit the actual status of novaclient and if we want
to actively move away from it to openstacksdk/OSC as well. If we are
actively trying to move away from novaclient, using the python API as much
as possible makes a lot of sense.




 For the --poll stuff, I agree the API should have it and the CLI uses it.
 And with and without poll functionality should be tested separately, API
 and CLI.

 melanie (melwitt)







Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-24 Thread Robert Collins
On 24 February 2015 at 22:53, Daniel P. Berrange berra...@redhat.com wrote:
 I was writing this mail for the past few days, but the nova thread
 today prompted me to finish it off & send it :-)

++


 The first two observations strongly suggest that the choice of 6
 months as a cycle length is a fairly arbitrary decision that can be
 changed without unreasonable pain. The third observation suggests a
 much shorter cycle length would smooth out the bumps and lead to a
 more efficient & satisfying development process for all involved.

I'm very glad to see this being discussed (again :)). Any proposal
that reduces our cycle time is going to get my enthusiastic support.

...
 Upgrades  deprecation
 --

 It is common right now for projects to say upgrades are only
 supported between releases N-1 and N. ie to go from Icehouse
 to Kilo, you need to first deploy Juno. This is passable when
 you're talking 6 month gaps between cycles, but when there are
 2 month gaps it is not reasonable to expect everyone to be
 moving fast enough to keep up with every release. If an
 organization's bureaucracy means they can't deploy more often
 than every 12 months, forcing them to deploy the 5 intermediate
 releases to run upgrade scripts is quite unpleasant. We would
 likely have to look at officially allowing upgrades between
 any (N-3, N-2, N-1) to N. From a database POV this should not
 be hard, since the DB migration scripts don't have any built
 in assumptions about this. Similarly the versioned objects used
 by Nova are quite flexible in this regard too, as long as the
 compat code isn't deleted too soon.

 Deprecation warnings would need similar consideration. It would
 not be sufficient to deprecate in one release and delete in the
 next. We'd likely want to say that deprecations last for a given
 time period rather than number of releases, eg 6 months. This
 could be easily handled by simply including the date of initial
 deprecation in the deprecation message. It would thus be clear
 when the feature will be removed to all involved.
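A minimal sketch of that date-based idea (the helper name and the 6-month window are my own illustration, not an agreed convention):

```python
import warnings
from datetime import date, timedelta

# Illustrative grace period: deprecations last 6 months, not N releases.
DEPRECATION_PERIOD = timedelta(days=180)

def deprecation_warning(feature, deprecated_on):
    """Warn with the initial deprecation date embedded in the message."""
    removal = deprecated_on + DEPRECATION_PERIOD
    warnings.warn(
        "%s is deprecated since %s and may be removed after %s"
        % (feature, deprecated_on.isoformat(), removal.isoformat()),
        DeprecationWarning)
    return removal

removal_date = deprecation_warning("old_option", date(2015, 2, 24))
# The message itself tells readers when the feature becomes removable.
```

Because the date is in the message rather than implied by a release count, operators on any cadence can tell at a glance whether the window has expired.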

I think a useful thing here is to consider online upgrades vs offline
upgrades. If we care to we could say that online upgrades are only
supported release to release, with offline upgrades being supported by
releases up to 8 months apart. The benefit of this would be to reduce
our test matrix: online upgrades are subject to much greater
interactions between concurrent processes, and it's hard enough to
validate that N->N+1 works with any deep confidence vs also checking
that N->N+2 and N->N+3 also work: for a 6 month sliding window to match
the current thing, we need to allow upgrades from Feb through August:
a Feb-Apr
b Feb-Jun
c Feb-Aug
d Apr-Jun
e Apr-Aug
f Jun-Aug

We'd need to be testing all the combinations leading to the branch a
patch is for, so changes to Aug would need c, e and f all tested.


That's 3 times the overhead of supporting:
Feb-Apr and then
Apr-Jun and then
Jun-Aug
serially where we'd only be testing one combination at a time.
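To make the test-matrix arithmetic concrete, here is a quick sketch (my own illustration, not project tooling) of sliding-window vs serial upgrade pairs:

```python
from itertools import combinations

releases = ["Feb", "Apr", "Jun", "Aug"]

# Sliding window: every pair inside the window needs a tested upgrade path.
window_pairs = list(combinations(releases, 2))
# 6 pairs: Feb-Apr, Feb-Jun, Feb-Aug, Apr-Jun, Apr-Aug, Jun-Aug

# Serial: only adjacent releases need a tested upgrade path.
serial_pairs = list(zip(releases, releases[1:]))
# 3 pairs: Feb-Apr, Apr-Jun, Jun-Aug

print(len(window_pairs), len(serial_pairs))  # 6 3
```

The window count grows quadratically with the number of releases in the window, while the serial count grows linearly, which is the 3x overhead noted above.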

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Revisiting the 6 month release cycle

2015-02-24 Thread Jeremy Stanley
On 2015-02-24 10:00:51 -0800 (-0800), Johannes Erdfelt wrote:
[...]
 Recently, I have spent a lot more time waiting on reviews than I
 have spent writing the actual code.

That's awesome, assuming what you mean here is that you've spent
more time reviewing submitted code than writing more. That's where
we're all falling down as a project and should be doing better, so I
applaud your efforts in this area.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Bug statuses definition in Fuel

2015-02-24 Thread Dmitry Borodaenko
Mike,

I introduced the Fuel specific bug priority descriptions in June 2014
[0], which may be why you were having trouble finding it in the recent
changes.

[0] 
https://wiki.openstack.org/w/index.php?title=Fuel%2FHow_to_contributediff=56952oldid=56951

I think you're using a weird definition of the word "deploy": the way
I understand it, if your deployment has finished but the software you
deployed doesn't work, it means the deployment has failed. Since
Horizon is the primary means for most users to operate OpenStack, an
unusable dashboard means unusable OpenStack, which means the deployment
has failed, which means the bug is Critical.

Still, your confusion proves the point that it's not obvious that
these criteria are offered *in addition* to the OpenStack criteria,
meant only to cover the cases where OpenStack criteria are too generic
or not applicable. I've rephrased the description of that list to make
it more obvious.


On Tue, Feb 24, 2015 at 3:24 PM, Mike Scherbakov
mscherba...@mirantis.com wrote:
 Hi Dmitry,
 thanks for extending the info on what is different in Fuel for the Confirmed &
 Fix Released statuses [1]

 It is pretty hard to go through the history of the page now, but I think I like
 the original OpenStack Importance description in [2] better than the Fuel-specific [3]:

 - Critical = can't deploy anything and there's no trivial workaround; data
 loss; or security vulnerability

 - High = specific hardware, configurations, or components are unusable and
 there's no workaround; or everything is broken but there's a workaround

 So, "can't deploy anything" is Critical. But if you can deploy the cloud and it
 doesn't work afterwards, that is not Critical anymore under the current
 description. I do not think it is just High when you can't open the Horizon
 page after deployment.

 Why don't we stick to the original OpenStack criteria?

 - Critical if the bug prevents a key feature from working properly
 (regression) for all users (or without a simple workaround) or results in
 data loss
 - High if the bug prevents a key feature from working properly for some
 users (or with a workaround)



 [1]
 https://wiki.openstack.org/w/index.php?title=Fuel%2FHow_to_contributediff=73079oldid=72329
 [2]
 https://wiki.openstack.org/wiki/BugTriage#Task_2:_Prioritize_confirmed_bugs_.28bug_supervisors.29
 [3]
 https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Confirm_and_triage_bugs

 On Thu, Feb 5, 2015 at 8:22 PM, Dmitry Mescheryakov
 dmescherya...@mirantis.com wrote:

 Guys,

  I was looking for a page where bug statuses in our LP projects are
  described and found none. Mike suggested adding this to the How To Contribute
 page and so I did. Please take a look at the section [1], just to make sure
 that we are on the same page. The status descriptions are located in the
 second from the top list.

 Thanks,

 Dmitry

 [1]
 https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Confirm_and_triage_bugs




 --
 Mike Scherbakov
 #mihgen


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Dmitry Borodaenko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] some questions about bp filtering-weighing-with-driver-supplied-functions

2015-02-24 Thread Zhangli (ISSP)
Hi Duncan,

Thanks very much for replying, and sorry for my delayed response due to Chinese
New Year.

 1) Driver authors tend, in my experience, to know more than admins, so
 drivers should be able (where useful) to be able to set a default value to
 either filter expression or weighting expression

Actually I am on a team that is writing a Cinder driver for Huawei storage
arrays, and I agree that driver authors know more than admins about the storage
device, but not about the requirements. I think it's the admin's choice what
kind of storage is needed, e.g.:
1) Admin A cares about nothing but free capacity, so the capacity filter is
enough for him;
2) Admin B wants SLA volumes, so he needs min_iops/min_bandwidth/replication as
filter conditions (in this case, the admin is likely to create a special
volume type);
I'm not sure if the scenario above is in the scope of this BP, but I think
editing the equation in cinder.conf IS a way to match the requirement.
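For what it's worth, a sketch of what such per-backend equations might look like in cinder.conf under this BP (the backend section name and the exact capability keys are illustrative; they depend on what each driver reports in its stats):

```ini
[huawei-backend-1]
# Reject backends that cannot hold the volume or are nearly full.
filter_function = "volume.size < 100 and capabilities.free_capacity_gb > 50"
# Score the remaining backends 0-100 by relative free space.
goodness_function = "capabilities.free_capacity_gb / capabilities.total_capacity_gb * 100"
```

An admin like "Admin B" could then extend the filter expression with whatever capabilities his SLA cares about, without any driver change.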


 2) Admins definitely need to be able to over-ride this if desired via
 cinder.conf

 I think it is fairly easy (and beneficial) to go through the in-tree
 drivers and add the conf value to the stats report, once the base driver
 change has merged.

Do you mean the Cinder base driver can have a built-in implementation of the
filter/goodness functions? Is there a plan? Maybe we can do something about this.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

