Re: [openstack-dev] removal of v3 in tree testing

2015-03-06 Thread Alex Xu
2015-03-06 13:49 GMT+08:00 GHANSHYAM MANN ghanshyamm...@gmail.com:

 Hi Sean,

 That is a very nice idea to keep only one set of tests and run them twice
 via tox.

 Actually my main goal was:
 - 1. Create a clean sample file structure for v2, v2.1 and microversions.
Something like below:
  api_samples/
    extensions/
    v2.0/ - v2 sample files
    v2.1/ - v2.1 sample files
    v2.2/ - v2.2 sample files
    and so on

 - 2.  Merge sample files between v2 and v2.1.

 But your idea is much better and almost covers mine (except the dir
 structure for microversions, which can/should be worked on after that).


++ for Sean's idea. One set of sample tests is best. Normally we should keep
the v2 API sample tests to ensure v2.1 is fully compatible with v2. But I
really like that the current v2.1 API sample tests use one file per
extension, unlike v2 where all the extensions live in one huge file... So I
think we need to keep the v2.1 API sample tests and run the v2 API against
them.



 As there are many extensions merged/split between v2 and v2.1, we need to
 tweak the tests to work for both v2 and v2.1.
 For example, the v2 flavor-swap, flavor-disable and flavor-extraData
 extensions have been merged into a single flavor plugin in v2.1.

 So running the v2.1 flavor tests for v2 needs the above extensions to be
 enabled in those tests. It looks something like:
 https://review.openstack.org/#/c/162016/


Yeah... good point and great work! I think we all hate those tweaks, but
there is no choice...

Actually, for the extension mapping between v2.1 and v2, we already have a
mapping at:
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/plugins/v3/extension_info.py#L29

So can we use this mapping to tweak the tests in the base class? Then we
wouldn't need to tweak a lot of extensions individually.
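
A rough sketch of that idea (all names below are illustrative, not the real
test plumbing): build a v2.1-plugin -> v2-extensions map from
extension_info.py once, and let the sample test base class enable the extra
v2 extensions automatically.

    V21_TO_V2_EXTENSIONS = {
        # derived from the alias mapping in extension_info.py
        'flavors': ['flavor-swap', 'flavor-disable', 'flavor-extraData'],
    }

    class ApiSampleTestBaseV2(object):
        extension_name = None

        @property
        def extra_extensions_to_load(self):
            # each test no longer lists the merged v2 extensions by hand
            return V21_TO_V2_EXTENSIONS.get(self.extension_name, [])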



 --
 Thanks & Regards
 Ghanshyam Mann


 On Fri, Mar 6, 2015 at 7:00 AM, Sean Dague s...@dague.net wrote:
 On 03/04/2015 07:48 PM, GHANSHYAM MANN wrote:
  Hi Sean,
 
  Yes, having V3 directory/file names is very confusing now.
 
  But the current v3 sample test cases test the v2.1 plugins. As the /v3 URL
  is redirected to the v2.1 plugins, the v3 sample tests make calls through
  the v3 URL and test the v2.1 plugins.
 
  I think we can start cleaning up the *v3* from everywhere and change it
  to v2.1 or a more appropriate name.
 
  To clean up the same in the sample files, I was planning to rearrange the
  sample file structure. Please check whether that direction looks good (I
  still need to push the patch for the directory restructure):
 
 
 https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:sample_files_structure,n,z

 I had another chat with Alex this morning on IRC. I think my confusion
 is that I don't feel like I understand how we get down to 1 set of API
 samples in the tree based on that set of posted patches.

 It seems like there should only be 1 set of samples in docs/ and one set
 of templates. I would also argue that we should only have 1 set of tests
 (though I'm flexible on that in the medium term).

 It seems that if our concern is that both the v2 and v21 endpoints need
 to have the same results, we could change the functional tox target to
 run twice, once with v2 and once with v21 set as the v2 endpoint.
 Eventually we'll be able to drop the original v2 code on the v2 endpoint.
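
One way the double run could look (a sketch; the environment variable and
option names are assumptions, not the real devstack/tox settings):

    import os

    # tox would export this once as 'v2' and once as 'v2.1'
    COMPUTE_ENDPOINT = os.environ.get('OS_TEST_COMPUTE_API', 'v2')

    class ApiSampleTestBase(object):

        def _get_base_url(self):
            # every sample request goes through the selected endpoint
            return '/%s' % COMPUTE_ENDPOINT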

 Anyway, in order to both assist my own work unwinding the test tree, and
 to help review your work there, can you lay out your vision for cleaning
 this up with all the steps involved? Hopefully that will cut down the
 confusion and make all this work move faster.

 -Sean

 --
 Sean Dague
 http://dague.net












[openstack-dev] [cinder]difference between spec merged and BP approval

2015-03-06 Thread Chen, Wei D
Hi,

I thought the feature was approved as long as the spec [1] was merged, but
it seems I was wrong from the beginning [2]. Both of them (spec merged and
BP approval [4][5]) are necessary and mandatory for getting effective
reviews, right? Can anyone help confirm that?

Besides, who is eligible to define/modify the priority in the list [3], only
the PTL or any core? I am trying to understand the acceptable procedure for
the coming 'L'.

[1] https://review.openstack.org/#/c/136253/
[2] https://review.openstack.org/#/c/147726/
[3] https://etherpad.openstack.org/p/cinder-k3-priorities
[4] 
https://blueprints.launchpad.net/cinder/+spec/support-modify-volume-image-metadata
[5] 
https://blueprints.launchpad.net/python-cinderclient/+spec/support-modify-volume-image-metadata



Best Regards,
Dave Chen





Re: [openstack-dev] [TripleO] Getting to a 1.0

2015-03-06 Thread Dan Prince
On Tue, 2015-03-03 at 17:30 -0500, James Slagle wrote:
 Hi,
 
 Don't let the subject throw you off :)
 
 I wasn't sure how to phrase what I wanted to capture in this mail, and
 that seemed reasonable enough. I wanted to kick off a discussion about
 what gaps people think are missing from TripleO before we can meet the
 goal of realistically being able to use TripleO in production.
 
 The things in my mind are:
 
 Upgrades - I believe the community is trending away from the image
 based upgrade rebuild process. The ongoing Puppet integration work is
 integrated with Heat's SoftwareConfig/SoftwareDeployment features and
 is package driven. There is still work to be done, especially around
 supporting rollbacks, but I think this could be part of the answer to
 how the upgrade problem gets solved.

+1 Using packages solves some problems very nicely. We haven't solved
all the CI related issues around using packages with upstream but it is
getting better. I mention this because it would be nice to have CI
testing on the upgrade process automated at some point...

 
 HA - We have an implementation of HA in tripleo-image-elements today.
 However, the Puppet codepath leaves that mostly unused. The Puppet
 modules however do support HA. Is that the answer here as well?

In general most of the puppet modules support the required HA bits. We
are still working to integrate some of the final pieces here but in
general I expect this to proceed quickly.

 
 CLI - We have devtest. I'm not sure if anyone would argue that should
 be used in production. It could be... but I don't think that was its
 original goal and it shows. The downstreams of TripleO that I'm aware
 of each ended up more or less having their own CLI tooling. Obviously
 I'm only very familiar with one of the downstreams, but in some
 instances I believe parts of devtest were reused, and other times not.
 That begs the question, do we need a well represented unified CLI in
 TripleO? We have a pretty good story about using Nova/Ironic/Heat[0]
 to deploy OpenStack, and devtest is one such implementation of that
 story. Perhaps we need something more production oriented.

I think this is an interesting idea and perhaps has some merit. I'd like
to see some specific examples showing how the unified CLI might make
things easier for end users...

 
 Baremetal management - To what extent should TripleO venture into this
 space? I'm thinking things like discovery/introspection, ready state,
 and role assignment. Ironic is growing features to expose things like
 RAID management via vendor passthrough API's. Should TripleO take a
 role in exercising those API's? It's something that could be built
 into the flow of the unified CLI if we were to end up going that
 route.
 
 Bootstrapping - The undercloud needs to be
 bootstrapped/deployed/installed itself. We have the seed vm to do
 that. I've also worked on an implementation to install an undercloud
 via an installation script assuming the base OS is already installed.
 Are these the only 2 options we should consider, or are there other
 ideas that will integrate better into existing infrastructure?

And also should we think about possibly renaming these? I find that many
times when talking about TripleO to someone new they find the whole
undercloud/overcloud thing confusing. Calling the undercloud the
baremetal cloud makes it click.

 
 Release Cadence with wider OpenStack - I'd love to be able to say on
 the day that a new release of OpenStack goes live that you can use
 TripleO to deploy that release in production...and here's how you'd do
 it
 
 What other items should we include here? I almost added a point for
 Stability, but let's just assume we want to make everything as stable
 as we possibly can :).
 
 I know I've mostly raised questions. I have some of my own answers in
 mind. But, I was actually hoping to get others talking about what the
 right answers might be.
 
 [0] Plus the other supporting cast of characters: 
 Keystone/Glance/Neutron/Swift.
 
 Thanks.
 





Re: [openstack-dev] [Glance] Core nominations.

2015-03-06 Thread Hemanth Makkapati
I like the idea of a 'core-member'. But, how are core-members different from 
core-reviewers? For instance, with core-reviewers it is very clear that these 
are folks you would trust with merging code because they are supposed to have a 
good understanding of the overall project. What about core-members? Are 
core-members essentially just core-reviewers who somehow don't fit the criteria 
of core-reviewers but are good candidates nevertheless? Just trying to 
understand here ... no offense meant.


Also, +1 to both the criteria for removing existing cores and addition of new 
cores.


-Hemanth.


From: Nikhil Komawar nikhil.koma...@rackspace.com
Sent: Friday, March 6, 2015 4:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Core nominations.


Thank you all for the input outside of the program: Kyle, Ihar, Thierry, Daniel!


Mike, Ian: It's a good idea to have the policy; however, we need to craft one
that is custom to the Glance program. It will be a bit different from the ones
out there, as we have contributors who are dedicated to only a subset of the
code - for example just glance_store or python-glanceclient or metadefs. From
here on, we may see that for Artifacts and other such features. It's already
being observed for metadefs.


While I like Mike's suggestion to (semi-)adopt what Nova and Neutron are doing,
it also makes me wonder whether that's going to help us in the long term. If
not, then what can we do now to set a good path forward?


Flavio, Erno, Malini, Louis, Mike: Drafting a guideline policy and implementing
rotation based on it was my intent, so that everyone is aware of the changes
in the program. That would let the core reviewers know what their duties are
and let non-cores know what they need to do to become cores. Moreover, I have
an idea to propose a core-member status for our program beyond just
core-reviewer. That seems more applicable to a few strong regular contributors
like Travis and Lakshmi who work on bug fixes, bug triaging and client
improvements but do not seem to keep momentum on reviews. The core status
can affect project decisions, hence this change may be important. This process
may involve some interactions with governance, so it will take more time.


Malini: I wish to take a strategic decision here rather than an agile one.
That needs a lot of brainpower before implementation. While warning and then
acting is good, it seems less applicable in this case, simply because we need
to make a positive difference in the interactions of the community, and we
have a chance of doing that here.


Nevertheless, I do not want to block the new-core additions or ask Flavio
et al. to accommodate the reviews that the new members would have been able
to do (just kidding).


Tweaking Flavio's criterion of cleaning up the list for cores who have not done
any reviews in the last 2 cycles (Icehouse and Juno), I've prepared a new list
below (as Flavio's list did not match up even if we take the cycles to be Juno
and Kilo). They can be added back to the list faster in the future if they
consider contributing to Glance again.


The criterion is:

Reviews < 50 in combined cycles.


Proposal to remove the following members(review_count) from the glance-core 
list:

  *   Brian Lamar (0+15)
  *   Brian Waldon (0+0)
  *   Dan Prince (3+1)
  *   Eoghan Glynn (0+3)
  *   John Bresnahan (31+12)

And we would add the following new members:

  *   Ian Cordasco
  *   Louis Taylor
  *   Mike Fedosin
  *   Hemanth Makkapati


This way we have a first round of consolidation done. It must be evident that
the list-cleanup proposed above is not comprehensive with regard to who is
truly inactive; thus, it misses a few names due to the lack of an established
criterion. We can do more about rotation in the following weeks.


Hope it works!


Regards,
-Nikhil

From: Kyle Mestery mest...@mestery.com
Sent: Friday, March 6, 2015 12:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Core nominations.

On Fri, Mar 6, 2015 at 11:40 AM, Ian Cordasco ian.corda...@rackspace.com wrote:
I like that idea. Can you start it out with Nova or Neutron's guidelines?

FYI, the core reviewer guidelines for Neutron are in-tree now [1], along with 
all of our other policies around operating in Neutron [2].

[1] 
https://github.com/openstack/neutron/blob/master/doc/source/policies/core-reviewers.rst
[2] https://github.com/openstack/neutron/tree/master/doc/source/policies

On 3/5/15, 17:38, Mikhail Fedosin mfedo...@mirantis.com wrote:

I think yes, it does. But I mean that now we're writing a document called
Glance Review Guidelines

https://docs.google.com/document/d/1Iia0BjQoXvry9XSbf30DRwQt--ODglw-ZTT_5RJabsI/edit?usp=sharing

Re: [openstack-dev] [nova] novaclient functional test guidelines

2015-03-06 Thread Joe Gordon
First pass at trying to capture this thread into a README:
https://review.openstack.org/162334

On Tue, Feb 24, 2015 at 2:07 PM, Joe Gordon joe.gord...@gmail.com wrote:



 On Tue, Feb 24, 2015 at 1:18 PM, melanie witt melwi...@gmail.com wrote:

 On Feb 24, 2015, at 9:47, Sean Dague s...@dague.net wrote:

  I'm happy if there are other theories about how we do these things,
  being the first functional test in the python-novaclient tree that
  creates and destroys real resources, there isn't an established pattern
  yet. But I think doing all CLI calls in CLI tests is actually really
  cumbersome, especially in the amount of output parsing code needed if
  you are going to setup any complicated resource structure.

 I think I'm in agreement with the pattern you describe.

 I imagine having a set of functional tests for the API, that don't do any
 CLI calls at all. With that we test that the API works properly. Then have
 a separate set for the CLI, which only calls CLI for the command being
 tested, everything else to set up and tear down the test done by API calls.
 This would be done with the rationale that because the entire API
 functionality is tested separately, we can safely use it for setup/teardown
 with the intent to isolate the CLI test to the command being tested and
 avoid introducing side effects from the CLI commands.

 But I suppose one could make the same argument for using CLI everywhere
 (if they are all tested, they can also be trusted not to introduce side
 effects). I tend to favor using the API because it's the most bare bones
 setup/teardown we could use. At the same time I understand the idea of
 performing an entire test using the CLI, as a way of replicating the
 experience a real user might have using the CLI, from start to end. I don't
 think I feel strongly either way.
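
A minimal sketch of that pattern (credentials and names are placeholders,
not the real test plumbing): fixtures go through the python API, and only
the command under test goes through the CLI.

    import subprocess
    import unittest

    from novaclient import client

    AUTH = dict(username='demo', api_key='secret',
                project_id='demo', auth_url='http://keystone:5000/v2.0')

    class TestFlavorCLI(unittest.TestCase):

        def setUp(self):
            self.nova = client.Client('2', **AUTH)
            # fixture created via the API, not the CLI
            self.flavor = self.nova.flavors.create('cli-test', 128, 1, 1)
            self.addCleanup(self.nova.flavors.delete, self.flavor)

        def test_flavor_show(self):
            # the CLI is exercised only for the command being tested
            out = subprocess.check_output(['nova', 'flavor-show', 'cli-test'])
            self.assertIn('cli-test', out.decode('utf-8'))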


  I guess it's time to revisit the actual status of novaclient and whether we
 want to actively move away from it to openstacksdk/OSC as well. If we are
 actively trying to move away from novaclient, using the python API as much
 as possible makes a lot of sense.




  For the --poll stuff, I agree the API should have it and the CLI should use
  it. With-poll and without-poll functionality should be tested separately,
  for both the API and the CLI.

 melanie (melwitt)










[openstack-dev] [keystone] [oslo] Further details for the Oslo Cache Updated to use dogpile.cache spec

2015-03-06 Thread xiaoyuan

Hi everyone,

I am Xiaoyuan Lu, a recipient of the HP Helion OpenStack scholarship, and
Elizabeth K. Joseph is my mentor from HP. I would like to work on projects
related to Keystone. Considering the amount of work and the time limit,
after going through the Keystone specs, we finally decided to work on
oslo-cache-using-dogpile. [0]


At the Oslo meeting this week, I met with the Oslo team and we
identified the need to add some ideas for what the library API might
need to include with regard to the 'Oslo Cache Updated to use
dogpile.cache' spec.


Can we get some feedback to help flesh this out?

[0] http://specs.openstack.org/openstack/oslo-specs/specs/kilo/oslo-cache-using-dogpile.html

--
Xiaoyuan Lu
Computer Science M.S.
UC Santa Cruz



Re: [openstack-dev] HTTPD Config

2015-03-06 Thread Adam Young

On 03/06/2015 01:29 PM, Matthias Runge wrote:

On Fri, Mar 06, 2015 at 11:08:44AM -0500, Adam Young wrote:

No matter what we do in devstack, this is something, horizon and
keystone devs need to fix first. E.g. in Horizon, we still discover hard
coded URLs here and there. To catch that kind of things, I had a patch
up for review, to easily configure moving Horizon from using http server
root to something different.

Link?

https://review.openstack.org/#/c/86880/

Obviously, that needs a bit of work now.

Please bring that back to life.  I added myself as a reviewer. That is 
absolutely necessary, and should not be too controversial.




[openstack-dev] [ceilometer] [gnocchi] monitoring unexpected success of xfail designated tests

2015-03-06 Thread Chris Dent


With the advent of gabbi tests in both ceilometer and gnocchi, we've
started using xfail (expected failure) as a way of highlighting HTTP
behavior that is wrong or poor[1] and linking to bugs on launchpad in
the description of the tests.

This means that we need to start monitoring local test runs for
unexpected success to see which of these have been fixed, and
update the tests accordingly. testr itself will not alert you.
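
For reference, the xfail semantics in stdlib unittest terms (gabbi marks
this in the YAML test itself, but the resulting statuses are the same): a
decorated test that fails is reported as an expected failure, while one
that starts passing is reported as an unexpected success, which plain testr
output won't flag for you.

    import unittest

    class TestKnownBug(unittest.TestCase):

        @unittest.expectedFailure
        def test_wrong_http_error_code(self):
            # the test description would link the launchpad bug here
            self.assertEqual(404, self._get_status())  # returns 500 today

        def _get_status(self):
            return 500  # stand-in for the real HTTP call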

Gnocchi has already started using pretty_tox.sh, so as long as there
is a recent version of subunit-trace (from tempest-lib) installed,
gnocchi test runs will tell when there has been an unexpected
success.

Ceilometer hasn't made that change, but it is possible to do it by
hand. After any test run:

    testr last --subunit | subunit-trace

will report on the most recent test run and give a summary. Again,
this assumes a recent tempest-lib has been installed (from PyPI).

[1] Often but not always related to the framework being used.
Usually, but not always, WSME failing to trap exceptions in the face
of unexpected data.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [Fuel] Deprecation warnings in python-fuelclient-6.1.*

2015-03-06 Thread Roman Prykhodchenko
Kamil,

thank you for the explanation.
Indeed, the idea of keeping two separate entry points — one for the old 
functionality and another with the new cliff-based CLI makes more sense.

In addition to the advantages you mentioned it will allow us to avoid all the 
mess with fall-backs and concentrate on making the new CLI as it should be from 
the beginning.

I ask you guys to take a look at the chain of patches [1] that implement this 
approach.


1. 
https://review.openstack.org/#/q/status:open+project:stackforge/python-fuelclient+branch:master+topic:bp/re-thinking-fuel-client,n,z



- romcheg

 On 4 Mar 2015, at 12:05, Kamil Sambor ksam...@mirantis.com wrote:
 
 @romcheg
 - the idea is to switch partially to the new client, keeping one package with
 two entry points: fuel and fuel_v2. It will be convenient for us to add new
 commands to fuel_v2 only, slowly switch old commands to the new version, and
 add warnings to the old client commands. That will give users time to switch
 to the new client, and it will be easy for us to migrate only the old
 commands. Right now, when we add a new command, we add it to the old client
 and then still need to migrate it in the future. So keeping two entry points
 for fuel-client will IMHO be convenient for both developers and users.
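
A minimal sketch of the two-entry-point packaging (module paths are
illustrative; the real project may declare these via pbr/setup.cfg):

    from setuptools import setup

    setup(
        name='python-fuelclient',
        entry_points={
            'console_scripts': [
                # legacy CLI, kept working while deprecation warnings land
                'fuel = fuelclient.cli.main:main',
                # new cliff-based CLI, grown command by command
                'fuel_v2 = fuelclient.main:main',
            ],
        },
    )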
 
 Best regards,
 Kamil Sambor
 
 On Wed, Mar 4, 2015 at 10:54 AM, Roman Prykhodchenko m...@romcheg.me wrote:
 I’d like to resolve some questions:
 
 @Przemysław:
  - We can avoid that message by supplying --quiet.
  - Changelog is currently managed automatically by PBR so as soon as there is 
 a release there will be a change log
  - I think #2 can be done along with #3
 
 @Kamil:
 - The issue is that it’s not possible to release commands in this release
 because it would immediately make the CLI incompatible. For 7.0 there is a
 plan to get rid of the old CLI completely and replace it with a Cliff-based
 one. I agree that people may forget the deprecation warning before the 7.1
 ISO is available, but that is partially solvable by a change log. Besides,
 python-fuelclient-7.0 will be available on PyPI much earlier than the 7.0
 ISO is released.
 - ^ is basically the reason why we cannot use #4: there will be nothing new
 to use, at least in the 6.1 ISO. Keeping both CLIs in the source tree will
 create more mess and will be terribly hard to test.
 
 
 - romcheg
 
  On 4 Mar 2015, at 10:11, Kamil Sambor ksam...@mirantis.com wrote:
 
  Hi all,
 
  IMHO the deprecation warning should be added only to commands that we
  recently changed (because users can switch to the new interface when they
  see the deprecation error), or eventually solution #2, which sounds OK but
  is not ideal because people can forget about a warning they saw in a
  previous release. We also discussed a 4th solution: simply inform users
  about the deprecation of the client and encourage them to use the fuel_v2
  client with the new commands and parameters.
 
  Best regards,
  Kamil Sambor
 
  On Wed, Mar 4, 2015 at 9:28 AM, Przemyslaw Kaminski 
  pkamin...@mirantis.com wrote:
  Maybe add a Changelog in the repo and maintain it?
 
  http://keepachangelog.com/
 
  Option #2 is OK but it can cause pain when testing -- upon each fresh
  installation from ISO we would get that message and it might break some
  tests, though that is fixable. Option #3 is OK too. #1 is the worst and I
  wouldn't do it.
 
  Or maybe display that info when showing all the commands (typing 'fuel'
  or 'fuel -h')? We already have a deprecation warning there concerning
  client/config.yaml, it is not very disturbing and shouldn't break any
  currently used automation scripts.
 
  P.
 
 
  On 03/03/2015 03:52 PM, Roman Prykhodchenko wrote:
   Hi folks!
  
  
   According to the refactoring plan [1] we are going to release the 6.1
   version of python-fuelclient, which is going to contain recent changes but
   will keep backwards compatibility with what was before. However, the next
   major release will bring users the fresh CLI that won’t be compatible
   with the old one, and the new, actually usable IRL API library will also
   be different.
  
   The issue this message is about is the fact that there is a strong need
   to let both CLI and API users know about those changes. At the moment I
   can see 3 ways of resolving it:
  
   1. Show a deprecation warning for commands and parameters which are going
   to be different, and log deprecation warnings for deprecated library
   methods.
   The problem with this approach is that the structure of both the CLI and
   the library will be changed, so a deprecation warning would be raised for
   almost every command for the whole release cycle. That does not look very
   user friendly, because users would have to run all commands with --quiet
   for the whole release cycle to mute deprecation warnings.
  
   2. Show the list of the deprecated stuff and planned changes on the first
   run, then mute it.
   The disadvantage of this approach is that there is a need to store the
   info about the first run

Re: [openstack-dev] [horizon] Stepping down as a Horizon core reviewer

2015-03-06 Thread Timur Sufiev
I've been away from Horizon activities for a while, so this sad news has
come to me just a moment ago.

Julie, you were of great help on #openstack-horizon, especially to
newcomers, I'll be missing you there.

Wish you luck in any of your new endeavors :)!

On Fri, Feb 13, 2015 at 5:18 PM, David Lyle dkly...@gmail.com wrote:


 On Fri, Feb 13, 2015 at 6:14 AM, Thierry Carrez thie...@openstack.org
 wrote:

 Julie Pichon wrote:
  In the spirit of stepping down considerately [1], I'd like to ask to be
  removed from the core and drivers team for Horizon and associated
  projects. I'm embarking on some fun adventures far far away and won't
  have any time to spare for OpenStack for a while.

 Aw. Sad to hear that. Please come back to us when said adventures start
 to become unfun!

 --
 Thierry Carrez (ttx)


 Thank you Julie for all of your contributions. You've been an integral
 part of the Horizon team. We will miss you.

 We'll always have room for you, if you ever want to take us back.

 Best wishes on your next adventures.

 David





-- 
Timur Sufiev


Re: [openstack-dev] [cinder] [Third-party-announce] Cinder Merged patch broke HDS driver

2015-03-06 Thread Marcus Vinícius Ramires do Nascimento
Ramy,
Ok, I'll do that. Good idea.

Thang,
Thank you for all your help.

Regards,
Marcus

On Thu, Mar 5, 2015 at 11:41 PM, Thang Pham thang.g.p...@gmail.com wrote:

 I commented on this in your patch (
 https://review.openstack.org/#/c/161837/) and posted a patch to help you
 along - https://review.openstack.org/#/c/161945/.  This patch will
 make create_snapshot and create_volume_from_snapshot method use
 snapshot objects.  By using snapshot objects in both methods, you could now
 update the driver to use snapshot objects, instead of the workaround you
 had originally posted.
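
For illustration, the objects-based access in a driver would look roughly
like this (the field name and helper are my assumptions, not verified
against the cinder tree):

    def delete_snapshot(self, snapshot):
        # old model-based access, now an AttributeError on a Snapshot object:
        #     metadata = snapshot['snapshot_metadata']
        metadata = snapshot.metadata  # read through the object instead
        self._delete_logical_unit(snapshot.provider_location, metadata)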

 Regards,
 Thang

 On Thu, Mar 5, 2015 at 8:10 PM, Asselin, Ramy ramy.asse...@hp.com wrote:

  Marcus,



  Don’t turn off CI, because then you could miss another regression.



 Instead, simply exclude that test case:

 e.g.

 export
 DEVSTACK_GATE_TEMPEST_REGEX=^(?=.*tempest.api.volume)(?!.*test_snapshots_actions).*



 Ramy





 *From:* Marcus Vinícius Ramires do Nascimento [mailto:marcus...@gmail.com]

 *Sent:* Wednesday, March 04, 2015 1:29 PM
 *To:* openstack-dev@lists.openstack.org; Announcements for third party
 CI operators.
 *Subject:* [openstack-dev] [cinder] [Third-party-announce] Cinder Merged
 patch broke HDS driver



 Hi folks,



 This weekend, the patch *Snapshot and volume objects* (
 https://review.openstack.org/#/c/133566) was merged and this one broke
 our HDS HBSD driver and the respective CI.



 When CI tries to run tempest.api.volume.admin.test_snapshots_actions the
 following error is shown:



 2015-03-04 14:00:34.368 ERROR oslo_messaging.rpc.dispatcher
 [req-c941792b-963f-4a7d-a6ac-9f1d9f823fd1 915289d113dd4f9db2f2a792c18b3564
 984bc8d228c8497689dde60dc2b8f300] Exception during message
 handling: <class 'cinder.objects.snapshot.Snapshot'> object has no
 attribute 'snapshot_metadata'

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher Traceback
 (most recent call last):

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
 /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py,
 line 142, in _dispatch_and_reply

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher
 executor_callback))

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
 /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py,
 line 186, in _dispatch

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher
 executor_callback)

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
 /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py,
 line 130, in _do_dispatch

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher result =
 func(ctxt, **new_args)

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
 /usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py, line 105,
 in wrapper

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return
 f(*args, **kwargs)

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
 /opt/stack/cinder/cinder/volume/manager.py, line 156, in lso_inner1

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return
 lso_inner2(inst, context, snapshot, **kwargs)

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
 /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py,
 line 431, in inner

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return
 f(*args, **kwargs)

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
 /opt/stack/cinder/cinder/volume/manager.py, line 155, in lso_inner2

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return
 f(*_args, **_kwargs)

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
 /opt/stack/cinder/cinder/volume/manager.py, line 635, in delete_snapshot

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher
 snapshot.save()

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
 /usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py, line 82,
 in __exit__

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher
 six.reraise(self.type_, self.value, self.tb)

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
 /opt/stack/cinder/cinder/volume/manager.py, line 625, in delete_snapshot

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher
 self.driver.delete_snapshot(snapshot)

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
 /usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py, line 105,
 in wrapper

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return
 f(*args, **kwargs)

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
 /opt/stack/cinder/cinder/volume/drivers/hitachi/hbsd_iscsi.py, line 314,
 in delete_snapshot

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher
 self.common.delete_snapshot(snapshot)

 2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
 

Re: [openstack-dev] [Fuel] Testing DB migrations

2015-03-06 Thread Boris Bobrov
On Friday 06 March 2015 16:57:19 Nikolay Markov wrote:
 Hi everybody,
 
 From time to time some bugs appear regarding failed database migrations
 during upgrade, and we have a High-priority bug for 6.1 (
 https://bugs.launchpad.net/fuel/+bug/1391553) on testing this migration
 process. I want to start a thread for discussing how we're going to do it.
 
 I don't see any obvious solution, but we can at least start adding tests
 together with any changes in migrations, which will use a number of various
 fake environments upgrading and downgrading DB.
 
 Any thoughts?

In Keystone, adding unit tests and running them against in-memory SQLite was
proven ineffective. The only solution we've come to is to run all db-related
tests against real RDBMSes.

-- 
Best regards,
Boris Bobrov



Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-06 Thread Attila Fazekas
Looks like we need some kind of _per compute node_ mutex in the critical
section: multiple schedulers MAY be able to schedule to two compute nodes at
the same time, but not to the same compute node.

If we don't want to introduce another required component or
reinvent the wheel, there are some possible tricks with the existing globally
visible components, like the RDBMS.

`Randomized` destination choice is recommended in most of the possible
solutions; the alternatives are much more complex.

One SQL example:

* Add a `sched_cnt` integer field (default=0) to a hypervisors-related table.

When the scheduler picks one (or multiple) node(s), it needs to verify that
the node(s) are still good before sending the message to n-cpu.

It can be done by re-reading ONLY the picked hypervisor(s)' related data
with `LOCK IN SHARE MODE`.
If the destination hypervisors are still OK:

Increase the sched_cnt value by exactly 1 and
test whether the UPDATE really updated the required number of rows;
the WHERE part needs to contain the previous value.

You also need to update the resource usage on the hypervisor
by the expected cost of the new VMs.

If at least one selected node was OK, the transaction can be COMMITed.
If you were able to COMMIT the transaction, the relevant messages
can be sent.

The whole process needs to be repeated with the items which did not pass the
post verification.

If message sending failed, `act like` you are migrating the VM to another host.

If multiple schedulers try to pick multiple different hosts in different
orders, it can lead to a DEADLOCK situation.
Solution: have all schedulers acquire the shared RW locks in the same order
at the end.

Galera multi-writer (Active-Active) implication:
As always, retry on deadlock. 

n-sch + n-cpu crash at the same time:
* If the scheduling did not finish properly, it might be fixed manually,
or we need to decide which still-alive scheduler instance is
responsible for fixing the particular scheduling.
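
A sketch of the compare-and-swap step above in SQLAlchemy terms (the table
layout, the session handling and the still_fits() check are assumptions):

    from sqlalchemy import select

    def claim_host(session, hypervisors, host_id, seen_cnt, vm_cost):
        # re-read ONLY the picked hypervisor's row, LOCK IN SHARE MODE
        row = session.execute(
            select([hypervisors])
            .where(hypervisors.c.id == host_id)
            .with_for_update(read=True)).first()
        if row is None or not still_fits(row, vm_cost):
            return False
        # bump sched_cnt by exactly 1; the WHERE carries the previously
        # seen value, so a racing scheduler makes rowcount come back as 0
        result = session.execute(
            hypervisors.update()
            .where(hypervisors.c.id == host_id)
            .where(hypervisors.c.sched_cnt == seen_cnt)
            .values(sched_cnt=seen_cnt + 1))
        return result.rowcount == 1  # COMMIT and send messages only if True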


- Original Message -
 From: Nikola Đipanov ndipa...@redhat.com
 To: openstack-dev@lists.openstack.org
 Sent: Friday, March 6, 2015 10:29:52 AM
 Subject: Re: [openstack-dev] [nova] blueprint about multiple workers 
 supported in nova-scheduler
 
 On 03/06/2015 01:56 AM, Rui Chen wrote:
  Thank you very much for in-depth discussion about this topic, @Nikola
  and @Sylvain.
  
  I agree that we should solve the technical debt firstly, and then make
  the scheduler better.
  
 
 That was not necessarily my point.
 
 I would be happy to see work on how to make the scheduler less volatile
 when run in parallel, but the solution must acknowledge the eventually
 (or never really) consistent nature of the data scheduler has to operate
 on (in it's current design - there is also the possibility of offering
 an alternative design).
 
 I'd say that fixing the technical debt that is aimed at splitting the
 scheduler out of Nova is a mostly orthogonal effort.
 
 There have been several proposals in the past for how to make the
 scheduler horizontally scalable and improve it's performance. One that I
 remember from the Atlanta summit time-frame was the work done by Boris
 and his team [1] (they actually did some profiling and based their work
 on the bottlenecks they found). There are also some nice ideas in the
 bug lifeless filed [2] since this behaviour particularly impacts ironic.
 
 N.
 
 [1] https://blueprints.launchpad.net/nova/+spec/no-db-scheduler
 [2] https://bugs.launchpad.net/nova/+bug/1341420
 
 
  Best Regards.
  
   2015-03-05 21:12 GMT+08:00 Sylvain Bauza sba...@redhat.com:
  
  
  Le 05/03/2015 13:00, Nikola Đipanov a écrit :
  
  On 03/04/2015 09:23 AM, Sylvain Bauza wrote:
  
  Le 04/03/2015 04:51, Rui Chen a écrit :
  
  Hi all,
  
  I want to make it easy to launch a bunch of scheduler
  processes on a
  host, multiple scheduler workers will make use of
  multiple processors
  of host and enhance the performance of nova-scheduler.
  
  I had registered a blueprint and commit a patch to
  implement it.
  
   https://blueprints.launchpad.net/nova/+spec/scheduler-multiple-workers-support
  
  This patch had applied in our performance environment
  and pass some
  test cases, like: concurrent booting multiple instances,
  currently we
  didn't find inconsistent issue.
  
  IMO, nova-scheduler should been scaled horizontally on
  easily way, the
  multiple workers should been supported as an out of box
  feature.
  
  Please feel free to discuss this feature, thanks.
  
  
  

Re: [openstack-dev] [Neutron] Issue when upgrading from Juno to Kilo due to agent report_state RPC namespace patch

2015-03-06 Thread McCann, Jack
+1 on avoiding changes that break rolling upgrade.

Rolling upgrade has been working so far (at least from my perspective), and
as openstack adoption spreads, it will be important for more and more users.

How do we make rolling upgrade a supported part of Neutron?

- Jack

 -Original Message-
 From: Assaf Muller [mailto:amul...@redhat.com]
 Sent: Thursday, March 05, 2015 11:59 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron] Issue when upgrading from Juno to Kilo 
 due
 to agent report_state RPC namespace patch
 
 
 
 - Original Message -
 To turn this stuff off, you don't need to revert.  I'd suggest just
 setting the namespace constants to None, and that will result in the same
 thing.
 
 
 http://git.openstack.org/cgit/openstack/neutron/tree/neutron/common/constants.py#n152
 
  It's definitely a non-backwards compatible change.  That was a conscious
  choice as the interfaces are a bit of a tangled mess, IMO.  The
  non-backwards compatible changes were simpler so I went that route,
  because as far as I could tell, rolling upgrades were not supported.  If
 they do work, it's due to luck.  There are multiple things, from the lack
 of testing of this scenario to the lack of data versioning, that make it a
 pretty shaky area.
 
  However, if it worked for some people, I totally get the argument
  against breaking it intentionally.  As mentioned before, a quick fix if
  needed is to just set the namespace constants to None.  If someone wants
  to do something to make it backwards compatible, that's even better.
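
A rough sketch of the backwards-compatible variant (class names and the
handler are made up): register the report_state endpoint twice, once inside
the namespace and once without, so agents from either release are
dispatched correctly during a rolling upgrade.

    import oslo_messaging

    from neutron.common import constants

    class ReportStateEndpoint(object):
        target = oslo_messaging.Target(
            namespace=constants.RPC_NAMESPACE_STATE, version='1.0')

        def report_state(self, context, **kwargs):
            return _handle_report_state(context, **kwargs)  # made-up handler

    class LegacyReportStateEndpoint(ReportStateEndpoint):
        # same behaviour, but matches calls sent outside any namespace
        target = oslo_messaging.Target(namespace=None, version='1.0')

    endpoints = [ReportStateEndpoint(), LegacyReportStateEndpoint()]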
 
 
 I sent out an email to the operators list to get some feedback:
 http://lists.openstack.org/pipermail/openstack-operators/2015-March/006429.html
 
 And at least one operator reported that he performed a rolling Neutron upgrade
 from I to J successfully. So, I'm agreeing with you agreeing with me that we
 probably don't want to mess this up knowingly, even though there is no testing
 to make sure that it keeps working.
 
 I'll follow up on IRC with you to figure out who's doing what.
 
  --
  Russell Bryant
 
  On 03/04/2015 11:50 AM, Salvatore Orlando wrote:
   To put it another way, I think we might say that change 154670 broke
   backward compatibility on the RPC interface.
   To be fair this probably happened because RPC interfaces were organised
   in a way such that this kind of breakage was unavoidable.
  
   I think the strategy proposed by Assaf is a viable one. The point about
   being able to do rolling upgrades only from version N to N+1 is a
   sensible one, but it has more to do with general backward compatibility
   rules for RPC interfaces.
  
   In the meanwhile this is breaking a typical upgrade scenario. If a fix
   allowing agent state updates both namespaced and not is available today
   or tomorrow, that's fine. Otherwise I'd revert just to be safe.
  
   By the way, we were supposed to have already removed all server rpc
   callbacks in the appropriate package... did we forget about this one, or
   is there a reason it's still in neutron.db?
  
   Salvatore
  
   On 4 March 2015 at 17:23, Miguel Ángel Ajo majop...@redhat.com wrote:
  
   I agree with Assaf, this is an issue across updates, and
   we may want (if that’s technically possible) to provide
   access to those functions with/without namespace.
  
   Or otherwise think about reverting for now until we find a
   migration strategy
  
  
  https://review.openstack.org/#/q/status:merged+project:openstack/neutron+branch:master+topic:bp/rpc-docs-and-namespaces,n,z
  
  
   Best regards,
   Miguel Ángel Ajo
  
   On Wednesday, 4 de March de 2015 at 17:00, Assaf Muller wrote:
  
   Hello everyone,
  
   I'd like to highlight an issue with:
   https://review.openstack.org/#/c/154670/
  
   According to my understanding, most deployments upgrade the
   controllers first
   and compute/network nodes later. During that time period, all
   agents will
   fail to report state as they're sending the report_state message
   outside
   of any namespace while the server is expecting that message in a
   namespace.
   This is a show stopper as the Neutron server will think all of its
   agents are dead.
  
   I think the obvious solution is to modify the Neutron server code
   so that
   it accepts the report_state method both in and outside of the
   report_state
   RPC namespace and chuck that code away in L (Assuming we support
   rolling upgrades
   only from version N to N+1, which while is unfortunate, is the
   behavior I've
   seen in multiple places in the code).
  
   Finally, are there additional similar issues for other RPC methods
   placed in a namespace
   this cycle?
  
  
   Assaf Muller, Cloud Networking Engineer
   Red Hat
  


Re: [openstack-dev] [Fuel] Testing DB migrations

2015-03-06 Thread Nikolay Markov
We already run unit tests using only a real PostgreSQL. But
this still doesn't answer the question of how we should test migrations.

On Fri, Mar 6, 2015 at 5:24 PM, Boris Bobrov bbob...@mirantis.com wrote:

 On Friday 06 March 2015 16:57:19 Nikolay Markov wrote:
  Hi everybody,
 
  From time to time some bugs appear regarding failed database migrations
  during upgrade and we have High-priority bug for 6.1 (
  https://bugs.launchpad.net/fuel/+bug/1391553) on testing this migration
  process. I want to start a thread for discussing how we're going to do
 it.
 
  I don't see any obvious solution, but we can at least start adding tests
  together with any changes in migrations, which will use a number of
 various
  fake environments upgrading and downgrading DB.
 
  Any thoughts?

 In Keystone, adding unit tests and running them against in-memory SQLite
 was proven ineffective. The only solution we've come to is to run all
 db-related tests against real RDBMSes.

 --
 Best regards,
 Boris Bobrov





-- 
Best regards,
Nick Markov


Re: [openstack-dev] [Fuel] Testing DB migrations

2015-03-06 Thread Roman Podoliaka
Hi all,

You could take a look at how this is done in OpenStack projects [1][2]

Most important parts:
1) use the same RDBMS you use in production
2) test migration scripts on data, not on empty schema
3) test corner cases (adding a NOT NULL column without a server side
default value, etc)
4) do a separate migration scripts run with large data sets to make
sure you don't introduce slow migrations [3]

Thanks,
Roman

[1] 
https://github.com/openstack/nova/blob/fb642be12ef4cd5ff9029d4dc71c7f5d5e50ce29/nova/tests/unit/db/test_migrations.py#L66-L833
[2] 
https://github.com/openstack/oslo.db/blob/0058c6510bfc6c41c830c38f3a30b5347a703478/oslo_db/sqlalchemy/test_migrations.py#L40-L273
[3] 
http://josh.people.rcbops.com/2013/12/third-party-testing-with-turbo-hipster/
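
A sketch of points 1) and 2), borrowing the per-revision hook pattern used
by nova's test_migrations in [1] (the mixin wiring, helper and table names
here are illustrative):

    from oslo_db.sqlalchemy import test_migrations
    from oslo_db.sqlalchemy import utils as db_utils

    class TestMigrationsMySQL(test_migrations.WalkVersionsMixin):

        def _pre_upgrade_042(self, engine):
            # seed rows the migration must transform, including the corner
            # case of a column that is about to become NOT NULL
            nodes = db_utils.get_table(engine, 'nodes')
            engine.execute(nodes.insert(), [{'name': 'n1', 'meta': None}])
            return {'seeded': ['n1']}

        def _check_042(self, engine, data):
            # verify the migrated result against the data seeded above
            nodes = db_utils.get_table(engine, 'nodes')
            row = engine.execute(nodes.select()).first()
            assert row['meta'] is not None  # server-side default applied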

On Fri, Mar 6, 2015 at 4:50 PM, Nikolay Markov nmar...@mirantis.com wrote:
 We already run unit tests only using real Postgresql. But this still doesn't
 answer the question how we should test migrations.

 On Fri, Mar 6, 2015 at 5:24 PM, Boris Bobrov bbob...@mirantis.com wrote:

 On Friday 06 March 2015 16:57:19 Nikolay Markov wrote:
  Hi everybody,
 
  From time to time some bugs appear regarding failed database migrations
  during upgrade and we have High-priority bug for 6.1 (
  https://bugs.launchpad.net/fuel/+bug/1391553) on testing this migration
  process. I want to start a thread for discussing how we're going to do
  it.
 
  I don't see any obvious solution, but we can at least start adding tests
  together with any changes in migrations, which will use a number of
  various
  fake environments upgrading and downgrading DB.
 
  Any thoughts?

  In Keystone, adding unit tests and running them against in-memory SQLite
  was proven ineffective. The only solution we've come to is to run all
  db-related tests against real RDBMSes.

 --
 Best regards,
 Boris Bobrov





 --
 Best regards,
 Nick Markov





[openstack-dev] [nova] Deprecation of ComputeFilter

2015-03-06 Thread Sylvain Bauza

Hi,

First, sorry for cross-posting on both the dev and operator MLs, but I also
would like to get operators' feedback.


So, I was reviewing the scheduler ComputeFilter and I was wondering why 
the logic should be in a filter.
We indeed already have a check on the service information each time that 
a request is coming in, which is done by 
HostManager.get_all_host_states() - basically called by 
FilterScheduler._schedule()


Instead, I think it is error-prone to leave that logic in a filter
because it can easily be accidentally removed from the list of filters.
Besides, the semantics of the filter are not well known, and operators
might not understand that it is filtering on a Service RPC status, not
the real compute node behind it.


In order to keep a possibility for operators to explicitly ask the
FilterScheduler to also filter on disabled hosts, I propose to add a
config option which would be self-explicit.
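
What the self-explicit option could look like (the option name is made up
for illustration):

    from oslo_config import cfg

    host_manager_opts = [
        cfg.BoolOpt('scheduler_ignore_disabled_hosts',
                    default=True,
                    help='If True, HostManager drops compute services that '
                         'are disabled or whose service heartbeat is stale '
                         'before any filters run.'),
    ]

    cfg.CONF.register_opts(host_manager_opts)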


So, I made a quick POC for showing how we could move the logic to 
HostManager [1]. Feel free to review it and share your thoughts both on 
the change and here, because I want to make sure that we get a consensus 
on the removal before really submitting anything.



[1] https://review.openstack.org/#/c/162180/

-Sylvain




Re: [openstack-dev] [ceilometer] [gnocchi] monitoring unexpected success of xfail designated tests

2015-03-06 Thread Julien Danjou
On Fri, Mar 06 2015, Chris Dent wrote:

 Gnocchi has already started using pretty_tox.sh, so as long as there
 is a recent version of subunit-trace (from tempest-lib) installed,
 gnocchi test runs will tell when there has been an unexpected
 success.

Are you saying that unexpected success is not a failure?
That sounds wrong. Where should we fix that?

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info




[openstack-dev] [Fuel] Testing DB migrations

2015-03-06 Thread Nikolay Markov
Hi everybody,

From time to time some bugs appear regarding failed database migrations
during upgrade, and we have a High-priority bug for 6.1 (
https://bugs.launchpad.net/fuel/+bug/1391553) on testing this migration
process. I want to start a thread for discussing how we're going to do it.

I don't see any obvious solution, but we can at least start adding tests
together with any changes in migrations, which will use a number of various
fake environments upgrading and downgrading DB.

Any thoughts?

-- 
Best regards,
Nick Markov


Re: [openstack-dev] HTTPD Config

2015-03-06 Thread Rich Megginson

On 03/06/2015 12:37 AM, Matthias Runge wrote:

On 05/03/15 19:49, Adam Young wrote:


I'd like to drop port 5000 altogether, as we are using a port assigned
to a different service.  35357 is also problematic as it is in the
middle of the ephemeral range.  Since we are talking about running
everything in one web server anyway, using port 80/443 for all web stuff
is the right approach.

I have thought about this as well. The issue here is that URLs for keystone
and horizon will probably clash
(is https://server/api/... a call for keystone or for horizon?).

No matter what we do in devstack, this is something, horizon and
keystone devs need to fix first. E.g. in Horizon, we still discover hard
coded URLs here and there. To catch that kind of things, I had a patch
up for review, to easily configure moving Horizon from using http server
root to something different.

I would expect the same thing for keystone, too.


It's the same thing for almost every project.  I've been working on the 
puppet-openstack code quite a bit lately, and there are many, many 
places that assume keystone is listening to http(s)://host:5000/v2.0 or 
host:35357/v2.0.



Matthias







Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-06 Thread Attila Fazekas




- Original Message -
 From: Attila Fazekas afaze...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Friday, March 6, 2015 4:19:18 PM
 Subject: Re: [openstack-dev] [nova] blueprint about multiple workers 
 supported in nova-scheduler
 
 Looks like we need some kind of _per compute node_ mutex in the critical
 section,
 multiple scheduler MAY be able to schedule to two compute node at same time,
 but not for scheduling to the same compute node.
 
 If we don't want to introduce another required component or
 reinvent the wheel there are some possible trick with the existing globally
 visible
 components like with the RDMS.
 
 `Randomized` destination choose is recommended in most of the possible
 solutions,
 alternatives are much more complex.
 
 One SQL example:
 
 * Add `sched_cnt`, defaul=0, Integer field; to a hypervisors related table.
 
 When the scheduler picks one (or multiple) node, he needs to verify is the
 node(s) are
 still good before sending the message to the n-cpu.
 
 It can be done by re-reading the ONLY the picked hypervisor(s) related data.
 with `LOCK IN SHARE MODE`.
 If the destination hyper-visors still OK:
 
 Increase the sched_cnt value exactly by 1,
 test is the UPDATE really update the required number of rows,
 the WHERE part needs to contain the previous value.

This part is very likely not needed if all the schedulers need
to update the same field for the same host, and they
acquire the RW lock for reading before they upgrade it to a WRITE lock.

Another strategy might be to pre-acquire the write lock only,
but the write intent is not certain before we re-read and verify the data.
 
 
 You also need to update the resource usage on the hypervisor,
  by the expected cost of the new vms.
 
 If at least one selected node was ok, the transaction can be COMMITed.
 If you were able to COMMIT the transaction, the relevant messages
  can be sent.
 
 The whole process needs to be repeated with the items which did not passed
 the
 post verification.
 
 If a message sending failed, `act like` migrating the vm to another host.
 
 If multiple scheduler tries to pick multiple different host in different
 order,
 it can lead to a DEADLOCK situation.
 Solution: Try to have all scheduler to acquire to Shared RW locks in the same
 order,
 at the end.
 
 Galera multi-writer (Active-Active) implication:
 As always, retry on deadlock.
 
 n-sch + n-cpu crash at the same time:
 * If the scheduling is not finished properly, it might be fixed manually,
 or we need to solve which still alive scheduler instance is
 responsible for fixing the particular scheduling..
 
 
 - Original Message -
  From: Nikola Đipanov ndipa...@redhat.com
  To: openstack-dev@lists.openstack.org
  Sent: Friday, March 6, 2015 10:29:52 AM
  Subject: Re: [openstack-dev] [nova] blueprint about multiple workers
  supported in nova-scheduler
  
  On 03/06/2015 01:56 AM, Rui Chen wrote:
   Thank you very much for in-depth discussion about this topic, @Nikola
   and @Sylvain.
   
   I agree that we should solve the technical debt firstly, and then make
   the scheduler better.
   
  
  That was not necessarily my point.
  
  I would be happy to see work on how to make the scheduler less volatile
  when run in parallel, but the solution must acknowledge the eventually
  (or never really) consistent nature of the data scheduler has to operate
  on (in it's current design - there is also the possibility of offering
  an alternative design).
  
  I'd say that fixing the technical debt that is aimed at splitting the
  scheduler out of Nova is a mostly orthogonal effort.
  
  There have been several proposals in the past for how to make the
  scheduler horizontally scalable and improve it's performance. One that I
  remember from the Atlanta summit time-frame was the work done by Boris
  and his team [1] (they actually did some profiling and based their work
  on the bottlenecks they found). There are also some nice ideas in the
  bug lifeless filed [2] since this behaviour particularly impacts ironic.
  
  N.
  
  [1] https://blueprints.launchpad.net/nova/+spec/no-db-scheduler
  [2] https://bugs.launchpad.net/nova/+bug/1341420
  
  
   Best Regards.
   
    2015-03-05 21:12 GMT+08:00 Sylvain Bauza sba...@redhat.com:
   
   
    On 05/03/2015 13:00, Nikola Đipanov wrote:
   
   On 03/04/2015 09:23 AM, Sylvain Bauza wrote:
   
    On 04/03/2015 04:51, Rui Chen wrote:
   
   Hi all,
   
   I want to make it easy to launch a bunch of scheduler
   processes on a
   host, multiple scheduler workers will make use of
   multiple processors
   of host and enhance the performance of nova-scheduler.
   
   I had registered a blueprint and commit a patch to
   implement it.

Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-06 Thread Mehdi Abaakouk


Hi, just some oslo.messaging thoughts about having multiple 
nova-scheduler processes (this can also apply to any other daemon acting as 
an rpc server).


nova-scheduler uses service.Service.create() to create an rpc server; that 
one is identified by a 'topic' and a 'server' (the 
oslo.messaging.Target).
Creating multiple workers as [1] does will result in all workers 
sharing the same identity. This is usually because the 'server' is 
set to the 'hostname', to make our life easier.
With rabbitmq for example, the 'server' attribute of the 
oslo.messaging.Target is used for a queue name, you usually have the 
following queues created:


scheduler
scheduler.scheduler-node-1
scheduler.scheduler-node-2
scheduler.scheduler-node-3
...

Keeping things as-is will result in messages that go to 
scheduler.scheduler-node-1 being processed randomly by the first ready 
worker. You will not be able to identify workers from the amqp point of 
view.
The side effect of that is if a worker gets stuck, hits a bug or whatever and 
doesn't consume messages anymore, we will not be able to see it. One of 
the other workers will continue to notify that scheduler-node-1 works and 
consume new messages even if all of them are dead/stuck except one.


So I think that each rpc server (each worker) should have a different 
'server', to get amqp queues like this:


scheduler
scheduler.scheduler-node-1-worker-1
scheduler.scheduler-node-1-worker-2
scheduler.scheduler-node-1-worker-3
scheduler.scheduler-node-2-worker-1
scheduler.scheduler-node-2-worker-2
scheduler.scheduler-node-3-worker-1
scheduler.scheduler-node-3-worker-2
...
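
A hedged sketch of how each worker could build its own identity; the
worker_id argument (coming from the process-spawning code) and the topic
name are assumptions for illustration:

    import socket

    from oslo_config import cfg
    import oslo_messaging

    def build_rpc_server(worker_id, endpoints):
        transport = oslo_messaging.get_transport(cfg.CONF)
        # Per-worker 'server' name, so rabbit creates e.g.
        # scheduler.scheduler-node-1-worker-1 instead of all workers
        # competing on one shared scheduler.scheduler-node-1 queue.
        target = oslo_messaging.Target(
            topic='scheduler',
            server='%s-worker-%d' % (socket.gethostname(), worker_id))
        return oslo_messaging.get_rpc_server(transport, target, endpoints)

Each worker then consumes from its own queue, so a stuck or dead worker
becomes visible from the amqp side.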

Cheers,


[1] https://review.openstack.org/#/c/159382/
---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a Neutron Core Reviewer

2015-03-06 Thread Akihiro Motoki
+1

Ihar has been doing great work and he is a nice addition to the team.

On Thursday, March 5, 2015, Edgar Magana edgar.mag...@workday.com wrote:

  No doubt about it!

  +1  Cheers for a new extremely good core member!

  Thanks,

  Edgar

   From: Kyle Mestery mest...@mestery.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Wednesday, March 4, 2015 at 11:42 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a
 Neutron Core Reviewer

I'd like to propose that we add Ihar Hrachyshka to the Neutron core
 reviewer team. Ihar has been doing a great job reviewing in Neutron as
 evidenced by his stats [1]. Ihar is the Oslo liaison for Neutron, and he's been
 doing a great job keeping Neutron current there. He's already a critical
 reviewer for all the Neutron repositories. In addition, he's a stable
 maintainer. Ihar makes himself available in IRC, and has done a great job
 working with the entire Neutron team. His reviews are thoughtful and he
 really takes time to work with code submitters to ensure his feedback is
 addressed.

  I'd also like to again remind everyone that reviewing code is a
 responsibility, in Neutron the same as other projects. And core reviewers
 are especially beholden to this responsibility. I'd also like to point out
 and reinforce that +1/-1 reviews are super useful, and I encourage everyone
 to continue reviewing code across Neutron as well as the other OpenStack
 projects, regardless of your status as a core reviewer on these projects.

  Existing Neutron cores, please vote +1/-1 on this proposal to add Ihar to
 the core reviewer team.

  Thanks!
  Kyle

 [1] http://stackalytics.com/report/contribution/neutron-group/90



-- 
Akihiro Motoki amot...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] HTTPD Config

2015-03-06 Thread Adam Young

On 03/06/2015 02:37 AM, Matthias Runge wrote:

On 05/03/15 19:49, Adam Young wrote:


I'd like to drop port 5000 altogether, as we are using a port assigned
to a different service.  35357 is also problematic as it is in the
middle of the Ephemeral range.  Since we are talking about running
everything in one web server anyway, using port 80/443 for all web stuff
is the right approach.

I have thought about this as well. The issue here is that URLs for keystone
and horizon will probably clash
(is https://server/api/... a call for keystone or one for horizon?).

No matter what we do in devstack, this is something horizon and
keystone devs need to fix first. E.g. in Horizon, we still discover hard
coded URLs here and there. To catch that kind of thing, I had a patch
up for review, to easily configure moving Horizon from using the http server
root to something different.

Link?




I would expect the same thing for keystone, too.
Keystone is pretty well set to be moved around.  The way the URLs are 
built is already hierarchical, and so there should be no problem.  
Other services might be a different story.



OK, let's assume we fix the bugs we catch.  How do we cleanly put all of 
the horizon stuff in its own config section so we can make it work 
beside a Keystone server?  I can see a desire to use virtual hosts for 
hostnames in some deployments, so that they could later be split up.   
For example, keystone.younglogic.net and horizon.younglogic.net could 
start on the same server, and be different servers in the future.


And then there is the devstack approach of  doing everything with IP 
addresses.



So I think the goal is to make it possible to wrap the config in a 
virtual host, but not to require it.  I think we should focus on Location 
tags for the major chunks of configuration.
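
For illustration only, a minimal sketch of that layout (the paths, WSGI
script locations and Apache 2.4 syntax are assumptions, not a tested
devstack config):

    # keystone and horizon side by side under one (virtual) host
    WSGIScriptAlias /identity  /var/www/keystone/main.wsgi
    WSGIScriptAlias /dashboard /var/www/horizon/django.wsgi

    <Location /identity>
        Require all granted
    </Location>
    <Location /dashboard>
        Require all granted
    </Location>

The whole block could later move into a VirtualHost unchanged if a
deployment wants separate hostnames per service.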





Matthias


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [ceilometer] [gnocchi] monitoring unexpected success of xfail designated tests

2015-03-06 Thread Chris Dent

On Fri, 6 Mar 2015, Julien Danjou wrote:


On Fri, Mar 06 2015, Chris Dent wrote:


Gnocchi has already started using pretty_tox.sh, so as long as there
is a recent version of subunit-trace (from tempest-lib) installed,
gnocchi test runs will tell when there has been an unexpected
success.


Are you saying that unexpected success is not a failure?


testr will announce the command succeeded when there is a uxsuccess in the
result set. There may be some special sauce we can give it (or
perhaps subunit.run) to say otherwise, but as things are currently
written in the tox files, uxsuccess means pass.


That sounds wrong. Where should we fix that?


It looks like the problem is in subunit (this is with a locally
modified gabbi test):

$ for i in testtools subunit ; \
  do python -m $i.run discover gnocchi.tests.gabbi > /dev/null || \
  echo $i catches uxsuccess as fail ; \
  done
testtools catches uxsuccess as fail
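
For reference, a minimal way to produce such an unexpected success with
the stdlib unittest machinery (hypothetical module, not a real gnocchi
test):

    import unittest

    class Example(unittest.TestCase):
        @unittest.expectedFailure
        def test_xfail_that_passes(self):
            # passes, so the run records an unexpected success
            self.assertTrue(True)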

I've filed a bug: https://bugs.launchpad.net/subunit/+bug/1429196

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] VXLAN with single-NIC compute nodes: Avoiding the MTU pitfalls

2015-03-06 Thread Fredy Neeser

Hello world

I recently created a VXLAN test setup with single-NIC compute nodes 
(using OpenStack Juno on Fedora 20), consciously ignoring the OpenStack 
advice of using nodes with at least 2 NICs ;-) .


The fact that both native and encapsulated traffic needs to pass through 
the same NIC does create some interesting challenges, but finally I got 
it working cleanly, staying clear of MTU pitfalls ...


I documented my findings here:

  [1] 
http://blog.systemathic.ch/2015/03/06/openstack-vxlan-with-single-nic-compute-nodes/
  [2] 
http://blog.systemathic.ch/2015/03/05/openstack-mtu-pitfalls-with-tunnels/


For those interested in single-NIC setups, I'm curious what you think 
about [1]  (a small patch is needed to add VLAN awareness to the 
qg-XXX Neutron gateway ports).



While catching up with Neutron changes for OpenStack Kilo, I came across 
the in-progress work on MTU selection and advertisement:


  [3]  Spec: 
https://github.com/openstack/neutron-specs/blob/master/specs/kilo/mtu-selection-and-advertisement.rst

  [4]  Patch review:  https://review.openstack.org/#/c/153733/
  [5]  Spec update:  https://review.openstack.org/#/c/159146/

Seems like [1] eliminates some additional MTU pitfalls that are not 
addressed by [3-5].


But I think it would be nice if we could achieve [1] while coordinating 
with the MTU selection and advertisement work [3-5].


Thoughts?

Cheers,
- Fredy

Fredy (Freddie) Neeser
http://blog.systeMathic.ch


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-06 Thread Nikola Đipanov
On 03/06/2015 01:56 AM, Rui Chen wrote:
 Thank you very much for in-depth discussion about this topic, @Nikola
 and @Sylvain.
 
 I agree that we should solve the technical debt firstly, and then make
 the scheduler better.
 

That was not necessarily my point.

I would be happy to see work on how to make the scheduler less volatile
when run in parallel, but the solution must acknowledge the eventually
(or never really) consistent nature of the data scheduler has to operate
on (in its current design - there is also the possibility of offering
an alternative design).

I'd say that fixing the technical debt that is aimed at splitting the
scheduler out of Nova is a mostly orthogonal effort.

There have been several proposals in the past for how to make the
scheduler horizontally scalable and improve its performance. One that I
remember from the Atlanta summit time-frame was the work done by Boris
and his team [1] (they actually did some profiling and based their work
on the bottlenecks they found). There are also some nice ideas in the
bug lifeless filed [2] since this behaviour particularly impacts ironic.

N.

[1] https://blueprints.launchpad.net/nova/+spec/no-db-scheduler
[2] https://bugs.launchpad.net/nova/+bug/1341420


 Best Regards.
 
 2015-03-05 21:12 GMT+08:00 Sylvain Bauza sba...@redhat.com:
 
 
 On 05/03/2015 13:00, Nikola Đipanov wrote:
 
 On 03/04/2015 09:23 AM, Sylvain Bauza wrote:
 
 On 04/03/2015 04:51, Rui Chen wrote:
 
 Hi all,
 
 I want to make it easy to launch a bunch of scheduler
 processes on a
 host, multiple scheduler workers will make use of
 multiple processors
 of host and enhance the performance of nova-scheduler.
 
 I had registered a blueprint and commit a patch to
 implement it.
 
 https://blueprints.launchpad.net/nova/+spec/scheduler-multiple-workers-support
 
 This patch has been applied in our performance environment
 and passed some
 test cases, like concurrent booting of multiple instances;
 so far we
 didn't find an inconsistency issue.
 
 IMO, nova-scheduler should be scaled horizontally in an
 easy way; the
 multiple workers should be supported as an out-of-the-box
 feature.
 
 Please feel free to discuss this feature, thanks.
 
 
 As I said when reviewing your patch, I think the problem is
 not just
 making sure that the scheduler is thread-safe, it's more
 about how the
 Scheduler is accounting resources and providing a retry if those
 consumed resources are higher than what's available.
 
 Here, the main problem is that two workers can actually
 consume two
 distinct resources on the same HostState object. In that
 case, the
 HostState object is decremented by the number of taken
 resources (modulo
 what means a resource which is not an Integer...) for both,
 but nowhere
 in that section, it does check that it overrides the
 resource usage. As
 I said, it's not just about decorating a semaphore, it's
 more about
 rethinking how the Scheduler is managing its resources.
 
 
 That's why I'm -1 on your patch until [1] gets merged. Once
 this BP will
 be implemented, we will have a set of classes for managing
 heterogeneous
 types of resouces and consume them, so it would be quite
 easy to provide
 a check against them in the consume_from_instance() method.
 
 I feel that the above explanation does not give the full picture in
 addition to being factually incorrect in several places. I have
 come to
 realize that the current behaviour of the scheduler is subtle enough
 that just reading the code is not enough to understand all the edge
 cases that can come up. The evidence being that it trips up even
 people
 that have spent significant time working on the code.
 
 It is also important to consider the design choices in terms of
 tradeoffs that they were trying to make.
 
 So here are some facts about the way Nova does scheduling of
 instances
 to compute hosts, considering the amount of resources requested
 by the
 flavor (we will try to put the facts into a bigger picture later):
 
 * Scheduler receives request to chose hosts for one or more
 instances.
 * Upon every request 

Re: [openstack-dev] [ceilometer] [gnocchi] monitoring unexpected success of xfail designated tests

2015-03-06 Thread Chris Dent

On Fri, 6 Mar 2015, Chris Dent wrote:


It looks like the problem is in subunit (this is with a locally
modified gabbi test):

   $ for i in testtools subunit ; \
  do python -m $i.run discover gnocchi.tests.gabbi > /dev/null || \
 echo $i catches uxsuccess as fail ; \
 done
   testtools catches uxsuccess as fail

I've filed a bug: https://bugs.launchpad.net/subunit/+bug/1429196


Actually I guess that's just the way subunit works. It doesn't exit
with failure even when there is a failure. So that suggests the
problem is in testr's interpretation of the result stream.

I'll update the bug.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] [all] [stable] python-glanceclient release 0.16.1

2015-03-06 Thread Ian Cordasco
On 3/5/15, 10:56, Dr. Jens Rosenboom j.rosenb...@x-ion.de wrote:

On 05/03/15 at 17:37, Ian Cordasco wrote:
 The clients in general do not back port patches. Someone should work
with
 stable-maint to raise the cap in Icehouse and Juno. I suspect, however,
 that those caps were added due to the client breaking other projects.
 Proposals can be made though and ideally, openstack/requirements’ gate
 jobs will catch any breakage.

I don't think that raising the cap will be feasible anymore with
incompatible changes like the oslo namespace drop.

What prevents clients from having stable branches, other than that we never
did this because up to now it wasn't necessary?

That’s a good question that I can’t answer. I’m relatively new to
OpenStack and this is just how I’ve understood the release management for
clients. Sorry I can’t provide more insight than that.

 On 3/5/15, 10:28, Dr. Jens Rosenboom j.rosenb...@x-ion.de wrote:

 On 05/03/15 at 06:02, Nikhil Komawar wrote:
 The python-glanceclient release management team is pleased to
announce:
   python-glanceclient version 0.16.1 has been released on
Thursday,
 Mar 5th around 04:56 UTC.

 The release includes a bugfix for [1], which is affecting us in
 Icehouse, most likely Juno is also affected. However, due to the
 requirements caps recently introduced, the bugfix will not be picked up
 by the stable branches (caps are <=0.14.2 for Icehouse and <=0.15.0 for
 Juno).

 The patch itself [2] applies cleanly to the older code, so in theory it
 should be possible to build some 0.14.3 release with that and update
the
 stable requirements accordingly. But I guess that this would require
 setting up stable branches for the client git repo, which currently
 don't exist.

 Are there plans to do this or is there some other way to backport the
 fix? I assume that the same issue may happen with other client releases
 in the future.

 [1] https://bugs.launchpad.net/python-glanceclient/+bug/1423165
 [2] https://review.openstack.org/156975


 

__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
_
_
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Glance] Core nominations.

2015-03-06 Thread Kyle Mestery
On Fri, Mar 6, 2015 at 11:40 AM, Ian Cordasco ian.corda...@rackspace.com
wrote:

 I like that idea. Can you start it out with Nova or Neutron’s guidelines?

 FYI, the core reviewer guidelines for Neutron are in-tree now [1], along
with all of our other policies around operating in Neutron [2].

[1]
https://github.com/openstack/neutron/blob/master/doc/source/policies/core-reviewers.rst
[2] https://github.com/openstack/neutron/tree/master/doc/source/policies


 On 3/5/15, 17:38, Mikhail Fedosin mfedo...@mirantis.com wrote:

 I think yes, it does. But I mean that now we're writing a document called
 Glance Review Guidelines
 
 
 https://docs.google.com/document/d/1Iia0BjQoXvry9XSbf30DRwQt--ODglw-ZTT_5RJabsI/edit?usp=sharing
 and it has a section For cores. It's easy to
 include some concrete rules there to
 add
 more clarity.
 
 2015-03-05 17:46 GMT+03:00 Ihar Hrachyshka
 ihrac...@redhat.com:
 
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 On 03/05/2015 11:35 AM, Mikhail Fedosin wrote:
  Yes, it's absolutely right. For example, Nova and Neutron have
  official rules for that:
  https://wiki.openstack.org/wiki/Nova/CoreTeam where it says: A
  member of the team may be removed at any time by the PTL. This is
  typically due to a drop off of involvement by the member such that
  they are no longer meeting expectations to maintain team
  membership.
 https://wiki.openstack.org/wiki/NeutronCore The PTL
  may remove a member from neutron-core at any time. Typically when a
  member has decreased their involvement with the project through a
  drop in reviews and participation in general project development,
  the PTL will propose their removal and remove them. Members who
  have previously been core may be fast-tracked back into core if
  their involvement picks back up So, as Louis has mentioned, it's a
  routine work, and why should we be any different? Also, I suggest
  to write the same wiki document for Glance to prevent these issues
  in the future.
 
 
 Does the rule belong to e.g. governance repo? It seems like a sane
 requirement for core *reviewers* to actually review code, no? Or are
 there any projects that would not like to adopt such a rule formally?
 
 /Ihar
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1
 
 iQEcBAEBAgAGBQJU+GxdAAoJEC5aWaUY1u579mEIAMN/wucsahaZ0yMT2/eo8t05
 rIWI+lBLjNueWJgB+zNbVlVcsKBZ/hl4J0O3eE65RtlTS5Rta5hv0ymyRL1nnUZH
 g/tL3ogEF0SsSBkiavVh3klGmUwsvQ+ygPN5rVgnbiJ+uO555EPlbiHwZHbcjBoI
 lyUjIhWzUCX26wq7mgiTsY858UgdEt3urVHD9jTE2WNszMRLXQ7vsoAik9xDfISz
 E0eZ8WVQKlNHNox0UoKbobdb3YDhmY3Ahp9Fj2cT1IScyQGORnm0hXV3+pRdWNhD
 1M/gDwAf97F1lfNxPpy4JCGutbe5zoPQYLpJExzaXkqqARxUnwhB1gZ9lEG8l2o=
 =lcLY
 -END PGP SIGNATURE-
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Announcing Gertty 1.1.0

2015-03-06 Thread James E. Blair
Announcing Gertty 1.1.0
===

Gertty is a console-based interface to the Gerrit Code Review system.

Gertty is designed to support a workflow similar to reading network
news or mail.  It syncs information from Gerrit to local storage to
support disconnected operation and easy manipulation of local git
repos.  It is fast and efficient at dealing with large numbers of
changes and projects.

The full README may be found here:

  https://git.openstack.org/cgit/stackforge/gertty/tree/README.rst

Changes since 1.0.3:


* Gertty is now packaged in both Debian and Fedora.

* Threaded change list: when changes depend on each other, their
  relationship is shown visually in the change list.  This is very
  similar to how mail and news readers show message threads.

* Starred changes are supported.

* Sortable change list: lists of changes may now be sorted by change
  number, or how recently they were updated.

* Summary vote values are now colorized in change lists.

* Review dialog improvements: the review dialog now includes
  descriptions of votes and appropriate colors for each value.

* Improved responsiveness: background syncing operations release
  database locks faster to avoid UI update delays.  Also, toggling
  whether a change is hidden or reviewed is faster.

* A verbose logging option has been added (less info than debug, but
  more than the standard level which only logs errors).

* Long lines are now wrapped in the side-by-side diff view.

* Gertty now checks that its config file is not world readable if it
  contains a password (which is already optional).

* Several bug fixes related to:

  * Keymap customization
  * Errors displaying files with no changes
  * Syncing changes with draft patchsets
  * Errors displaying binary diffs
  * Character encoding issues
  * Searching

Thanks to the following people whose changes are included in this
release:

  Andrew Ruthven
  Antoine Musso
  Bradley Jones
  Cedric Brandily
  Christian Berendt
  David Pursehouse
  Dolph Mathews
  Ian Cordasco
  James Polley
  Jan Kundrát
  Jay Pipes
  Jeremy Stanley
  John L. Villalovos
  Khai Do
  Matthew Treinish

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] HTTPD Config

2015-03-06 Thread Matthias Runge
On Fri, Mar 06, 2015 at 11:08:44AM -0500, Adam Young wrote:
 No matter what we do in devstack, this is something horizon and
 keystone devs need to fix first. E.g. in Horizon, we still discover hard
 coded URLs here and there. To catch that kind of thing, I had a patch
 up for review, to easily configure moving Horizon from using the http server
 root to something different.
 Link?
https://review.openstack.org/#/c/86880/

Obviously, that needs a bit of work now.

-- 
Matthias Runge mru...@redhat.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][Neutron] revert to fix a tftp DHCP issue

2015-03-06 Thread Dan Prince
I mentioned this on #openstack-neutron IRC today but it would be nice to
get more eyes on a (2 weeks old) revert here [1].

The only reason TripleO CI has been working for the last 2 weeks is that we
are cherry-picking this very revert/fix via our CI scripts here [2].

This issue has sort of slipped through the cracks, I think. After talking
about it on IRC, ihrachyshka was kind enough to post a roll-forward fix
here [4].

A good way forwards which we could test along the way with the upstream
CI would be to:

1) Land the revert in [1].

2) Update TripleO CI so that we no longer cherry pick the Neutron fix
here [3].

3) Update the roll-forward fix in [4] to contain the original patch plus
the fixed code and run 'check experimental' on it. This should fire the
TripleO ci job and we'll be able to see if it fails...


[1] https://review.openstack.org/#/c/156853/
[2] http://git.openstack.org/cgit/openstack-infra/tripleo-ci/commit/?id=f90760f425ec6c80da12ebee4922d799824f4440
[3] https://review.openstack.org/#/c/162212/1
[4] https://review.openstack.org/#/c/162260/1

Appreciate help on these issues.

Dan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Core nominations.

2015-03-06 Thread Ian Cordasco
I like that idea. Can you start it out with Nova or Neutron’s guidelines?

On 3/5/15, 17:38, Mikhail Fedosin mfedo...@mirantis.com wrote:

I think yes, it does. But I mean that now we're writing a document called
Glance Review Guidelines

https://docs.google.com/document/d/1Iia0BjQoXvry9XSbf30DRwQt--ODglw-ZTT_5RJabsI/edit?usp=sharing
and it has a section For cores. It's easy to
include some concrete rules there to
add 
more clarity.

2015-03-05 17:46 GMT+03:00 Ihar Hrachyshka
ihrac...@redhat.com:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 03/05/2015 11:35 AM, Mikhail Fedosin wrote:
 Yes, it's absolutely right. For example, Nova and Neutron have
 official rules for that:
 https://wiki.openstack.org/wiki/Nova/CoreTeam where it says: A
 member of the team may be removed at any time by the PTL. This is
 typically due to a drop off of involvement by the member such that
 they are no longer meeting expectations to maintain team
 membership. 
https://wiki.openstack.org/wiki/NeutronCore The PTL
 may remove a member from neutron-core at any time. Typically when a
 member has decreased their involvement with the project through a
 drop in reviews and participation in general project development,
 the PTL will propose their removal and remove them. Members who
 have previously been core may be fast-tracked back into core if
 their involvement picks back up So, as Louis has mentioned, it's a
 routine work, and why should we be any different? Also, I suggest
 to write the same wiki document for Glance to prevent these issues
 in the future.


Does the rule belong to e.g. governance repo? It seems like a sane
requirement for core *reviewers* to actually review code, no? Or are
there any projects that would not like to adopt such a rule formally?

/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBAgAGBQJU+GxdAAoJEC5aWaUY1u579mEIAMN/wucsahaZ0yMT2/eo8t05
rIWI+lBLjNueWJgB+zNbVlVcsKBZ/hl4J0O3eE65RtlTS5Rta5hv0ymyRL1nnUZH
g/tL3ogEF0SsSBkiavVh3klGmUwsvQ+ygPN5rVgnbiJ+uO555EPlbiHwZHbcjBoI
lyUjIhWzUCX26wq7mgiTsY858UgdEt3urVHD9jTE2WNszMRLXQ7vsoAik9xDfISz
E0eZ8WVQKlNHNox0UoKbobdb3YDhmY3Ahp9Fj2cT1IScyQGORnm0hXV3+pRdWNhD
1M/gDwAf97F1lfNxPpy4JCGutbe5zoPQYLpJExzaXkqqARxUnwhB1gZ9lEG8l2o=
=lcLY
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] VXLAN with single-NIC compute nodes: Avoiding the MTU pitfalls

2015-03-06 Thread Attila Fazekas
Can you check if this patch does the right thing [1]:

[1] https://review.openstack.org/#/c/112523/6

- Original Message -
 From: Fredy Neeser fredy.nee...@solnet.ch
 To: openstack-dev@lists.openstack.org
 Sent: Friday, March 6, 2015 6:01:08 PM
 Subject: [openstack-dev] [neutron] VXLAN with single-NIC compute nodes:   
 Avoiding the MTU pitfalls
 
 Hello world
 
 I recently created a VXLAN test setup with single-NIC compute nodes
 (using OpenStack Juno on Fedora 20), consciously ignoring the OpenStack
 advice of using nodes with at least 2 NICs ;-) .
 
 The fact that both native and encapsulated traffic needs to pass through
 the same NIC does create some interesting challenges, but finally I got
 it working cleanly, staying clear of MTU pitfalls ...
 
 I documented my findings here:
 
[1]
 http://blog.systemathic.ch/2015/03/06/openstack-vxlan-with-single-nic-compute-nodes/
[2]
 http://blog.systemathic.ch/2015/03/05/openstack-mtu-pitfalls-with-tunnels/
 
 For those interested in single-NIC setups, I'm curious what you think
 about [1]  (a small patch is needed to add VLAN awareness to the
 qg-XXX Neutron gateway ports).
 
 
 While catching up with Neutron changes for OpenStack Kilo, I came across
 the in-progress work on MTU selection and advertisement:
 
[3]  Spec:
 https://github.com/openstack/neutron-specs/blob/master/specs/kilo/mtu-selection-and-advertisement.rst
[4]  Patch review:  https://review.openstack.org/#/c/153733/
[5]  Spec update:  https://review.openstack.org/#/c/159146/
 
 Seems like [1] eliminates some additional MTU pitfalls that are not
 addressed by [3-5].
 
 But I think it would be nice if we could achieve [1] while coordinating
 with the MTU selection and advertisement work [3-5].
 
 Thoughts?
 
 Cheers,
 - Fredy
 
 Fredy (Freddie) Neeser
 http://blog.systeMathic.ch
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Do No Evil

2015-03-06 Thread Michael Krotscheck
Heya!

So, a while ago Horizon pulled in JSHint to do javascript linting, which is
awesome, but has a rather obnoxious Do no evil licence in the codebase:
https://github.com/jshint/jshint/blob/master/src/jshint.js

StoryBoard had the same issue, and I've recently replaced JSHint with
ESlint for just that reason, but I'm not certain it matters as far as
OpenStack license compatibility goes. I'm personally of the opinion that tools
used != code shipped, but I am neither a lawyer nor a liable party should
my opinion be wrong. Is this something worth revisiting?

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Driver documentation for Kilo [cinder] [neutron] [nova] [trove]

2015-03-06 Thread Anne Gentle
Hi all,

We have been working on streamlining driver documentation for Kilo through
a specification, on the mailing lists, and in my weekly What's Up Doc
updates. Thanks for the reviews while we worked out the solutions. Here's
the final spec:
http://specs.openstack.org/openstack/docs-specs/specs/kilo/move-driver-docs.html

Driver documentation caretakers, please note the following summary:

- At a minimum, driver docs are published in the Configuration Reference,
with tables automatically generated from the code. There's a nice set of
examples in this patch: https://review.openstack.org/#/c/157086/

- If you want full driver docs on docs.openstack.org, please add a contact
person's name and email to this wiki page:
https://wiki.openstack.org/wiki/Documentation/VendorDrivers

- To be included in the April 30 release of the Configuration Reference,
driver docs are due by April 9th.

Thanks all for your collaboration and attention.

Anne


-- 
Anne Gentle
annegen...@justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] [all] glance_store release 0.2.0

2015-03-06 Thread Nikhil Komawar
The glance_store release management team is pleased to announce:

glance_store version 0.2.0 has been released on Friday March 6th around 
20:17 UTC.

For more information, please find the details at:

https://launchpad.net/glance-store/+milestone/v0.2.0

Please report the issues through launchpad:

https://bugs.launchpad.net/glance-store

Thanks,
-Nikhil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ThirdPartyCI][PCI] Intel Third party Hardware based CI for PCI

2015-03-06 Thread Chris Friesen
Hi...it would be good to test a bunch of the 
hugepages/pinning/multi-numa-node-guests/etc. features with real hardware.  The 
normal testing doesn't cover much of that since it's hardware-agnostic.


Chris


On 01/07/2015 08:31 PM, yongli he wrote:

Hi,

Intel set up a hardware-based third-party CI. It has already been running sets of PCI
test cases
for several weeks (it does not send out comments, it just logs the results);
the log server and these test cases seem fairly stable now. To begin giving
comments to the nova
repository, what other necessary work needs to be addressed?

Details:
1. ThirdPartySystems (https://wiki.openstack.org/wiki/ThirdPartySystems) Information:
https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-PCI-CI

2. Sample logs:
http://192.55.68.190/143614/6/

http://192.55.68.190/139900/4

http://192.55.68.190/143372/3/

http://192.55.68.190/141995/6/

http://192.55.68.190/137715/13/

http://192.55.68.190/133269/14/

3. Test cases on github:
https://github.com/intel-hw-ci/Intel-Openstack-Hardware-CI/tree/master/pci_testcases



Thanks
Yongli He



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Question about bug 1314614

2015-03-06 Thread Sławek Kapłoński
Hello,

Today I found bug https://bugs.launchpad.net/neutron/+bug/1314614 because I 
have this problem on my infra. 
I saw that the bug is In Progress, but the change was abandoned quite a long 
time ago. I was wondering: is it possible for neutron to send a notification 
to nova when such a port is deleted in neutron? I know that in Juno neutron 
sends notifications to nova when a port is UP on a compute node, so maybe the 
same mechanism can be used to notify nova that a port no longer exists and 
nova should delete it?

--
Pozdrawiam / Best regards
Sławek Kapłoński
sla...@kaplonski.pl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Websocketproxy.py Invalid Token

2015-03-06 Thread Murali R
Hi Folks,

My Juno setup does not get the console for a started VM and errors out with
a 1006 error. I struggled for a day changing many things, but it does not work.
What could I be doing wrong?

The controller has nova-consoleauth and nova-novncproxy services running
and compute the nova-compute.

Controller setting in nova.conf (192.168.20.129 mgmt  10.145... is
external access):
my_ip = 192.168.20.129
novncproxy_base_url=http://10.145.90.59:6080/vnc_auto.html
vncserver_proxyclient_address = 192.168.20.133

Compute:
vnc_enabled = True
novncproxy_base_url = http://10.145.90.59:6080/vnc_auto.html
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.20.133

I think this is the minimum specified for Juno. Two issues:

1. nova get-vnc-console always returns link with 127.0.0.1 ip addr
2. The link with the IP fixed up says not authorized, and the debug message
is as below:

2015-03-06 02:15:47.439 31595 TRACE nova.console.websocketproxy   File
/usr/lib/python2.7/dist-packages/nova/console/websocketproxy.py, line 63,
in new_websocket_client
2015-03-06 02:15:47.439 31595 TRACE nova.console.websocketproxy raise
Exception(_(Invalid Token))
2015-03-06 02:15:47.439 31595 TRACE nova.console.websocketproxy Exception:
Invalid Token
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Deprecation of ComputeFilter

2015-03-06 Thread Sylvain Bauza

(Adding back the -dev ML as it was removed)


On 06/03/2015 20:25, Jay Pipes wrote:

On 03/06/2015 10:54 AM, Jesse Keating wrote:

On 3/6/15 10:48 AM, Jay Pipes wrote:


Have you ever done this in practice?

One way of doing this would be to enable the host after adding it to a
host aggregate that only has your administrative tenant allowed. Then
launch an instance specifying some host aggregate extra_spec tag, and
the launch request will go to that host...


At Rackspace, scheduling builds against disabled hosts has been done,
unless I am misremembering.


Cool, good to know. Just trying to get my head around the use cases.


As I did say, there are probably other ways around it. A host group AZ
might just work.


Yeah, I think a solution that doesn't rely on a CONF option would be 
my preference. Allowing administrative override of scheduling 
decisions entirely is OK, I guess. But I'd almost prefer an ability 
that simply sidesteps the scheduler altogether and allows the admin to 
launch an instance on a compute node directly without even 
needing to go through the RESTful tenant API at all.




Just to be clear, when evacuating or live-migrating VMs by specifying a 
destination host, it totally overrides the scheduler and doesn't call 
it, but rather calls the Compute Manager directly.
In that case, there is no need to keep the ComputeFilter, because it 
will never be called anyway.


That said, I know there is a pretty old hack of using AZs for specifying 
a destination host in a boot command, and I wonder if it has the same 
behaviour. If not, it should do exactly like the above, and just ask the 
compute node directly without querying the Scheduler.


The only case where the Scheduler would need to look at 
non-active nodes would be when wanting to use aggregate extra fields and 
a matching flavor for sending VM requests to a specific set of hosts 
within an aggregate; there, by removing the inactive nodes, it would 
just call the active ones.


Maybe it's not worth leaving the CONF option, as Jay mentioned, 
so I have to admit I'm in favor of removing the possibility to do this. 
Thoughts?


-Sylvain


Anyway, something to ponder...

Best,
-jay

___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Core nominations.

2015-03-06 Thread Nikhil Komawar
Thank you all for the input outside of the program: Kyle, Ihar, Thierry, Daniel!


Mike, Ian: It's a good idea to have the policy; however, we need to craft one 
that is custom to the Glance program. It will be a bit different from the ones 
out there, as we have contributors who are dedicated to only a subset of the 
code - for example just glance_store or python-glanceclient or metadefs. From 
here on, we may see that for Artifacts and other such features. It's already 
being observed for metadefs.


While I like Mike's suggestion to (semi-)adopt what Nova and Neutron are doing, 
it also makes me wonder if that's going to help us in the long term. If not, 
then what can we do now to set a good path forward?


Flavio, Erno, Malini, Louis, Mike: Drafting a guideline policy and implementing 
rotation based on that was my intent, so that everyone is aware of the changes 
in the program. That would let the core reviewers know what their duties are 
and let non-cores know what they need to do to become cores. Moreover, I have 
an idea for proposing a core-member status for our program, rather than just 
core-reviewer. That seems more applicable for a few strong regular contributors 
like Travis and Lakshmi who work on bug fixes, bug triaging and client 
improvements but do not seem to keep momentum on reviews. The core status 
can affect project decisions; hence, this change may be important. This process 
may involve some interactions with governance, so it will take more time.


Malini: I wish to take a strategic decision here rather than an agile one. That 
needs a lot of brainpower before implementation. While warning and acting is 
good, it seems less applicable in this case, simply because we need to make a 
positive difference in the interactions of the community and we have a chance 
of doing that here.


Nevertheless, I do not want to block the new-core additions or ask Flavio 
et al. to accommodate the reviews that the new members would have been able 
to do (just kidding).


Tweaking Flavio's criterion of cleaning up the list for cores who have not done 
any reviews in the last 2 cycles (Icehouse and Juno), I've prepared a new list 
below (as Flavio's list did not match up even if we take cycles to be Juno, 
Kilo). They can be added back to the list faster in the future if they consider 
contributing to Glance again.


The criterion is:

Reviews >= 50 in combined cycles.


Proposal to remove the following members(review_count) from the glance-core 
list:

  *   Brian Lamar (0+15)
  *   Brian Waldon (0+0)
  *   Dan Prince (3+1)
  *   Eoghan Glynn (0+3)
  *   John Bresnahan (31+12)

And we would add the following new members:

  *   Ian Cordasco
  *   Louis Taylor
  *   Mike Fedosin
  *   Hemanth Makkapati


This way we've a first round of consolidation done. It must be evident that the 
list-cleanup proposed above is not comprehensive with regard to who is truly 
inactive; thus, it misses out on a few names due to the lack of an established 
criterion. We can do more about rotation in the following weeks.


Hope it works!


Regards,
-Nikhil

From: Kyle Mestery mest...@mestery.com
Sent: Friday, March 6, 2015 12:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Core nominations.

On Fri, Mar 6, 2015 at 11:40 AM, Ian Cordasco 
ian.corda...@rackspace.com wrote:
I like that idea. Can you start it out with Nova or Neutron’s guidelines?

FYI, the core reviewer guidelines for Neutron are in-tree now [1], along with 
all of our other policies around operating in Neutron [2].

[1] 
https://github.com/openstack/neutron/blob/master/doc/source/policies/core-reviewers.rst
[2] https://github.com/openstack/neutron/tree/master/doc/source/policies

On 3/5/15, 17:38, Mikhail Fedosin 
mfedo...@mirantis.com wrote:

I think yes, it does. But I mean that now we're writing a document called
Glance Review Guidelines

https://docs.google.com/document/d/1Iia0BjQoXvry9XSbf30DRwQt--ODglw-ZTT_5RJabsI/edit?usp=sharing
and it has a section For cores. It's easy to
include some concrete rules there to
add
more clarity.

2015-03-05 17:46 GMT+03:00 Ihar Hrachyshka
ihrac...@redhat.com:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 03/05/2015 11:35 AM, Mikhail Fedosin wrote:
 Yes, it's absolutely right. For example, Nova and Neutron have
 official rules for that:
 https://wiki.openstack.org/wiki/Nova/CoreTeam where it says: A
 member of the team may be removed at any time by the PTL. This is
 typically due to a drop off of involvement by the member such that
 they are no longer meeting expectations to maintain team
 membership.
https://wiki.openstack.org/wiki/NeutronCore The PTL
 may remove a member from neutron-core at any time. Typically when a
 member 

Re: [openstack-dev] [Glance] Core nominations.

2015-03-06 Thread Ian Cordasco


On 3/6/15, 15:04, Nikhil Komawar nikhil.koma...@rackspace.com wrote:

Thank you all for the input outside of the program: Kyle, Ihar, Thierry,
Daniel!


Mike, Ian: It's a good idea to have the policy however, we need to craft
one that is custom to the Glance program. It will be a bit different to
ones out there as we've contributors who are dedicated to only subset of
the code - for example just glance_store
 or python-glanceclient or metadefs. From here on, we may see that for
Artifacts and other such features. It's already being observed for
metadefs.


While I like Mike's suggestion to (semi-)adopt what Nova and Neutron are
doing, it also makes me wonder if that's going to help us in long term.
If not, then what can we do now to set a good path forward?


Flavio, Erno, Malini, Louis, Mike: Drafting a guideline policy and
implementing rotation based on that was my intent so that everyone is
aware of the changes in the program. That would let the core reviewers
know what their duties are and let non-cores know
 what they need to do to become cores. Moreover, I've a idea for
proposing a core-member status for our program than just core-reviewer.
That seems more applicable for a few strong regular contributors like
Travis and Lakshmi who work on bug fixes, bug triaging
 and client improvements however, do not seem to keep momentum on
reviews. The core status can affect project decisions hence, this change
may be important. This process may involve some interactions with
governance so, will take more time.


Malini: I wish to take a strategic decision here rather an agile one.
That needs a lot of brainpower before implementation. While warning and
acting is good, it seems less applicable for this case. Simply because,
we need to make a positive difference in
 the interactions of the community and we've a chance of doing that here.



Nevertheless, I do not want to block the new-core additions or ask Flavio
et.al. to accommodate for the reviews that the new members would have
been able to do (just kidding).



Tweaking Flavio's criterion of cleaning up the list for cores who have
not done any reviews in the last 2 cycles (Icehouse and Juno), I've
prepared a new list below (as Flavio's list did not match up even if we
take cycles to be Juno, Kilo). They can be
 added back to the list faster in the future if they consider
contributing to Glance again.


The criterion is:
Reviews >= 50 in combined cycles.


Proposal to remove the following members(review_count) from the
glance-core list:

* Brian Lamar (0+15)
* Brian Waldon (0+0)
* Dan Prince (3+1)
* Eoghan Glynn (0+3)
* John Bresnahan (31+12)

+1 to removing inactive core reviewers.


And we would add the following new members:

* Ian Cordasco

Who’s that guy? Why is he being nominated? =P

* Louis Taylor
* Mike Fedosin
* Hemanth Makkapati

+1 to these three being added though.



This way we've a first round of consolidation done. It must be evident
that the list-cleanup proposed above is not comprehensive with regards to
who is truly inactive. Thus, misses out on a few names due to lack of
established criterion. We can do more about
 rotation in the following weeks.


Hope it works!



Regards,
-Nikhil



From: Kyle Mestery mest...@mestery.com
Sent: Friday, March 6, 2015 12:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Core nominations.

On Fri, Mar 6, 2015 at 11:40 AM, Ian Cordasco
ian.corda...@rackspace.com wrote:

I like that idea. Can you start it out with Nova or Neutron’s guidelines?



FYI, the core reviewer guidelines for Neutron are in-tree now [1], along
with all of our other policies around operating in Neutron [2].

[1] https://github.com/openstack/neutron/blob/master/doc/source/policies/core-reviewers.rst
[2] https://github.com/openstack/neutron/tree/master/doc/source/policies
 


On 3/5/15, 17:38, Mikhail Fedosin mfedo...@mirantis.com wrote:

I think yes, it does. But I mean that now we're writing a document called
Glance Review Guidelines

https://docs.google.com/document/d/1Iia0BjQoXvry9XSbf30DRwQt--ODglw-ZTT_5RJabsI/edit?usp=sharing
and it has a section For cores. It's easy to
include some concrete rules there to
add
more clarity.

2015-03-05 17:46 GMT+03:00 Ihar Hrachyshka
ihrac...@redhat.com:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 03/05/2015 11:35 AM, Mikhail Fedosin wrote:
 Yes, it's absolutely right. For example, Nova and Neutron have
 official rules for that:
 https://wiki.openstack.org/wiki/Nova/CoreTeam where it says: A
 member of the team may be removed at any time by the PTL. This is
 typically due to a drop off of involvement by the member such that
 they are 

Re: [openstack-dev] [Glance] Core nominations.

2015-03-06 Thread Flavio Percoco

On 06/03/15 21:09 +, Ian Cordasco wrote:



On 3/6/15, 15:04, Nikhil Komawar nikhil.koma...@rackspace.com wrote:


Thank you all for the input outside of the program: Kyle, Ihar, Thierry,
Daniel!


Mike, Ian: It's a good idea to have the policy however, we need to craft
one that is custom to the Glance program. It will be a bit different to
ones out there as we've contributors who are dedicated to only subset of
the code - for example just glance_store
or python-glanceclient or metadefs. From here on, we may see that for
Artifacts and other such features. It's already being observed for
metadefs.


While I like Mike's suggestion to (semi-)adopt what Nova and Neutron are
doing, it also makes me wonder if that's going to help us in long term.
If not, then what can we do now to set a good path forward?


Flavio, Erno, Malini, Louis, Mike: Drafting a guideline policy and
implementing rotation based on that was my intent so that everyone is
aware of the changes in the program. That would let the core reviewers
know what their duties are and let non-cores know
what they need to do to become cores. Moreover, I've a idea for
proposing a core-member status for our program than just core-reviewer.
That seems more applicable for a few strong regular contributors like
Travis and Lakshmi who work on bug fixes, bug triaging
and client improvements however, do not seem to keep momentum on
reviews. The core status can affect project decisions hence, this change
may be important. This process may involve some interactions with
governance so, will take more time.


Malini: I wish to take a strategic decision here rather an agile one.
That needs a lot of brainpower before implementation. While warning and
acting is good, it seems less applicable for this case. Simply because,
we need to make a positive difference in
the interactions of the community and we've a chance of doing that here.



Nevertheless, I do not want to block the new-core additions or ask Flavio
et.al. to accommodate for the reviews that the new members would have
been able to do (just kidding).



Tweaking Flavio's criterion of cleaning up the list for cores who have
not done any reviews in the last 2 cycles (Icehouse and Juno), I've
prepared a new list below (as Flavio's list did not match up even if we
take cycles to be Juno, Kilo). They can be
added back to the list faster in the future if they consider
contributing to Glance again.


The criterion is:
Reviews >= 50 in combined cycles.


Proposal to remove the following members(review_count) from the
glance-core list:

* Brian Lamar (0+15)
* Brian Waldon (0+0)
* Dan Prince (3+1)
* Eoghan Glynn (0+3)
* John Bresnahan (31+12)


+1 to removing inactive core reviewers.


+2

this is a good start.





And we would add the following new members:

* Ian Cordasco


Who’s that guy? Why is he being nominated? =P


Oh dear...




* Louis Taylor
* Mike Fedosin
* Hemanth Makkapati


+1 to these three being added though.


+2





This way we've a first round of consolidation done. It must be evident
that the list-cleanup proposed above is not comprehensive with regards to
who is truly inactive. Thus, misses out on a few names due to lack of
established criterion. We can do more about
rotation in the following weeks.


Hope it works!


Thanks for taking care of this,
Flavio





Regards,
-Nikhil



From: Kyle Mestery mest...@mestery.com
Sent: Friday, March 6, 2015 12:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Core nominations.

On Fri, Mar 6, 2015 at 11:40 AM, Ian Cordasco
ian.corda...@rackspace.com wrote:

I like that idea. Can you start it out with Nova or Neutron’s guidelines?



FYI, the core reviewer guidelines for Neutron are in-tree now [1], along
with all of our other policies around operating in Neutron [2].

[1] https://github.com/openstack/neutron/blob/master/doc/source/policies/core-reviewers.rst
[2] https://github.com/openstack/neutron/tree/master/doc/source/policies



On 3/5/15, 17:38, Mikhail Fedosin mfedo...@mirantis.com wrote:


I think yes, it does. But I mean that now we're writing a document called
Glance Review Guidelines

https://docs.google.com/document/d/1Iia0BjQoXvry9XSbf30DRwQt--ODglw-ZTT_5RJabsI/edit?usp=sharing
and it has a section For cores. It's easy to
include some concrete rules there to
add
more clarity.

2015-03-05 17:46 GMT+03:00 Ihar Hrachyshka
ihrac...@redhat.com:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 03/05/2015 11:35 AM, Mikhail Fedosin wrote:

Yes, it's absolutely right. For example, Nova and Neutron have
official rules for that:
https://wiki.openstack.org/wiki/Nova/CoreTeam where it says: A
member