Re: [openstack-dev] Rally scenario Issue

2014-09-03 Thread masoom alam
Hi Ajay,

We are testing the same scenario that you are working on, but getting the
following error:

http://paste.openstack.org/show/105029/

Could you be of any help here?

Thanks




On Wed, Sep 3, 2014 at 4:16 AM, Ajay Kalambur (akalambu) akala...@cisco.com
 wrote:

  Hi Guys
 For the throughput tests I need to be able to install iperf on the cloud
 image. For this, a DNS server needs to be set, but the current network
 context does not support setting a DNS name server.
 Should we add that to the network context?
 Ajay



   From: Boris Pavlovic bo...@pavlovic.me

 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Friday, August 29, 2014 at 2:08 PM

 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Cc: Harshil Shah (harsshah) harss...@cisco.com
 Subject: Re: [openstack-dev] Rally scenario Issue

   Timur,

  Thanks for pointing Ajay in the right direction.

  Ajay,

   Also I cannot see this failure unless I run rally with the –v –d options.


  Actually, Rally stores information about all failures. To get
 information about them, you can run the following command:

  *rally task results --pprint*

  It will display all information about all iterations (including
 exceptions).


   Second, when most of the steps in the scenario failed, like attaching to
 the network, ssh, and running a command, why bother reporting the results?


  Because bad results are better than nothing...


  Best regards,
 Boris Pavlovic


 On Sat, Aug 30, 2014 at 12:54 AM, Timur Nurlygayanov 
 tnurlygaya...@mirantis.com wrote:

   Hi Ajay,

  looks like you need to use the NeutronContext feature to configure Neutron
 networks during benchmark execution.
  We are now working on merging two different commits with the NeutronContext
 implementation:
 https://review.openstack.org/#/c/96300 and
 https://review.openstack.org/#/c/103306

  Could you please apply commit https://review.openstack.org/#/c/96300
 and run your benchmarks? A Neutron network with subnetworks and routers will
 be automatically created for each created tenant, and you should be able to
 connect to the VMs. Please note that you should add the following part to
 your task JSON to enable the Neutron context:
 ...
 "context": {
     ...
     "neutron_network": {
         "network_cidr": "10.%s.0.0/16"
     }
 }
 ...

  Hope this will help.



  On Fri, Aug 29, 2014 at 11:42 PM, Ajay Kalambur (akalambu) 
 akala...@cisco.com wrote:

   Hi
 I am trying to run the Rally scenario boot-runcommand-delete. This
 scenario has the following code
    def boot_runcommand_delete(self, image, flavor,
                               script, interpreter, username,
                               fixed_network="private",
                               floating_network="public",
                               ip_version=4, port=22,
                               use_floatingip=True, **kwargs):
        server = None
        floating_ip = None
        try:
            print("fixed network:%s floating network:%s"
                  % (fixed_network, floating_network))
            server = self._boot_server(
                self._generate_random_name("rally_novaserver_"),
                image, flavor, key_name='rally_ssh_key', **kwargs)

            self.check_network(server, fixed_network)

  The question I have is: the instance is created with a call to
 boot_server, but no networks are attached to this server instance. The next
 step checks whether the fixed network is attached to the instance, and sure
 enough it fails at the check_network call. Also, I cannot see this failure
 unless I run rally with the –v –d options. So it actually reports benchmark
 scenario numbers in a table, with no errors, when I run:
 rally task start boot-and-delete.json

  And it reports results. First, what am I missing in this case? The thing
 is, I am using Neutron, not nova-network.
 Second, when most of the steps in the scenario failed, like attaching to
 the network, ssh, and running a command, why bother reporting the results?

  Ajay


  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

  Timur,
 QA Engineer
 OpenStack Projects
 Mirantis Inc

 http://www.openstacksv.com/





Re: [openstack-dev] [TripleO] Review metrics - what do we want to measure?

2014-09-03 Thread Jesus M. Gonzalez-Barahona
On Wed, 2014-09-03 at 12:58 +1200, Robert Collins wrote:
 On 14 August 2014 11:03, James Polley j...@jamezpolley.com wrote:
  In recent history, we've been looking each week at stats from
  http://russellbryant.net/openstack-stats/tripleo-openreviews.html to get a
  gauge on how our review pipeline is tracking.
 
  The main stats we've been tracking have been the since the last revision
  without -1 or -2. I've included some history at [1], but the summary is
  that our 3rd quartile has slipped from 13 days to 16 days over the last 4
  weeks or so. Our 1st quartile is fairly steady lately, around 1 day (down
  from 4 a month ago) and median is unchanged around 7 days.
 
  There was lots of discussion in our last meeting about what could be causing
  this[2]. However, the thing we wanted to bring to the list for the
  discussion is:
 
  Are we tracking the right metric? Should we be looking to something else to
  tell us how well our pipeline is performing?
 
  The meeting logs have quite a few suggestions about ways we could tweak the
  existing metrics, but if we're measuring the wrong thing that's not going to
  help.
 
  I think that what we are looking for is a metric that lets us know whether
  the majority of patches are getting feedback quickly. Maybe there's some
  other metric that would give us a good indication?
 
 If we review all patches quickly and land none, that's bad too :).
 
 For the reviewers specifically, I think we need metric(s) that:
  - don't go bad when submitters go AWOL, don't respond, etc.
    - including when they come back - our stats shouldn't jump hugely
 because an old review was resurrected
  - when good, mean submitters will be getting feedback
  - flag inventory - things we'd be happy to have landed that haven't
    - including things with a -1 from non-core reviewers (*)
 
 (*) I often see -1's on things core wouldn't -1, due to the learning
 curve involved in becoming core
 
 So, as Ben says, I think we need to address the it's-not-a-vote issue
 as a priority; that has tripped us up in lots of ways.
 
 I think we need to discount -workflow patches where that was set by
 the submitter, which AFAICT we don't do today.
 
 Looking at current stats:
 Longest waiting reviews (based on oldest rev without -1 or -2):
 
 54 days, 2 hours, 41 minutes https://review.openstack.org/106167
 (Keystone/LDAP integration)
 That patch had a -1 on Aug 16 1:23 AM, but it was quickly turned into a +2.
 
 So this patch had a -1, then after discussion it became a +2. And it's
 evolved multiple times.
 
 What should we be saying here? Clearly it's had little review input
 over its life, so I think it's sadly accurate.
 
 I wonder if a big chunk of our sliding quartile is just us not
 reviewing the oldest reviews.

I've been researching review process in OpenStack and other projects for
a while, and my impression is that at least three timing metrics are
relevant:

(1) Total time from submitting a patch to final closing of the review
process (landing that, or a subsequent patch, or finally abandoning).
This gives an idea of how the whole process is working.

(2) Time from submitting a patch to that patch being approved (+2 in
OpenStack, I guess) or declined (and a new patch requested). This
gives an idea of how quickly reviewers provide definite feedback to
patch submitters, and is a metric for each patch cycle.

(3) Time from a patch being reviewed, with a new patch being requested,
to a new patch being submitted. This gives an idea of the reaction
time of the patch submitter.

Usually, you want to keep (1) low, while (2) and (3) give you an idea of
what is happening if (1) gets high.

There is another relevant metric in some cases, which is

(4) The number of patch cycles per review cycle (that is, how many
patches are needed per patch landing in master). In some cases, that may
help to explain how (2) and (3) contribute to (1).

And a fifth metric gives you a throughput metric:

(5) BMI (backlog management index), the number of new review processes
divided by the number of closed review processes for a certain period. It
gives an idea of whether the backlog is going up (BMI > 1) or down
(BMI < 1), and is usually very interesting when seen over time.
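The opened/closed ratio in (5) is simple enough to sketch. A minimal
illustration, not tied to any particular review system's API (the
date-pair input format is an assumption for the example):

```python
from datetime import date


def bmi(reviews, period_start, period_end):
    """Backlog Management Index: reviews opened / reviews closed in a period.

    `reviews` is an iterable of (opened, closed) date pairs, with
    closed=None for still-open reviews.  BMI > 1 means the backlog grew
    during the period; BMI < 1 means it shrank.
    """
    opened = sum(1 for o, c in reviews
                 if period_start <= o <= period_end)
    closed = sum(1 for o, c in reviews
                 if c is not None and period_start <= c <= period_end)
    # If nothing closed, the backlog growth is unbounded for the period.
    return opened / closed if closed else float('inf')
```

Computed over a sliding window (say, per week), this is the over-time
view described above.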

(1) alone is not enough to assess how well the review process is doing,
because it could be low while (5) shows an increasing backlog, simply
because new review requests come in too quickly (e.g., in periods when
developers are submitting a lot of patch proposals after a freeze). (1)
could also be high while (5) shows a decrease in the backlog, because for
example reviewers or submitters are overworked or slowly scheduled, but
the project still copes with the backlog. Depending on the relationship
of (1) and (5), maybe you need more reviewers, or reviewers scheduling
their reviews with more priority wrt other actions, or something else.

Note for example that in a project with a low BMI (5) over a long period,
but with a high total delay in reviews (1), usually putting more
reviewers doesn't reduce 

Re: [openstack-dev] [nova][NFV] VIF_VHOSTUSER

2014-09-03 Thread Luke Gorrie
On 1 September 2014 09:10, loy wolfe loywo...@gmail.com wrote:

  If the Neutron-side MD is just for snabbswitch, then I think there is
 nothing to be merged into the tree. Maybe we can learn from the SR-IOV NIC:
 although the backend is vendor-specific, the MD is generic and can support
 snabb, dpdkovs, and other userspace vswitches, etc.


That is an interesting idea.

The Snabb mech driver simply asks Neutron/Nova to bind the port with
VIF_VHOSTUSER. If this is the requirement for other drivers too then it
would seem that we have good potential for sharing code. Perhaps we could
rename mech_snabb to mech_vhostuser, like we have already renamed the VIF
code.

Going forward we would like the Snabb driver to become more mainstream in
the way it manages its agent on the compute host. Currently we use a simple
brute force approach [1] that is intended to protect us from
synchronisation bugs (race conditions) and be compatible with multiple
versions of OpenStack (e.g. the one we deploy with + the one we upstream
towards). If we did not have to support a production version based on a
stable OpenStack release then we might have been less conservative here.

Cheers,
-Luke

[1] Snabb NFV architecture:
https://github.com/SnabbCo/snabbswitch/wiki/Snabb-NFV-Architecture


Re: [openstack-dev] [releases] pbr, postversioning, integrated release workflow

2014-09-03 Thread Thierry Carrez
Robert Collins wrote:
 [...]
 Here's what happens.
 
 The two interacting rules are this:
  - pbr will now error if it tries to create a version number lower
 than the last release (and releases are found via the git tags in the
 branch).
  - pbr treats preversion version numbers as a hard target it has to use
 
 When we make a release (say 2014.1.0), we tag it, and we now have
 setup.cfg containing that version, and that version tagged.
 
 The very next patch will be one patch *after* that version, so it has
 to be a patch for the next release. That makes the minimum legitimate
 release version 2014.1.1, and the local version number will be a dev
 build so 2014.1.1.dev1.g$shahere.
 
  But the target it is required to use is 2014.1.0 - so we get a dev
 build of that (2014.1.0.dev1.g$shahere) - and that's lower than the
 last release (2014.1.0), and thus we trigger the error.
 
 So, if we tag an API server branch with the same version it has in the
 branch, any patch that attempts to change that branch will fail,
 unless that patch is updating the version number in setup.cfg.

I think the current release process already enforces that (that the
patch after the tag is the setup.cfg version bump). That was the only
way to avoid building erroneous versions (2014.1.0.dev1 after 2014.1.0).
Here is what we do currently (say for Icehouse release):

At RC1 time on master branch, a patch is pushed to bump setup.cfg to
2014.2 (a.k.a. open Juno development). On master, future tags will be
2014.2.b1 etc.

A release branch (proposed/icehouse) is created from the last commit
before that version bump. That branch still has 2014.1 in setup.cfg, and
we control what lands on it. At release time, we tag 2014.1.0 on
proposed/icehouse. The very next commit on that branch is a version bump
on setup.cfg to go to 2014.1.1. The future tag on that branch will be
2014.1.1.

If I understand the issue correctly, that process will just continue to
work.

For details, see:
https://wiki.openstack.org/wiki/ReleaseTeam/How_To_Release#Final_release
in particular the "Push new version to master" and "Push .1 version to
stable/$SERIES branch" sections.

 [...]
 Going forward:
 
 * We could just do the above - tag and submit a version fix
 
 * We could submit the version fix as soon as the release sha is
 chosen, before the tag
   - we could also wait for the version fixes to land before tagging
 
 * We could change pbr to not enforce this check again
   - or we could add an option to say 'I don't care'
 
 * We could remove the version numbers from setup.cfg entirely
 
 * We could change pbr to treat preversion versions as a *minimum*
 rather than a *must-reach*.
 
  I'm in favour of the last of those options. It's quite a conceptual
 change from the current definition, which is why we didn't do it
 initially. The way it would work is that when pbr calculates the
 minimum acceptable version based on the tags and sem-ver: headers in
 git history, it would compare the result to the preversion version,
 and if the preversion version is higher, take that. The impact would
 be that if someone lands an ABI break on a stable-release branch, the
 major version would have to be bumped - and for API servers we don't
 want that. *But* that's something we should improve in pbr anyway -
 teach it how we map semver onto the API server projects (e.g. that
 major is the first two components of 2014.1.0, and minor and patch are
 bundled together into the third component).
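 A toy sketch of the difference between the two rules, with versions as
 plain tuples and a hypothetical helper name - this is not pbr's actual
 code, just an illustration of the "hard target" vs "minimum" semantics
 described above:

```python
def next_dev_version(last_tag, target, distance, sha,
                     preversion_is_minimum=True):
    """Sketch of the dev-version choice for an untagged commit.

    last_tag: last release tag on the branch, e.g. (2014, 1, 0)
    target:   version pinned in setup.cfg, e.g. (2014, 1, 0)
    distance: commits since the last tag; sha: abbreviated git sha.
    """
    # Minimum legitimate next release: bump the patch level of the last tag.
    minimum = last_tag[:-1] + (last_tag[-1] + 1,)
    if preversion_is_minimum:
        # Proposed rule: take whichever of target/minimum is higher.
        base = max(minimum, target)
    else:
        # Current rule: the setup.cfg version is a hard target,
        # which errors when it would sort below the last release.
        base = target
        if base <= last_tag:
            raise ValueError(
                '%s.dev would sort below already-released %s'
                % (base, last_tag))
    return '%d.%d.%d.dev%d.g%s' % (base + (distance, sha))
```

 With setup.cfg still at 2014.1.0 and a 2014.1.0 tag present, the current
 rule raises, while the minimum rule yields a 2014.1.1 dev build.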
 
 The reason I prefer the last option over the second last, is that we
 don't have any mechanism to skip versions other than tagging today -
 and doing an alpha-0 tag at the opening of a new cycle just feels a
 little odd to me.

If my analysis above is right, I don't think we need to change anything:
the issue in pbr is only triggered if you try to do something you should
not do (i.e. have the setup.cfg version equal to the tag on non-tagged
commits).

Let me know what you think,

-- 
Thierry Carrez (ttx)



[openstack-dev] [all] Juno Feature freeze - stop approving random changes

2014-09-03 Thread Thierry Carrez
Hi everyone,

Feature freeze is upon us, and with it, its inevitable 20-hour-deep gate
queue. At this point the goal is to complete as many features as
possible before we tag juno-3 (ideally on Thursday). Given the queue
depth, anything that's not already in flight has little chance of
making it by juno-3.

In order to preserve the gate for those last feature patches and reduce
the amount of disruptive feature freeze exceptions, I would like to ask
all core reviewers to stop approving random changes to the gate until
juno-3 milestone is completed.

That means at this point, only approve critical bug fixes, regressions,
security bugfixes or patches that directly complete a targeted feature.
Do *not* approve random bugfixes, or the 3rd patch in a series of 10
that you clearly know won't make it all in time for feature freeze.

Later today I'll be in touch with PTLs (or their release monkey) to
untarget all features that are not already gating: only features with
approved patches stuck in the gate will be kept on the list.

Thanks for helping us do a successful Juno release!

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [TripleO] Review metrics - what do we want to measure?

2014-09-03 Thread Joshua Hesketh

On 9/3/14 10:43 AM, Jeremy Stanley wrote:

On 2014-09-03 11:51:13 +1200 (+1200), Robert Collins wrote:

I thought there was now a thing where zuul can use a different account
per pipeline?

That was the most likely solution we discussed at the summit, but I
don't believe we've implemented it yet (or if we have then it isn't
yet being used for any existing pipelines).
It's currently in review[0], although I think from discussions with 
other zuul devs we're likely to refactor large parts of how we connect 
to systems, and hence this may be delayed. If it's something that's 
needed soon(ish), we could probably do this step before the refactoring.


Cheers,
Josh

[0] https://review.openstack.org/#/c/97391/



[openstack-dev] [tripleo] Meeting time update

2014-09-03 Thread Tomas Sedovic
As you all (hopefully) know, our meetings alternate between Tuesdays
19:00 UTC and Wednesdays 7:00 UTC.

Because of the whining^W weekly-expressed preferences[1] of the
Europe-based folks, the latter meetings are going to be moved by +1 hour.

So the new meeting times are:

* Tuesdays at 19:00 UTC (unchanged)
* Wednesdays at 8:00 UTC (1 hour later)

The first new EU-friendly meeting will take place on Wednesday 17th
September.

The wiki page has been updated accordingly:

https://wiki.openstack.org/wiki/Meetings/TripleO

but I don't know how to reflect the change in the iCal feed. Anyone
willing to do that, please?

[1]:
http://lists.openstack.org/pipermail/openstack-dev/2014-August/043544.html



Re: [openstack-dev] [Infra][Neutron] How to verify link to logs for disabled third-party CI

2014-09-03 Thread Joshua Hesketh

On 9/3/14 12:11 PM, Gary Duan wrote:

Hi,

Our CI system is disabled due to a run-time bug and a wrong log link. I 
have manually verified the system with the sandbox and two Neutron testing 
patches. However, with the CI disabled, I am not able to see its review 
comments on any patch.


Is there a way that I can see what the comment will look like while the 
CI is disabled?


Hi Gary,

If you are using zuul you can use the smtp reporter[0] to email you a 
report in the format as it will appear in gerrit. Otherwise you'll need 
to look at what you'll be posting via ssh (if communicating directly 
with the gerrit api).


Cheers,
Josh

[0] http://ci.openstack.org/zuul/reporters.html#smtp
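For reference, a layout fragment along these lines would mail the report
instead of (or alongside) posting to gerrit. This is a hypothetical
sketch - check the exact key names against the reporter docs linked
above, and the addresses and pipeline details are placeholders:

```yaml
# Hypothetical zuul layout fragment; addresses are placeholders.
pipelines:
  - name: check
    trigger:
      gerrit:
        - event: patchset-created
    success:
      smtp:
        to: third-party-ci-admins@example.com
    failure:
      smtp:
        to: third-party-ci-admins@example.com
```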



Thanks,
Gary




Re: [openstack-dev] [oslo] [infra] Alpha wheels for Python 3.x

2014-09-03 Thread Yuriy Taraday
On Tue, Sep 2, 2014 at 11:17 PM, Clark Boylan cboy...@sapwetik.org wrote:

 On Tue, Sep 2, 2014, at 11:30 AM, Yuriy Taraday wrote:
  Hello.
 
   Currently for alpha releases of oslo libraries we generate either
  universal or Python 2.x-only wheels. This presents a problem: we can't
  adopt alpha releases in projects where Python 3.x is supported and
  verified in the gate. I've run into this in change request [1],
  generated after global-requirements change [2]. There we have the
  oslotest library, which can't be built as a universal wheel because of
  different requirements (mox vs mox3, as I understand, is the main
  difference). Because of that, the py33 job in [1] failed and we can't
  bump the oslotest version in requirements.
 
   I propose to change the infra scripts that generate and upload wheels
  to create py3 wheels as well as py2 wheels for projects that support
  Python 3.x (we can use setup.cfg classifiers to find that out) but
  don't support universal wheels. What do you think about that?
 
  [1] https://review.openstack.org/117940
  [2] https://review.openstack.org/115643
 
  --
 
  Kind regards, Yuriy.

 We may find that we will need to have py3k wheels in addition to the
 existing wheels at some point, but I don't think this use case requires
 it. If oslo.test needs to support python2 and python3 it should use mox3
 in both cases which claims to support python2.6, 2.7 and 3.2. Then you
 can ship a universal wheel. This should solve the immediate problem.


Yes, I think, it's the way to go with oslotest specifically. Created a
change request for this: https://review.openstack.org/118551

It has been pointed out to me that one case where it won't be so easy is
 oslo.messaging and its use of eventlet under python2. Messaging will
 almost certainly need python 2 and python 3 wheels to be separate. I
 think we should continue to use universal wheels where possible and only
 build python2 and python3 wheels in the special cases where necessary.


We can make eventlet an optional dependency of oslo.messaging (through
setuptools' extras). In fact, I don't quite understand the need for eventlet
as a direct dependency there, since we can just write code that uses the
threading library, and it'll get monkey-patched if the consumer app wants to
use eventlet.
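A sketch of what that could look like in setup.py - the package name is
real, but the version pin and the extra's name are assumptions; consumers
wanting the eventlet path would then install `oslo.messaging[eventlet]`:

```python
# setup.py fragment - illustrative only; version pin is an assumption.
from setuptools import setup

setup(
    name='oslo.messaging',
    # ... other arguments elided ...
    extras_require={
        # Pulled in only via "pip install oslo.messaging[eventlet]".
        'eventlet': ['eventlet>=0.13.0'],
    },
)
```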

The setup.cfg classifiers should be able to do that for us, though PBR
 may need updating?


I don't think so - it loads all classifiers from setup.cfg, they should be
available through some distutils machinery.

We will also need to learn to upload potentially more than one
 wheel in our wheel jobs. That bit is likely straightforward. The last
 thing that we need to make sure we do is that we have some testing in
 place for the special wheels. We currently have the requirements
 integration test which runs under python2 checking that we can actually
 install all the things together. This ends up exercising our wheels and
 checking that they actually work. We don't have a python3 equivalent for
 that job. It may be better to work out some explicit checking of the
 wheels we produce that applies to both versions of python. I am not
 quite sure how we should approach that yet.


I guess we can just repeat that check with Python 3.x. If I see it right,
all we need is to repeat the loop in pbr/tools/integration.sh with a
different Python version. One problem might be that we'll be running this
test with Python 3.4, which is the default on trusty, while all our unit
test jobs run on 3.3 instead. Maybe we should drop 3.3 already?

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno

2014-09-03 Thread Nikola Đipanov
On 09/02/2014 09:23 PM, Michael Still wrote:
 On Tue, Sep 2, 2014 at 1:40 PM, Nikola Đipanov ndipa...@redhat.com wrote:
 On 09/02/2014 08:16 PM, Michael Still wrote:
 Hi.

 We're soon to hit feature freeze, as discussed in Thierry's recent
 email. I'd like to outline the process for requesting a freeze
 exception:

 * your code must already be up for review
 * your blueprint must have an approved spec
 * you need three (3) sponsoring cores for an exception to be granted

 Can core reviewers who have features up for review have this number
 lowered to two (2) sponsoring cores, as they in reality then need four
 (4) cores (since they themselves are one (1) core but cannot really
 vote) making it an order of magnitude more difficult for them to hit
 this checkbox?
 
 That's a lot of numbers in that there paragraph.
 
 Let me re-phrase your question... Can a core sponsor an exception they
 themselves propose? I don't have a problem with someone doing that,
 but you need to remember that does reduce the number of people who
 have agreed to review the code for that exception.
 

Michael has correctly picked up on a hint of snark in my email, so let
me explain where I was going with that:

The reason many features including my own may not make the FF is not
because there was not enough buy in from the core team (let's be
completely honest - I have 3+ other core members working for the same
company that are by nature of things easier to convince), but because of
any of the following:

* Crippling technical debt in some of the key parts of the code
* that we have not been acknowledging as such for a long time
* which leads to proposed code being arbitrarily delayed once it makes
the glaring flaws in the underlying infra apparent
* and that specs process has been completely and utterly useless in
helping uncover (not that process itself is useless, it is very useful
for other things)

I am almost positive we can turn this rather dire situation around
easily in a matter of months, but we need to start doing it! It will not
happen through pinning arbitrary numbers to arbitrary processes.

I will follow up with a more detailed email about what I believe we are
missing, once the FF settles and I have applied some soothing creme to
my burnout wounds, but currently my sentiment is:

Contributing features to Nova nowadays SUCKS!!1 (even as a core
reviewer) We _have_ to change that!

N.

 Michael
 
 * exceptions must be granted before midnight, Friday this week
 (September 5) UTC
 * the exception is valid until midnight Friday next week
 (September 12) UTC when all exceptions expire

 For reference, our rc1 drops on approximately 25 September, so the
 exception period needs to be short to maximise stabilization time.

 John Garbutt and I will both be granting exceptions, to maximise our
 timezone coverage. We will grant exceptions as they come in and gather
 the required number of cores, although I have also carved some time
 out in the nova IRC meeting this week for people to discuss specific
 exception requests.

 Michael



 
 
 




Re: [openstack-dev] [Cinder][Nova]Quest about Cinder Brick proposal

2014-09-03 Thread Duncan Thomas
On 3 September 2014 01:20, Emma Lin l...@vmware.com wrote:
 Thank you all for the prompt response. And I’m glad to see the progress on
 this topic.
 Basically, what I’m thinking is that local storage support is especially
 useful for big data and large-scale computing.

 I’ll monitor the meeting progress actively.

Lots of people are interested in the solution, but nobody has produced
a complete proposal yet.

 Duncan,
  And I’m interested to know the details of this topic. Btw, is this Brick
 code called by Cinder?

Brick is used in quite a few places in cinder, yes.



[openstack-dev] [solum] Consistency of development environment

2014-09-03 Thread Alexander Vollschwitz

Hello,

I've been looking at Solum for a couple of months now, with the 
goal of eventually contributing to this project. First off, my apologies 
if this is the wrong place for my question.

I'm currently getting familiar with project structure, source code, and 
also trying to set up the dev env via Vagrant, following the Quick Start 
Guide:

http://solum.readthedocs.org/en/latest/getting_started/

Here I need some advice. During my most recent attempt to set up the dev 
env two days ago, I hit two problems: after devstack provisioned 
OpenStack, q-dhcp and q-l3 were not running. The former refused to start 
due to an updated version requirement for dnsmasq (see 
https://bugs.launchpad.net/openstack-manuals/+bug/1347153) that was not 
met; the latter did not start due to problems with openvswitch.

I resolved both issues manually in the VM, all the while thinking that I 
must be doing something wrong. (Well, I'm pretty sure I am.) On the other 
hand, I had similar problems getting the dev env up during earlier 
tries. So what is the right way to get a consistent setup of a Solum dev 
env? Are the instructions in the Quick Start guide linked above not 
current? Do I need to configure different branches/tags, or use 
different repos altogether?

Sorry again if this is the wrong place to ask!
I hope I can make useful contributions soon.

Regards,

Alex


Re: [openstack-dev] [Ironic] (Non-)consistency of the Ironic hash ring implementation

2014-09-03 Thread Nejc Saje



On 09/02/2014 11:19 PM, Gregory Haynes wrote:

Excerpts from Nejc Saje's message of 2014-09-01 07:48:46 +:

Hey guys,

in Ceilometer we're using consistent hash rings to do workload
partitioning[1]. We've considered generalizing your hash ring
implementation and moving it up to oslo, but unfortunately your
implementation is not actually consistent, which is our requirement.

Since you divide your ring into a number of equal sized partitions,
instead of hashing hosts onto the ring, when you add a new host,
an unbound amount of keys get re-mapped to different hosts (instead of
the 1/#nodes remapping guaranteed by hash ring). I've confirmed this
with the test in aforementioned patch[2].


I am just getting started with the ironic hash ring code, but this seems
surprising to me. AIUI we do require some rebalancing when a conductor
is removed or added (which is normal use of a CHT) but not for every
host added. This is supported by the fact that we currently dont have a
rebalancing routine, so I would be surprised if ironic worked at all if
we required it for each host that is added.


Sorry, I used terms from the distributed caching use-case for hash rings 
(which is why this algorithm was developed), where you have hosts and 
keys, and the hash ring tells you which host a key is on.


To translate the original e-mail into Ironic use-case, where you have 
hosts being mapped to conductors:


  Since you divide your ring into a number of equal-sized partitions
 instead of hashing *conductors* onto the ring, when you add a new *host*,
 an unbound amount of *hosts* get re-mapped to different *conductors*
 (instead of the 1/#*conductors* of *hosts* being re-mapped, as guaranteed
 by a hash ring). I've confirmed this with the test in the aforementioned
 patch[2].

Cheers,
Nejc



Can anyone in Ironic with a bit more experience confirm/deny this?



If this is good enough for your use-case, great, otherwise we can get a
generalized hash ring implementation into oslo for use in both projects
or we can both use an external library[3].

Cheers,
Nejc

[1] https://review.openstack.org/#/c/113549/
[2]
https://review.openstack.org/#/c/113549/21/ceilometer/tests/test_utils.py
[3] https://pypi.python.org/pypi/hash_ring



Thanks,
Greg



Re: [openstack-dev] [Ironic] (Non-)consistency of the Ironic hash ring implementation

2014-09-03 Thread Nejc Saje



On 09/02/2014 11:33 PM, Robert Collins wrote:

The implementation in ceilometer is very different to the Ironic one -
are you saying the test you linked fails with Ironic, or that it fails
with the ceilometer code today?


Disclaimer: in Ironic terms, node = conductor, key = host

The test I linked fails with Ironic hash ring code (specifically the 
part that tests consistency). With 1000 keys being mapped to 10 nodes, 
when you add a node:

- current ceilometer code remaps around 7% of the keys (~1/#nodes)
- Ironic code remaps ~90% of the keys

The problem lies in the way you build your hash ring[1]. You initialize 
a statically-sized array and divide its fields among nodes. When you do 
a lookup, you check which field in the array the key hashes to and then 
return the node that that field belongs to. This is the wrong approach 
because when you add a new node, you will resize the array and assign 
the fields differently, but the same key will still hash to the same 
field and will therefore hash to a different node.


Nodes must be hashed onto the ring as well, statically chopping up the 
ring and dividing it among nodes isn't enough for consistency.
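To make the distinction concrete, here is a minimal consistent hash ring in Python that hashes the nodes themselves onto the ring (with several replica points per node, for balance) and looks keys up with bisect. This is an illustrative sketch only, not the Ceilometer or Ironic implementation; the class and helper names are invented:

```python
import bisect
import hashlib


def _hash(value):
    # Map any string to a point on the ring.
    return int(hashlib.md5(value.encode()).hexdigest(), 16)


class ConsistentHashRing(object):
    """Minimal consistent hash ring: nodes are hashed onto the ring."""

    def __init__(self, nodes, replicas=100):
        self._ring = {}
        for node in nodes:
            # Each node gets several points on the ring to even out load.
            for r in range(replicas):
                self._ring[_hash('%s-%d' % (node, r))] = node
        self._sorted = sorted(self._ring)

    def get_node(self, key):
        # A key belongs to the first node point at or after its hash
        # (wrapping around the end of the ring).
        pos = bisect.bisect(self._sorted, _hash(key)) % len(self._sorted)
        return self._ring[self._sorted[pos]]
```

Adding an 11th node to such a ring only re-maps the keys that fall between the new node's points and their predecessors, roughly 1/#nodes of the total, which is exactly the consistency property under discussion.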


Cheers,
Nejc



The Ironic hash_ring implementation uses a hash:
 def _get_partition(self, data):
     try:
         return (struct.unpack_from('<I', hashlib.md5(data).digest())[0]
                 >> self.partition_shift)
     except TypeError:
         raise exception.Invalid(
             _("Invalid data supplied to HashRing.get_hosts."))


so I don't see the fixed size thing you're referring to. Could you
point a little more specifically? Thanks!

-Rob

On 1 September 2014 19:48, Nejc Saje ns...@redhat.com wrote:

Hey guys,

in Ceilometer we're using consistent hash rings to do workload
partitioning[1]. We've considered generalizing your hash ring implementation
and moving it up to oslo, but unfortunately your implementation is not
actually consistent, which is our requirement.

Since you divide your ring into a number of equal-sized partitions, instead
of hashing hosts onto the ring, when you add a new host,
an unbounded number of keys get re-mapped to different hosts (instead of the
~1/#nodes remapping guaranteed by a hash ring). I've confirmed this with the
test in aforementioned patch[2].

If this is good enough for your use-case, great, otherwise we can get a
generalized hash ring implementation into oslo for use in both projects or
we can both use an external library[3].

Cheers,
Nejc

[1] https://review.openstack.org/#/c/113549/
[2]
https://review.openstack.org/#/c/113549/21/ceilometer/tests/test_utils.py
[3] https://pypi.python.org/pypi/hash_ring

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] (Non-)consistency of the Ironic hash ring implementation

2014-09-03 Thread Nejc Saje

Sorry, forgot to link the reference:

[1] 
https://github.com/openstack/ironic/blob/b56db42aa39e855e558a52eb71e656ea14380f8a/ironic/common/hash_ring.py#L72


On 09/03/2014 01:50 PM, Nejc Saje wrote:



On 09/02/2014 11:33 PM, Robert Collins wrote:

The implementation in ceilometer is very different to the Ironic one -
are you saying the test you linked fails with Ironic, or that it fails
with the ceilometer code today?


Disclaimer: in Ironic terms, node = conductor, key = host

The test I linked fails with Ironic hash ring code (specifically the
part that tests consistency). With 1000 keys being mapped to 10 nodes,
when you add a node:
- current ceilometer code remaps around 7% of the keys (~1/#nodes)
- Ironic code remaps ~90% of the keys

The problem lies in the way you build your hash ring[1]. You initialize
a statically-sized array and divide its fields among nodes. When you do
a lookup, you check which field in the array the key hashes to and then
return the node that that field belongs to. This is the wrong approach
because when you add a new node, you will resize the array and assign
the fields differently, but the same key will still hash to the same
field and will therefore hash to a different node.

Nodes must be hashed onto the ring as well, statically chopping up the
ring and dividing it among nodes isn't enough for consistency.

Cheers,
Nejc



The Ironic hash_ring implementation uses a hash:
 def _get_partition(self, data):
     try:
         return (struct.unpack_from('<I', hashlib.md5(data).digest())[0]
                 >> self.partition_shift)
     except TypeError:
         raise exception.Invalid(
             _("Invalid data supplied to HashRing.get_hosts."))


so I don't see the fixed size thing you're referring to. Could you
point a little more specifically? Thanks!

-Rob

On 1 September 2014 19:48, Nejc Saje ns...@redhat.com wrote:

Hey guys,

in Ceilometer we're using consistent hash rings to do workload
partitioning[1]. We've considered generalizing your hash ring
implementation
and moving it up to oslo, but unfortunately your implementation is not
actually consistent, which is our requirement.

Since you divide your ring into a number of equal-sized partitions,
instead of hashing hosts onto the ring, when you add a new host,
an unbounded number of keys get re-mapped to different hosts (instead
of the ~1/#nodes remapping guaranteed by a hash ring). I've confirmed
this with the test in aforementioned patch[2].

If this is good enough for your use-case, great, otherwise we can get a
generalized hash ring implementation into oslo for use in both
projects or
we can both use an external library[3].

Cheers,
Nejc

[1] https://review.openstack.org/#/c/113549/
[2]
https://review.openstack.org/#/c/113549/21/ceilometer/tests/test_utils.py

[3] https://pypi.python.org/pypi/hash_ring

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack][nova] Resize: allow_resize_to_same_host=True fails

2014-09-03 Thread Manickam, Kanagaraj
This mail is regarding the flag allow_resize_to_same_host=True in nova.conf.

Currently Nova resizes an instance onto a different host by default, and it 
provides the flag allow_resize_to_same_host, which can be set to True when 
resize needs to be tested in a single-host environment.
But this flag could also be useful in a multi-host environment, where a cloud 
admin wants the resize to happen on the same host. To support this scenario, I 
have submitted a patch https://review.openstack.org/#/c/117116/.

As part of this patch, reviewers suggested using a new option called 
'force_resize_to_same_host' instead, so I would like to get more reviews on 
this one.

Could you please provide your +1/-1 on the following choices:


1. Use the same flag 'allow_resize_to_same_host=True' with the above patch.

2. Use the new flag 'force_resize_to_same_host' set to 'True'.

Thanks & Regards
Kanagaraj M
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] (Non-)consistency of the Ironic hash ring implementation

2014-09-03 Thread Eoghan Glynn


 On 09/02/2014 11:33 PM, Robert Collins wrote:
  The implementation in ceilometer is very different to the Ironic one -
  are you saying the test you linked fails with Ironic, or that it fails
  with the ceilometer code today?
 
 Disclaimer: in Ironic terms, node = conductor, key = host
 
 The test I linked fails with Ironic hash ring code (specifically the
 part that tests consistency). With 1000 keys being mapped to 10 nodes,
 when you add a node:
 - current ceilometer code remaps around 7% of the keys (~1/#nodes)
 - Ironic code remaps ~90% of the keys

So just to underscore what Nejc is saying here ... 

The key point is the proportion of such baremetal-nodes that would
end up being re-assigned when a new conductor is fired up.

The defining property of a consistent hash-ring is that it
significantly reduces the number of re-mappings that occur when
the number of buckets changes, compared to a plain ol' hash.

This desire for stickiness would often be motivated by some form
of statefulness or initialization cost. In the ceilometer case,
we want to minimize unnecessary re-assignment so as to keep the
cadence of meter collection and alarm evaluation as even as
possible (given that each agent will be running off
non-synchronized periodic tasks).
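As a numeric illustration of that defining property (an assumed md5-based toy, not either project's actual code): mapping 1000 keys to 10 buckets with a plain modulo hash and then adding an 11th bucket re-maps the vast majority of keys, whereas a consistent ring would re-map only about 1/11 of them.

```python
import hashlib


def bucket(key, n):
    # Plain (non-consistent) hashing: key -> bucket index by modulo.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % n


keys = ['resource-%d' % i for i in range(1000)]
before = {k: bucket(k, 10) for k in keys}
after = {k: bucket(k, 11) for k in keys}
moved = sum(1 for k in keys if before[k] != after[k])

# Roughly n/(n+1) of the keys move under modulo hashing (~90% here),
# which is exactly the non-consistent behaviour being criticized.
fraction = moved / len(keys)
print('re-mapped fraction: %.2f' % fraction)
```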

In the case of the Ironic baremetal-node to conductor mapping,
perhaps such stickiness isn't really of any benefit?

If so, that's fine, but Nejc's main point IIUC is that an
approach that doesn't minimize the number of re-mappings isn't
really a form of *consistent* hashing.

Cheers,
Eoghan
 
 The problem lies in the way you build your hash ring[1]. You initialize
 a statically-sized array and divide its fields among nodes. When you do
 a lookup, you check which field in the array the key hashes to and then
 return the node that that field belongs to. This is the wrong approach
 because when you add a new node, you will resize the array and assign
 the fields differently, but the same key will still hash to the same
 field and will therefore hash to a different node.
 
 Nodes must be hashed onto the ring as well, statically chopping up the
 ring and dividing it among nodes isn't enough for consistency.
 
 Cheers,
 Nejc
 
 
  The Ironic hash_ring implementation uses a hash:
   def _get_partition(self, data):
       try:
           return (struct.unpack_from('<I', hashlib.md5(data).digest())[0]
                   >> self.partition_shift)
       except TypeError:
           raise exception.Invalid(
               _("Invalid data supplied to HashRing.get_hosts."))
 
 
  so I don't see the fixed size thing you're referring to. Could you
  point a little more specifically? Thanks!
 
  -Rob
 
  On 1 September 2014 19:48, Nejc Saje ns...@redhat.com wrote:
  Hey guys,
 
  in Ceilometer we're using consistent hash rings to do workload
  partitioning[1]. We've considered generalizing your hash ring
  implementation
  and moving it up to oslo, but unfortunately your implementation is not
  actually consistent, which is our requirement.
 
  Since you divide your ring into a number of equal-sized partitions,
  instead of hashing hosts onto the ring, when you add a new host,
  an unbounded number of keys get re-mapped to different hosts (instead of the
  ~1/#nodes remapping guaranteed by a hash ring). I've confirmed this with the
  test in aforementioned patch[2].
 
  If this is good enough for your use-case, great, otherwise we can get a
  generalized hash ring implementation into oslo for use in both projects or
  we can both use an external library[3].
 
  Cheers,
  Nejc
 
  [1] https://review.openstack.org/#/c/113549/
  [2]
  https://review.openstack.org/#/c/113549/21/ceilometer/tests/test_utils.py
  [3] https://pypi.python.org/pypi/hash_ring
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Status of Neutron at Juno-3

2014-09-03 Thread Kyle Mestery
Given how deep the merge queue is (146 currently), we've effectively
reached feature freeze in Neutron now (likely other projects as well).
So this morning I'm going to go through and remove BPs from Juno which
did not make the merge window. I'll also be putting temporary -2s in
the patches to ensure they don't slip in as well. I'm looking at FFEs
for the high priority items which are close but didn't quite make it:

https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
https://blueprints.launchpad.net/neutron/+spec/security-group-rules-for-devices-rpc-call-refactor

Thanks,
Kyle

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance][Nova][All] requests 2.4.0 breaks glanceclient

2014-09-03 Thread Kuvaja, Erno
Hi All,

While investigating glanceclient gating issues we narrowed it down to requests 
2.4.0, which was released 2014-08-29. Urllib3 seems to be raising a new 
ProtocolError which does not get caught and breaks at least glanceclient.
The following error can be seen on the console: ProtocolError: ('Connection 
aborted.', gaierror(-2, 'Name or service not known')).

Unfortunately we hit this issue just before the freeze. Apparently this 
breaks novaclient as well, and there is a change 
(https://review.openstack.org/#/c/118332/) proposed to requirements to limit 
the version to <2.4.0.

Are there any other projects using requests and seeing issues with the 
latest version?


-  Erno (jokke_) Kuvaja

kuv...@hp.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [glance] do NOT ever sort requirements.txt

2014-09-03 Thread Sean Dague
I'm not sure why people keep showing up with sort requirements patches
like - https://review.openstack.org/#/c/76817/6, however, they do.

All of these need to be -2ed with prejudice.

requirements.txt is not a declarative interface. The order is important,
as pip processes it in the order it is written. Changing the order has
impacts on the overall integration which can cause wedges later.

So please stop.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] znc as a service (was Re: [nova] Is the BP approval process broken?)

2014-09-03 Thread Louis Taylor
Thierry Carrez wrote:
 Note that ZNC is not the only IRC proxy out there. Bip is also working
 quite well.

Note also that an IRC proxy is not the only solution. Using a console IRC
client on a server works well for me. Irssi or weechat are both simple to set
up.


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Nova][All] requests 2.4.0 breaks glanceclient

2014-09-03 Thread Sean Dague
Honestly, I don't think we should pin this. This seems like a pretty
easy fix in the code and the only unit tests that are failing are the
ones testing invalid endpoints.

My vote is to fix glanceclient instead and do a release.
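The shape of such a fix is to catch the new low-level exception at the client boundary and translate it into the client's own error type, so callers are insulated from transport-library churn. The sketch below is purely illustrative: ProtocolError and CommunicationError here are stand-in classes, not the real urllib3 or glanceclient ones.

```python
class ProtocolError(Exception):
    """Stand-in for urllib3.exceptions.ProtocolError (hypothetical here)."""


class CommunicationError(Exception):
    """Stand-in for a client-facing 'error talking to endpoint' exception."""


def perform_request(send):
    # Translate low-level transport failures into the client's own
    # exception type so callers never see urllib3/requests internals.
    try:
        return send()
    except ProtocolError as e:
        raise CommunicationError('Error communicating with endpoint: %s' % e)
```

With this in place, a requests upgrade that introduces a new transport exception only requires extending the except clause, rather than pinning the dependency.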

-Sean

On 09/03/2014 08:30 AM, Kuvaja, Erno wrote:
 Hi All,
 
  
 
 While investigating glanceclient gating issues we narrowed it down to
 requests 2.4.0 which was released 2014-08-29. Urllib3 seems to be
  raising a new ProtocolError which does not get caught and breaks at least
 glanceclient.
 
  The following error can be seen on the console: ProtocolError: ('Connection
  aborted.', gaierror(-2, 'Name or service not known')).
 
  
 
  Unfortunately we hit this issue just before the freeze. Apparently
  this breaks novaclient as well, and there is a change
  (https://review.openstack.org/#/c/118332/) proposed to requirements to
  limit the version to <2.4.0.
 
  
 
  Are there any other projects using requests and seeing issues with
  the latest version?
 
  
 
 -  Erno (jokke_) Kuvaja
 
 kuv...@hp.com
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [glance] do NOT ever sort requirements.txt

2014-09-03 Thread Daniel P. Berrange
On Wed, Sep 03, 2014 at 08:37:17AM -0400, Sean Dague wrote:
 I'm not sure why people keep showing up with sort requirements patches
 like - https://review.openstack.org/#/c/76817/6, however, they do.
 
 All of these need to be -2ed with prejudice.
 
 requirements.txt is not a declarative interface. The order is important,
 as pip processes it in the order it is written. Changing the order has
 impacts on the overall integration which can cause wedges later.

Can requirements.txt contain comment lines? If so, it would be
worth adding:

   # The ordering of modules in this file is important
   # Do not attempt to re-sort the lines

Because 6 months hence people will have probably forgotten about
this mail, or if they're new contributors, never have known it existed.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][SR-IOV] Please review this patch series: replace pci_request storage with proper object usage

2014-09-03 Thread Robert Li (baoli)
Hi,

the patch series:
 https://review.openstack.org/#/c/117781/5
https://review.openstack.org/#/c/117895/
https://review.openstack.org/#/c/117839/
 https://review.openstack.org/#/c/118391/

is ready for review. This needs to get in before the Juno feature freeze so that 
the SR-IOV patches can land in Juno. Refer to 
https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov for all the 
patches related to SR-IOV.

thanks,
Robert



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [glance] do NOT ever sort requirements.txt

2014-09-03 Thread Romain Hardouin
On Wed, 2014-09-03 at 14:03 +0100, Daniel P. Berrange wrote:
 On Wed, Sep 03, 2014 at 08:37:17AM -0400, Sean Dague wrote:
  I'm not sure why people keep showing up with sort requirements patches
  like - https://review.openstack.org/#/c/76817/6, however, they do.
  
  All of these need to be -2ed with prejudice.
  
  requirements.txt is not a declarative interface. The order is important,
  as pip processes it in the order it is written. Changing the order has
  impacts on the overall integration which can cause wedges later.
 
 Can requirements.txt contain comment lines? If so, it would be
 worth adding:
 
# The ordering of modules in this file is important
# Do not attempt to re-sort the lines
 
 Because 6 months hence people will have probably forgotten about
 this mail, or if they're new contributors, never have known it existed.
 
 Regards,
 Daniel

Yes, requirements.txt can contain comment lines.
It's a good idea to keep this information in the file itself.

Best,
Romain



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [glance] do NOT ever sort requirements.txt

2014-09-03 Thread Sean Dague
On 09/03/2014 09:03 AM, Daniel P. Berrange wrote:
 On Wed, Sep 03, 2014 at 08:37:17AM -0400, Sean Dague wrote:
 I'm not sure why people keep showing up with sort requirements patches
 like - https://review.openstack.org/#/c/76817/6, however, they do.

 All of these need to be -2ed with prejudice.

 requirements.txt is not a declarative interface. The order is important,
 as pip processes it in the order it is written. Changing the order has
 impacts on the overall integration which can cause wedges later.
 
 Can requirements.txt contain comment lines? If so, it would be
 worth adding:
 
# The ordering of modules in this file is important
# Do not attempt to re-sort the lines
 
 Because 6 months hence people will have probably forgotten about
 this mail, or if they're new contributors, never have known it existed.

The point is that core review team members should know. In this case at
least one glance core +2ed this change.

Regular contributors can be educated by core team members.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] znc as a service (was Re: [nova] Is the BP approval process broken?)

2014-09-03 Thread Sylvain Bauza


On 03/09/2014 14:38, Kuvaja, Erno wrote:

Another well-working option that can be easily used:
1) get a Linux system with an internet connection (local box, VM in a cloud, 
whatever floats your boat)
2) install irssi and screen
3) run irssi in a screen

Now you can log in (local console, ssh, telnet, mosh; again, pick your 
preferred method) to that Linux system, attach to the screen session, and you 
have your IRC client online all the time, even when you're not.

- Erno (jokke_) Kuvaja


Well, there is no need to ssh in to run Irssi; you can just use Irssi Proxy and 
make use of Pidgin or Xchat for it.


Re: ZNC as a service, I think it's OK provided the implementation is 
open-sourced within the openstack-infra repo group, as is done for Gerrit, 
Zuul and others.
The only problem I can see is how to provide IRC credentials to this, as 
I don't want to share my creds with the service.


-Sylvain


-Original Message-
From: Thierry Carrez [mailto:thie...@openstack.org]
Sent: 03 September 2014 13:24
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] znc as a service (was Re: [nova] Is the BP
approval process broken?)

Stefano Maffulli wrote:

On 08/29/2014 11:17 AM, John Garbutt wrote:

After moving to use ZNC, I find IRC works much better for me now, but
I am still learning really.

There! this sentence has two very important points worth highlighting:

1- when people say IRC they mean IRC + a hack to overcome its
limitations
2- IRC+znc is complex, not many people are used to it

Note that ZNC is not the only IRC proxy out there. Bip is also working quite
well.


I never used znc, refused to install, secure and maintain yet another
public facing service. For me IRC is: be there when it happens or read
the logs on eavesdrop, if needed.

Recently I found out that there are znc services out there that could
make things simpler but they're not easy to join (at least the couple
I looked at).

Would it make sense to offer znc as a service within the openstack project?

We could at least document the steps required to set up a proxy. Or propose
pre-configured images/containers for individuals to run in the cloud. I
agree with Ryan that running an IRC proxy for someone else creates...
interesting privacy issues that may just hinder adoption of said solution.

--
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][feature freeze exception] Proposal for using Launcher/ProcessLauncher for launching services

2014-09-03 Thread Kekane, Abhishek
Hi All,

Please give me your support in applying for the feature freeze exception for 
using the oslo-incubator service framework in glance, based on the following 
blueprint:

https://blueprints.launchpad.net/glance/+spec/use-common-service-framework

I have ensured that after making these changes everything is working smoothly.

I have done functional testing for the following three scenarios:

1. Enabled SSL and checked that requests are processed by the API service 
before and after the SIGHUP signal

2. Disabled SSL and checked that requests are processed by the API service 
before and after the SIGHUP signal

3. I have also ensured that reloading of parameters like 
filesystem_store_datadir and filesystem_store_datadirs is working effectively 
after sending the SIGHUP signal.

To test the 1st and 2nd scenarios I created a Python script which sends 
multiple requests to glance at a time, and added a cron job to send a SIGHUP 
signal to the parent process.
I ran the above script for 1 hour and confirmed that every request was 
processed successfully.
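The mechanism under test, a parent process re-reading its configuration on SIGHUP, can be sketched in isolation. This is an illustrative stand-alone example assuming a POSIX platform, not the actual oslo-incubator service code; the config dict and paths are invented:

```python
import os
import signal

# Invented stand-in for a parsed glance-api.conf.
CONFIG = {'filesystem_store_datadir': '/var/lib/glance/images'}
reload_count = [0]


def handle_sighup(signum, frame):
    # A real service would re-parse its config file here and push the
    # new values into the running workers without dropping requests.
    reload_count[0] += 1
    CONFIG['filesystem_store_datadir'] = '/mnt/new-datadir'


signal.signal(signal.SIGHUP, handle_sighup)

# Simulate the cron job's `kill -HUP <parent pid>`.
os.kill(os.getpid(), signal.SIGHUP)
```

The hard part, as the thread notes, is not the handler itself but guaranteeing which options actually take effect in already-running workers after the reload.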

Please consider this feature to be a part of Juno release.



Thanks & Regards,

Abhishek Kekane


From: Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
Sent: 02 September 2014 19:11
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [glance][feature freeze exception] Proposal for using 
Launcher/ProcessLauncher for launching services

Hi All,

I'd like to ask for a feature freeze exception for using oslo-incubator service 
framework in glance, based on the following blueprint:

https://blueprints.launchpad.net/glance/+spec/use-common-service-framework


The code to implement this feature is under review at present.

1. Sync oslo-incubator service module in glance: 
https://review.openstack.org/#/c/117135/2
2. Use Launcher/ProcessLauncher in glance: 
https://review.openstack.org/#/c/117988/


If we have this feature in glance then we will be able to use features like 
reloading the glance configuration file without a restart, graceful shutdown, 
etc.
It will also use common code, like other OpenStack projects (nova, keystone, 
cinder) do.


We are ready to address all the concerns of the community if they have any.


Thanks & Regards,

Abhishek Kekane

__
Disclaimer:This email and any attachments are sent in strictest confidence for 
the sole use of the addressee and may contain legally privileged, confidential, 
and proprietary data. If you are not the intended recipient, please advise the 
sender by replying promptly to this email and then delete and destroy this 
email and any attachments without any further use, copying or forwarding

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova]Quest about Cinder Brick proposal

2014-09-03 Thread Duncan Thomas
Emma

I encourage you to join the cinder IRC channel, #openstack-cinder, on
the freenode IRC network (irc.freenode.net) to ask questions; you'll
get much more interactive feedback there.

Regards

-- 
Duncan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Treating notifications as a contract

2014-09-03 Thread Sandy Walsh
Is there anything slated for the Paris summit around this?

I just spent nearly a week parsing Nova notifications and the pain of no schema 
has overtaken me. 

We're chatting with IBM about CADF and getting down to specifics on its 
applicability to notifications. Once I get StackTach.v3 into production I'm 
keen to get started on revisiting the notification format and oslo.messaging 
support for notifications. 

Perhaps a hangout for those keenly interested in doing something about this?

Thoughts?
-S


From: Eoghan Glynn [egl...@redhat.com]
Sent: Monday, July 14, 2014 8:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] Treating notifications as a contract

  So what we need to figure out is how exactly this common structure can be
  accommodated without reverting back to what Sandy called the "wild west"
  in another post.

 I got the impression that "wild west" is what we've already got
 (within the payload)?

Yeah, exactly, that was my interpretation too.

So basically just to ensure that the lightweight-schema/common-structure
notion doesn't land us back not far beyond square one (if there are
too many degrees of freedom in the declaration of a list of dicts with
certain required fields that you had envisaged in an earlier post).
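One concrete form such a lightweight contract could take, well short of full CADF, is a per-event-type declaration of required payload fields that producers check before emitting. The sketch below is hypothetical, not an existing oslo.messaging API; the event type mimics the nova notification discussed in this thread and the field names are invented:

```python
# Hypothetical lightweight notification contract: each event type declares
# the payload fields it guarantees, and emission validates against it.
SCHEMAS = {
    'compute.instance.start.end': {
        'required': ['instance_id', 'tenant_id', 'state', 'launched_at'],
    },
}


def validate_payload(event_type, payload):
    # Reject events with no declared contract, or with missing fields,
    # before they ever reach consumers such as StackTach or ceilometer.
    schema = SCHEMAS.get(event_type)
    if schema is None:
        raise ValueError('No contract declared for %s' % event_type)
    missing = [f for f in schema['required'] if f not in payload]
    if missing:
        raise ValueError('%s payload missing fields: %s'
                         % (event_type, ', '.join(missing)))
    return True
```

Keeping the declaration this small preserves the "less is more" goal while still giving consumers something they can rely on.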

  For example you could write up a brief wiki walking through how an
  existing widely-consumed notification might look under your vision,
  say compute.instance.start.end. Then post a link back here as an RFC.
 
  Or, possibly better, maybe submit up a strawman spec proposal to one
  of the relevant *-specs repos and invite folks to review in the usual
  way?

 Would oslo-specs (as in messaging) be the right place for that?

That's a good question.

Another approach would be to hone in on the producer-side that's
currently the heaviest user of notifications, i.e. nova, and propose
the strawman to nova-specs given that (a) that's where much of the
change will be needed, and (b) many of the notification patterns
originated in nova and then were subsequently aped by other projects
as they were spun up.

 My thinking is the right thing to do is bounce around some questions
 here (or perhaps in a new thread if this one has gone far enough off
 track to have dropped people) and catch up on some loose ends.

Absolutely!

 For example: It appears that CADF was designed for this sort of thing and
 was considered at some point in the past. It would be useful to know
 more of that story if there are any pointers.

 My initial reaction is that CADF has the stank of "enterprisey" all over
 it rather than "less is more" and "worse is better", but that's a
 completely uninformed and thus unfair opinion.

TBH I don't know enough about CADF, but I know a man who does ;)

(gordc, I'm looking at you!)

 Another question (from elsewhere in the thread) is if it is worth, in
 the Ironic notifications, to try and cook up something generic or to
 just carry on with what's being used.

Well, my gut instinct is that the content of the Ironic notifications
is perhaps on the outlier end of the spectrum compared to the more
traditional notifications we see emitted by nova, cinder etc. So it
may make better sense to concentrate initially on how contractizing
these more established notifications might play out.

  This feels like something that we should be thinking about with an eye
  to the K* cycle - would you agree?

 Yup.

 Thanks for helping to tease this all out and provide some direction on
 where to go next.

Well thank *you* for picking up the baton on this and running with it :)

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][feature freeze exception] Proposal for using Launcher/ProcessLauncher for launching services

2014-09-03 Thread Kuvaja, Erno
In principle I like the idea and concept, a lot. In practice I don't think the 
glance code is in a state where we could say SIGHUP reloads our configs. 
My bigger concern is that, based on the behavior seen, some config options 
get picked up and some do not.

As long as we do not have a definite list documenting which options get updated 
on the fly and what the actual behavior is at the point when the new config is 
picked up (let's say we have some locks in place and the locking folder gets 
updated; what happens?), I don't think we should take this functionality in. 
Even if the current behavior is fundamentally broken, at least it's broken in a 
way that is consistent and known.


-  Erno (jokke_) Kuvaja

kuv...@hp.com

From: Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
Sent: 03 September 2014 14:39
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance][feature freeze exception] Proposal for 
using Launcher/ProcessLauncher for launching services

Hi All,

Please give me your support for applying the feature freeze exception for using 
the oslo-incubator service framework in glance, based on the following blueprint:

https://blueprints.launchpad.net/glance/+spec/use-common-service-framework

I have ensured that after making these changes everything is working smoothly.

I have done the functional testing for following three scenarios:

1.   Enabled SSL and checked that requests are processed by the API service 
before and after a SIGHUP signal

2.   Disabled SSL and checked that requests are processed by the API service 
before and after a SIGHUP signal

3.   I have also ensured that reloading of parameters like 
filesystem_store_datadir and filesystem_store_datadirs works effectively 
after sending the SIGHUP signal.

To test the 1st and 2nd scenarios I created a python script which sends 
multiple requests to glance at a time, and added a cron job to send a SIGHUP 
signal to the parent process.
I ran the above script for 1 hour and confirmed that every request was 
processed successfully.
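The reload pattern being exercised here can be sketched in isolation. This is a 
minimal, self-contained illustration of re-reading a config file on SIGHUP 
without restarting the process; the option name comes from the mail above, but 
the config format and handler wiring are illustrative, not glance's actual code.

```python
import os
import signal
import tempfile

CONFIG = {}

def load_config(path):
    # Parse simple key=value lines into the global CONFIG dict.
    with open(path) as f:
        for line in f:
            key, _, value = line.strip().partition("=")
            CONFIG[key] = value

def make_sighup_handler(path):
    def handler(signum, frame):
        load_config(path)  # reload configuration in place, no restart
    return handler

cfg = tempfile.NamedTemporaryFile(mode="w", suffix=".conf", delete=False)
cfg.write("filesystem_store_datadir=/var/lib/glance/images\n")
cfg.close()

load_config(cfg.name)
signal.signal(signal.SIGHUP, make_sighup_handler(cfg.name))

# Simulate an operator editing the config, then signalling the process.
with open(cfg.name, "w") as f:
    f.write("filesystem_store_datadir=/new/datadir\n")
os.kill(os.getpid(), signal.SIGHUP)

print(CONFIG["filesystem_store_datadir"])  # the new value, without a restart
os.unlink(cfg.name)
```

The open question in the thread is exactly which options can safely be swapped 
like this at runtime, which a list such as CONFIG above makes easy to audit.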

Please consider this feature to be a part of Juno release.



Thanks & Regards,

Abhishek Kekane


From: Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
Sent: 02 September 2014 19:11
To: OpenStack Development Mailing List 
(openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [glance][feature freeze exception] Proposal for using 
Launcher/ProcessLauncher for launching services

Hi All,

I'd like to ask for a feature freeze exception for using oslo-incubator service 
framework in glance, based on the following blueprint:

https://blueprints.launchpad.net/glance/+spec/use-common-service-framework


The code to implement this feature is under review at present.

1. Sync oslo-incubator service module in glance: 
https://review.openstack.org/#/c/117135/2
2. Use Launcher/ProcessLauncher in glance: 
https://review.openstack.org/#/c/117988/


If we have this feature in glance then we will be able to use features like 
reloading the glance configuration file without a restart, graceful shutdown, 
etc.
It will also use common code, as other OpenStack projects like nova, keystone 
and cinder do.


We are ready to address all the concerns of the community if they have any.


Thanks & Regards,

Abhishek Kekane

__
Disclaimer:This email and any attachments are sent in strictest confidence for 
the sole use of the addressee and may contain legally privileged, confidential, 
and proprietary data. If you are not the intended recipient, please advise the 
sender by replying promptly to this email and then delete and destroy this 
email and any attachments without any further use, copying or forwarding

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] (Non-)consistency of the Ironic hash ring implementation

2014-09-03 Thread Lucas Alvares Gomes
On Wed, Sep 3, 2014 at 12:50 PM, Nejc Saje ns...@redhat.com wrote:


 On 09/02/2014 11:33 PM, Robert Collins wrote:

 The implementation in ceilometer is very different to the Ironic one -
 are you saying the test you linked fails with Ironic, or that it fails
 with the ceilometer code today?


 Disclaimer: in Ironic terms, node = conductor, key = host

 The test I linked fails with Ironic hash ring code (specifically the part
 that tests consistency). With 1000 keys being mapped to 10 nodes, when you
 add a node:
 - current ceilometer code remaps around 7% of the keys (< 1/#nodes)
 - Ironic code remaps > 90% of the keys

Thanks Nejc for posting it here. I'm not super familiar with the
consistent hashing code but I'll definitely take a look at it. IMO this
behavior is not desirable in Ironic at all; we don't want > 90% of the hash
ring to get remapped when a conductor joins or leaves the cluster.

Also it would be good to open a bug about this problem in Ironic so we
can track the progress there (I can open it if you want, but I will
first do some investigation to understand the problem better).
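The consistency property being discussed can be illustrated with a toy ring. 
This is a from-scratch sketch, not Ironic's or ceilometer's actual 
implementation: with virtual replicas per node, adding one node to an n-node 
ring should only remap roughly 1/(n+1) of the keys, not ~90% of them.

```python
import bisect
import hashlib

def _hash(value):
    # Stable integer hash of a string (md5 keeps the demo deterministic).
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

def build_ring(nodes, replicas=100):
    # Each node owns `replicas` points on the ring to smooth the distribution.
    return sorted((_hash("%s-%d" % (node, r)), node)
                  for node in nodes for r in range(replicas))

def lookup(ring, key):
    # Walk clockwise from the key's position to the first node point.
    positions = [pos for pos, _ in ring]
    idx = bisect.bisect(positions, _hash(key)) % len(ring)
    return ring[idx][1]

keys = ["key-%d" % i for i in range(1000)]
before = build_ring(["node-%d" % i for i in range(10)])
after = build_ring(["node-%d" % i for i in range(11)])

moved = sum(1 for k in keys if lookup(before, k) != lookup(after, k))
print("remapped %.1f%% of keys" % (100.0 * moved / len(keys)))
```

Running a test like this against both implementations is an easy way to confirm 
the remap percentages quoted above.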

Cheers,
Lucas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova]Quest about Cinder Brick proposal

2014-09-03 Thread Emma Lin
That's good. Thanks for your suggestion.

On 3/9/14 9:44 pm, Duncan Thomas duncan.tho...@gmail.com wrote:

openstack-cinder


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] znc as a service (was Re: [nova] Is the BP approval process broken?)

2014-09-03 Thread Ryan Brown
On 09/03/2014 09:35 AM, Sylvain Bauza wrote:
 Re: ZNC as a service, I think it's OK provided the implementation is
 open-sourced with openstack-infra repo group, as for Gerrit, Zuul and
 others.
 The only problem I can see is how to provide IRC credentials to this, as
 I don't want to share my creds up to the service.
 
 -Sylvain
There are more than just adoption (user trust) problems. An Open Source
implementation wouldn't solve the liability concerns, because users
would still have logs of their (potentially sensitive) credentials and
conversations on servers run by OpenStack Infra.

This is different from Gerrit/Zuul etc which just display code/changes
and run/display tests on those public items. There isn't anything
sensitive to be leaked there. Storing credentials and private messages
is a different story, and would require much more security work than
just storing code and test results.

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Treating notifications as a contract

2014-09-03 Thread Chris Dent

On Wed, 3 Sep 2014, Sandy Walsh wrote:


Is there anything slated for the Paris summit around this?


There are plans to make plans, but that's about all I know.


I just spent nearly a week parsing Nova notifications and the pain of
no schema has overtaken me.


/me passes the ibuprofen


We're chatting with IBM about CADF and getting down to specifics on
their applicability to notifications. Once I get StackTach.v3 into
production I'm keen to get started on revisiting the notification
format and olso.messaging support for notifications.

Perhaps a hangout for those keenly interested in doing something about this?


That seems like a good idea. I'd like to be a part of that.
Unfortunately I won't be at summit but would like to contribute what
I can before and after.

I took some notes on this a few weeks ago and extracted what seemed
to be the two main threads or ideas that were revealed by the
conversation that happened in this thread:

* At the micro level have versioned schema for notifications such that
  one end can declare I am sending version X of notification
  foo.bar.Y and the other end can effectively deal.

* At the macro level standardize a packaging or envelope of all
  notifications so that they can be consumed by very similar code.
  That is: constrain the notifications in some way so we can also
  constrain the consumer code.

These ideas serve two different purposes: One is to ensure that
existing notification use cases are satisfied with robustness and
provide a contract between two endpoints. The other is to allow a
fecund notification environment that allows and enables many
participants.
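The micro-level idea above could be sketched like this; the event names, 
versions and payload shapes are purely illustrative, not real nova 
notifications:

```python
# One handler per (event_type, schema_version) the consumer knows about.
HANDLERS = {
    ("compute.instance.exists", 1): lambda payload: payload["instance_id"],
    ("compute.instance.exists", 2): lambda payload: payload["instance"]["uuid"],
}

def consume(event_type, version, payload):
    """Dispatch to the handler matching the declared schema version."""
    try:
        handler = HANDLERS[(event_type, version)]
    except KeyError:
        raise ValueError("unsupported notification %s v%s"
                         % (event_type, version))
    return handler(payload)

print(consume("compute.instance.exists", 1, {"instance_id": "i-1"}))
print(consume("compute.instance.exists", 2, {"instance": {"uuid": "u-2"}}))
```

The point of "the other end can effectively deal" is exactly this explicit 
dispatch: an unknown version is a loud error rather than a silent mis-parse.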

Is that a good summary? What did I leave out or get wrong?

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] Consistency of development environment

2014-09-03 Thread Adrian Otto
Alex,

Thanks for your question, and for your interest in Solum. Our current dev 
environment setup relies on devstack to produce a working OpenStack environment 
for Solum to run within. We have noticed lately that our devstack setup does 
not always work. Because we are checking out devstack code from a repo that is 
frequently changing, and there are not unit tests for everything devstack can 
configure, sometimes it does not work. Also we have noticed that there may be 
some things that devstack does that are not deterministic, so even if it does 
pass a test, there is no guarantee that it will work again if the same thing is 
repeated. Sometimes it is hard to tell if the problem is Solum, or devstack.

We have discussed ways to mitigate this. One idea was to select a particular 
devstack from a prior OpenStack release to help cut down on the rate of change.

We also considered additional functional tests for devstack to run when new 
code is submitted. I suppose we could run testing continuously in loops in 
attempts to detect non-determinism.

All of the above are opportunities for us to improve matters going forward. 
There are probably even better ideas we should consider as well. For now, we 
would like to help you get past the friction you are experiencing so you can 
get a working environment up. I suggest finding us in #solum on Freenode IRC, 
and let's try to sort through it.

Thanks,

Adrian Otto


 Original message 
From: Alexander Vollschwitz
Date:09/03/2014 4:33 AM (GMT-08:00)
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [solum] Consistency of development environment


Hello,

I've been looking at Solum for a couple of months now, with the
goal of eventually contributing to this project. First off, my apologies
if this is the wrong place for my question.

I'm currently getting familiar with project structure, source code, and
also trying to set up the dev env via Vagrant, following the Quick Start
Guide:

http://solum.readthedocs.org/en/latest/getting_started/

Here I need some advice. During my most recent attempt to set up the dev
env two days ago, I hit two problems: after devstack provisioned OpenStack,
q-dhcp and q-l3 were not running. The former refused to start due to an
updated version requirement for dnsmasq (see
https://bugs.launchpad.net/openstack-manuals/+bug/1347153) that was not
met, the latter did not start due to problems with openvswitch.

I resolved both issues manually in the VM, all the while thinking that I
must be doing something wrong. (Well I'm pretty sure I am.) On the other
hand, I had similar problems getting the dev env up during earlier
tries. So what is the right way to get a consistent setup of a Solum dev
env? Are the instructions from the Quick Start guide linked above not
current? Do I need to configure different branches/tags, or use
different repos all together?

Sorry again if this is the wrong place to ask!
I hope I can make useful contributions soon.

Regards,

Alex
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno

2014-09-03 Thread Dan Genin

On 09/03/2014 07:31 AM, Gary Kotton wrote:


On 9/3/14, 12:50 PM, Nikola Đipanov ndipa...@redhat.com wrote:


On 09/02/2014 09:23 PM, Michael Still wrote:

On Tue, Sep 2, 2014 at 1:40 PM, Nikola Đipanov ndipa...@redhat.com
wrote:

On 09/02/2014 08:16 PM, Michael Still wrote:

Hi.

We're soon to hit feature freeze, as discussed in Thierry's recent
email. I'd like to outline the process for requesting a freeze
exception:

 * your code must already be up for review
 * your blueprint must have an approved spec
 * you need three (3) sponsoring cores for an exception to be
granted

Can core reviewers who have features up for review have this number
lowered to two (2) sponsoring cores, as they in reality then need four
(4) cores (since they themselves are one (1) core but cannot really
vote) making it an order of magnitude more difficult for them to hit
this checkbox?

That's a lot of numbers in that there paragraph.

Let me re-phrase your question... Can a core sponsor an exception they
themselves propose? I don't have a problem with someone doing that,
but you need to remember that does reduce the number of people who
have agreed to review the code for that exception.


Michael has correctly picked up on a hint of snark in my email, so let
me explain where I was going with that:

The reason many features including my own may not make the FF is not
because there was not enough buy in from the core team (let's be
completely honest - I have 3+ other core members working for the same
company that are by nature of things easier to convince), but because of
any of the following:

* Crippling technical debt in some of the key parts of the code
* that we have not been acknowledging as such for a long time
* which leads to proposed code being arbitrarily delayed once it makes
the glaring flaws in the underlying infra apparent
* and that specs process has been completely and utterly useless in
helping uncover (not that process itself is useless, it is very useful
for other things)

I am almost positive we can turn this rather dire situation around
easily in a matter of months, but we need to start doing it! It will not
happen through pinning arbitrary numbers to arbitrary processes.

I will follow up with a more detailed email about what I believe we are
missing, once the FF settles and I have applied some soothing creme to
my burnout wounds, but currently my sentiment is:

Contributing features to Nova nowadays SUCKS!!1 (even as a core
reviewer) We _have_ to change that!

+1

Sadly what you have written above is true. The current process does not
encourage new developers in Nova. I really think that we need to work on
improving our community. I really think that maybe we should sit as a
community at the summit and talk about this.

+2

N.


Michael


 * exceptions must be granted before midnight, Friday this week
(September 5) UTC
 * the exception is valid until midnight Friday next week
(September 12) UTC when all exceptions expire

For reference, our rc1 drops on approximately 25 September, so the
exception period needs to be short to maximise stabilization time.

John Garbutt and I will both be granting exceptions, to maximise our
timezone coverage. We will grant exceptions as they come in and gather
the required number of cores, although I have also carved some time
out in the nova IRC meeting this week for people to discuss specific
exception requests.

Michael



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev










___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Status of Neutron at Juno-3

2014-09-03 Thread Brian Haley
On 09/03/2014 08:17 AM, Kyle Mestery wrote:
 Given how deep the merge queue is (146 currently), we've effectively
 reached feature freeze in Neutron now (likely other projects as well).
 So this morning I'm going to go through and remove BPs from Juno which
 did not make the merge window. I'll also be putting temporary -2s in
 the patches to ensure they don't slip in as well. I'm looking at FFEs
 for the high priority items which are close but didn't quite make it:
 
 https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
 https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
 https://blueprints.launchpad.net/neutron/+spec/security-group-rules-for-devices-rpc-call-refactor

I guess I'll be the first to ask for an exception for a Medium since the code
was originally completed in Icehouse:

https://blueprints.launchpad.net/neutron/+spec/l3-metering-mgnt-ext

The neutronclient-side code was committed in January, and the neutron side,
https://review.openstack.org/#/c/70090 has had mostly positive reviews since
then.  I've really just spent the last week re-basing it as things moved along.

Thanks,

-Brian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Rally scenario Issue

2014-09-03 Thread Ajay Kalambur (akalambu)
The reason this is failing is that you are specifying a fixed network and at 
the same time asking for a new network to be created using context. What 
happens is the VM gets attached to your fixed network instead of the created 
Rally network.
So you need to modify vm_tasks.py for this case: basically, if not using a 
fixed network and using a floating IP, just skip the network check and the 
fixed-IP code, and directly use the floating IP for access.
I think some changes are needed to vm_tasks.py to make it work for both fixed 
and floating IPs, and to support the network context.
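The branch being described could look something like this; the function name 
and data shapes are illustrative, not Rally's actual vm_tasks.py API:

```python
def pick_ssh_address(server_addresses, fixed_network,
                     floating_ip=None, use_floatingip=True):
    """Choose the address to SSH to: the floating IP when one is in use,
    otherwise the first fixed IP on the requested network (after checking
    the network is actually attached)."""
    if use_floatingip and floating_ip:
        # Skip the fixed-network check entirely and go via the floating IP.
        return floating_ip
    if fixed_network not in server_addresses:
        raise ValueError("server is not attached to %r" % fixed_network)
    return server_addresses[fixed_network][0]["addr"]

# Shaped like nova's server.addresses dict, trimmed down for the example.
addresses = {"private": [{"addr": "10.0.0.5"}]}
print(pick_ssh_address(addresses, "private", floating_ip="172.24.4.10"))
print(pick_ssh_address(addresses, "private", use_floatingip=False))
```

With a split like this, the floating-IP path never touches check_network, which 
is the failure point described in the thread.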


If you need a local change I can pass along a local change for vm_tasks.py 
that only supports floating IPs. Let me know, do you have floating IPs 
available?
Ajay


From: masoom alam masoom.a...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, September 2, 2014 at 11:12 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Rally scenario Issue

Hi Ajay,

We are testing the same scenario that you are working one, but getting the 
follow error:

http://paste.openstack.org/show/105029/

Could you be of any help here?

Thanks




On Wed, Sep 3, 2014 at 4:16 AM, Ajay Kalambur (akalambu) 
akala...@cisco.com wrote:
Hi Guys
For the throughput tests I need to be able to install iperf on the cloud image. 
For this DNS server needs to be set. But the current network context should 
also support DNS name server setting
Should we add that into network context?
Ajay



From: Boris Pavlovic bo...@pavlovic.me

Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday, August 29, 2014 at 2:08 PM

To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Cc: Harshil Shah (harsshah) harss...@cisco.com
Subject: Re: [openstack-dev] Rally scenario Issue

Timur,

Thanks for pointing Ajay.

Ajay,

 Also I cannot see this failure unless I run rally with –v –d object.

Actually rally is storing information about all failures. To get information 
about them you can run the next command:

rally task results --pprint

It will display all information about all iterations (including exceptions)


Second, when most of the steps in the scenario failed, like attaching to the 
network, ssh and run command, why bother reporting the results?

Because bad results are better than nothing...


Best regards,
Boris Pavlovic


On Sat, Aug 30, 2014 at 12:54 AM, Timur Nurlygayanov 
tnurlygaya...@mirantis.com wrote:
Hi Ajay,

looks like you need to use the NeutronContext feature to configure Neutron 
Networks during the benchmark execution.
We are now working on merging two different commits with the NeutronContext 
implementation:
https://review.openstack.org/#/c/96300  and 
https://review.openstack.org/#/c/103306

could you please apply commit https://review.openstack.org/#/c/96300 and run 
your benchmarks? A Neutron Network with subnetworks and routers will be 
automatically created for each created tenant and you should have the ability 
to connect to VMs. Please note that you should add the following part to your 
task JSON to enable the Neutron context:
...
"context": {
    ...
    "neutron_network": {
        "network_cidr": "10.%s.0.0/16"
    }
}
...

Hope this will help.



On Fri, Aug 29, 2014 at 11:42 PM, Ajay Kalambur (akalambu) 
akala...@cisco.com wrote:
Hi
I am trying to run the Rally scenario boot-runcommand-delete. This scenario 
has the following code:
    def boot_runcommand_delete(self, image, flavor,
                               script, interpreter, username,
                               fixed_network="private",
                               floating_network="public",
                               ip_version=4, port=22,
                               use_floatingip=True, **kwargs):
        server = None
        floating_ip = None
        try:
            print("fixed network:%s floating network:%s"
                  % (fixed_network, floating_network))
            server = self._boot_server(
                self._generate_random_name("rally_novaserver_"),
                image, flavor, key_name='rally_ssh_key', **kwargs)

            self.check_network(server, fixed_network)

The question I have is: the instance is created with a call to boot_server, but 
no networks are attached to this server instance. In the next step it checks 
whether the fixed network is attached to the instance, and sure enough it fails 
at the step highlighted in bold. Also I cannot see this failure unless I run 
rally with –v –d. So it actually reports benchmark scenario numbers in a 
table with no

Re: [openstack-dev] [neutron] Status of Neutron at Juno-3

2014-09-03 Thread Mark McClain

On Sep 3, 2014, at 11:04 AM, Brian Haley brian.ha...@hp.com wrote:

 On 09/03/2014 08:17 AM, Kyle Mestery wrote:
 Given how deep the merge queue is (146 currently), we've effectively
 reached feature freeze in Neutron now (likely other projects as well).
 So this morning I'm going to go through and remove BPs from Juno which
 did not make the merge window. I'll also be putting temporary -2s in
 the patches to ensure they don't slip in as well. I'm looking at FFEs
 for the high priority items which are close but didn't quite make it:
 
 https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
 https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
 https://blueprints.launchpad.net/neutron/+spec/security-group-rules-for-devices-rpc-call-refactor
 
 I guess I'll be the first to ask for an exception for a Medium since the code
 was originally completed in Icehouse:
 
 https://blueprints.launchpad.net/neutron/+spec/l3-metering-mgnt-ext
 
 The neutronclient-side code was committed in January, and the neutron side,
 https://review.openstack.org/#/c/70090 has had mostly positive reviews since
 then.  I've really just spent the last week re-basing it as things moved 
 along.
 

+1 for FFE.  I think this is good community work that fell through the cracks.

mark  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Treating notifications as a contract

2014-09-03 Thread Sandy Walsh
On 9/3/2014 11:32 AM, Chris Dent wrote:
 On Wed, 3 Sep 2014, Sandy Walsh wrote:

 We're chatting with IBM about CADF and getting down to specifics on
 their applicability to notifications. Once I get StackTach.v3 into
 production I'm keen to get started on revisiting the notification
 format and olso.messaging support for notifications.

 Perhaps a hangout for those keenly interested in doing something about this?
 That seems like a good idea. I'd like to be a part of that.
 Unfortunately I won't be at summit but would like to contribute what
 I can before and after.

 I took some notes on this a few weeks ago and extracted what seemed
 to be the two main threads or ideas the were revealed by the
 conversation that happened in this thread:

  * At the micro level have versioned schema for notifications such that
one end can declare I am sending version X of notification
foo.bar.Y and the other end can effectively deal.

Yes, that's table-stakes I think. Putting structure around the payload
section.

Beyond type and version we should be able to attach meta information
like public/private visibility and perhaps hints for external mapping
(this trait -> that trait in CADF, for example).

  * At the macro level standardize a packaging or envelope of all
notifications so that they can be consumed by very similar code.
That is: constrain the notifications in some way so we can also
constrain the consumer code.
That's the intention of what we have now. The top level traits are
standard, the payload is open. We really only require: message_id,
timestamp and event_type. For auditing we need to cover Who, What, When,
Where, Why, OnWhat, OnWhere, FromWhere.
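As a rough sketch of that envelope: only message_id, timestamp and event_type 
are required top-level traits, and the payload stays open-ended. The field 
values below are illustrative, not a real notification.

```python
import datetime
import uuid

REQUIRED_TRAITS = ("message_id", "timestamp", "event_type")

def make_notification(event_type, payload):
    # Standard top-level traits plus an open payload section.
    return {
        "message_id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "event_type": event_type,
        "payload": payload,
    }

def validate(notification):
    """Reject notifications missing any required top-level trait."""
    missing = [t for t in REQUIRED_TRAITS if t not in notification]
    if missing:
        raise ValueError("missing traits: %s" % ", ".join(missing))
    return notification

note = validate(make_notification("compute.instance.create.end",
                                  {"instance_id": "abc123"}))
print(sorted(note))
```

The hard part raised below is precisely that the auditing W's (Who, What, 
When, ...) live somewhere inside the open payload, not in the constrained 
envelope.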

  These ideas serve two different purposes: One is to ensure that
  existing notification use cases are satisfied with robustness and
  provide a contract between two endpoints. The other is to allow a
  fecund notification environment that allows and enables many
  participants.
Good goals. When Producer and Consumer know what to expect, things are
good ... "I know to find the Instance ID here." When the consumer
wants to deal with a notification as a generic object, things get tricky
(find the instance ID in the payload; "What is the image type?"; "Is
this an error notification?")

Basically, how do we define the principle artifacts for each service and
grant the consumer easy/consistent access to them? (like the 7-W's above)

I'd really like to find a way to solve that problem.

 Is that a good summary? What did I leave out or get wrong?


Great start! Let's keep it simple and do-able.

We should also review the oslo.messaging notification api ... I've got
some concerns we've lost our way there.

-S


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [infra] Alpha wheels for Python 3.x

2014-09-03 Thread Doug Hellmann

On Sep 2, 2014, at 3:17 PM, Clark Boylan cboy...@sapwetik.org wrote:

 On Tue, Sep 2, 2014, at 11:30 AM, Yuriy Taraday wrote:
 Hello.
 
 Currently for alpha releases of oslo libraries we generate either
 universal
 or Python 2.x-only wheels. This presents a problem: we can't adopt alpha
 releases in projects where Python 3.x is supported and verified in the
 gate. I've ran into this in change request [1] generated after
 global-requirements change [2]. There we have oslotest library that can't
 be built as a universal wheel because of different requirements (mox vs
 mox3 as I understand is the main difference). Because of that py33 job in
 [1] failed and we can't bump oslotest version in requirements.
 
 I propose to change infra scripts that generate and upload wheels to
 create
 py3 wheels as well as py2 wheels for projects that support Python 3.x (we
 can use setup.cfg classifiers to find that out) but don't support
 universal
 wheels. What do you think about that?
 
 [1] https://review.openstack.org/117940
 [2] https://review.openstack.org/115643
 
 -- 
 
 Kind regards, Yuriy.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 We may find that we will need to have py3k wheels in addition to the
 existing wheels at some point, but I don't think this use case requires
 it. If oslo.test needs to support python2 and python3 it should use mox3
 in both cases which claims to support python2.6, 2.7 and 3.2. Then you
 can ship a universal wheel. This should solve the immediate problem.

That sounds like a good solution to that specific issue. It may also require 
changes in the application test suites, but those changes can be made as we 
move them to use oslotest.

 
 It has been pointed out to me that one case where it won't be so easy is
 oslo.messaging and its use of eventlet under python2. Messaging will
 almost certainly need python 2 and python 3 wheels to be separate. I
 think we should continue to use universal wheels where possible and only
 build python2 and python3 wheels in the special cases where necessary.
 
 The setup.cfg classifiers should be able to do that for us, though PBR
 may need updating? We will also need to learn to upload potentially >1

How do you see that working? We want all of the Oslo libraries to, eventually, 
support both python 2 and 3. How would we use the classifiers to tell when to 
build a universal wheel and when to build separate wheels?
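One way the classifiers could drive that decision, as a sketch rather than 
infra's actual tooling: read the trove classifiers out of setup.cfg and fall 
back to split wheels when a universal one isn't appropriate (a case like 
oslo.messaging's eventlet dependency would still need a manual flag).

```python
import configparser
import os
import tempfile

def wheels_to_build(setup_cfg_path, universal_ok=True):
    """Decide which wheels to build from setup.cfg trove classifiers."""
    cfg = configparser.ConfigParser()
    cfg.read(setup_cfg_path)
    classifiers = cfg.get("metadata", "classifier", fallback="")
    py2 = "Programming Language :: Python :: 2" in classifiers
    py3 = "Programming Language :: Python :: 3" in classifiers
    if py2 and py3:
        # Same code and requirements on both majors: one universal wheel;
        # otherwise a separate wheel per major version.
        return ["universal"] if universal_ok else ["py2", "py3"]
    return ["py3"] if py3 else ["py2"]

sample = """[metadata]
classifier =
    Programming Language :: Python :: 2.7
    Programming Language :: Python :: 3.3
"""
with tempfile.NamedTemporaryFile("w", suffix=".cfg", delete=False) as f:
    f.write(sample)
both = wheels_to_build(f.name)
split = wheels_to_build(f.name, universal_ok=False)
print(both, split)
os.unlink(f.name)
```

Classifiers alone can say "supports 2 and 3" but not "same requirements on 
both", which is why a flag like universal_ok (or an explicit setup.cfg marker) 
would still be needed.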

 wheel in our wheel jobs. That bit is likely straightforward. The last
 thing that we need to make sure we do is that we have some testing in
 place for the special wheels. We currently have the requirements
 integration test which runs under python2 checking that we can actually
 install all the things together. This ends up exercising our wheels and
 checking that they actually work. We don't have a python3 equivalent for

We only know the wheels can be installed. We don’t actually have a test that 
installs our code and runs it any more (devstack uses “develop” mode which 
bypasses some of the installation steps, as we found while fixing the recent 
neutron/pbr issue with a missing config file in their packaging instructions).

 that job. It may be better to work out some explicit checking of the
 wheels we produce that applies to both versions of python. I am not
 quite sure how we should approach that yet.

To fix the asymmetric gating we have between pbr and everything else, Robert 
suggested setting up some sort of job to install pbr and then build and install 
a package for the project being tested. We already, as you point out, have a 
job that does this for all of the projects to test changes to pbr itself. Maybe 
we can run the same test under python 2 and 3 as part of the same job?

Doug

 
 Clark
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [infra] Alpha wheels for Python 3.x

2014-09-03 Thread Doug Hellmann

On Sep 3, 2014, at 5:27 AM, Yuriy Taraday yorik@gmail.com wrote:

 On Tue, Sep 2, 2014 at 11:17 PM, Clark Boylan cboy...@sapwetik.org wrote:
 On Tue, Sep 2, 2014, at 11:30 AM, Yuriy Taraday wrote:
  Hello.
 
  Currently for alpha releases of oslo libraries we generate either
  universal
  or Python 2.x-only wheels. This presents a problem: we can't adopt alpha
  releases in projects where Python 3.x is supported and verified in the
  gate. I've ran into this in change request [1] generated after
  global-requirements change [2]. There we have oslotest library that can't
  be built as a universal wheel because of different requirements (mox vs
  mox3 as I understand is the main difference). Because of that py33 job in
  [1] failed and we can't bump oslotest version in requirements.
 
  I propose to change infra scripts that generate and upload wheels to
  create
  py3 wheels as well as py2 wheels for projects that support Python 3.x (we
  can use setup.cfg classifiers to find that out) but don't support
  universal
  wheels. What do you think about that?
 
  [1] https://review.openstack.org/117940
  [2] https://review.openstack.org/115643
 
  --
 
  Kind regards, Yuriy.
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 We may find that we will need to have py3k wheels in addition to the
 existing wheels at some point, but I don't think this use case requires
 it. If oslo.test needs to support python2 and python3 it should use mox3
 in both cases which claims to support python2.6, 2.7 and 3.2. Then you
 can ship a universal wheel. This should solve the immediate problem.
 
 Yes, I think, it's the way to go with oslotest specifically. Created a change 
 request for this: https://review.openstack.org/118551
 
 It has been pointed out to me that one case where it won't be so easy is
 oslo.messaging and its use of eventlet under python2. Messaging will
 almost certainly need python 2 and python 3 wheels to be separate. I
 think we should continue to use universal wheels where possible and only
 build python2 and python3 wheels in the special cases where necessary.
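As a concrete illustration of the universal-wheel default Clark describes, the declaration is a one-line setup.cfg flag (this sketch uses the `[wheel]` section name common at the time; newer versions of the wheel package spell it `[bdist_wheel]`):

```ini
# setup.cfg -- mark the package as a universal (py2.py3) wheel.
# Only safe when install_requires is identical under both interpreters.
[wheel]
universal = 1
```

With that flag set, `python setup.py bdist_wheel` produces a single py2.py3 wheel; without it, the wheel is tagged for the interpreter that built it.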
 
 We can make eventlet an optional dependency of oslo.messaging (through 
 setuptools' extras). In fact I don't quite understand the need for eventlet 
 as a direct dependency there, since we can just write code that uses the 
 threading library and it'll get monkeypatched if the consumer app wants to 
 use eventlet.

There is code in the messaging library that makes calls directly into eventlet 
now, IIRC. It sounds like that could be changed, but that’s something to 
consider for a future version.

The last time I looked at setuptools extras they were a documented but 
unimplemented specification. Has that changed?
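For reference, extras have in fact been implemented in setuptools and pip for some time: dependency groups are declared via `extras_require` in setup() and installed on demand with `pip install pkg[extra]`. A stdlib-only sketch of the mechanism; the dependency mappings below are hypothetical examples, not oslo.messaging's real metadata:

```python
# Sketch of setuptools "extras": optional dependency groups declared via
# extras_require in setup() and installed on demand with, e.g.,
# `pip install oslo.messaging[eventlet]`. Mappings here are hypothetical.
install_requires = ["six>=1.7.0", "oslo.config>=1.4.0"]
extras_require = {
    "eventlet": ["eventlet>=0.13.0"],
}

def resolve(requested_extras):
    """Mimic what pip does: base requirements plus each requested extra's."""
    deps = list(install_requires)
    for name in requested_extras:
        deps.extend(extras_require[name])
    return deps

print(resolve([]))            # base install pulls no eventlet
print(resolve(["eventlet"]))  # pkg[eventlet] adds it back
```

Making eventlet an extra this way would let pure-threading consumers install the library without eventlet at all, which also sidesteps the py2-only dependency problem for universal wheels.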

 
 The setup.cfg classifiers should be able to do that for us, though PBR
 may need updating?
 
 I don't think so - it loads all classifiers from setup.cfg, they should be 
 available through some distutils machinery.
 
 We will also need to learn to upload potentially more than one
 wheel in our wheel jobs. That bit is likely straightforward. The last
 thing that we need to make sure we do is that we have some testing in
 place for the special wheels. We currently have the requirements
 integration test which runs under python2 checking that we can actually
 install all the things together. This ends up exercising our wheels and
 checking that they actually work. We don't have a python3 equivalent for
 that job. It may be better to work out some explicit checking of the
 wheels we produce that applies to both versions of python. I am not
 quite sure how we should approach that yet.
 
 I guess we can just repeat that check with Python 3.x. If I see it right, all 
 we need is to repeat the loop in pbr/tools/integration.sh with a different 
 Python version. The problem might occur that we'll be running this test with 
 Python 3.4, which is default on trusty, but all our unittest jobs run on 3.3 
 instead. Maybe we should drop 3.3 already?
 
 -- 
 
 Kind regards, Yuriy.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Treating notifications as a contract

2014-09-03 Thread gordon chung
  For example: It appears that CADF was designed for this sort of thing and
  was considered at some point in the past. It would be useful to know
  more of that story if there are any pointers.

  My initial reaction is that CADF has the stank of enterprisey all over
  it rather than less is more and worse is better but that's a
  completely uninformed and thus unfair opinion.

  TBH I don't know enough about CADF, but I know a man who does ;)
  (gordc, I'm looking at you!)

** so i was on vacation when this thread popped up. i'll just throw a
disclaimer, i didn't read the initial conversation thread... also, i just
read what i typed below and ignore the fact it sounds like a sales pitch. **
CADF is definitely a well-defined open standard with contributions from 
multiple companies, so there are a lot of use cases; case in point, the 
daunting 100+ page spec [1].
the purpose of CADF was to be an auditable event model to describe cloud events 
(basically what our notifications are in OpenStack). regarding CADF in 
OpenStack [2], pyCADF has now been moved under the Keystone umbrella to handle 
auditing.  Keystone thus far has done a great job incorporating pyCADF into 
their notification messages.
while the spec is quite verbose, there is a short intro to CADF events and how 
to define them in the pycadf docs [3]. we also did a talk at the Atlanta summit 
[4] (apologies for my lack of presentation skills). lastly, i know we 
previously had a bunch of slides describing/explaining CADF at a highlevel. 
i'll let ibmers find a copy to post to slideshare or the like.
 * At the micro level have versioned schema for notifications such that
 one end can declare I am sending version X of notification
 foo.bar.Y and the other end can effectively deal.
the event model has a mandatory typeURI field where you could define a version
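For readers unfamiliar with the model, a CADF-style event can be sketched as a plain dict. The field names below follow the DSP0262 event model (typeURI, eventType, action, outcome, initiator/target/observer), but the helper and its values are purely illustrative, not pycadf code:

```python
# Illustrative CADF-style event; field names follow the DSP0262 event
# model, but this helper and the values are examples, not pycadf code.
import uuid
from datetime import datetime, timezone

def make_event(action, outcome, initiator_id, target_id):
    return {
        # the spec version is carried in the typeURI, as noted above
        "typeURI": "http://schemas.dmtf.org/cloud/audit/1.0/event",
        "id": str(uuid.uuid4()),
        "eventType": "activity",
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "outcome": outcome,
        "initiator": {"id": initiator_id},
        "target": {"id": target_id},
        "observer": {"id": "target"},
    }

event = make_event("create", "success", "user-1234", "server-5678")
print(event["typeURI"])  # http://schemas.dmtf.org/cloud/audit/1.0/event
```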
 These ideas serve two different purposes: One is to ensure that
 existing notification use cases are satisfied with robustness and
 provide a contract between two endpoints. The other is to allow a
 fecund notification environment that allows and enables many
 participants.
CADF is designed to be extensible, so even if a use case is not specifically 
defined in the spec, the model can be extended to accommodate it. additionally, one of 
the chairs of the CADF spec is also a contributor to pyCADF so there are 
opportunities to shape the future of the CADF (something we did, while building 
pyCADF).
 Another approach would be to hone in on the producer-side that's
 currently the heaviest user of notifications, i.e. nova, and propose
 the strawman to nova-specs
i'd love for OpenStack to converge on a standard (whether CADF or not). 
personal experience tells me it'll be difficult, but i think more and more have 
realised just making the 'wild west' even wilder isn't helping.
[1] http://www.dmtf.org/sites/default/files/standards/documents/DSP0262_1.0.0.pdf
[2] http://www.dmtf.org/standards/cadf
[3] http://docs.openstack.org/developer/pycadf/event_concept.html
[4] https://www.openstack.org/summit/openstack-summit-atlanta-2014/session-videos/presentation/an-overview-of-cloud-auditing-support-for-openstack
cheers,
gord
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Pre-5.1 and master builds ISO are available for download

2014-09-03 Thread Dmitry Pyzhov
Dmitry,

I totally agree that we should support nightly builds in upgrades. I've
created a blueprint for this:
https://blueprints.launchpad.net/fuel/+spec/upgrade-nightly


On Tue, Sep 2, 2014 at 3:24 AM, Dmitry Borodaenko dborodae...@mirantis.com
wrote:

 We should not confuse beta and rc builds, normally betas predate RCs and
 serve a different purpose. In that sense, the nightlies we currently
 publish are closest to what beta builds should be.

 As discussed earlier in the thread, we already have full versioning and
 provenance information in each build, so there is not a lot of value in
 inventing a parallel versioning scheme just for the time period when our
 builds are feature complete but not yet stable enough to declare an RC. The
 only benefit is to explicitly indicate the beta status of these builds, and
 we can achieve that without messing with versions. For example, by
 generating a summary table of all community builds that have passed the
 tests (using same build numbers we already have).

 Not supporting upgrades from/to intermediate builds is a limitation that
 we should not dismiss as inevitable; overcoming it should be in our
 backlog. Image based provisioning should make it much easier to support.

 My 2c,
 -Dmitry
 I would not use beta word anywhere at all. These are nightly builds,
 pre-5.1. So it will become 5.1 eventually, but for the moment - it is just
 master branch. We've not even reached HCF.

 After we reach HCF, we will start calling builds as Release Candidates
 (RC1, RC2, etc.)  - and QA team runs acceptance testing against them. This
 can be considered as another name instead of beta-1, etc.

 Anyone can go to fuel-master-IP:8000/api/version to get sha commits of
 git repos a particular build was created of. Yes, these are development
 builds, and there will be no upgrade path provided from development build
 to 5.1 release or any other release. We might want to think about it
 though, if we could do it in theory, but I confirm what Evgeny says - we do
 not support it now.



 On Wed, Aug 27, 2014 at 1:11 PM, Evgeniy L e...@mirantis.com wrote:

 Hi guys, I have to say something about beta releases.

 As far as I know our beta release has the same version
 5.1 as our final release.

 I think these versions should be different, because in case
 of some problem it will be much easier to identify what
 version we are trying to debug.

 Also from the irc channel I've heard that somebody wanted
 to upgrade his system to the stable version; right now it's impossible
 because the upgrade system uses this version for the names of
 containers/images/temporary directories, and we have
 validation which prevents the user from running an upgrade to the
 same version.

 In upgrade script we use python module [1] to compare versions
 for validation.
 Let me give an example of what development versions could look like:

 5.1a1  # alpha
 5.1b1  # beta 1
 5.1b2  # beta 2
 5.1b3  # beta 3
 5.1    # final release

 [1]
 http://epydoc.sourceforge.net/stdlib/distutils.version.StrictVersion-class.html
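For illustration, the ordering StrictVersion (linked in [1]) gives this scheme can be reproduced with a small stdlib-only key, re-implemented here so the sketch also runs on Pythons where distutils has been removed:

```python
# Orders versions the way distutils' StrictVersion does for this scheme:
# alpha < beta < final, e.g. 5.1a1 < 5.1b1 < 5.1b2 < 5.1.
import re

def version_key(v):
    # "5.1b2" -> (5, 1, 0, "b", 2); a missing pre-release marker sorts
    # last ("z" > "a"/"b"), so final releases come after alphas/betas.
    m = re.match(r"^(\d+)\.(\d+)(?:\.(\d+))?([ab])?(\d+)?$", v)
    major, minor, patch, pre, pre_n = m.groups()
    return (int(major), int(minor), int(patch or 0),
            pre or "z", int(pre_n or 0))

versions = ["5.1", "5.1b2", "5.1a1", "5.1b1"]
print(sorted(versions, key=version_key))
# ['5.1a1', '5.1b1', '5.1b2', '5.1']
```

That is exactly the property the upgrade validator needs: a beta such as 5.1b1 compares strictly less than the final 5.1, so a same-version check would no longer block beta-to-final upgrades.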

 Thanks,


 On Tue, Aug 26, 2014 at 11:15 AM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 Igor,
 thanks a lot for improving UX over it - this table allows me to see
 which ISO passed verification tests.


 On Mon, Aug 25, 2014 at 7:54 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 I would also like to add that you can use our library called devops
 along with system tests we use for QA and CI. These tests use libvirt and
 kvm so that you can easily fire up an environment with specific
 configuration (Centos/Ubuntu Nova/Neutron Ceph/Swift and so on). All the
 documentation on how to use this library is here:
 http://docs.mirantis.com/fuel-dev/devops.html. If you find any bugs or
 gaps in documentation, please feel free to file bugs to
 https://launchpad.net/fuel.


 On Mon, Aug 25, 2014 at 6:39 PM, Igor Shishkin ishish...@mirantis.com
 wrote:

 Hi all,
 along with building your own ISO following instructions [1], you can
 always download nightly build [2] and run it, by using virtualbox scripts
 [3], for example.

 For your conveniency, you can see a build status table on CI [4].
 First tab now refers to pre-5.1 builds, and second - to master builds.
 BVT column stands for Build Verification Test, which is essentially a
 full HA deployment test.

 Currently pre-5.1 and master builds are actually built from the same
 master branch. As soon as we call for Hard Code Freeze, pre-5.1 builds will
 be reconfigured to use the stable/5.1 branch.

 Thanks,

 [1]
 http://docs.mirantis.com/fuel-dev/develop/env.html#building-the-fuel-iso
 [2] https://wiki.openstack.org/wiki/Fuel#Nightly_builds
 [3] https://github.com/stackforge/fuel-main/tree/master/virtualbox
 [4] https://fuel-jenkins.mirantis.com/view/ISO/
 --
 Igor Shishkin
 DevOps




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Kilo Cycle Goals Exercise

2014-09-03 Thread Joe Gordon
As you all know, there have recently been several very active discussions
around how to improve assorted aspects of our development process. One idea
that was brought up is to come up with a list of cycle goals/project
priorities for Kilo [0].

To that end, I would like to propose an exercise as discussed in the TC
meeting yesterday [1]:
Have anyone interested (especially TC members) come up with a list of what
they think the project wide Kilo cycle goals should be and post them on
this thread by end of day Wednesday, September 10th. After which time we
can begin discussing the results.
The goal of this exercise is to help us see if our individual world views
align with the greater community, and to get the ball rolling on a larger
discussion of where, as a project, we should be focusing more time.


best,
Joe Gordon

[0]
http://lists.openstack.org/pipermail/openstack-dev/2014-August/041929.html
[1]
http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-09-02-20.04.log.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Treating notifications as a contract

2014-09-03 Thread Doug Hellmann

On Sep 3, 2014, at 9:50 AM, Sandy Walsh sandy.wa...@rackspace.com wrote:

 Is there anything slated for the Paris summit around this?
 
 I just spent nearly a week parsing Nova notifications and the pain of no 
 schema has overtaken me. 
 
 We're chatting with IBM about CADF and getting down to specifics on their 
 applicability to notifications. Once I get StackTach.v3 into production I'm 
 keen to get started on revisiting the notification format and oslo.messaging 
 support for notifications. 
 
 Perhaps a hangout for those keenly interested in doing something about this?

Julien did start some work on it a while back, but it was shelved for other 
more pressing things at the time.

https://blueprints.launchpad.net/oslo.messaging/+spec/notification-structured
https://wiki.openstack.org/wiki/Oslo/blueprints/notification-structured

It would be good to start building up a set of requirements in anticipation of 
a cross-project or Oslo session at the summit.

Doug

 
 Thoughts?
 -S
 
 
 From: Eoghan Glynn [egl...@redhat.com]
 Sent: Monday, July 14, 2014 8:53 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] Treating notifications as a contract
 
 So what we need to figure out is how exactly this common structure can be
 accommodated without reverting back to what Sandy called the wild west
 in another post.
 
 I got the impression that wild west is what we've already got
 (within the payload)?
 
 Yeah, exactly, that was my interpretation too.
 
 So basically just to ensure that the lightweight-schema/common-structure
 notion doesn't land us back not too far beyond square one (if there are
 too many degrees-of-freedom in that declaration of a list of dicts with
 certain required fields that you had envisaged in an earlier post).
 
 For example you could write up a brief wiki walking through how an
 existing widely-consumed notification might look under your vision,
 say compute.instance.start.end. Then post a link back here as an RFC.
 
 Or, possibly better, maybe submit up a strawman spec proposal to one
 of the relevant *-specs repos and invite folks to review in the usual
 way?
 
 Would oslo-specs (as in messaging) be the right place for that?
 
 That's a good question.
 
 Another approach would be to hone in on the producer-side that's
 currently the heaviest user of notifications, i.e. nova, and propose
 the strawman to nova-specs given that (a) that's where much of the
 change will be needed, and (b) many of the notification patterns
 originated in nova and then were subsequently aped by other projects
 as they were spun up.
 
 My thinking is the right thing to do is bounce around some questions
 here (or perhaps in a new thread if this one has gone far enough off
 track to have dropped people) and catch up on some loose ends.
 
 Absolutely!
 
 For example: It appears that CADF was designed for this sort of thing and
 was considered at some point in the past. It would be useful to know
 more of that story if there are any pointers.
 
 My initial reaction is that CADF has the stank of enterprisey all over
 it rather than less is more and worse is better but that's a
 completely uninformed and thus unfair opinion.
 
 TBH I don't know enough about CADF, but I know a man who does ;)
 
 (gordc, I'm looking at you!)
 
 Another question (from elsewhere in the thread) is if it is worth, in
 the Ironic notifications, to try and cook up something generic or to
 just carry on with what's being used.
 
 Well, my gut instinct is that the content of the Ironic notifications
 is perhaps on the outlier end of the spectrum compared to the more
 traditional notifications we see emitted by nova, cinder etc. So it
 may make better sense to concentrate initially on how contractizing
 these more established notifications might play out.
 
 This feels like something that we should be thinking about with an eye
 to the K* cycle - would you agree?
 
 Yup.
 
 Thanks for helping to tease this all out and provide some direction on
 where to go next.
 
 Well thank *you* for picking up the baton on this and running with it :)
 
 Cheers,
 Eoghan
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Status of Neutron at Juno-3

2014-09-03 Thread Kyle Mestery
On Wed, Sep 3, 2014 at 10:19 AM, Mark McClain m...@mcclain.xyz wrote:

 On Sep 3, 2014, at 11:04 AM, Brian Haley brian.ha...@hp.com wrote:

 On 09/03/2014 08:17 AM, Kyle Mestery wrote:
 Given how deep the merge queue is (146 currently), we've effectively
 reached feature freeze in Neutron now (likely other projects as well).
 So this morning I'm going to go through and remove BPs from Juno which
 did not make the merge window. I'll also be putting temporary -2s in
 the patches to ensure they don't slip in as well. I'm looking at FFEs
 for the high priority items which are close but didn't quite make it:

 https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
 https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
 https://blueprints.launchpad.net/neutron/+spec/security-group-rules-for-devices-rpc-call-refactor

 I guess I'll be the first to ask for an exception for a Medium since the code
 was originally completed in Icehouse:

 https://blueprints.launchpad.net/neutron/+spec/l3-metering-mgnt-ext

 The neutronclient-side code was committed in January, and the neutron side,
 https://review.openstack.org/#/c/70090 has had mostly positive reviews since
 then.  I've really just spent the last week re-basing it as things moved 
 along.


 +1 for FFE.  I think this is good community work that fell through the cracks.

I agree, and I've marked it as RC1 now. I'll sort through these with
ttx on Friday and get more clarity on its official status.

Thanks,
Kyle

 mark



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Goals for 5.1.1 6.0

2014-09-03 Thread Dmitry Pyzhov
Feature blockers:
Versioning https://blueprints.launchpad.net/fuel/+spec/nailgun-versioning
for REST API
https://blueprints.launchpad.net/fuel/+spec/nailgun-versioning-api, UI,
serialization
https://blueprints.launchpad.net/fuel/+spec/nailgun-versioning-rpc

Ongoing activities:
Nailgun plugins
https://blueprints.launchpad.net/fuel/+spec/nailgun-plugins

Stability and Reliability:
Docs for serialization data
Docs for REST API data
https://blueprints.launchpad.net/fuel/+spec/documentation-on-rest-api-input-output
Nailgun unit tests restructure
Image based provisioning
https://blueprints.launchpad.net/fuel/+spec/image-based-provisioning
Granular deployment
https://blueprints.launchpad.net/fuel/+spec/granular-deployment-based-on-tasks
Artifact-based build system
Power management
Fencing https://blueprints.launchpad.net/fuel/+spec/ha-fencing

Features:
Advanced networking
https://blueprints.launchpad.net/fuel/+spec/advanced-networking (blocked
by Multi L2 support)

Some of these items will not fit into 6.0, I guess. But we should work on them
now.



On Thu, Aug 28, 2014 at 4:26 PM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Hi Fuelers,
 while we are busy with last bugs which block us from releasing 5.1, we
 need to start thinking about upcoming releases. Some of you already started
 POC, some - specs, and I see discussions in ML and IRC.

 From overall strategy perspective, focus for 6.0 is:

- OpenStack Juno release
- Certify 100-node deployment. In terms of OpenStack, if not
possible for Juno, let's do it for Icehouse
- Send anonymous stats about deployment (deployment modes, features
used, etc.)
- Stability and Reliability

 Let's take a little break and think, first of all, about features,
 sustaining items and bugs which block us from releasing either 5.1.1 or
 6.0.
 We have to start creating blueprints (and moving them to 6.0 milestone)
 and make sure there are critical bugs assigned to appropriate milestone, if
 there are any.

 Examples which come to my mind immediately:

- Use service token to auth in Keystone for upgrades (affects 5.1.1),
instead of plain admin login / pass. Otherwise it affects security, and
the user has to keep the password in plain text
- Decrease upgrade tarball size

 Please come up with blueprints and LP bugs links, and short explanation
 why it's a blocker for upcoming releases.

 Thanks,
 --
 Mike Scherbakov
 #mihgen




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [infra] Alpha wheels for Python 3.x

2014-09-03 Thread Clark Boylan
On Wed, Sep 3, 2014, at 08:22 AM, Doug Hellmann wrote:
 
 On Sep 2, 2014, at 3:17 PM, Clark Boylan cboy...@sapwetik.org wrote:
 
  On Tue, Sep 2, 2014, at 11:30 AM, Yuriy Taraday wrote:
  Hello.
  
  Currently for alpha releases of oslo libraries we generate either
  universal
  or Python 2.x-only wheels. This presents a problem: we can't adopt alpha
  releases in projects where Python 3.x is supported and verified in the
   gate. I've run into this in change request [1] generated after
  global-requirements change [2]. There we have oslotest library that can't
  be built as a universal wheel because of different requirements (mox vs
  mox3 as I understand is the main difference). Because of that py33 job in
  [1] failed and we can't bump oslotest version in requirements.
  
  I propose to change infra scripts that generate and upload wheels to
  create
  py3 wheels as well as py2 wheels for projects that support Python 3.x (we
  can use setup.cfg classifiers to find that out) but don't support
  universal
  wheels. What do you think about that?
  
  [1] https://review.openstack.org/117940
  [2] https://review.openstack.org/115643
  
  -- 
  
  Kind regards, Yuriy.
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  We may find that we will need to have py3k wheels in addition to the
  existing wheels at some point, but I don't think this use case requires
  it. If oslo.test needs to support python2 and python3 it should use mox3
  in both cases which claims to support python2.6, 2.7 and 3.2. Then you
  can ship a universal wheel. This should solve the immediate problem.
 
 That sounds like a good solution to that specific issue. It may also
 require changes in the application test suites, but those changes can be
 made as we move them to use oslotest.
 
  
  It has been pointed out to me that one case where it won't be so easy is
  oslo.messaging and its use of eventlet under python2. Messaging will
  almost certainly need python 2 and python 3 wheels to be separate. I
  think we should continue to use universal wheels where possible and only
  build python2 and python3 wheels in the special cases where necessary.
  
  The setup.cfg classifiers should be able to do that for us, though PBR
  may need updating? We will also need to learn to upload potentially more than one
 
 How do you see that working? We want all of the Oslo libraries to,
 eventually, support both python 2 and 3. How would we use the classifiers
 to tell when to build a universal wheel and when to build separate
 wheels?
 
The classifiers provide info on the versions of python we support. By
default we can build python2 wheel if only 2 is supported, build python3
wheel if only 3 is supported, build a universal wheel if both are
supported. Then we can add a setup.cfg flag to override the universal
wheel default to build both a python2 and python3 wheel instead. Dstufft
and mordred should probably comment on this idea before we implement
anything.
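A rough sketch of that default logic (the function name and the `force_separate` override flag are invented for illustration; the real implementation would live in the infra wheel jobs):

```python
# Decide which wheels to build from a package's trove classifiers,
# defaulting to a universal wheel when both majors are supported and
# building separate py2/py3 wheels only when explicitly requested.
def wheels_to_build(classifiers, force_separate=False):
    py2 = any(c.startswith("Programming Language :: Python :: 2") for c in classifiers)
    py3 = any(c.startswith("Programming Language :: Python :: 3") for c in classifiers)
    if py2 and py3:
        return ["py2", "py3"] if force_separate else ["universal"]
    if not py2 and not py3:
        return []
    return ["py2"] if py2 else ["py3"]

classifiers = [
    "Programming Language :: Python :: 2.7",
    "Programming Language :: Python :: 3.3",
]
print(wheels_to_build(classifiers))                       # ['universal']
print(wheels_to_build(classifiers, force_separate=True))  # ['py2', 'py3']
```

The override flag models the setup.cfg knob Clark mentions for special cases like oslo.messaging, where the py2 and py3 dependency sets differ.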

  wheel in our wheel jobs. That bit is likely straightforward. The last
  thing that we need to make sure we do is that we have some testing in
  place for the special wheels. We currently have the requirements
  integration test which runs under python2 checking that we can actually
  install all the things together. This ends up exercising our wheels and
  checking that they actually work. We don't have a python3 equivalent for
 
 We only know the wheels can be installed. We don’t actually have a test
 that installs our code and runs it any more (devstack uses “develop” mode
 which bypasses some of the installation steps, as we found while fixing
 the recent neutron/pbr issue with a missing config file in their
 packaging instructions).
 
Yup, this would be a second level of testing that we should consider as
well. The post install can it do stuff test. We moved our unittesting
away from using sdist installs in preference for setup.py develop
equivalent which means that our unittests no longer cover some of this
stuff for us. Sorry if I hijacked this thread into a "how do we test
release artifacts?" thread.

  that job. It may be better to work out some explicit checking of the
  wheels we produce that applies to both versions of python. I am not
  quite sure how we should approach that yet.
 
 To fix the asymmetric gating we have between pbr and everything else,
 Robert suggested setting up some sort of job to install pbr and then
 build and install a package for the project being tested. We already, as
 you point out, have a job that does this for all of the projects to test
 changes to pbr itself. Maybe we can run the same test under python 2 and
 3 as part of the same job?
 
And to get that extra level of testing we can run the unittests in that
package install. We won't be able to run everything against python3
because of missing support (well we can test installs but 

Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno

2014-09-03 Thread Solly Ross
 I will follow up with a more detailed email about what I believe we are
 missing, once the FF settles and I have applied some soothing creme to
 my burnout wounds, but currently my sentiment is:
 
 Contributing features to Nova nowadays SUCKS!!1 (even as a core
 reviewer) We _have_ to change that!

I think this is *very* important.

rant
For instance, I have/had two patch series
up. One is of length 2 and is relatively small.  It's basically sitting there
with one +2 on each patch.  I will now most likely have to apply for a FFE
to get it merged, not because there's more changes to be made before it can get 
merged
(there was one small nit posted yesterday) or because it's a huge patch that 
needs a lot
of time to review, but because it just took a while to get reviewed by cores,
and still only appears to have been looked at by one core.

For the other patch series (which is admittedly much bigger), it was hard just 
to
get reviews (and it was something where I actually *really* wanted several 
opinions,
because the patch series touched a couple of things in a very significant way).

Now, this is not my first contribution to OpenStack, or to Nova, for that 
matter.  I
know things don't always get in.  It's frustrating, however, when it seems like 
the
reason something didn't get in wasn't because it was fundamentally flawed, but 
instead
because it didn't get reviews until it was too late to actually take that 
feedback into
account, or because it just didn't get much attention review-wise at all.  If I 
were a
new contributor to Nova who had successfully gotten a major blueprint approved 
and
then implemented, only to see it get rejected like this, I might get turned off 
of Nova,
and go to work on one of the other OpenStack projects that seemed to move a bit 
faster.
/rant

So, it's silly to rant without actually providing any ideas on how to fix it.
One suggestion would be, for each approved blueprint, to have one or two cores
explicitly marked as being responsible for providing at least some feedback on
that patch.  This proposal has issues, since we have a lot of blueprints and 
only
twenty cores, who also have their own stuff to work on.  However, I think the
general idea of having guaranteed reviewers is not unsound by itself.  Perhaps
we should have a loose tier of reviewers between core and everybody else.
These reviewers would be known good reviewers who would follow the 
implementation
of particular blueprints if a core did not have the time.  Then, when those 
reviewers
gave the +1 to all the patches in a series, they could ping a core, who could 
feel
more comfortable giving a +2 without doing a deep inspection of the code.

That's just one suggestion, though.  Whatever the solution may be, this is a
problem that we need to fix.  While I enjoyed going through the blueprint 
process
this cycle (not sarcastic -- I actually enjoyed the whole structured feedback 
thing),
the follow up to that was not the most pleasant.

One final note: the specs referenced above didn't get approved until Spec 
Freeze, which
seemed to leave me with less time to implement things.  In fact, it seemed that 
a lot
of specs didn't get approved until spec freeze.  Perhaps if we had more 
staggered
approval of specs, we'd have more staggered submission of patches, and thus 
less of a
sudden influx of patches in the couple weeks before feature proposal freeze.

Best Regards,
Solly Ross

- Original Message -
 From: Nikola Đipanov ndipa...@redhat.com
 To: openstack-dev@lists.openstack.org
 Sent: Wednesday, September 3, 2014 5:50:09 AM
 Subject: Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno
 
 On 09/02/2014 09:23 PM, Michael Still wrote:
  On Tue, Sep 2, 2014 at 1:40 PM, Nikola Đipanov ndipa...@redhat.com wrote:
  On 09/02/2014 08:16 PM, Michael Still wrote:
  Hi.
 
  We're soon to hit feature freeze, as discussed in Thierry's recent
  email. I'd like to outline the process for requesting a freeze
  exception:
 
  * your code must already be up for review
  * your blueprint must have an approved spec
  * you need three (3) sponsoring cores for an exception to be granted
 
  Can core reviewers who have features up for review have this number
  lowered to two (2) sponsoring cores, as they in reality then need four
  (4) cores (since they themselves are one (1) core but cannot really
  vote) making it an order of magnitude more difficult for them to hit
  this checkbox?
  
  That's a lot of numbers in that there paragraph.
  
  Let me re-phrase your question... Can a core sponsor an exception they
  themselves propose? I don't have a problem with someone doing that,
  but you need to remember that does reduce the number of people who
  have agreed to review the code for that exception.
  
 
 Michael has correctly picked up on a hint of snark in my email, so let
 me explain where I was going with that:
 
 The reason many features including my own may not make the FF is not
 because 

Re: [openstack-dev] Rally scenario Issue

2014-09-03 Thread masoom alam
Please pass on your VMTasks.py. Further, floating IPs are available.

Thanks for the help!


On Wed, Sep 3, 2014 at 8:07 PM, Ajay Kalambur (akalambu) akala...@cisco.com
 wrote:

  The reason this is failing is that you are specifying a fixed network and
 at the same time asking for a new network to be created via the context.
 What happens is the VM gets attached to your fixed network instead of the
 created Rally network.
 So you need to modify vm_tasks.py for this case: if not using a fixed
 network and a floating IP is in use, just skip the network check and the
 fixed-IP code, and directly use the floating IP for access.
 I think some changes are needed to vm_tasks.py to make it work for both
 fixed and floating IPs and to support the network context.
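The change Ajay describes can be sketched as a small address-selection helper: skip the fixed-network lookup when a floating IP is wanted, and fall back to the fixed address otherwise. This is only an illustration — the function name is made up and is not Rally's actual vm_tasks.py API; only the nova-style `addresses` structure is assumed.

```python
def pick_server_ip(addresses, fixed_network, use_floatingip):
    """Choose which IP to SSH to.

    addresses: nova-style dict, e.g.
        {"private": [{"addr": "10.0.0.3", "OS-EXT-IPS:type": "fixed"}, ...]}
    Returns a floating IP when use_floatingip is set, otherwise the first
    fixed IP on fixed_network.
    """
    if use_floatingip:
        # Skip the fixed-network check entirely: scan every network for a
        # floating address and use it directly.
        for net in addresses.values():
            for addr in net:
                if addr.get("OS-EXT-IPS:type") == "floating":
                    return addr["addr"]
        raise ValueError("no floating IP attached to server")
    try:
        return addresses[fixed_network][0]["addr"]
    except (KeyError, IndexError):
        raise ValueError("server has no IP on network %r" % fixed_network)
```

With this split, the scenario would only hit the fixed-network code path when `use_floatingip` is false, which is the behavior the thread is asking for.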


  If you need a local change, I can pass along one for vm_tasks.py that only
 supports floating IPs. Let me know: do you have floating IPs available?
 Ajay


   From: masoom alam masoom.a...@gmail.com

 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, September 2, 2014 at 11:12 PM

 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Rally scenario Issue

   Hi Ajay,

  We are testing the same scenario that you are working on, but getting
 the following error:

  http://paste.openstack.org/show/105029/

  Could you be of any help here?

  Thanks




 On Wed, Sep 3, 2014 at 4:16 AM, Ajay Kalambur (akalambu) 
 akala...@cisco.com wrote:

  Hi Guys
 For the throughput tests I need to be able to install iperf on the cloud
 image. For this DNS server needs to be set. But the current network context
 should also support DNS name server setting
 Should we add that into network context?
 Ajay



   From: Boris Pavlovic bo...@pavlovic.me

 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
  Date: Friday, August 29, 2014 at 2:08 PM

 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Cc: Harshil Shah (harsshah) harss...@cisco.com
 Subject: Re: [openstack-dev] Rally scenario Issue

   Timur,

  Thanks for pointing Ajay to this.

  Ajay,

   Also I cannot see this failure unless I run rally with the -v -d options.


  Actually, Rally is storing information about all failures. To get
 information about them you can run the following command:

  *rally task results --pprint*

  It will display all information about all iterations (including
 exceptions).


   Second, when most of the steps in the scenario failed (like attaching to
 the network, SSH, and running the command), why bother reporting the results?


  Because, bad results are better than nothing...


  Best regards,
 Boris Pavlovic


 On Sat, Aug 30, 2014 at 12:54 AM, Timur Nurlygayanov 
 tnurlygaya...@mirantis.com wrote:

   Hi Ajay,

  looks like you need to use the NeutronContext feature to configure Neutron
 networks during benchmark execution.
  We are now working on merging two different commits with the NeutronContext
 implementation:
 https://review.openstack.org/#/c/96300  and
 https://review.openstack.org/#/c/103306

  could you please apply commit https://review.openstack.org/#/c/96300
 and run your benchmarks? A Neutron network with subnetworks and routers will
 be automatically created for each tenant, and you should be able to connect
 to the VMs. Please note that you should add the following part to your task
 JSON to enable the Neutron context:
 ...
 "context": {
     ...
     "neutron_network": {
         "network_cidr": "10.%s.0.0/16"
     }
 }
 ...
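Assembled for illustration, a complete Rally task file using this context could look like the following; the scenario name, image/flavor, and runner values are placeholders, not part of Timur's message:

```json
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "image": {"name": "cirros"},
                "flavor": {"name": "m1.tiny"}
            },
            "runner": {"type": "constant", "times": 2, "concurrency": 1},
            "context": {
                "users": {"tenants": 1, "users_per_tenant": 1},
                "neutron_network": {"network_cidr": "10.%s.0.0/16"}
            }
        }
    ]
}
```

The `%s` in the CIDR is filled in per tenant, which is what gives each created tenant its own non-overlapping network.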

  Hope this will help.



  On Fri, Aug 29, 2014 at 11:42 PM, Ajay Kalambur (akalambu) 
 akala...@cisco.com wrote:

   Hi
 I am trying to run the Rally scenario boot-runcommand-delete. This
 scenario has the following code
   def boot_runcommand_delete(self, image, flavor,
                              script, interpreter, username,
                              fixed_network="private",
                              floating_network="public",
                              ip_version=4, port=22,
                              use_floatingip=True, **kwargs):
       server = None
       floating_ip = None
       try:
           print ("fixed network:%s floating network:%s"
                  % (fixed_network, floating_network))
           server = self._boot_server(
               self._generate_random_name("rally_novaserver_"),
               image, flavor, key_name='rally_ssh_key', **kwargs)

           self.check_network(server, fixed_network)  # <-- fails here

   The question I have is: the instance is created with a call to
  boot_server, but no networks are attached to this server instance. The next
  step checks whether the fixed network is attached to the instance, and sure
  enough it fails at the check_network step. Also, I cannot see this failure
  unless I run rally with the -v -d options. So it actually reports benchmark
  scenario numbers in a table with no errors when I 

Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-03 Thread Doug Hellmann

On Sep 3, 2014, at 11:37 AM, Joe Gordon joe.gord...@gmail.com wrote:

 As you all know, there has recently been several very active discussions
 around how to improve assorted aspects of our development process. One idea
 that was brought up is to come up with a list of cycle goals/project
 priorities for Kilo [0].
 
 To that end, I would like to propose an exercise as discussed in the TC 
 meeting yesterday [1]:
 Have anyone interested (especially TC members) come up with a list of what 
 they think the project wide Kilo cycle goals should be and post them on this 
 thread by end of day Wednesday, September 10th. After which time we can begin 
 discussing the results.
 The goal of this exercise is to help us see if our individual world views 
 align with the greater community, and to get the ball rolling on a larger 
 discussion of where as a project we should be focusing more time.

Thanks for starting this discussion, Joe. It’s important for us to start 
working on “OpenStack” as a whole, in addition to our individual projects. 

1. Sean has done a lot of analysis and started a spec on standardizing logging 
guidelines where he is gathering input from developers, deployers, and 
operators [1]. Because it is far enough for us to see real progress, it’s a 
good place for us to start experimenting with how to drive cross-project 
initiatives involving code and policy changes from outside of a single project. 
We have a couple of potentially related specs in Oslo as part of the oslo.log 
graduation work [2] [3], but I think most of the work will be within the 
applications.

[1] https://review.openstack.org/#/c/91446/
[2] 
https://blueprints.launchpad.net/oslo.log/+spec/app-agnostic-logging-parameters
[3] https://blueprints.launchpad.net/oslo.log/+spec/remove-context-adapter

2. A longer-term topic is standardizing our notification content and format. 
See the thread “Treating notifications as a contract” for details. We could set 
a goal for Kilo of establishing the requirements and proposing a spec, with 
implementation to begin in L.

3. Another long-term topic is standardizing our APIs so that we use consistent 
terminology and formatting (I think we have at least 3 forms of errors returned 
now?). I’m not sure we have anyone ready to drive this, yet, so I don’t think 
it’s something to consider for Kilo.

4. I would also like to see the unified SDK and command line client projects 
continued (or resumed, I haven’t been following the SDK work closely). Both of 
those projects will eventually make using OpenStack clouds easier. It would be 
nice to see some movement towards a “user tools” program to encompass both of 
these projects, perhaps with an eye on incubation at the end of Kilo.

5. And we should also be considering the Python 3 porting work. We’ve made some 
progress with the Oslo libraries, with oslo.messaging and eventlet still our main 
blocker. We need to put together a more concrete plan to finish that work and 
for tackling applications, as well as a team willing to help projects through 
their transition. This is very long term, but does need attention, and I think 
it’s reasonable to ask for a plan by the end of Kilo.

From a practical standpoint, we do need to work out details like where we make 
decisions about the plans for these projects once the general idea is approved. 
We’ve done some of this in the Oslo project in the past (log translations, for 
example) but I don’t think that’s the right place for projects at this scale. A 
new openstack-specs repository would give us a place to work on them, but we 
need to answer the question of how to decide what is approved.

Doug

 
 
 best,
 Joe Gordon
 
 [0] http://lists.openstack.org/pipermail/openstack-dev/2014-August/041929.html
 [1] 
 http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-09-02-20.04.log.html
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [glance] do NOT ever sort requirements.txt

2014-09-03 Thread Dolph Mathews
On Wed, Sep 3, 2014 at 8:25 AM, Sean Dague s...@dague.net wrote:

 On 09/03/2014 09:03 AM, Daniel P. Berrange wrote:
  On Wed, Sep 03, 2014 at 08:37:17AM -0400, Sean Dague wrote:
  I'm not sure why people keep showing up with sort requirements patches
  like - https://review.openstack.org/#/c/76817/6, however, they do.
 
  All of these need to be -2ed with prejudice.
 
  requirements.txt is not a declarative interface. The order is important
  as pip processes it in the order it is. Changing the order has impacts
  on the overall integration which can cause wedges later.
 
  Can  requirements.txt contain comment lines ?  If so, it would be
  worth adding
 
 # The ordering of modules in this file is important
 # Do not attempt to re-sort the lines
 
  Because 6 months hence people will have probably forgotten about
  this mail, or if they're new contributors, never know it existed.

 The point is that core review team members should know. In this case at
 least one glance core +2ed this change.

 Regular contributors can be educated by core team members.


Regardless, tribal knowledge should be documented, and doing so in
requirements files is probably the best place for that.



 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [infra] Alpha wheels for Python 3.x

2014-09-03 Thread Doug Hellmann

On Sep 3, 2014, at 11:57 AM, Clark Boylan cboy...@sapwetik.org wrote:

 On Wed, Sep 3, 2014, at 08:22 AM, Doug Hellmann wrote:
 
 On Sep 2, 2014, at 3:17 PM, Clark Boylan cboy...@sapwetik.org wrote:
 
 On Tue, Sep 2, 2014, at 11:30 AM, Yuriy Taraday wrote:
 Hello.
 
 Currently for alpha releases of oslo libraries we generate either
 universal
 or Python 2.x-only wheels. This presents a problem: we can't adopt alpha
 releases in projects where Python 3.x is supported and verified in the
  gate. I've run into this in change request [1], generated after the
  global-requirements change [2]. There we have the oslotest library, which
  can't be built as a universal wheel because of different requirements (mox
  vs mox3 is, as I understand it, the main difference). Because of that, the
  py33 job in [1] failed and we can't bump the oslotest version in requirements.
 
 I propose to change infra scripts that generate and upload wheels to
 create
 py3 wheels as well as py2 wheels for projects that support Python 3.x (we
 can use setup.cfg classifiers to find that out) but don't support
 universal
 wheels. What do you think about that?
 
 [1] https://review.openstack.org/117940
 [2] https://review.openstack.org/115643
 
 -- 
 
 Kind regards, Yuriy.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 We may find that we will need to have py3k wheels in addition to the
 existing wheels at some point, but I don't think this use case requires
 it. If oslo.test needs to support python2 and python3 it should use mox3
 in both cases which claims to support python2.6, 2.7 and 3.2. Then you
 can ship a universal wheel. This should solve the immediate problem.
 
 That sounds like a good solution to that specific issue. It may also
 require changes in the application test suites, but those changes can be
 made as we move them to use oslotest.
 
 
 It has been pointed out to me that one case where it won't be so easy is
 oslo.messaging and its use of eventlet under python2. Messaging will
 almost certainly need python 2 and python 3 wheels to be separate. I
 think we should continue to use universal wheels where possible and only
 build python2 and python3 wheels in the special cases where necessary.
 
 The setup.cfg classifiers should be able to do that for us, though PBR
  may need updating? We will also need to learn to upload potentially more than 1
 
 How do you see that working? We want all of the Oslo libraries to,
 eventually, support both python 2 and 3. How would we use the classifiers
 to tell when to build a universal wheel and when to build separate
 wheels?
 
 The classifiers provide info on the versions of python we support. By
 default we can build python2 wheel if only 2 is supported, build python3
 wheel if only 3 is supported, build a universal wheel if both are
 supported. Then we can add a setup.cfg flag to override the universal
 wheel default to build both a python2 and python3 wheel instead. Dstufft
 and mordred should probably comment on this idea before we implement
 anything.
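Clark's default-plus-override scheme can be sketched in a few lines; the function and the `force_split` flag are made up for illustration (the real implementation would live in the infra wheel-build scripts and read setup.cfg), but the trove classifier strings are the standard ones:

```python
def wheel_targets(classifiers, force_split=False):
    """Decide which wheels to build from a project's trove classifiers.

    Returns {"py2"}, {"py3"}, {"universal"}, or {"py2", "py3"} when the
    project supports both but has opted out of universal wheels
    (e.g. oslo.messaging, whose python 2 deps include eventlet).
    """
    py2 = any(c.startswith("Programming Language :: Python :: 2")
              for c in classifiers)
    py3 = any(c.startswith("Programming Language :: Python :: 3")
              for c in classifiers)
    if py2 and py3:
        # Universal by default; the setup.cfg override flag forces a split.
        return {"py2", "py3"} if force_split else {"universal"}
    if py3:
        return {"py3"}
    return {"py2"}
```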

OK. I’m not aware of any python-3-only projects, and the flag to override the 
universal wheel is the piece I was missing. I think there’s already a 
setuptools flag related to whether or not we should build universal wheels, 
isn’t there?

 
  wheel in our wheel jobs. That bit is likely straightforward. The last
  thing we need to make sure of is that we have some testing in
 place for the special wheels. We currently have the requirements
 integration test which runs under python2 checking that we can actually
 install all the things together. This ends up exercising our wheels and
 checking that they actually work. We don't have a python3 equivalent for
 
 We only know the wheels can be installed. We don’t actually have a test
 that installs our code and runs it any more (devstack uses “develop” mode
 which bypasses some of the installation steps, as we found while fixing
 the recent neutron/pbr issue with a missing config file in their
 packaging instructions).
 
  Yup, this would be a second level of testing that we should consider as
  well: the post-install "can it do stuff" test. We moved our unit testing
  away from sdist installs in favor of the setup.py develop
  equivalent, which means that our unit tests no longer cover some of this
  stuff for us. Sorry if I hijacked this thread into a "how do we test
  release artifacts?" thread.

I wonder if we should have a flag in devstack to control whether we install in 
develop or “regular” mode? That would let us test real installations, but still 
have editable versions for local developer systems.

 
 that job. It may be better to work out some explicit checking of the
 wheels we produce that applies to both versions of python. I am not
 quite sure how we should approach that yet.
 
 To fix the asymmetric gating we have between pbr and everything else,
 Robert suggested setting up 

Re: [openstack-dev] [all] [glance] do NOT ever sort requirements.txt

2014-09-03 Thread Doug Hellmann

On Sep 3, 2014, at 12:20 PM, Dolph Mathews dolph.math...@gmail.com wrote:

 
 On Wed, Sep 3, 2014 at 8:25 AM, Sean Dague s...@dague.net wrote:
 On 09/03/2014 09:03 AM, Daniel P. Berrange wrote:
  On Wed, Sep 03, 2014 at 08:37:17AM -0400, Sean Dague wrote:
  I'm not sure why people keep showing up with sort requirements patches
  like - https://review.openstack.org/#/c/76817/6, however, they do.
 
   All of these need to be -2ed with prejudice.
 
  requirements.txt is not a declarative interface. The order is important
  as pip processes it in the order it is. Changing the order has impacts
  on the overall integration which can cause wedges later.
 
  Can  requirements.txt contain comment lines ?  If so, it would be
  worth adding
 
 # The ordering of modules in this file is important
 # Do not attempt to re-sort the lines
 
  Because 6 months hence people will have probably forgotten about
  this mail, or if they're new contributors, never know it existed.
 
 The point is that core review team members should know. In this case at
 least one glance core +2ed this change.
 
 Regular contributors can be educated by core team members.
 
 Regardless, tribal knowledge should be documented, and doing so in 
 requirements files is probably the best place for that.

+1

Write-it-down-ly,
Doug

  
 
 -Sean
 
 --
 Sean Dague
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] znc as a service (was Re: [nova] Is the BP approval process broken?)

2014-09-03 Thread Kashyap Chamarthy
On Wed, Sep 03, 2014 at 02:23:41PM +0200, Thierry Carrez wrote:
 Stefano Maffulli wrote:
  On 08/29/2014 11:17 AM, John Garbutt wrote:
  After moving to use ZNC, I find IRC works much better for me now, but
  I am still learning really.
  
  There! this sentence has two very important points worth highlighting:
  
  1- when people say IRC they mean IRC + a hack to overcome its limitation
  2- IRC+znc is complex, not many people are used to it
 
 Note that ZNC is not the only IRC proxy out there. Bip is also working
 quite well.

Yes, from practical experience, Bip has been very robust.
 
[. . .]

 We could at least document the steps required to set up a proxy. 

FWIW, I posted my local notes to configure bip IRC proxy[1] in a virtual
machine, and an example `/etc/bip.conf`[2]. /me has been using this
setup for about three years without any disruption.

 I agree with Ryan that running an IRC proxy for someone else
 creates... interesting privacy issues that may just hinder adoption of
 said solution.

  [1] https://kashyapc.fedorapeople.org/notes-bip-IRC-proxy/README
  [2] https://kashyapc.fedorapeople.org/notes-bip-IRC-proxy/bip.conf

--
/kashyap

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Neutron] How to verify link to logs for disabled third-party CI

2014-09-03 Thread Gary Duan
Thanks Joshua. I will give it a try.

Gary


On Wed, Sep 3, 2014 at 1:21 AM, Joshua Hesketh joshua.hesk...@rackspace.com
 wrote:

  On 9/3/14 12:11 PM, Gary Duan wrote:

 Hi,

  Our CI system is disabled due to a running bug and wrong log link. I
 have manually verified the system with sandbox and two Neutron testing
 patches. However, with CI disabled, I am not able to see its review comment
 on any patch.

  Is there a way that I can see what the comment will look like when CI is
 disabled?


 Hi Gary,

 If you are using zuul you can use the smtp reporter[0] to email you a
 report in the format as it will appear in gerrit. Otherwise you'll need to
 look at what you'll be posting via ssh (if communicating directly with the
 gerrit api).

 Cheers,
 Josh

 [0] http://ci.openstack.org/zuul/reporters.html#smtp


  Thanks,
 Gary


 ___
 OpenStack-dev mailing 
 listOpenStack-dev@lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] znc as a service (was Re: [nova] Is the BP approval process broken?)

2014-09-03 Thread Clark Boylan
On Wed, Sep 3, 2014, at 07:26 AM, Ryan Brown wrote:
 On 09/03/2014 09:35 AM, Sylvain Bauza wrote:
  Re: ZNC as a service, I think it's OK provided the implementation is
  open-sourced with openstack-infra repo group, as for Gerrit, Zuul and
  others.
  The only problem I can see is how to provide IRC credentials to this, as
  I don't want to share my creds up to the service.
  
  -Sylvain
 There are more than just adoption (user trust) problems. An Open Source
 implementation wouldn't solve the liability concerns, because users
 would still have logs of their (potentially sensitive) credentials and
 conversations on servers run by OpenStack Infra.
 
 This is different from Gerrit/Zuul etc which just display code/changes
 and run/display tests on those public items. There isn't anything
 sensitive to be leaked there. Storing credentials and private messages
 is a different story, and would require much more security work than
 just storing code and test results.
 
 -- 
 Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

This doesn't solve the privacy issues, but subway [0] was built
specifically to tackle the problem of making persistent IRC easy without
needing to understand screen/tmux or znc/bip.

Maybe we can sidestep the privacy concerns by providing scripts/puppet
manifests/disk-image-builder elements/something that individuals or
groups of people who have some form of trust between each other can use
to easily spin up something like subway for persistent access.
Unfortunately, this assumes that individuals or groups of people will
have a way to run a persistent service on a server of some sort which
may not always be the case.

[0] https://github.com/thedjpetersen/subway

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [glance] do NOT ever sort requirements.txt

2014-09-03 Thread Dolph Mathews
On Wed, Sep 3, 2014 at 11:23 AM, Doug Hellmann d...@doughellmann.com
wrote:


 On Sep 3, 2014, at 12:20 PM, Dolph Mathews dolph.math...@gmail.com
 wrote:


 On Wed, Sep 3, 2014 at 8:25 AM, Sean Dague s...@dague.net wrote:

 On 09/03/2014 09:03 AM, Daniel P. Berrange wrote:
  On Wed, Sep 03, 2014 at 08:37:17AM -0400, Sean Dague wrote:
  I'm not sure why people keep showing up with sort requirements
 patches
  like - https://review.openstack.org/#/c/76817/6, however, they do.
 
   All of these need to be -2ed with prejudice.
 
  requirements.txt is not a declarative interface. The order is important
  as pip processes it in the order it is. Changing the order has impacts
  on the overall integration which can cause wedges later.
 
  Can  requirements.txt contain comment lines ?  If so, it would be
  worth adding
 
 # The ordering of modules in this file is important
 # Do not attempt to re-sort the lines
 
  Because 6 months hence people will have probably forgotten about
  this mail, or if they're new contributors, never know it existed.

 The point is that core review team members should know. In this case at
 least one glance core +2ed this change.

 Regular contributors can be educated by core team members.


 Regardless, tribal knowledge should be documented, and doing so in
 requirements files is probably the best place for that.


 +1

 Write-it-down-ly,
 Doug


The blocked review Sean mentioned above happens to reference a bug that
I've now marked invalid against all projects that hadn't already applied
fixes for it:

  https://bugs.launchpad.net/glance/+bug/1285478

Instead, I've opened a second task to add a note to all the requirements
files:

  https://bugs.launchpad.net/keystone/+bug/1365061

All associated patches:


https://review.openstack.org/#/q/I64ae9191863564e278a35d42ec9cd743a233028e,n,z







 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Domain-specific Drivers

2014-09-03 Thread Adam Young

On 08/26/2014 06:12 AM, Henry Nash wrote:

Hi

It was fully merged for Juno-2 - so if you are having problems, feel 
free to share the settings in your main config and keystone.heat.config 
files.


Henry
On 26 Aug 2014, at 10:26, Bruno Luis Dos Santos Bompastor 
bruno.bompas...@cern.ch mailto:bruno.bompas...@cern.ch wrote:



Hi folks!

I would like to know what is the status on the Domain-specific 
Drivers feature for Juno.
I see that there's documentation on this already but I was not able 
to use it with the master branch.


I was trying to configure LDAP on the default domain and SQL for heat 
domain but with no luck.


Is the feature ready?


http://adam.younglogic.com/2014/08/getting-service-users-out-of-ldap/


Best Regards,

Bruno Bompastor.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org 
mailto:OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] VMWare NSX CI - recheck info

2014-09-03 Thread Sukhdev Kapur
Hi Kurt,

We (Arista) were one of the early adopters of the CI systems. We built our
system based upon the Neutron requirements as of late last year/early this
year. Our CI has been up and operational since January of this year. This
was before (or in parallel to) Jay Pipes' effort on Zuul-based CIs.

We have invested a lot of effort in getting this done. In fact, I helped
many vendors set up their Jenkins masters/slaves, etc.
Additionally, we put effort into coming up with a patch to support
recheck matching, as it was not supported in the Gerrit plugin.
Please see our wiki [1] which has a link to the Google doc describing the
recheck patch.

At the time the requirement was to support "recheck no bug"/"recheck bug #".
Our system is built to support this syntax.
The current Neutron third-party test systems are described in [2], and if
you look at the requirements described in [3], it states that a single
"recheck" should trigger all test systems.

Having said that, I understand the rationale of your argument on this
thread, and actually agree with your point.
I have seen similar comments on various ML threads.

My suggestion is that this should be done in a coordinated manner so that
everybody understands the requirements, rather than simply throwing it on
the mailing list and assuming it is accepted by everybody. That is what
leads to confusion: some people will take it as marching orders,
others may miss the thread and completely miss the communication.

Kyle Mestery (Neutron PTL) and Edgar Magana (Neutron core) are proposing a
session at the Kilo Summit in Paris to cover third-party CI systems.
I would propose that you coordinate with them and get your point of
view incorporated into this session. I have copied them both on this email
so that they can share their wisdom on the subject as well.

Thanks for all the good work by you and the infra team - making things
easier for us.

regards..
-Sukhdev

[1] https://wiki.openstack.org/wiki/Arista-third-party-testing
[2] https://wiki.openstack.org/wiki/NeutronThirdPartyTesting
[3] http://ci.openstack.org/third_party.html




On Wed, Sep 3, 2014 at 4:59 AM, Kurt Taylor krtay...@us.ibm.com wrote:

 Weidong Shao weidongs...@gmail.com wrote on 09/02/2014 09:35:30 PM:

  From:
 
  Weidong Shao weidongs...@gmail.com
 

  Subject:
 
  [OpenStack-Infra] VMWare NSX CI - recheck info

 
  Hi,
 
  I have a CL (https://review.openstack.org/#/c/116094/) that fails in
  VMWare NSX CI. I could not figure out the proper way to do a recheck
  on that.  On the error log page, there is a link to the team's wiki,
  from which this instruction is given:
  To issue a recheck, submit a comment with the following text:
  vmware-recheck
 
  I did that but it did not trigger a recheck.
  Could someone help me on this?

  As of today, all systems, project and third-party, should restart
  test jobs with just a "recheck" comment. Including information after
  "recheck" doesn't break anything upstream.

 See: https://review.openstack.org/#/c/108724/  and
 the discussion at: https://review.openstack.org/#/c/100133/  and
 https://review.openstack.org/#/c/109565/

  Since most third-party systems are actually supporting some form of
  "recheck-<system name>", I am proposing a new change to the comment
  syntax to accommodate the need for triggering specific third-party
  system rechecks.
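The comment syntaxes discussed in this thread can be captured in one pattern; this is a hypothetical sketch of the matching a CI trigger might do, not the actual upstream Gerrit-trigger configuration, and the function name is made up:

```python
import re

# Matches: "recheck", system-specific "recheck-<system>" (e.g. "recheck-ryu"),
# and the older "recheck no bug" / "recheck bug #NNNNNN" forms.
RECHECK_RE = re.compile(
    r"^\s*recheck"                       # the bare trigger word
    r"(?:-(?P<system>[\w-]+))?"          # optional third-party system suffix
    r"(?:\s+no\s+bug|\s+bug\s+#?\d+)?"   # optional legacy bug reference
    r"\s*$",
    re.IGNORECASE,
)

def recheck_target(comment):
    """Return which CI a comment re-triggers: a specific system name,
    "all" for a bare recheck, or None if it is not a recheck at all."""
    m = RECHECK_RE.match(comment)
    if not m:
        return None
    return m.group("system") or "all"
```

Under this scheme a bare "recheck" restarts everything, while "recheck-ryu" singles out one third-party system — the behavior Kurt's proposal is aiming for.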

 Kurt Taylor (krtaylor)


 
  A related note on NSX CI:
 
  Apparently, the VMWare CI does not follow the guideline from http://
  ci.openstack.org/third_party.html to include contacts, recheck
  instruction etc. Some other CIs provide better messages on failure. e.g,
 
  A10
  Contact: a10-openstack...@a10networks.com
  Additional information: https://wiki.openstack.org/wiki/
  ThirdPartySystems/A10_Networks_CI
  Or from RYU CI
  Build failed. Leave a comment with 'recheck-ryu' to rerun a check.
  ('recheck' will be ignored.)
  Could someone from the NSX/Openstack team make the necessary changes?
 
  Thanks,
  Wei___


 ___
 OpenStack-Infra mailing list
 openstack-in...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging][feature freeze exception] Can I get an exception for AMQP 1.0?

2014-09-03 Thread Ken Giusti
Hello,

I'm proposing a freeze exception for the oslo.messaging AMQP 1.0
driver:

   https://review.openstack.org/#/c/75815/

Blueprint:

   
https://blueprints.launchpad.net/oslo.messaging/+spec/amqp10-driver-implementation

I presented this work at the Juno summit [1]. The associated spec has
been approved and merged [2].

The proposed patch has been in review since before icehouse, with a
couple of non-binding +1's.  A little more time is necessary to get
core reviews.

The patch includes a number of functional tests, and I've proposed a
CI check that will run those tests [3].  This patch is currently
pending support for bare fedora 20 nodes in CI.  I'm planning to add
additional test cases and devstack support in the future.

I'm in the process of adding documentation to the RPC section of the
Openstack manual.

Justification:

I think there's a benefit to have this driver available as an
_experimental_ feature in Juno, and the risk of inclusion is minimal
as the driver is optional, disabled by default, and will not have
impact on any system that does not explicitly enable it.
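For reviewers unfamiliar with how the opt-in works: selecting the driver is done per service in its oslo.messaging configuration. The fragment below is a sketch of what enabling it might look like — the host, credentials, and exact URL form are illustrative assumptions, so check the merged driver's documentation for the authoritative option names:

```ini
[DEFAULT]
# Explicitly opt in to the experimental AMQP 1.0 driver.  Deployments
# that leave this unset keep the default (rabbit) driver untouched.
transport_url = amqp://guest:guest@messaging-host:5672/
```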

Unlike previous versions of the protocol, AMQP 1.0 is the official
standard for AMQP messaging (ISO/IEC 19464).  Support for it is
arriving from multiple different messaging system vendors [4].

Having access to AMQP 1.0 functionality in openstack sooner rather
than later gives the developers of AMQP 1.0 messaging systems the
opportunity to validate their AMQP 1.0 support in the openstack
environment.  Likewise, easier access to this driver by the openstack
developer community will help us find and fix any issues in a timely
manner as adoption of the standard grows.

Please consider this feature to be a part of Juno-3 release.

Thanks,

Ken


-- 
Ken Giusti  (kgiu...@gmail.com)



[openstack-dev] [neutron] [third-party] collecting recheck command to re-trigger third party CI

2014-09-03 Thread Akihiro Motoki
Hi Neutron team,

There are many third-party CIs in Neutron, and we sometimes/usually
want to re-trigger a third-party CI to confirm results.
The comment syntax varies across third-party CIs, so I think it is useful
to gather the recheck commands in one place. I struggled to find out how to
rerun a specific CI.

I added a recheck command column to the list of Neutron plugins and
drivers [1].
Could you add the recheck command of your CI to the table?
If it is not available, please add N/A.

Note that supporting recheck is one of the requirements of third party
testing. [2]
I understand not all CIs support it due to various reasons, but
collecting it is useful for developers and reviewers.

The syntax of the recheck command is under discussion in an infra review [3].
I believe the recheck command column will still be useful even after
the official syntax is defined, because it is not easy to know
each CI system's name.

[1] 
https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Existing_Plugin_and_Drivers
[2] http://ci.openstack.org/third_party.html#requirements
[3] https://review.openstack.org/#/c/118623/

Thanks,
Akihiro



Re: [openstack-dev] Rally scenario Issue

2014-09-03 Thread Harshil Shah (harsshah)
Hello Timur,

I have one question: I am testing the rally boot and delete scenario, and it 
seems to work fine when tenants is 1 and users is 1. However, if I change tenants to 2 
and users to 2, I see the error below. Any idea why this can happen?

=

2014-09-03 10:51:18.138 17441 CRITICAL rally [-] ServiceUnavailable: Unable to 
create the network. No tenant network is available for allocation.

cat boot-runcommand-delete.json

{
    "VMTasks.boot_runcommand_delete": [
        {
            "args": {
                "flavor": {
                    "name": "m1.small"
                },
                "image": {
                    "name": "Ubuntu Server 14.04"
                },
                "fixed_network": "net04",
                "floating_network": "net04_ext",
                "use_floatingip": true,
                "script": "doc/samples/tasks/support/instance_dd_test.sh",
                "interpreter": "/bin/sh",
                "username": "ubuntu"
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "neutron_network": {
                    "network_cidr": "10.%s.0.0/16"
                },
                "users": {
                    "tenants": 2,   <== Works with this value being 1
                    "users_per_tenant": 2
                }
            }
        }
    ]
}

==

Thanks,
Harshil

From: Timur Nurlygayanov tnurlygaya...@mirantis.com
Date: Friday, August 29, 2014 at 1:54 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Cc: Harshil Shah harss...@cisco.com
Subject: Re: [openstack-dev] Rally scenario Issue

Hi Ajay,

looks like you need to use the NeutronContext feature to configure Neutron 
Networks during the benchmark execution.
We are now working on merging two different commits with the NeutronContext 
implementation:
https://review.openstack.org/#/c/96300  and 
https://review.openstack.org/#/c/103306

Could you please apply commit https://review.openstack.org/#/c/96300 and run 
your benchmarks? A Neutron network with subnetworks and routers will be 
automatically created for each created tenant, and you should have the ability 
to connect to VMs. Please note that you should add the following part to your 
task JSON to enable the Neutron context:
...
"context": {
    ...
    "neutron_network": {
        "network_cidr": "10.%s.0.0/16"
    }
}
...

Hope this will help.



On Fri, Aug 29, 2014 at 11:42 PM, Ajay Kalambur (akalambu) 
akala...@cisco.com wrote:
Hi
I am trying to run the Rally scenario boot-runcommand-delete. This scenario has 
the following code:
    def boot_runcommand_delete(self, image, flavor,
                               script, interpreter, username,
                               fixed_network="private",
                               floating_network="public",
                               ip_version=4, port=22,
                               use_floatingip=True, **kwargs):
        server = None
        floating_ip = None
        try:
            print("fixed network:%s floating network:%s"
                  % (fixed_network, floating_network))
            server = self._boot_server(
                self._generate_random_name("rally_novaserver_"),
                image, flavor, key_name='rally_ssh_key', **kwargs)

            self.check_network(server, fixed_network)

The question I have is: the instance is created with a call to _boot_server, but 
no networks are attached to this server instance. In the next step it checks 
whether the fixed network is attached to the instance, and sure enough it fails 
at the check_network call. Also, I cannot see this failure unless I run rally 
with the -v -d options. So it actually reports benchmark scenario numbers in a 
table with no errors when I run with
rally task start boot-and-delete.json

And reports results. First, what am I missing in this case? Note that I am using 
Neutron, not nova-network.
Second, when most of the steps in the scenario failed (attaching to the network, 
SSH, and running the command), why bother reporting the results?

Ajay






--

Timur,
QA Engineer
OpenStack Projects
Mirantis Inc

http://www.openstacksv.com/


Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno

2014-09-03 Thread Vladik Romanovsky
+1

I had several patches in the "start LXC from block device" series. The blueprint 
had been waiting since Icehouse.
In Juno it was approved; however, besides Daniel Berrange, no one was looking at 
these patches.
Now it's being pushed to Kilo, regardless of the fact that everything is +2'ed.

Normally, I don't actively pursue people to get approvals, as I was getting 
angry pushback from cores at the beginning of my time with OpenStack.

I don't understand what the proper way to get work done is.

Vladik 

- Original Message -
 From: Solly Ross sr...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, September 3, 2014 11:57:29 AM
 Subject: Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno
 
  I will follow up with a more detailed email about what I believe we are
  missing, once the FF settles and I have applied some soothing creme to
  my burnout wounds, but currently my sentiment is:
  
  Contributing features to Nova nowadays SUCKS!!1 (even as a core
  reviewer) We _have_ to change that!
 
 I think this is *very* important.
 
  <rant>
 For instance, I have/had two patch series
 up. One is of length 2 and is relatively small.  It's basically sitting there
 with one +2 on each patch.  I will now most likely have to apply for a FFE
 to get it merged, not because there's more changes to be made before it can
 get merged
 (there was one small nit posted yesterday) or because it's a huge patch that
 needs a lot
 of time to review, but because it just took a while to get reviewed by cores,
 and still only appears to have been looked at by one core.
 
 For the other patch series (which is admittedly much bigger), it was hard
 just to
 get reviews (and it was something where I actually *really* wanted several
 opinions,
 because the patch series touched a couple of things in a very significant
 way).
 
 Now, this is not my first contribution to OpenStack, or to Nova, for that
 matter.  I
 know things don't always get in.  It's frustrating, however, when it seems
 like the
 reason something didn't get in wasn't because it was fundamentally flawed,
 but instead
 because it didn't get reviews until it was too late to actually take that
 feedback into
 account, or because it just didn't get much attention review-wise at all.  If
 I were a
 new contributor to Nova who had successfully gotten a major blueprint
 approved and
  then implemented, only to see it get rejected like this, I might get turned
 off of Nova,
 and go to work on one of the other OpenStack projects that seemed to move a
 bit faster.
  </rant>
 
 So, it's silly to rant without actually providing any ideas on how to fix it.
 One suggestion would be, for each approved blueprint, to have one or two
 cores
 explicitly marked as being responsible for providing at least some feedback
 on
 that patch.  This proposal has issues, since we have a lot of blueprints and
 only
 twenty cores, who also have their own stuff to work on.  However, I think the
 general idea of having guaranteed reviewers is not unsound by itself.
 Perhaps
 we should have a loose tier of reviewers between core and everybody else.
 These reviewers would be known good reviewers who would follow the
 implementation
 of particular blueprints if a core did not have the time.  Then, when those
 reviewers
 gave the +1 to all the patches in a series, they could ping a core, who
 could feel
 more comfortable giving a +2 without doing a deep inspection of the code.
 
 That's just one suggestion, though.  Whatever the solution may be, this is a
 problem that we need to fix.  While I enjoyed going through the blueprint
 process
 this cycle (not sarcastic -- I actually enjoyed the whole structured
 feedback thing),
 the follow up to that was not the most pleasant.
 
 One final note: the specs referenced above didn't get approved until Spec
 Freeze, which
 seemed to leave me with less time to implement things.  In fact, it seemed
 that a lot
 of specs didn't get approved until spec freeze.  Perhaps if we had more
 staggered
 approval of specs, we'd have more staggered submission of patches, and thus
 less of a
 sudden influx of patches in the couple weeks before feature proposal freeze.
 
 Best Regards,
 Solly Ross
 
 - Original Message -
  From: Nikola Đipanov ndipa...@redhat.com
  To: openstack-dev@lists.openstack.org
  Sent: Wednesday, September 3, 2014 5:50:09 AM
  Subject: Re: [openstack-dev] [Nova] Feature Freeze Exception process for
  Juno
  
  On 09/02/2014 09:23 PM, Michael Still wrote:
   On Tue, Sep 2, 2014 at 1:40 PM, Nikola Đipanov ndipa...@redhat.com
   wrote:
   On 09/02/2014 08:16 PM, Michael Still wrote:
   Hi.
  
   We're soon to hit feature freeze, as discussed in Thierry's recent
   email. I'd like to outline the process for requesting a freeze
   exception:
  
   * your code must already be up for review
   * your blueprint must have an approved spec
   * you need three 

Re: [openstack-dev] znc as a service (was Re: [nova] Is the BP approval process broken?)

2014-09-03 Thread Spencer Krum
Here is a docker image that will bring up subway.


https://registry.hub.docker.com/u/nibalizer/subway/


On Wed, Sep 3, 2014 at 10:04 AM, Clark Boylan cboy...@sapwetik.org wrote:

 On Wed, Sep 3, 2014, at 07:26 AM, Ryan Brown wrote:
  On 09/03/2014 09:35 AM, Sylvain Bauza wrote:
   Re: ZNC as a service, I think it's OK provided the implementation is
   open-sourced with openstack-infra repo group, as for Gerrit, Zuul and
   others.
   The only problem I can see is how to provide IRC credentials to this,
 as
   I don't want to share my creds up to the service.
  
   -Sylvain
  There are more than just adoption (user trust) problems. An Open Source
  implementation wouldn't solve the liability concerns, because users
  would still have logs of their (potentially sensitive) credentials and
  conversations on servers run by OpenStack Infra.
 
  This is different from Gerrit/Zuul etc which just display code/changes
  and run/display tests on those public items. There isn't anything
  sensitive to be leaked there. Storing credentials and private messages
  is a different story, and would require much more security work than
  just storing code and test results.
 
  --
  Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
 

 This doesn't solve the privacy issues, but subway [0] was built
 specifically to tackle the problem of making persistent IRC easy without
 needing to understand screen/tmux or znc/bip.

  Maybe we can sidestep the privacy concerns by providing scripts/puppet
 manifests/disk image builder elements/something that individuals or
 groups of people that have some form of trust between each other can use
 to easily spin up something like subway for persistent access.
 Unfortunately, this assumes that individuals or groups of people will
 have a way to run a persistent service on a server of some sort which
 may not always be the case.

 [0] https://github.com/thedjpetersen/subway

 Clark





-- 
Spencer Krum
(619)-980-7820


[openstack-dev] [rally][iperf] Benchmarking network performance

2014-09-03 Thread Ajay Kalambur (akalambu)
Hi
Looking into the following blueprint, which requires that network performance 
tests be done as part of a scenario.
I plan to implement this using iperf: basically a scenario which includes a 
client/server VM pair.

The client then sends out TCP traffic using iperf to the server, and the VM 
throughput is recorded.
I have a WIP patch attached to this email

The patch has a dependency on the following 2 reviews:
https://review.openstack.org/#/c/103306/
https://review.openstack.org/#/c/96300




On top of this it creates a new VM performance scenario and uses floating IPs 
to access the VM, download iperf to the VM, and then run throughput tests.
The code will be made more modular, but this patch will give you a good idea of 
what's in store.
We also need to handle the case where no floating IP is available and we 
assume direct access. We need SSH to install the tool and drive the 
tests.
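One self-contained piece of this flow is pulling the measured throughput out of iperf's output once the command has run over SSH. A sketch follows — the sample line matches iperf 2's default summary format, and the function name and the normalization to Mbits/sec are my own, not part of the WIP patch:

```python
import re

def parse_iperf_bandwidth(output):
    """Extract the reported bandwidth (in Mbits/sec) from iperf output.

    Returns the last summary figure found, or None if no summary line
    is present (e.g. the run failed before producing a report).
    """
    # iperf 2 summary lines look like:
    # [  3]  0.0-10.0 sec  1.09 GBytes   938 Mbits/sec
    matches = re.findall(r"([\d.]+)\s+([KMG])bits/sec", output)
    if not matches:
        return None
    value, unit = matches[-1]
    scale = {"K": 0.001, "M": 1.0, "G": 1000.0}[unit]
    return float(value) * scale

sample = "[  3]  0.0-10.0 sec  1.09 GBytes   938 Mbits/sec"
print(parse_iperf_bandwidth(sample))  # 938.0
```

The scenario would feed the captured stdout of the iperf client into this function and record the number as the iteration's result.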

Please look at the attached diff and let me know if the overall flow looks fine.
If it does, I can make the code more modular and proceed. Note this is still 
work in progress.


Ajay



rally_perf.diff
Description: rally_perf.diff


Re: [openstack-dev] [all] [glance] do NOT ever sort requirements.txt

2014-09-03 Thread Kuvaja, Erno
 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: 03 September 2014 13:37
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [all] [glance] do NOT ever sort requirements.txt
 
 I'm not sure why people keep showing up with sort requirements patches
 like - https://review.openstack.org/#/c/76817/6, however, they do.
 
  All of these need to be -2ed with prejudice.
 
 requirements.txt is not a declarative interface. The order is important as pip
 processes it in the order it is. Changing the order has impacts on the overall
 integration which can cause wedges later.
 
 So please stop.
 
   -Sean
 
 --
 Sean Dague
 http://dague.net


Hi Sean & all,

Could you please open this up a little bit? What are we afraid of breaking 
regarding the order of these requirements? I tried to go through the pip 
documentation, but I could not find a reason for a specific order of the lines, 
though there were references to keeping the order.

I'm now assuming one thing here, as I do not know if that's the case. None of 
the packages enables/disables functionality depending on what has been 
installed on the system before, but they have their own dependencies to provide 
those. Based on this assumption I can think of only one scenario causing us 
issues. That is us abusing the example in point 2 of 
https://pip.pypa.io/en/latest/user_guide.html#requirements-files meaning: we 
install package X depending on package Y>=1.0,<2.0 before installing package Z 
depending on Y>=1.0, to ensure that package Y<2.0 without pinning package Y in 
our requirements.txt. I certainly hope that this is not the case, as depending 
on a 3rd party vendor providing us a specific version of a dependency package 
would be extremely stupid.

Other than that I really don't know how the order could cause us issues, but I 
would be really happy to learn something new today if that is the case or if my 
assumption went wrong.

Best Regards,
Erno (jokke_) Kuvaja
 



Re: [openstack-dev] [glance][feature freeze exception] Proposal for using Launcher/ProcessLauncher for launching services

2014-09-03 Thread Kekane, Abhishek
Hi Erno,


I agree that we must document which config parameters will be reloaded after 
the SIGHUP signal is processed; that's the reason why we have added the DocImpact 
tag to patch https://review.openstack.org/#/c/117988/. We will test which 
parameters are reloaded and report them to the Doc team.


Our use case:


We want to use the SIGHUP signal to reload filesystem store related config 
parameters, namely filesystem_store_datadir and filesystem_store_datadirs, 
which are very crucial in production environments, especially for people 
using NFS. In case the filesystem is approaching full capacity, the 
administrator can add more storage and configure it via the above parameters, 
which will take effect upon sending the SIGHUP signal. Secondly, most 
OpenStack services use the service framework, which handles reloading of 
configuration files via the SIGHUP signal; glance cannot do this without this patch.
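For readers unfamiliar with the mechanism, the behaviour being requested boils down to installing a SIGHUP handler that re-reads configuration without restarting the process. A minimal, POSIX-only sketch — the option name is taken from the glance use case above, but the re-read logic is a stand-in, not glance code:

```python
import os
import signal

# Current configuration; in glance this would come from glance-api.conf.
CONF = {"filesystem_store_datadir": "/var/lib/glance/images"}

def read_datadir_from_disk():
    # Stand-in for re-parsing the configuration file.
    return "/mnt/nfs/new-datadir"

def reload_config(signum, frame):
    # Re-read the reloadable options when the operator signals us.
    CONF["filesystem_store_datadir"] = read_datadir_from_disk()

signal.signal(signal.SIGHUP, reload_config)

# An operator would run `kill -HUP <pid>`; simulate that against ourselves.
os.kill(os.getpid(), signal.SIGHUP)
print(CONF["filesystem_store_datadir"])  # /mnt/nfs/new-datadir
```

The oslo-incubator service framework adds the production concerns on top of this: re-spawning workers, graceful shutdown, and deciding which options are safe to reload.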


Thanks & Regards,


Abhishek Kekane


From: Kekane, Abhishek
Sent: Wednesday, September 03, 2014 9:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] [glance][feature freeze exception] Proposal for 
using Launcher/ProcessLauncher for launching services

Hi All,

Please give me your support for applying the freeze exception for using the 
oslo-incubator service framework in glance, based on the following blueprint:

https://blueprints.launchpad.net/glance/+spec/use-common-service-framework

I have ensured that after making these changes everything is working smoothly.

I have done the functional testing for the following three scenarios:

1. Enabled SSL and checked that requests are processed by the API service 
before and after the SIGHUP signal.

2. Disabled SSL and checked that requests are processed by the API service 
before and after the SIGHUP signal.

3. Ensured that reloading of parameters like filesystem_store_datadir and 
filesystem_store_datadirs works effectively after sending the SIGHUP signal.

To test the 1st and 2nd, I created a Python script which sends multiple 
requests to glance at a time, and added a cron job to send a SIGHUP signal to 
the parent process.
I ran the above script for 1 hour and confirmed that every request was 
processed successfully.

Please consider this feature to be a part of Juno release.



Thanks & Regards,

Abhishek Kekane


From: Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
Sent: 02 September 2014 19:11
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [glance][feature freeze exception] Proposal for using 
Launcher/ProcessLauncher for launching services

Hi All,

I'd like to ask for a feature freeze exception for using oslo-incubator service 
framework in glance, based on the following blueprint:

https://blueprints.launchpad.net/glance/+spec/use-common-service-framework


The code to implement this feature is under review at present.

1. Sync oslo-incubator service module in glance: 
https://review.openstack.org/#/c/117135/2
2. Use Launcher/ProcessLauncher in glance: 
https://review.openstack.org/#/c/117988/


If we have this feature in glance, then we will be able to use features like 
reloading the glance configuration file without a restart, graceful shutdown, etc.
It will also use common code, as other OpenStack projects (nova, keystone, 
cinder) do.


We are ready to address all the concerns of the community if they have any.


Thanks & Regards,

Abhishek Kekane

__
Disclaimer:This email and any attachments are sent in strictest confidence for 
the sole use of the addressee and may contain legally privileged, confidential, 
and proprietary data. If you are not the intended recipient, please advise the 
sender by replying promptly to this email and then delete and destroy this 
email and any attachments without any further use, copying or forwarding



Re: [openstack-dev] [all] [glance] do NOT ever sort requirements.txt

2014-09-03 Thread Clark Boylan


On Wed, Sep 3, 2014, at 11:51 AM, Kuvaja, Erno wrote:
  -Original Message-
  From: Sean Dague [mailto:s...@dague.net]
  Sent: 03 September 2014 13:37
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [all] [glance] do NOT ever sort requirements.txt
  
  I'm not sure why people keep showing up with sort requirements patches
  like - https://review.openstack.org/#/c/76817/6, however, they do.
  
  All of these need to be -2ed with prejudice.
  
  requirements.txt is not a declarative interface. The order is important as 
  pip
  processes it in the order it is. Changing the order has impacts on the 
  overall
  integration which can cause wedges later.
  
  So please stop.
  
  -Sean
  
  --
  Sean Dague
  http://dague.net
 
 
 Hi Sean & all,
 
 Could you please open this up a little bit? What are we afraid of breaking
 regarding the order of these requirements? I tried to go through the pip
 documentation, but I could not find a reason for a specific order of the
 lines, though there were references to keeping the order.
 
 I'm now assuming one thing here as I do not know if that's the case. None
 of the packages enables/disables functionality depending on what has been
 installed on the system before, but they have their own dependencies to
 provide those. Based on this assumption I can think of only one scenario
 causing us issues. That is us abusing the example in point 2 of
 https://pip.pypa.io/en/latest/user_guide.html#requirements-files meaning:
 we install package X depending on package Y>=1.0,<2.0 before installing
 package Z depending on Y>=1.0 to ensure that package Y<2.0, without
 pinning package Y in our requirements.txt. I certainly hope that this is
 not the case, as depending on a 3rd party vendor providing us a specific
 version of a dependency package would be extremely stupid.
 
 Other than that I really don't know how the order could cause us issues,
 but I would be really happy to learn something new today if that is the
 case or if my assumption went wrong.
 
 Best Regards,
 Erno (jokke_) Kuvaja
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

The issue is described in the bug that Josh linked
(https://github.com/pypa/pip/issues/988). Basically pip doesn't do
dependency resolution in a way that lets you treat requirements as order
independent. For that to be the case pip would have to evaluate all
dependencies together then install the intersection of those
dependencies. Instead it iterates over the list(s) in order and
evaluates each dependency as it is found.

Your example basically describes where this breaks. You can both depend
on the same dependency at different versions, and pip will install a
version that satisfies only one of the dependencies and not the other,
leading to a failed install. However I think a more common case is that
openstack will pin a dependency, say Y>=1.0,<2.0, and the X dependency
will say Y>=1.0. If the X dependency comes first you get version 2.5,
which is not valid for your specification of Y>=1.0,<2.0, and pip fails.
You fix this by listing Y before the X dependency, which installs Y with
less restrictive boundaries.

Another example of a slightly different failure would be hacking,
flake8, pep8, and pyflakes. Hacking installs a specific version of
flake8, pep8, and pyflakes so that we do static lint checking with
consistent checks each release. If you sort this list alphabetically
instead of allowing hacking to install its deps flake8 will come first
and you can get a different version of pep8. Different versions of pep8
check different things and now the gate has broken.

The most problematic thing is that you can't count on your dependencies
not breaking you if they come first (because they are evaluated first).
So in cases where we know order is important (hacking and pbr and
probably a handful of others) we should be listing them as early as
possible in the requirements.
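To make the failure mode concrete, here is a toy, stdlib-only model of this greedy in-order behaviour. It is a deliberate simplification — not pip's real resolver — and the package names and versions are invented:

```python
# Toy model of pip's in-order dependency resolution: each requirement is
# evaluated as it is encountered, and the first installation of a package
# "wins" -- there is no global constraint solving.

def greedy_install(requirements, available):
    """requirements: list of (package, predicate) pairs, in file order.
    available: {package: [versions, newest first]}.
    Returns {package: installed_version} or raises on a conflict."""
    installed = {}
    for pkg, ok in requirements:
        if pkg in installed:
            if not ok(installed[pkg]):
                raise RuntimeError("conflict on %s==%s" % (pkg, installed[pkg]))
            continue
        # Install the newest version satisfying only *this* predicate.
        installed[pkg] = next(v for v in available[pkg] if ok(v))
    return installed

available = {"Y": [2.5, 1.5]}

# X depends on Y>=1.0; our requirements.txt pins Y>=1.0,<2.0.
x_dep = ("Y", lambda v: v >= 1.0)
our_pin = ("Y", lambda v: 1.0 <= v < 2.0)

# Pin listed first: Y 1.5 is installed, and X's looser dep is satisfied.
print(greedy_install([our_pin, x_dep], available))  # {'Y': 1.5}

# X's dep evaluated first: Y 2.5 is installed, then our pin conflicts.
try:
    greedy_install([x_dep, our_pin], available)
except RuntimeError as e:
    print(e)  # conflict on Y==2.5
```

The same ordering logic explains the hacking/flake8/pep8 case: letting hacking install its pinned deps first keeps the lint checks stable.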

Clark



Re: [openstack-dev] [Glance][Nova][All] requests 2.4.0 breaks glanceclient

2014-09-03 Thread Gregory Haynes
Excerpts from Kuvaja, Erno's message of 2014-09-03 12:30:08 +:
 Hi All,
 
 While investigating glanceclient gating issues, we narrowed it down to 
 requests 2.4.0, which was released 2014-08-29. urllib3 seems to be raising a 
 new ProtocolError which does not get caught and breaks at least glanceclient.
 The following error can be seen on the console: ProtocolError: ('Connection 
 aborted.', gaierror(-2, 'Name or service not known')).
 
 Unfortunately we hit such an issue just under the freeze. Apparently this 
 breaks novaclient as well, and there is a change 
 (https://review.openstack.org/#/c/118332/) proposed to requirements to limit 
 the version to <2.4.0.
 
 Are any other projects that use requirements seeing issues with the 
 latest version?

We've run into this in tripleo, specifically with os-collect-config.
Here's the upstream bug:
https://github.com/kennethreitz/requests/issues/2192

We had to pin it in our project to unwedge CI (otherwise we would be
blocked on cutting an os-collect-config release).
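The failure mode quoted above — a socket.gaierror surfacing as urllib3's ProtocolError — can be reproduced and handled without any network access. A stdlib-only sketch; note the ProtocolError class here is a stand-in for urllib3.exceptions.ProtocolError, the lookup is faked, and this is not glanceclient code:

```python
import socket

class ProtocolError(Exception):
    """Stand-in for urllib3.exceptions.ProtocolError."""

def fake_lookup(host):
    # Simulates the failed DNS lookup seen in the gate, without the network.
    raise socket.gaierror(-2, "Name or service not known")

def fetch(host, lookup=fake_lookup):
    try:
        return lookup(host)
    except socket.gaierror as e:
        # requests 2.4.0 / urllib3 now re-raise low-level errors wrapped
        # like this, which older except-clauses in clients do not anticipate.
        raise ProtocolError("Connection aborted.", e)

try:
    fetch("glance.example.invalid")
except ProtocolError as e:
    print("caught: %s" % e.args[0])  # caught: Connection aborted.
```

Clients that previously caught socket.error or httplib exceptions would need an except-clause for the new wrapper type — which is why pinning requests below 2.4.0 was the quicker fix.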



Re: [openstack-dev] [Glance][Nova][All] requests 2.4.0 breaks glanceclient

2014-09-03 Thread Sean Dague
On 09/03/2014 03:12 PM, Gregory Haynes wrote:
 Excerpts from Kuvaja, Erno's message of 2014-09-03 12:30:08 +:
 Hi All,

 While investigating glanceclient gating issues we narrowed it down to 
 requests 2.4.0 which was released 2014-08-29. Urllib3 seems to be raising 
 new ProtocolError which does not get catched and breaks at least 
 glanceclient.
 Following error can be seen on console ProtocolError: ('Connection 
 aborted.', gaierror(-2, 'Name or service not known')).

 Unfortunately we hit on such issue just under the freeze. Apparently this 
 breaks novaclient as well and there is change 
 (https://review.openstack.org/#/c/118332/ )proposed to requirements to limit 
 the version 2.4.0.

 Is there any other projects using requirements and seeing issues with the 
 latest version?
 
 Weve run into this in tripleo, specifically with os-collect-config.
 Heres the upstream bug:
 https://github.com/kennethreitz/requests/issues/2192
 
 We had to pin it in our project to unwedge CI (otherwise we would be
 blocked on cutting an os-collect-config release).

Ok, given the details of the bug, I'd be ok with a != 2.4.0, it looks
like they are working on a merge now.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [oslo.messaging][feature freeze exception] Can I get an exception for AMQP 1.0?

2014-09-03 Thread Doug Hellmann

On Sep 3, 2014, at 2:03 PM, Ken Giusti kgiu...@gmail.com wrote:

 Hello,
 
 I'm proposing a freeze exception for the oslo.messaging AMQP 1.0
 driver:
 
   https://review.openstack.org/#/c/75815/
 
 Blueprint:
 
   
 https://blueprints.launchpad.net/oslo.messaging/+spec/amqp10-driver-implementation
 
 I presented this work at the Juno summit [1]. The associated spec has
 been approved and merged [2].
 
 The proposed patch has been in review since before icehouse, with a
 couple of non-binding +1's.  A little more time is necessary to get
 core reviews.
 
 The patch includes a number of functional tests, and I've proposed a
 CI check that will run those tests [3].  This patch is currently
 pending support for bare fedora 20 nodes in CI.  I'm planning to add
 additional test cases and devstack support in the future.
 
 I'm in the process of adding documentation to the RPC section of the
 Openstack manual.
 
 Justification:
 
 I think there's a benefit to have this driver available as an
 _experimental_ feature in Juno, and the risk of inclusion is minimal
 as the driver is optional, disabled by default, and will not have
 impact on any system that does not explicitly enable it.
 
 Unlike previous versions of the protocol, AMQP 1.0 is the official
 standard for AMQP messaging (ISO/IEC 19464).  Support for it is
 arriving from multiple different messaging system vendors [4].
 
 Having access to AMQP 1.0 functionality in openstack sooner rather
 than later gives the developers of AMQP 1.0 messaging systems the
 opportunity to validate their AMQP 1.0 support in the openstack
 environment.  Likewise, easier access to this driver by the openstack
 developer community will help us find and fix any issues in a timely
 manner as adoption of the standard grows.
 
 Please consider this feature to be a part of Juno-3 release.
 
 Thanks,
 
 Ken

Ken,

I think we’re generally in favor of including the new driver, but before I say 
so officially can you fill us in on the state of the additional external 
libraries it needs? I see pyngus on pypi, and you mention in the “Request to 
include AMQP 1.0 support in Juno-3” thread that proton is being packaged in 
EPEL and work is ongoing for Debian. Is that done (it has only been a few days, 
I know)?

I would like to avoid having that packaging work be a blocker, so if we do 
include the driver, what do you think is the best way to convey the 
instructions for installing those packages? I know you’ve done some work 
recently on documenting the experimental status, did that include installation 
tips?

Thanks,
Doug


 
 
 -- 
 Ken Giusti  (kgiu...@gmail.com)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [UX] [Horizon] [Heat] Merlin project (formerly known as cross-project UI library for Heat/Mistral/Murano/Solum) plans for PoC and more

2014-09-03 Thread Drago Rosson
Zane,

Thank you. I have added ASL 2.0.

Drago

On 8/28/14, 9:10 PM, Zane Bitter zbit...@redhat.com wrote:

On 28/08/14 13:31, Drago Rosson wrote:
 You are in luck, because I have just now open-sourced Barricade! Check
 it out [4].

 [4]https://github.com/rackerlabs/barricade

Please add a license (preferably ASL 2.0). Open Source doesn't mean
the source is on GitHub, it means that the code is licensed under a
particular set of terms.

- ZB





Re: [openstack-dev] Unexpected error in OpenStack Nova

2014-09-03 Thread Hossein Zabolzadeh
Any Idea?


On Wed, Sep 3, 2014 at 6:41 PM, Hossein Zabolzadeh zabolza...@gmail.com
wrote:

 Hi,
 After successful installation of both keystone and nova, I tried to
  execute the 'nova list' command with the following env variables (my
  deployment model is a single-machine deployment):
 export OS_USERNAME=admin
 export OS_PASSWORD=...
 export OS_TENANT_NAME=service
 export OS_AUTH_URL=http://10.0.0.1:5000

  But the following unknown error occurred:
 ERROR: attribute 'message' of 'exceptions.BaseException' objects (HTTP
 300)

 My nova.conf has the following configuration to connect to keystone:
 [keystone_authtoken]
 auth_uri = localhost:5000
 auth_host = 10.0.0.1
 auth_port = 35357
 auth_protocol = http
 admin_tenant_name = service
 admin_user = nova
 admin_password = nova_pass

 How can I solve the problem?
 Thanks in advance.
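
 One plausible cause, offered as an assumption from the symptoms rather
 than a confirmed diagnosis: keystone answers an unversioned
 OS_AUTH_URL with HTTP 300 (Multiple Choices), which older clients can
 surface as exactly this kind of BaseException error; the auth_uri
 above is also missing its http:// scheme. A sketch of the corrected
 settings:

```
# Hypothetical fix: point the client at a versioned endpoint so
# keystone does not answer with 300 Multiple Choices
export OS_AUTH_URL=http://10.0.0.1:5000/v2.0

# and in nova.conf, under [keystone_authtoken], give auth_uri a
# full URL including the scheme:
# auth_uri = http://10.0.0.1:5000
```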



Re: [openstack-dev] [all] [glance] do NOT ever sort requirements.txt

2014-09-03 Thread Kuvaja, Erno
 -Original Message-
 From: Clark Boylan [mailto:cboy...@sapwetik.org]
 Sent: 03 September 2014 20:10
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [all] [glance] do NOT ever sort requirements.txt
 
 
 
 On Wed, Sep 3, 2014, at 11:51 AM, Kuvaja, Erno wrote:
   -Original Message-
   From: Sean Dague [mailto:s...@dague.net]
   Sent: 03 September 2014 13:37
   To: OpenStack Development Mailing List (not for usage questions)
   Subject: [openstack-dev] [all] [glance] do NOT ever sort
   requirements.txt
  
   I'm not sure why people keep showing up with sort requirements
   patches like - https://review.openstack.org/#/c/76817/6, however, they
 do.
  
    All of these need to be -2ed with prejudice.
  
   requirements.txt is not a declarative interface. The order is
   important as pip processes it in the order it is. Changing the order
   has impacts on the overall integration which can cause wedges later.
  
   So please stop.
  
 -Sean
  
   --
   Sean Dague
   http://dague.net
  
 
  Hi Sean  all,
 
   Could you please open this up a little bit? What are we afraid of
   breaking regarding the order of these requirements? I tried to go
   through the pip documentation but could not find a reason for a
   specific order of the lines, though there were references to keeping
   the order.
 
  I'm now assuming one thing here as I do not know if that's the case.
   None of the packages enables/disables functionality depending on what
  has been installed on the system before, but they have their own
  dependencies to provide those. Based on this assumption I can think of
  only one scenario causing us issues. That is us abusing the example in
  point 2 of
  https://pip.pypa.io/en/latest/user_guide.html#requirements-files
   meaning: we install package X depending on package Y>=1.0,<2.0 before
   installing package Z depending on Y>=1.0 to ensure that package Y<2.0,
   without pinning package Y in our requirements.txt. I certainly hope
   that this is not the case, as depending on a 3rd party vendor
   providing us a specific version of a dependency package would be
   extremely stupid.
 
  Other than that I really don't know how the order could cause us
  issues, but I would be really happy to learn something new today if
  that is the case or if my assumption went wrong.
 
  Best Regards,
  Erno (jokke_) Kuvaja
 
 
 
 The issue is described in the bug that Josh linked
 (https://github.com/pypa/pip/issues/988). Basically pip doesn't do
 dependency resolution in a way that lets you treat requirements as order
 independent. For that to be the case pip would have to evaluate all
 dependencies together then install the intersection of those dependencies.
 Instead it iterates over the list(s) in order and evaluates each dependency as
 it is found.
 
 Your example basically describes where this breaks. You can both depend on
 the same dependency at different versions and pip will install a version that
 satisfies only one of the dependencies and not the other leading to a failed
 install. However I think a more common case is that openstack will pin a
  dependency and say Y>=1.0,<2.0 and the X dependency will say Y>=1.0. If
  the X dependency comes first you get version 2.5, which is not valid for your
  specification of Y>=1.0,<2.0, and pip fails.
  You fix this by listing Y before the X dependency that installs Y with less
  restrictive boundaries.
 
 Another example of a slightly different failure would be hacking, flake8,
 pep8, and pyflakes. Hacking installs a specific version of flake8, pep8, and
 pyflakes so that we do static lint checking with consistent checks each
 release. If you sort this list alphabetically instead of allowing hacking to 
 install
 its deps flake8 will come first and you can get a different version of pep8.
 Different versions of pep8 check different things and now the gate has
 broken.
 
  The most problematic thing is you can't count on your dependencies not
  breaking you if they come first (because they are evaluated first).
 So in cases where we know order is important (hacking and pbr and probably
 a handful of others) we should be listing them as early as possible in the
 requirements.
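
 The order sensitivity Clark describes can be sketched with a toy model
 of pip's first-come-first-served behavior (a deliberate simplification
 for illustration, not real pip internals):

```python
# Toy model of pip's first-come-first-served requirement handling
# (a simplified model, not real pip): the newest release satisfying
# the FIRST spec seen for a package is installed; later specs for the
# same package are only checked against what is already installed.
# String comparison is adequate here because the toy versions are all
# single-digit.

RELEASES = {'Y': ['1.0', '1.5', '2.0', '2.5']}


def install(requirements):
    installed = {}
    for name, satisfies in requirements:
        if name in installed:
            if not satisfies(installed[name]):
                raise RuntimeError('%s==%s conflicts with a later spec'
                                   % (name, installed[name]))
            continue
        installed[name] = max(v for v in RELEASES[name] if satisfies(v))
    return installed


pinned = ('Y', lambda v: '1.0' <= v < '2.0')  # our pin:  Y>=1.0,<2.0
loose = ('Y', lambda v: v >= '1.0')           # X's need: Y>=1.0

print(install([pinned, loose]))  # pin first: Y 1.5, both specs happy
try:
    install([loose, pinned])     # loose spec first: Y 2.5 wins ...
except RuntimeError as exc:
    print(exc)                   # ... and the pin then fails
```

 Swapping the two lines is all it takes to go from a working install to
 a conflict, which is why sorting requirements.txt is unsafe.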
 
 Clark
 

Thanks Clark,

To be honest, neither the issue nor your explanation clarified this to me. Please 
forgive me for pressing on this, but it seems to be an extremely important topic, so I 
would like to understand where it's coming from (and hopefully others will 
benefit from it 

Re: [openstack-dev] [infra][qa][neutron] Neutron full job, advanced services, and the integrated gate

2014-09-03 Thread Joe Gordon
On Tue, Aug 26, 2014 at 4:47 PM, Salvatore Orlando sorla...@nicira.com
wrote:

 TL; DR
 A few folks are proposing to stop running tests for neutron advanced
 services [ie: (lb|vpn|fw)aas] in the integrated gate, and run them only on
 the neutron gate.

 Reason: projects like nova are 100% orthogonal to neutron advanced
 services. Also, there have been episodes in the past of unreliability of
 tests for these services, and it would be good to limit affected projects
 considering that more api tests and scenarios are being added.

 -

 So far the neutron full job runs tests (api and scenarios) for neutron
 core functionality as well as neutron advanced services, which run as
 neutron service plugin.

 It's highly unlikely, if not impossible, that changes in projects such as
 nova, glance or ceilometer can have an impact on the stability of these
 services.
 On the other hand, instability in these services can trigger gate failures
 in unrelated projects as long as tests for these services are run in the
 neutron full job in the integrated gate. There have already been several
  gate-breaking bugs in lbaas scenario tests and firewall api tests.
 Admittedly, advanced services do not have the same level of coverage as
 core neutron functionality. Therefore as more tests are being added, there
 is an increased possibility of unearthing dormant bugs.


I support this split but for slightly different reasons.  I am under the
impression that neutron advanced services are not ready for prime time. If
that is correct I don't think we should be gating on things that aren't
ready.



  For this reason we are proposing to no longer run tests for neutron
 advanced services in the integrated gate, but keep them running on the
 neutron gate.
 This means we will have two neutron jobs:
 1) check-tempest-dsvm-neutron-full which will run only core neutron
 functionality
 2) check-tempest-dsvm-neutron-full-ext which will be what the neutron full
 job is today.


Using my breakdown, the extended job would include experimental neutron
features.



 The former will be part of the integrated gate, the latter will be part of
 the neutron gate.
 Considering that other integrating services should not have an impact on
 neutron advanced services, this should not make gate testing asymmetric.

 However, there might be exceptions for:
 - orchestration project like heat which in the future might leverage
 capabilities like load balancing
 - oslo-* libraries, as changes in them might have an impact on neutron
 advanced services, since they consume those libraries


Once another service starts consuming an advanced feature I think it makes
sense to move it to the main neutron-full job. Especially if we assume that
things will only depend on neutron features that are not too experimental.



 Another good question is whether extended tests should be performed as
 part of functional or tempest checks. My take on this is that scenario
 tests should always be part of tempest. On the other hand I reckon API
 tests should exclusively be part of functional tests, but as so far tempest
 is running a gazillion of API tests, this is probably a discussion for the
 medium/long term.

 In order to add this new job there are a few patches under review:
 [1] and [2] Introduces the 'full-ext' job and devstack-gate support for it.
 [3] Are the patches implementing a blueprint which will enable us to
 specify for which extensions test should be executed.

 Finally, one more note about smoketests. Although we're planning to get
 rid of them soon, we still have failures in the pg job because of [4]. For
 this reasons smoketests are still running for postgres in the integrated
 gate. As load balancing and firewall API tests are part of it, they should
 be removed from the smoke test executed on the integrated gate ([5], [6]).
 This is a temporary measure until the postgres issue is fixed.


++



 Regards,
 Salvatore

 [1] https://review.openstack.org/#/c/114933/
 [2] https://review.openstack.org/#/c/114932/
 [3]
 https://review.openstack.org/#/q/status:open+branch:master+topic:bp/branchless-tempest-extensions,n,z
 [4] https://bugs.launchpad.net/nova/+bug/1305892
 [5] https://review.openstack.org/#/c/115022/
 [6] https://review.openstack.org/#/c/115023/






Re: [openstack-dev] [Ironic] (Non-)consistency of the Ironic hash ring implementation

2014-09-03 Thread Robert Collins
On 3 September 2014 23:50, Nejc Saje ns...@redhat.com wrote:

Forgive my slowness :).

 Disclaimer: in Ironic terms, node = conductor, key = host

Sadly not inside the hash_ring code :/. host == conductor, key == data.

 The test I linked fails with Ironic hash ring code (specifically the part
 that tests consistency). With 1000 keys being mapped to 10 nodes, when you
 add a node:
  - current ceilometer code remaps around 7% of the keys (< 1/#nodes)
  - Ironic code remaps > 90% of the keys

Ok, that's pretty definitive and definitely not intended. However
remapping 7% when adding 10% capacity is also undesirable - we'd like
to remap ~1/11, i.e. about 9%.

 The problem lies in the way you build your hash ring[1]. You initialize a
 statically-sized array and divide its fields among nodes. When you do a
 lookup, you check which field in the array the key hashes to and then return
 the node that that field belongs to. This is the wrong approach because when
 you add a new node, you will resize the array and assign the fields
 differently, but the same key will still hash to the same field and will
 therefore hash to a different node.
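
 The failure mode Nejc describes can be reproduced with a toy model (an
 illustrative sketch, not the actual Ironic code):

```python
# Toy reproduction of the remapping problem: keys hash into a fixed
# number of partitions, and partitions are mapped to hosts round-robin
# with a modulo.  Adding a host changes the modulus, so nearly every
# partition -- and therefore nearly every key -- moves to a different
# host.
import hashlib


def lookup(key, hosts, partitions=2 ** 16):
    part = int(hashlib.md5(key.encode()).hexdigest(), 16) % partitions
    return hosts[part % len(hosts)]


keys = ['key-%d' % i for i in range(1000)]
ten_hosts = ['host-%d' % i for i in range(10)]
eleven_hosts = ten_hosts + ['host-10']
moved = sum(lookup(k, ten_hosts) != lookup(k, eleven_hosts) for k in keys)
print('%d of 1000 keys remapped' % moved)  # roughly 90% of them
```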

You're referring to part2host where we round-robin using mod to map a
partition (hash value of key) to a host (conductor). Then when we add a
conductor this entire map scales out - yup I see the issue.

Have you filed a bug for this?

 Nodes must be hashed onto the ring as well, statically chopping up the ring
 and dividing it among nodes isn't enough for consistency.
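
 For contrast, a minimal ring that hashes node replica points onto the
 ring (again an illustrative sketch, not Ceilometer's or Ironic's
 actual implementation) keeps the remapped fraction near 1/#nodes:

```python
# A minimal consistent hash ring: each node is hashed onto the ring at
# many replica points; a key belongs to the first node point clockwise
# of the key's hash.  A new node only claims the ring arcs just behind
# its own points, so only about 1/#nodes of the keys are remapped.
import bisect
import hashlib


def _hash(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)


class ConsistentHashRing(object):
    def __init__(self, nodes, replicas=100):
        self._ring = sorted((_hash('%s-%d' % (node, r)), node)
                            for node in nodes for r in range(replicas))
        self._points = [point for point, _node in self._ring]

    def get_node(self, key):
        # First replica point at or after the key's hash, wrapping
        # around to the start of the ring.
        idx = bisect.bisect(self._points, _hash(key)) % len(self._ring)
        return self._ring[idx][1]


keys = ['key-%d' % i for i in range(1000)]
small = ConsistentHashRing(['node-%d' % i for i in range(10)])
grown = ConsistentHashRing(['node-%d' % i for i in range(11)])
moved = sum(small.get_node(k) != grown.get_node(k) for k in keys)
print('%d of 1000 keys remapped' % moved)  # near 1000/11, not ~900
```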

 Cheers,
 Nejc



 The Ironic hash_ring implementation uses a hash:
      def _get_partition(self, data):
          try:
              return (struct.unpack_from('>I',
                      hashlib.md5(data).digest())[0]
                      >> self.partition_shift)
          except TypeError:
              raise exception.Invalid(
                  _("Invalid data supplied to HashRing.get_hosts."))


 so I don't see the fixed size thing you're referring to. Could you
 point a little more specifically? Thanks!

 -Rob

 On 1 September 2014 19:48, Nejc Saje ns...@redhat.com wrote:

 Hey guys,

 in Ceilometer we're using consistent hash rings to do workload
 partitioning[1]. We've considered generalizing your hash ring
 implementation
 and moving it up to oslo, but unfortunately your implementation is not
 actually consistent, which is our requirement.

 Since you divide your ring into a number of equal sized partitions,
 instead
 of hashing hosts onto the ring, when you add a new host,
 an unbound amount of keys get re-mapped to different hosts (instead of
 the
 1/#nodes remapping guaranteed by hash ring). I've confirmed this with the
 test in aforementioned patch[2].

 If this is good enough for your use-case, great, otherwise we can get a
 generalized hash ring implementation into oslo for use in both projects
 or
 we can both use an external library[3].

 Cheers,
 Nejc

 [1] https://review.openstack.org/#/c/113549/
 [2]
 https://review.openstack.org/#/c/113549/21/ceilometer/tests/test_utils.py
 [3] https://pypi.python.org/pypi/hash_ring









-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [infra][qa][neutron] Neutron full job, advanced services, and the integrated gate

2014-09-03 Thread Sean Dague
On 08/26/2014 07:47 PM, Salvatore Orlando wrote:
 TL; DR
 A few folks are proposing to stop running tests for neutron advanced
 services [ie: (lb|vpn|fw)aas] in the integrated gate, and run them only
 on the neutron gate.
 
 Reason: projects like nova are 100% orthogonal to neutron advanced
 services. Also, there have been episodes in the past of unreliability of
 tests for these services, and it would be good to limit affected
 projects considering that more api tests and scenarios are being added.
 
 -
 
 So far the neutron full job runs tests (api and scenarios) for neutron
 core functionality as well as neutron advanced services, which run
 as neutron service plugin.
 
 It's highly unlikely, if not impossible, that changes in projects such
 as nova, glance or ceilometer can have an impact on the stability of
 these services.
 On the other hand, instability in these services can trigger gate
 failures in unrelated projects as long as tests for these services are
 run in the neutron full job in the integrated gate. There have already
  been several gate-breaking bugs in lbaas scenario tests and firewall api
  tests.
 Admittedly, advanced services do not have the same level of coverage as
 core neutron functionality. Therefore as more tests are being added,
 there is an increased possibility of unearthing dormant bugs.
 
  For this reason we are proposing to no longer run tests for neutron
 advanced services in the integrated gate, but keep them running on the
 neutron gate.
 This means we will have two neutron jobs:
 1) check-tempest-dsvm-neutron-full which will run only core neutron
 functionality
 2) check-tempest-dsvm-neutron-full-ext which will be what the neutron
 full job is today.
 
 The former will be part of the integrated gate, the latter will be part
 of the neutron gate.
 Considering that other integrating services should not have an impact on
 neutron advanced services, this should not make gate testing asymmetric.
 
 However, there might be exceptions for:
 - orchestration project like heat which in the future might leverage
 capabilities like load balancing
 - oslo-* libraries, as changes in them might have an impact on neutron
 advanced services, since they consume those libraries
 
 Another good question is whether extended tests should be performed as
 part of functional or tempest checks. My take on this is that scenario
 tests should always be part of tempest. On the other hand I reckon API
 tests should exclusively be part of functional tests, but as so far
 tempest is running a gazillion of API tests, this is probably a
 discussion for the medium/long term. 
 
 In order to add this new job there are a few patches under review:
 [1] and [2] Introduces the 'full-ext' job and devstack-gate support for it.
 [3] Are the patches implementing a blueprint which will enable us to
 specify for which extensions test should be executed.
 
 Finally, one more note about smoketests. Although we're planning to get
 rid of them soon, we still have failures in the pg job because of [4].
 For this reasons smoketests are still running for postgres in the
 integrated gate. As load balancing and firewall API tests are part of
 it, they should be removed from the smoke test executed on the
 integrated gate ([5], [6]). This is a temporary measure until the
 postgres issue is fixed.
 
 Regards,
 Salvatore
 
 [1] https://review.openstack.org/#/c/114933/
 [2] https://review.openstack.org/#/c/114932/
 [3] 
 https://review.openstack.org/#/q/status:open+branch:master+topic:bp/branchless-tempest-extensions,n,z
 [4] https://bugs.launchpad.net/nova/+bug/1305892
 [5] https://review.openstack.org/#/c/115022/
 [6] https://review.openstack.org/#/c/115023/

+1

I realistically think that we should think about neutron as 2 things.
The L2/L3 services, and the advanced services. L2/L3 seem appropriate to
ensure are tightly integrated to the rest of OpenStack. The advanced
services really are a different beast (and honestly might be better as a
separate OpenStack service that's not neutron).

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [third-party] [infra] New mailing lists for third party announcements and account requests

2014-09-03 Thread Elizabeth K. Joseph
On Fri, Aug 29, 2014 at 12:47 PM, Elizabeth K. Joseph
l...@princessleia.com wrote:
 Third-party-request

 This list is the new place to request the creation or modification of
 your third party account. Note that old requests sent to the
 openstack-infra mailing list don't need to be resubmitted, they are
 already in the queue for creation.

 It would also be helpful for third party operators to join this
 mailing list as well as the -announce list in order to reply when they
  can to distribute workload and support new participants to the third
 party community.

 http://lists.openstack.org/cgi-bin/mailman/listinfo/third-party-request

I learned this week that -request is a reserved namespace in mailman, oops :)

The mailing list has been renamed to third-party-requests:

http://lists.openstack.org/cgi-bin/mailman/listinfo/third-party-requests

If you've signed up already, you're still subscribed.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com



Re: [openstack-dev] [all] [glance] do NOT ever sort requirements.txt

2014-09-03 Thread Clark Boylan


On Wed, Sep 3, 2014, at 01:06 PM, Kuvaja, Erno wrote:
  -Original Message-
  From: Clark Boylan [mailto:cboy...@sapwetik.org]
  Sent: 03 September 2014 20:10
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [all] [glance] do NOT ever sort 
  requirements.txt
  
  
  
  On Wed, Sep 3, 2014, at 11:51 AM, Kuvaja, Erno wrote:
-Original Message-
From: Sean Dague [mailto:s...@dague.net]
Sent: 03 September 2014 13:37
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [all] [glance] do NOT ever sort
requirements.txt
   
I'm not sure why people keep showing up with sort requirements
patches like - https://review.openstack.org/#/c/76817/6, however, they
  do.
   
     All of these need to be -2ed with prejudice.
   
requirements.txt is not a declarative interface. The order is
important as pip processes it in the order it is. Changing the order
has impacts on the overall integration which can cause wedges later.
   
So please stop.
   
-Sean
   
--
Sean Dague
http://dague.net
   
  
   Hi Sean  all,
  
    Could you please open this up a little bit? What are we afraid of
    breaking regarding the order of these requirements? I tried to go
    through the pip documentation but could not find a reason for a
    specific order of the lines, though there were references to keeping
    the order.
  
   I'm now assuming one thing here as I do not know if that's the case.
    None of the packages enables/disables functionality depending on what
   has been installed on the system before, but they have their own
   dependencies to provide those. Based on this assumption I can think of
   only one scenario causing us issues. That is us abusing the example in
   point 2 of
   https://pip.pypa.io/en/latest/user_guide.html#requirements-files
    meaning: we install package X depending on package Y>=1.0,<2.0 before
    installing package Z depending on Y>=1.0 to ensure that package Y<2.0,
    without pinning package Y in our requirements.txt. I certainly hope
    that this is not the case, as depending on a 3rd party vendor
    providing us a specific version of a dependency package would be
    extremely stupid.
  
   Other than that I really don't know how the order could cause us
   issues, but I would be really happy to learn something new today if
   that is the case or if my assumption went wrong.
  
   Best Regards,
   Erno (jokke_) Kuvaja
  
  
  The issue is described in the bug that Josh linked
  (https://github.com/pypa/pip/issues/988). Basically pip doesn't do
  dependency resolution in a way that lets you treat requirements as order
  independent. For that to be the case pip would have to evaluate all
  dependencies together then install the intersection of those dependencies.
  Instead it iterates over the list(s) in order and evaluates each dependency 
  as
  it is found.
  
  Your example basically describes where this breaks. You can both depend on
  the same dependency at different versions and pip will install a version 
  that
  satisfies only one of the dependencies and not the other leading to a failed
  install. However I think a more common case is that openstack will pin a
  dependency and say Y>=1.0,<2.0 and the X dependency will say Y>=1.0. If
  the X dependency comes first you get version 2.5, which is not valid for your
  specification of Y>=1.0,<2.0, and pip fails.
  You fix this by listing Y before the X dependency that installs Y with less
  restrictive boundaries.
  
  Another example of a slightly different failure would be hacking, flake8,
  pep8, and pyflakes. Hacking installs a specific version of flake8, pep8, and
  pyflakes so that we do static lint checking with consistent checks each
  release. If you sort this list alphabetically instead of allowing hacking 
  to install
  its deps flake8 will come first and you can get a different version of pep8.
  Different versions of pep8 check different things and now the gate has
  broken.
  
  The most problematic thing is you can't count on your dependencies not
  breaking you if they come first (because they are evaluated first).
  So in cases where we know order is important (hacking and pbr and probably
  a handful of others) we should be listing them as early as possible in the
  requirements.
  
  Clark
  
 
 Thanks Clark,
 
 To be honest, neither the issue nor your explanation clarified 

Re: [openstack-dev] [nova] libvirt version_cap, a postmortem

2014-09-03 Thread Joe Gordon
On Sat, Aug 30, 2014 at 9:08 AM, Mark McLoughlin mar...@redhat.com wrote:


 Hey

 The libvirt version_cap debacle continues to come up in conversation and
 one perception of the whole thing appears to be:

   A controversial patch was ninjaed by three Red Hat nova-cores and
   then the same individuals piled on with -2s when a revert was proposed
   to allow further discussion.

 I hope it's clear to everyone why that's a pretty painful thing to hear.
 However, I do see that I didn't behave perfectly here. I apologize for
 that.

 In order to understand where this perception came from, I've gone back
 over the discussions spread across gerrit and the mailing list in order
 to piece together a precise timeline. I've appended that below.

 Some conclusions I draw from that tedious exercise:


Thank you for going through and doing this.



  - Some people came at this from the perspective that we already have
a firm, unwritten policy that all code must have functional written
tests. Others see that test all the things is interpreted as a
worthy aspiration, but is only one of a number of nuanced factors
that needs to be taken into account when considering the addition of
a new feature.


Confusion over our testing policy sounds like the crux of one of the issues
here. Having so many unwritten policies has led to confusion in the past
which is why I started
http://docs.openstack.org/developer/nova/devref/policies.html, hopefully by
writing these things down in the future this sort of confusion will arise
less often.

Until this whole debacle I didn't even know there was a dissenting opinion
on what our testing policy is. In every conversation I have seen up until
this point, the question was always how to raise the bar on testing.  I
don't expect us to be able to get to the bottom of this issue in a ML
thread, but hopefully we can begin the testing policy conversation here so
that we may be able to make a breakthrough at the summit.




i.e. the former camp saw Dan Smith's devref addition as attempting
to document an existing policy (perhaps even a more forgiving
version of an existing policy), whereas other see it as a dramatic
shift to a draconian implementation of test all the things.

  - Dan Berrange, Russell and I didn't feel like we were ninjaing a
controversial patch - you can see our perspective expressed in
multiple places. The patch would have helped the live snapshot
issue, and has other useful applications. It does not affect the
broader testing debate.

Johannes was a solitary voice expressing concerns with the patch,
and you could see that Dan was particularly engaged in trying to
address those concerns and repeating his feeling that the patch was
orthogonal to the testing debate.

That all being said - the patch did merge too quickly.

  - What exacerbates the situation - particularly when people attempt to
look back at what happened - is how spread out our conversations
are. You look at the version_cap review and don't see any of the
related discussions on the devref policy review nor the mailing list
threads. Our disjoint methods of communicating contribute to
misunderstandings.

  - When it came to the revert, a couple of things resulted in
misunderstandings, hurt feelings and frayed tempers - (a) that our
retrospective veto revert policy wasn't well understood and (b)
a feeling that there was private, in-person grumbling about us at
the mid-cycle while we were absent, with no attempt to talk to us
directly.


While I cannot speak for anyone else, I did grumble a bit at the mid-cycle
about the behavior on Dan's first devref patch,
https://review.openstack.org/#/c/103923/. This was the first time I saw 3
'-2's on a single patch revision. To me 1 or 2 '-2's gives the perception
of 'hold on there, lets discuss this more first,' but 3 '-2's is just
piling on and is very confrontational in nature. I was taken aback by this
behavior and still don't know what to say or even if my reaction is
justified.



 To take an even further step back - successful communities like ours
 require a huge amount of trust between the participants. Trust requires
 communication and empathy. If communication breaks down and the pressure
 we're all under erodes our empathy for each others' positions, then
 situations can easily get horribly out of control.

 This isn't a pleasant situation and we should all strive for better.
 However, I tend to measure our flamewars against this:

   https://mail.gnome.org/archives/gnome-2-0-list/2001-June/msg00132.html

 GNOME in June 2001 was my introduction to full-time open-source
 development, so this episode sticks out in my mind. The two individuals
 in that email were/are immensely capable and reasonable people, yet ...

 So far, we're doing pretty okay compared to that and many other
 open-source flamewars. Let's make sure we continue that way by avoiding
 

Re: [openstack-dev] [Glance][Nova][All] requests 2.4.0 breaks glanceclient

2014-09-03 Thread Ian Cordasco
On 9/3/14, 2:20 PM, Sean Dague s...@dague.net wrote:

On 09/03/2014 03:12 PM, Gregory Haynes wrote:
 Excerpts from Kuvaja, Erno's message of 2014-09-03 12:30:08 +:
 Hi All,

 While investigating glanceclient gating issues we narrowed it down to
requests 2.4.0, which was released 2014-08-29. Urllib3 seems to be
raising a new ProtocolError which does not get caught and breaks at
least glanceclient.
 Following error can be seen on console ProtocolError: ('Connection
aborted.', gaierror(-2, 'Name or service not known')).

  Unfortunately we hit this issue just before the freeze. Apparently
this breaks novaclient as well and there is change
(https://review.openstack.org/#/c/118332/ )proposed to requirements to
limit the version 2.4.0.

 Is there any other projects using requirements and seeing issues with
the latest version?
 
 We've run into this in tripleo, specifically with os-collect-config.
 Here's the upstream bug:
 https://github.com/kennethreitz/requests/issues/2192
 
 We had to pin it in our project to unwedge CI (otherwise we would be
 blocked on cutting an os-collect-config release).

Ok, given the details of the bug, I'd be ok with a != 2.4.0. It looks
like they are working on a merge now.

There’s a patch waiting to be merged here:
https://github.com/kennethreitz/requests/pull/2193. Unfortunately, it
might take a while for Kenneth to show up, merge it, and cut a minor
release.
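Until that release lands, client code can defend itself by treating the new exception as retriable alongside the old one. A self-contained sketch of the failure mode — the exception classes below are stand-ins, not glanceclient's actual code:

```python
# The breakage: handlers written for requests.exceptions.ConnectionError
# suddenly see urllib3's ProtocolError escape uncaught under requests 2.4.0,
# because 2.4.0 stopped wrapping it. Stand-in classes keep this runnable.

class ConnectionError(Exception):
    """Stand-in for requests.exceptions.ConnectionError."""

class ProtocolError(Exception):
    """Stand-in for urllib3.exceptions.ProtocolError."""

def fetch(new_requests):
    # Simulates an HTTP call failing on a DNS error.
    if new_requests:
        raise ProtocolError("Connection aborted.")  # requests 2.4.0 behavior
    raise ConnectionError("Connection aborted.")    # requests < 2.4.0 behavior

def old_handler(new_requests):
    # Handler written before 2.4.0: ProtocolError escapes it entirely.
    try:
        fetch(new_requests)
    except ConnectionError:
        return "handled"

# Defensive variant: catch both until requests wraps the new exception again.
RETRIABLE = (ConnectionError, ProtocolError)

def defensive_handler(new_requests):
    try:
        fetch(new_requests)
    except RETRIABLE:
        return "handled"
```

The != 2.4.0 pin avoids the problem entirely; the sketch just shows why handlers written against the old exception hierarchy miss the new error.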

-
Ian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][qa][neutron] Neutron full job, advanced services, and the integrated gate

2014-09-03 Thread Salvatore Orlando
On 3 September 2014 22:10, Joe Gordon joe.gord...@gmail.com wrote:




 On Tue, Aug 26, 2014 at 4:47 PM, Salvatore Orlando sorla...@nicira.com
 wrote:

 TL; DR
 A few folks are proposing to stop running tests for neutron advanced
 services [ie: (lb|vpn|fw)aas] in the integrated gate, and run them only on
 the neutron gate.

 Reason: projects like nova are 100% orthogonal to neutron advanced
 services. Also, there have been episodes in the past of unreliability of
 tests for these services, and it would be good to limit affected projects
 considering that more api tests and scenarios are being added.

 -

  So far the neutron full job runs tests (api and scenarios) for neutron
  core functionality as well as neutron advanced services, which run as
  neutron service plugins.

 It's highly unlikely, if not impossible, that changes in projects such as
 nova, glance or ceilometer can have an impact on the stability of these
 services.
 On the other hand, instability in these services can trigger gate
 failures in unrelated projects as long as tests for these services are run
 in the neutron full job in the integrated gate. There have already been
  several gate-breaking bugs in lbaas scenario tests and firewall api tests.
 Admittedly, advanced services do not have the same level of coverage as
 core neutron functionality. Therefore as more tests are being added, there
 is an increased possibility of unearthing dormant bugs.


 I support this split but for slightly different reasons.  I am under the
 impression that neutron advanced services are not ready for prime time. If
 that is correct I don't think we should be gating on things that aren't
 ready.


I deliberately avoided going into this field in my first post as I did not
want my personal opinions to appear as those of the Neutron project core
team.
Neutron has so far 5 service plugins. Of those I believe l3 and metering
are part of what is neutron core functionality, and, as stated by Sean,
should be tested as part of the integrated gate. Metering is a bit of an
accessory service so I'm +/- 0 on whether it should be part or not of
integrated gate tests.

For load balancing, v1 has been considered fairly stable for a while.
However, as it's being overhauled with lbaas v2 activities, I might
question its production readiness.
For VPN, we just do not have yet enough data points to assess its stability
in the gate (no scenario test), or production readiness.
For firewall, my impression is that it is still considered an experimental
feature, but I might be mistaken.

Considering the above I would also subscribe to Joe's point - under the
assumption that only things that are production ready should be tested in
the integrated gate.





  For this reason we are proposing to no longer run tests for neutron
  advanced services in the integrated gate, but keep them running on the
 neutron gate.
 This means we will have two neutron jobs:
 1) check-tempest-dsvm-neutron-full which will run only core neutron
 functionality
 2) check-tempest-dsvm-neutron-full-ext which will be what the neutron
 full job is today.


 Using my breakdown, the extended job would include experimental neutron
 features.




 The former will be part of the integrated gate, the latter will be part
 of the neutron gate.
 Considering that other integrating services should not have an impact on
 neutron advanced services, this should not make gate testing asymmetric.

 However, there might be exceptions for:
 - orchestration project like heat which in the future might leverage
 capabilities like load balancing
 - oslo-* libraries, as changes in them might have an impact on neutron
 advanced services, since they consume those libraries


 Once another service starts consuming an advanced feature I think it makes
 sense to move it to the main neutron-full job. Especially if we assume that
 things will only depend on neutron features that are not too experimental.


Correct. Shifting services from neutron's full-ext to the integrated gate
full job should be easy especially if these projects are spun out.




 Another good question is whether extended tests should be performed as
 part of functional or tempest checks. My take on this is that scenario
 tests should always be part of tempest. On the other hand I reckon API
 tests should exclusively be part of functional tests, but as so far tempest
 is running a gazillion of API tests, this is probably a discussion for the
 medium/long term.

 In order to add this new job there are a few patches under review:
 [1] and [2] Introduces the 'full-ext' job and devstack-gate support for
 it.
 [3] Are the patches implementing a blueprint which will enable us to
 specify for which extensions test should be executed.

 Finally, one more note about smoketests. Although we're planning to get
 rid of them soon, we still have failures in the pg job because of [4]. For
 this reasons smoketests are still running for postgres in the integrated
 gate. As load balancing 

Re: [openstack-dev] [neutron] [third-party] collecting recheck command to re-trigger third party CI

2014-09-03 Thread Kevin Benton
+1

It would also be really helpful if the recheck command is shown in the
output of the results from each CI so the developer doesn't even have to
visit this page.


On Wed, Sep 3, 2014 at 11:03 AM, Akihiro Motoki amot...@gmail.com wrote:

 Hi Neutron team,

 There are many third party CI in Neutron and we sometimes/usually
 want to retrigger third party CI to confirm results.
  The comment syntax varies across third-party CIs, so I think it is useful
  to gather the recheck commands in one place. I struggled to figure out
  how to re-run a specific CI.

  I added a recheck command column to the list of Neutron plugins and
  drivers [1].
  Could you add the recheck command for your CI to the table?
  If it is not available, please add N/A.

 Note that supporting recheck is one of the requirements of third party
 testing. [2]
 I understand not all CIs support it due to various reasons, but
 collecting it is useful for developers and reviewers.

  The syntax of the recheck command is under discussion in an infra review [3].
  I believe the recheck command column will still be useful even after
  an official syntax is defined, because it is not easy to know
  each CI system's name.

 [1]
 https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Existing_Plugin_and_Drivers
 [2] http://ci.openstack.org/third_party.html#requirements
 [3] https://review.openstack.org/#/c/118623/

 Thanks,
 Akihiro

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton


Re: [openstack-dev] [all] [glance] do NOT ever sort requirements.txt

2014-09-03 Thread Kuvaja, Erno
 -Original Message-
 From: Clark Boylan [mailto:cboy...@sapwetik.org]
 Sent: 03 September 2014 21:57
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [all] [glance] do NOT ever sort requirements.txt
 
 
 
 On Wed, Sep 3, 2014, at 01:06 PM, Kuvaja, Erno wrote:
   -Original Message-
   From: Clark Boylan [mailto:cboy...@sapwetik.org]
   Sent: 03 September 2014 20:10
   To: openstack-dev@lists.openstack.org
   Subject: Re: [openstack-dev] [all] [glance] do NOT ever sort
   requirements.txt
  
  
  
   On Wed, Sep 3, 2014, at 11:51 AM, Kuvaja, Erno wrote:
 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: 03 September 2014 13:37
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [all] [glance] do NOT ever sort
 requirements.txt

 I'm not sure why people keep showing up with sort requirements
 patches like - https://review.openstack.org/#/c/76817/6,
 however, they
   do.

 All of these need to be -2ed with predjudice.

 requirements.txt is not a declarative interface. The order is
 important as pip processes it in the order it is. Changing the
 order has impacts on the overall integration which can cause wedges
 later.

 So please stop.

   -Sean

 --
 Sean Dague
 http://dague.net

   
Hi Sean  all,
   
     Could you please expand on this a little bit? What are we afraid of
     breaking regarding the order of these requirements? I tried to go
     through the pip documentation but could not find a reason for a specific
     order of the lines, though there were references to keeping the order.
   
     I'm now assuming one thing here, as I do not know if that's the case.
     None of the packages enables/disables functionality depending on
     what has been installed on the system before, but they have their
     own dependencies to provide those. Based on this assumption I can
     think of only one scenario causing us issues. That is us abusing
     the example in point 2 of
     https://pip.pypa.io/en/latest/user_guide.html#requirements-files
     meaning; we install package X depending on package Y>=1.0,<2.0
     before installing package Z depending on Y>=1.0, to ensure that
     we get package Y<2.0 without pinning package Y in our requirements.txt. I
     certainly hope that this is not the case, as depending on a 3rd party
     vendor to provide us a specific
     version of a dependency package would be extremely stupid.
   
     Other than that I really don't know how the order could cause us
     issues, but I would be really happy to learn something new today
     if that is the case or if my assumption is wrong.
   
Best Regards,
Erno (jokke_) Kuvaja
   
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-de
 v
   
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
   The issue is described in the bug that Josh linked
   (https://github.com/pypa/pip/issues/988). Basically pip doesn't do
   dependency resolution in a way that lets you treat requirements as
   order independent. For that to be the case pip would have to
   evaluate all dependencies together then install the intersection of those
 dependencies.
   Instead it iterates over the list(s) in order and evaluates each
   dependency as it is found.
  
   Your example basically describes where this breaks. You can both
   depend on the same dependency at different versions and pip will
   install a version that satisfies only one of the dependencies and
   not the other leading to a failed install. However I think a more
    common case is that openstack will pin a dependency and say
    Y>=1.0,<2.0 and the X dependency will say Y>=1.0. If the X
    dependency comes first you get version 2.5, which is not valid for your
  specification of Y>=1.0,<2.0, and pip fails.
    You fix this by listing Y before the X dependency that installs Y with
    less restrictive boundaries.
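This order dependence can be reproduced with a toy model of the greedy, first-specifier-wins resolution pip does (package Y, its versions, and both specifiers are invented for illustration):

```python
# Toy model of pip's pre-resolver behavior: requirements are processed in
# order, and the first specifier seen for a package decides its version;
# later conflicting specifiers are never reconciled (pip issue #988).

AVAILABLE = {"Y": ["1.0", "1.5", "2.0", "2.5"]}  # oldest to newest

def _key(version):
    # "1.5" -> (1, 5), so versions compare numerically, not lexically.
    return tuple(int(part) for part in version.split("."))

def greedy_install(requirements):
    installed = {}
    for name, satisfies in requirements:
        if name in installed:
            continue  # an earlier specifier already won
        candidates = [v for v in AVAILABLE[name] if satisfies(v)]
        installed[name] = max(candidates, key=_key)  # pip takes newest match
    return installed

# Our pin "Y>=1.0,<2.0" vs. a transitive dependency's looser "Y>=1.0".
pin = ("Y", lambda v: _key("1.0") <= _key(v) < _key("2.0"))
loose = ("Y", lambda v: _key(v) >= _key("1.0"))

print(greedy_install([pin, loose]))  # pin first  -> {'Y': '1.5'}
print(greedy_install([loose, pin]))  # loose first -> {'Y': '2.5'}, pin violated
```

Sorting requirements.txt alphabetically is precisely the kind of reordering that changes which specifier pip sees first.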
  
   Another example of a slightly different failure would be hacking,
   flake8, pep8, and pyflakes. Hacking installs a specific version of
   flake8, pep8, and pyflakes so that we do static lint checking with
   consistent checks each release. If you sort this list alphabetically
   instead of allowing hacking to install its deps flake8 will come first 
   and you
 can get a different version of pep8.
   Different versions of pep8 check different things and now the gate
   has broken.
  
    The most problematic thing is you can't count on your dependencies
    not breaking you if they come first (because they are evaluated
  first).
   So in cases where we know order is important (hacking and pbr and
   probably a handful of others) we should 

Re: [openstack-dev] [neutron] [third-party] collecting recheck command to re-trigger third party CI

2014-09-03 Thread Anita Kuno
On 09/03/2014 02:03 PM, Akihiro Motoki wrote:
 Hi Neutron team,
 
 There are many third party CI in Neutron and we sometimes/usually
 want to retrigger third party CI to confirm results.
 A comment syntax varies across third party CI, so I think it is useful
 to gather recheck command in one place. I struggled to know how to
 rerun a specific CI.
 
 I added to recheck command column in the list of Neutron plugins and
 drivers [1].
 Could you add recheck command of your CI in the table?
 If it is not available, please add N/A.
 
 Note that supporting recheck is one of the requirements of third party
 testing. [2]
 I understand not all CIs support it due to various reasons, but
 collecting it is useful for developers and reviewers.
 
 A syntax of recheck command is under discussion in infra review [3].
 I believe the column of recheck command is still useful even after
 the official syntax is defined because it is not an easy thing to know
 each CI system name.
 
 [1] 
 https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Existing_Plugin_and_Drivers
 [2] http://ci.openstack.org/third_party.html#requirements
 [3] https://review.openstack.org/#/c/118623/
 
 Thanks,
 Akihiro
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
Let's add the infra meeting log where this was discussed as reference
material as well.

http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-08-19-19.02.log.html
timestamp: 19:48:38

Thanks,
Anita.



[openstack-dev] [all] Alpha release of Congress

2014-09-03 Thread Tim Hinrichs
Hi all,

The alpha release of Congress is now available!  We'd love any and all feedback.

Components and Features
- Support for policy monitoring
- Policy engine implementation
- Message-passing architecture
- Drivers for Nova and Neutron
- Devstack integration
- Documentation
- REST API

README (with install instructions):
https://github.com/stackforge/congress/blob/master/README.rst

Tutorial:
https://github.com/stackforge/congress/blob/master/doc/source/tutorial-tenant-sharing.rst

Troubleshooting:
https://github.com/stackforge/congress/blob/master/doc/source/troubleshooting.rst

Contributors (IRC, Reviews, Code):
Sergio Cazzolato (sjcazzol)
Peter Balland (pballand)
Mohammad Banikazemi (banix)
Rajdeep Dua (rajdeep)
Conner Ferguson (Radu)
Tim Hinrichs (thinrichs)
Gokul Kandiraju (gokul)
Harrison Kelly (harrisonkelly)
Prabhakar Kudva (kudva)
Susanta Nanda (skn)
Sean Roberts (sarob)
Iben Rodriguez (iben)
Aaron Rosen (arosen)
Derick Winkworth (cloudtoad)
Alex Yip (alexsyip)


Sincerely,
The Congress team




[openstack-dev] [oslo.messaging][feature freeze exception] Can I get an exception for AMQP 1.0?

2014-09-03 Thread Ken Giusti
On Wed Sep 3 19:23:52 UTC 2014, Doug Hellmann wrote:
On Sep 3, 2014, at 2:03 PM, Ken Giusti kgiusti at gmail.com wrote:
 Hello,

 I'm proposing a freeze exception for the oslo.messaging AMQP 1.0
 driver:

   https://review.openstack.org/#/c/75815/
SNIP
 Thanks,

 Ken

Ken,

I think we’re generally in favor of including the new driver, but
before I say so officially can you fill us in on the state of the
additional external libraries it needs? I see pyngus on pypi, and you
mention in the “Request to include AMQP 1.0 support in Juno-3” thread
that proton is being packaged in EPEL and work is ongoing for
Debian. Is that done (it has only been a few days, I know)?


Hi Doug,

Yes, AMQP 1.0 tech is pretty new, so the availability of the packages
is the sticking point.

That said, there are two different dependencies to consider:
oslo.messaging dependencies, and AMQP 1.0 support in the brokers.

For oslo.messaging, the dependency is on the Proton C developer
libraries.  This library is currently available on EPEL for centos6+
and fedora. It is not available in the Ubuntu repos yet - though they
have recently been accepted to Debian sid.  For developers, the QPID
project maintains a PPA that can be used to get the packages on
Debian/Ubuntu (though this is not acceptable for openstack CI
support).  The python bindings
that interface with this library are available on pypi (see the
amqp1-requirements.txt in the driver patch).

Qpid upstream has been shipping with 1.0 support for a while now, but
unfortunately the popular distros don't have the latest Qpid brokers
available.  Qpid with AMQP 1.0 support is available via EPEL for
centos7 and fedora. Red Hat deprecated Qpid on rhel6, so now centos6 is
stuck with an old version of qpid since we can't override base
packages in EPEL.  Same deal with Debian/Ubuntu, though the QPID PPA
should have the latest packages (I'll have to follow up on that).


I would like to avoid having that packaging work be a blocker, so if
we do include the driver, what do you think is the best way to convey
the instructions for installing those packages? I know you’ve done
some work recently on documenting the experimental status, did that
include installation tips?

Not at present, but that's what I'm working on at the moment.  I'll
definitely call out the installation dependencies and configuration
settings necessary for folks to get the driver up and running with a
minimum of pain.

I assume I'll have to update at least:

The wiki: https://wiki.openstack.org/wiki/Oslo/Messaging
The user manuals:
http://docs.openstack.org/icehouse/config-reference/content/configuring-rpc.html

I can also add a README to the protocols/amqp directory, if that makes sense.
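For what it's worth, the minimum such a README needs to convey is the driver selection itself. A rough sketch of what that might look like — the option value is my assumption based on the transport URL scheme, not settled documentation, and should be checked against the merged driver:

```
[DEFAULT]
# Select the AMQP 1.0 driver via the transport URL scheme.
# (Scheme name is an assumption -- verify once the driver merges.)
transport_url = amqp://guest:guest@broker-host:5672/
```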


Thanks,
Doug



[openstack-dev] [sahara] integration tests in python-saharaclient

2014-09-03 Thread Andrew Lazarev
Hi team,

Today I realized that we have some tests called 'integration'
in python-saharaclient. I've also found out that Jenkins doesn't use them
and they haven't been runnable since April because of a typo in tox.ini.

Does anyone know what these tests are? Does anyone mind if I delete them
since we don't use them anyway?

Thanks,
Andrew.


Re: [openstack-dev] [neutron][IPv6] Neighbor Discovery for HA

2014-09-03 Thread Carl Baldwin
It should be noted that send_arp_for_ha is a configuration option
that preceded the more recent in-progress work to add VRRP controlled
HA to Neutron's router.  The option was added, I believe, to cause the
router to send (by default) 3 GARPs to the external gateway if the router
was removed from one network node and added to another by some
external script or manual intervention.  It did not send anything on
the internal network ports.

VRRP is a different story and the code in review [1] sends GARPs on
internal and external ports.

Hope this helps avoid confusion in this discussion.

Carl

[1] https://review.openstack.org/#/c/70700/37/neutron/agent/l3_ha_agent.py
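For IPv6 the equivalent of those GARPs is an unsolicited neighbor advertisement (RFC 4861, ICMPv6 type 136) with the Router and Override flags set. A stdlib-only sketch of building one — the addresses and MAC are placeholders, and a real agent would push the bytes through a raw socket or defer to a tool like ndsend:

```python
import socket
import struct

def icmpv6_checksum(src, dst, payload):
    # RFC 4443: checksum over the IPv6 pseudo-header (src, dst, length,
    # next-header 58) plus the ICMPv6 payload, ones-complement folded.
    pseudo = src + dst + struct.pack("!I", len(payload)) + b"\x00\x00\x00\x3a"
    data = pseudo + payload
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def unsolicited_na(src, target, mac):
    # Unsolicited NA per RFC 4861 s4.4: R=1 (we are a router), S=0 (not a
    # reply to a solicitation), O=1 (override any cached link-layer address).
    dst = "ff02::1"  # all-nodes multicast
    flags = 0b101 << 29
    body = struct.pack("!BBHI", 136, 0, 0, flags)          # type/code/csum/flags
    body += socket.inet_pton(socket.AF_INET6, target)       # target address
    # Option type 2: target link-layer address, length in 8-octet units.
    body += struct.pack("!BB", 2, 1) + bytes.fromhex(mac.replace(":", ""))
    csum = icmpv6_checksum(socket.inet_pton(socket.AF_INET6, src),
                           socket.inet_pton(socket.AF_INET6, dst), body)
    return body[:2] + struct.pack("!H", csum) + body[4:]

pkt = unsolicited_na("fe80::f816:3eff:fe00:1", "fe80::f816:3eff:fe00:1",
                     "fa:16:3e:00:00:01")
```

Sending the packet needs a raw ICMPv6 socket (root), which is part of why deferring to the RA/radvd approach discussed in this thread is attractive.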

On Mon, Sep 1, 2014 at 8:52 PM, Xu Han Peng pengxu...@gmail.com wrote:
 Anthony,

 Thanks for your reply.

 If an HA method like VRRP is used for an IPv6 router, according to the VRRP RFC
 with IPv6 included, the servers should be auto-configured with the active
 router's LLA as the default route before the failover happens and still
 retain that route after the failover. In other words, there should be no need
 to use two LLAs for the default route of a subnet unless load balancing is
 required.

 When the backup router becomes the master router, it should be
 responsible for sending out an unsolicited ND neighbor advertisement with
 the associated LLA (the previous master's LLA) immediately to update the
 bridge learning state, and sending out router advertisements with the same
 options as the previous master to maintain the route and bridge learning.

 This is shown in http://tools.ietf.org/html/rfc5798#section-4.1 and the
 actions backup router should take after failover is documented here:
 http://tools.ietf.org/html/rfc5798#section-6.4.2. The need for immediate
 messaging sending and periodic message sending is documented here:
 http://tools.ietf.org/html/rfc5798#section-2.4

 Since the keepalived manager support for L3 HA is merged
 (https://review.openstack.org/#/c/68142/43) and keepalived release 1.2.0
 supports VRRP IPv6 features (http://www.keepalived.org/changelog.html, see
 Release 1.2.0 | VRRP IPv6 Release), I think we can check whether keepalived
 can satisfy our requirements here and whether it will conflict with
 RADVD.
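If keepalived does fit, a rough sketch of what an IPv6 VRRP instance might look like — the interface name, VRID, priorities, and addresses are placeholders, and the exact option set should be verified against the keepalived 1.2.x documentation:

```
vrrp_instance router_v6 {
    state BACKUP
    interface qr-xxxx          # router's internal port (placeholder name)
    virtual_router_id 51
    priority 50                # lower than the master's priority
    advert_int 2
    virtual_ipaddress {
        # the shared gateway LLA that fails over between routers
        fe80::1/64 dev qr-xxxx
    }
}
```

The open question would then be whether keepalived's failover NA/RA behavior coexists cleanly with the radvd processes neutron already manages.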

 Thoughts?

 Xu Han


 On 08/28/2014 10:11 PM, Veiga, Anthony wrote:



 Anthony and Robert,

 Thanks for your reply. I don't know if the arping is there for NAT, but I am
 pretty sure it's there for the HA setup, to broadcast the router's own change, since
 the arping is controlled by the send_arp_for_ha config option. By checking the man page
 of arping, you can find that the arping -A we use in the code sends out an ARP
 REPLY instead of an ARP REQUEST. This is like saying I am here instead of
 where are you. I didn't realize this either until Brian pointed this out
 in my code review below.


 That’s what I was trying to say earlier.  Sending out the RA is the same
 effect.  RA says “I’m here, oh and I’m also a router” and should supersede
 the need for an unsolicited NA.  The only thing to consider here is that RAs
 are from LLAs.  If you’re doing IPv6 HA, you’ll need to have two gateway IPs
 for the RA of the standby to work.  So far as I know, I think there’s still
 a bug out on this since you can only have one gateway per subnet.



 http://linux.die.net/man/8/arping

 https://review.openstack.org/#/c/114437/2/neutron/agent/l3_agent.py

 Thoughts?

 Xu Han


 On 08/27/2014 10:01 PM, Veiga, Anthony wrote:


 Hi Xuhan,

 What I saw is that GARP is sent to the gateway port and also to the router
 ports, from a neutron router. I’m not sure why it’s sent to the router ports
 (internal network). My understanding for arping to the gateway port is that
 it is needed for proper NAT operation. Since we are not planning to support
 ipv6 NAT, so this is not required/needed for ipv6 any more?


 I agree that this is no longer necessary.


 There is an abandoned patch that disabled the arping for ipv6 gateway port:
 https://review.openstack.org/#/c/77471/3/neutron/agent/l3_agent.py

 thanks,
 Robert

 On 8/27/14, 1:03 AM, Xuhan Peng pengxu...@gmail.com wrote:

 As a follow-up action of yesterday's IPv6 sub-team meeting, I would like to
 start a discussion about how to support l3 agent HA when IP version is IPv6.

 This problem is triggered by bug [1] where sending gratuitous arp packet for
 HA doesn't work for IPv6 subnet gateways. This is because neighbor discovery
 instead of ARP should be used for IPv6.

 My thought to solve this problem turns into how to send out neighbor
 advertisement for IPv6 routers just like sending ARP reply for IPv4 routers
 after reading the comments on code review [2].

 I searched for utilities which can do this and only find a utility called
 ndsend [3] as part of vzctl on ubuntu. I could not find similar tools on
 other linux distributions.

 There are comments in yesterday's meeting that it's the new router's job to
 send out RA and there is no need for neighbor discovery. But we didn't get
 enough time to 

[openstack-dev] [Nova] [feature freeze exception] Move to oslo.db

2014-09-03 Thread Andrey Kurilin
Hi All!

I'd like to ask for a feature freeze exception for porting nova to use
oslo.db.

This change not only removes 3k LOC, but also fixes 4 bugs (see the commit
message for more details) and provides relevant, stable common db code.

Main maintainers of oslo.db(Roman Podoliaka and Victor Sergeyev) are OK
with this.

Joe Gordon and Matt Riedemann have already signed up, so we need one more
vote from a core developer.

By the way, a lot of core projects have already been using oslo.db for a
while: keystone, cinder, glance, ceilometer, ironic, heat, neutron and
sahara. So the migration to oslo.db won't produce any unexpected issues.

Patch is here: https://review.openstack.org/#/c/101901/

--
Best regards,
Andrey Kurilin.


Re: [openstack-dev] [neutron][IPv6] Neighbor Discovery for HA

2014-09-03 Thread Martinx - ジェームズ
Sounds impressive!   :-D


On 1 September 2014 23:52, Xu Han Peng pengxu...@gmail.com wrote:

  Anthony,

 Thanks for your reply.

  If an HA method like VRRP is used for an IPv6 router, according to the VRRP RFC
  with IPv6 included, the servers should be auto-configured with the active
  router's LLA as the default route before the failover happens and still
  retain that route after the failover. In other words, there should be no
  need to use two LLAs for the default route of a subnet unless load balancing is
  required.

  When the backup router becomes the master router, it should
  be responsible for sending out an unsolicited ND neighbor advertisement
  with the associated LLA (the previous master's LLA) immediately to update
  the bridge learning state, and sending out router advertisements with the
  same options as the previous master to maintain the route and bridge
  learning.

 This is shown in http://tools.ietf.org/html/rfc5798#section-4.1 and the
 actions backup router should take after failover is documented here:
 http://tools.ietf.org/html/rfc5798#section-6.4.2. The need for immediate
 messaging sending and periodic message sending is documented here:
 http://tools.ietf.org/html/rfc5798#section-2.4

 Since the keepalived manager support for L3 HA is merged:
 https://review.openstack.org/#/c/68142/43. And keepalived release 1.2.0
 supports VRRP IPv6 features ( http://www.keepalived.org/changelog.html,
 see Release 1.2.0 | VRRP IPv6 Release). I think we can check if keepalived
 can satisfy our requirement here and if that will cause any conflicts with
 RADVD.

 Thoughts?

 Xu Han


 On 08/28/2014 10:11 PM, Veiga, Anthony wrote:



   Anthony and Robert,

  Thanks for your reply. I don't know if the arping is there for NAT, but I
  am pretty sure it's there for the HA setup, to broadcast the router's own change,
  since the arping is controlled by the send_arp_for_ha config option. By checking
  the man page of arping, you can find that the arping -A we use in the code sends
  out an ARP REPLY instead of an ARP REQUEST. This is like saying I am here instead
  of where are you. I didn't realize this either until Brian pointed this
  out in my code review below.


  That’s what I was trying to say earlier.  Sending out the RA is the same
 effect.  RA says “I’m here, oh and I’m also a router” and should supersede
 the need for an unsolicited NA.  The only thing to consider here is that
 RAs are from LLAs.  If you’re doing IPv6 HA, you’ll need to have two
 gateway IPs for the RA of the standby to work.  So far as I know, I think
 there’s still a bug out on this since you can only have one gateway per
 subnet.



 http://linux.die.net/man/8/arping

 https://review.openstack.org/#/c/114437/2/neutron/agent/l3_agent.py

 Thoughts?

 Xu Han


 On 08/27/2014 10:01 PM, Veiga, Anthony wrote:


 Hi Xuhan,

  What I saw is that GARP is sent to the gateway port and also to the
 router ports, from a neutron router. I’m not sure why it’s sent to the
 router ports (internal network). My understanding for arping to the gateway
 port is that it is needed for proper NAT operation. Since we are not
 planning to support ipv6 NAT, so this is not required/needed for ipv6 any
 more?


  I agree that this is no longer necessary.


  There is an abandoned patch that disabled the arping for ipv6 gateway
 port:  https://review.openstack.org/#/c/77471/3/neutron/agent/l3_agent.py

  thanks,
 Robert

   On 8/27/14, 1:03 AM, Xuhan Peng pengxu...@gmail.com wrote:

   As a follow-up action of yesterday's IPv6 sub-team meeting, I would
 like to start a discussion about how to support l3 agent HA when IP version
 is IPv6.

  This problem is triggered by bug [1] where sending gratuitous arp packet
 for HA doesn't work for IPv6 subnet gateways. This is because neighbor
 discovery instead of ARP should be used for IPv6.

  My thought to solve this problem turns into how to send out neighbor
 advertisement for IPv6 routers just like sending ARP reply for IPv4 routers
 after reading the comments on code review [2].

  I searched for utilities which can do this and only find a utility
 called ndsend [3] as part of vzctl on ubuntu. I could not find similar
 tools on other linux distributions.

  There are comments in yesterday's meeting that it's the new router's job
 to send out RA and there is no need for neighbor discovery. But we didn't
 get enough time to finish the discussion.


  Because OpenStack runs the l3 agent, it is the router.  Instead of
 needing to do gratuitous ARP to alert all clients of the new MAC, a simple
 RA from the new router for the same prefix would accomplish the same,
 without having to resort to a special package to generate unsolicited NA
 packets.  RAs must be generated from the l3 agent anyway if it’s the
 gateway, and we’re doing that via radvd now.  The HA failover simply needs
 to start the proper radvd process on the secondary gateway and resume
 normal operation.


  Can you comment your thoughts about how to solve this problem in this
 

Re: [openstack-dev] [neutron] [third-party] collecting recheck command to re-trigger third party CI

2014-09-03 Thread Sukhdev Kapur
Incidentally, I replied to Kurt's email this morning on this subject -
below is what I wrote.
There are several threads with such information. While it is great to
express opinions on the MLs, converting them into decisions/conclusions
should be done in a formal fashion. I suggested that this be addressed in a
session in Kilo - hosted by Kyle/Edgar.

-Sukhdev
---

Hi Kurt,

We (Arista) were one of the early adopters of the CI systems. We built our
system based upon the Neutron requirements as of late last year/early this
year. Our CI has been up and operational since January of this year. This
was before (or in parallel to) Jay Pipes' effort on Zuul-based CIs.

We have invested a lot of effort in getting this done. In fact, I helped
many vendors set up their Jenkins masters/slaves, etc.
Additionally, we put in the effort to come up with a patch to support
recheck matching - as it was not supported in the Gerrit plugin.
Please see our wiki [1] which has a link to the Google doc describing the
recheck patch.

At the time the requirement was to support recheck no bug/bug#. Our
system is built to support this syntax.
The current Neutron Third party test systems are described in [2] and if
you look at the requirements described in [3], it states that a single
recheck should trigger all test systems.

Having said that, I understand the rationale of your argument. on this
thread, and actually agree with your point.
I have seen similar comments on various ML threads.

My suggestion is that this should be done in a coordinated manner so that
everybody understands the requirements, rather than simply throwing it on
the mailing list and assuming it is accepted by everybody. That is what
leads to confusion. Some people will take it as marching orders;
others may miss the thread and completely miss the communication.

Kyle Mestry (Neutron PTL) and Edgar Magana (Neutron Core) are proposing a
session at Kilo Summit in Paris to cover third party CI systems.
I would propose that you please coordinate with them and get your point of
view incorporated into this session. I have copied them both on this email
so that they can share their wisdom on the subject as well.

Thanks for all the good work by you and the infra team - making things
easier for us.

regards..
-Sukhdev

[1] https://wiki.openstack.org/wiki/Arista-third-party-testing
[2] https://wiki.openstack.org/wiki/NeutronThirdPartyTesting
[3] http://ci.openstack.org/third_party.html


On Wed, Sep 3, 2014 at 2:34 PM, Anita Kuno ante...@anteaya.info wrote:

 On 09/03/2014 02:03 PM, Akihiro Motoki wrote:
  Hi Neutron team,
 
  There are many third party CIs in Neutron, and we sometimes/usually
  want to retrigger a third party CI to confirm results.
  The comment syntax varies across third party CIs, so I think it is useful
  to gather the recheck commands in one place. I struggled to find out how
  to rerun a specific CI.
 
  I added a recheck command column to the list of Neutron plugins and
  drivers [1].
  Could you add the recheck command of your CI to the table?
  If it is not available, please add N/A.
 
  Note that supporting recheck is one of the requirements of third party
  testing. [2]
  I understand not all CIs support it due to various reasons, but
  collecting them is useful for developers and reviewers.
 
  A syntax of recheck command is under discussion in infra review [3].
  I believe the column of recheck commands is still useful even after
  the official syntax is defined, because it is not an easy thing to know
  each CI system's name.
 
  [1]
 https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Existing_Plugin_and_Drivers
  [2] http://ci.openstack.org/third_party.html#requirements
  [3] https://review.openstack.org/#/c/118623/
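The fragmentation being described can be made concrete with a small, hypothetical trigger filter. Real CI systems each pick their own pattern today (which is exactly the problem), but a Gerrit-comment matcher often boils down to a regex along these lines; the pattern and function name here are illustrative assumptions, not any project's actual code:

```python
import re

# Hypothetical matcher -- each CI picks its own pattern today, which is
# the fragmentation under discussion; this is just one plausible shape.
RECHECK_RE = re.compile(r'^\s*recheck(?:\s+(?:no\s+bug|bug\s+#?\d+))?\s*$',
                        re.IGNORECASE | re.MULTILINE)

def should_retrigger(comment):
    """Return True if a Gerrit comment should retrigger this CI."""
    return bool(RECHECK_RE.search(comment))

print(should_retrigger('recheck no bug'))                     # True
print(should_retrigger('Patch Set 3:\n\nrecheck bug #1234'))  # True
print(should_retrigger('please do not recheck this'))         # False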
 
  Thanks,
  Akihiro
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 Let's add the infra meeting log where this was discussed as reference
 material as well.


 http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-08-19-19.02.log.html
 timestamp: 19:48:38

 Thanks,
 Anita.



Re: [openstack-dev] [Ironic] (Non-)consistency of the Ironic hash ring implementation

2014-09-03 Thread Robert Collins
On 4 September 2014 00:13, Eoghan Glynn egl...@redhat.com wrote:


 On 09/02/2014 11:33 PM, Robert Collins wrote:
  The implementation in ceilometer is very different to the Ironic one -
  are you saying the test you linked fails with Ironic, or that it fails
  with the ceilometer code today?

 Disclaimer: in Ironic terms, node = conductor, key = host

 The test I linked fails with Ironic hash ring code (specifically the
 part that tests consistency). With 1000 keys being mapped to 10 nodes,
 when you add a node:
 - current ceilometer code remaps around 7% of the keys (~ 1/#nodes)
 - Ironic code remaps > 90% of the keys

 So just to underscore what Nejc is saying here ...

 The key point is the proportion of such baremetal-nodes that would
 end up being re-assigned when a new conductor is fired up.

That was 100% clear, but thanks for making sure.

The question was getting a proper understanding of why it was
happening in Ironic.

The ceilometer hash ring implementation is good, but it uses the same
terms very differently (e.g. replicas for partitions). I'm adapting
the key fix back into Ironic. I'd like to see us converge on a single
implementation, and making sure the Ironic one is suitable for
ceilometer seems applicable here (since ceilometer seems to need less
from the API).

If reassigning was cheap Ironic wouldn't have bothered having a hash ring :)
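The consistency property being debated can be sketched with a toy ring using the usual virtual-node scheme (an illustration only, not the Ironic or ceilometer code): adding an 11th node to a 10-node ring should move only the keys the new node takes over, i.e. roughly 1/#nodes of them.

```python
import bisect
import hashlib

class ToyHashRing:
    """Toy consistent hash ring with virtual nodes -- an illustration only,
    not the Ironic or ceilometer implementation."""

    def __init__(self, nodes, vnodes=100):
        self.ring = []  # sorted list of (position, node)
        for node in nodes:
            for i in range(vnodes):
                pos = self._hash('%s-%d' % (node, i))
                bisect.insort(self.ring, (pos, node))

    @staticmethod
    def _hash(data):
        return int(hashlib.md5(data.encode()).hexdigest(), 16)

    def get_node(self, key):
        # First virtual node clockwise from the key's position, with wraparound.
        idx = bisect.bisect(self.ring, (self._hash(key),)) % len(self.ring)
        return self.ring[idx][1]

keys = ['key-%d' % i for i in range(1000)]
before = ToyHashRing(['node-%d' % i for i in range(10)])
after = ToyHashRing(['node-%d' % i for i in range(11)])
moved = sum(before.get_node(k) != after.get_node(k) for k in keys)
# Only the keys the new node takes over move: roughly 1/11 of them here.
print('remapped %.1f%% of keys' % (100.0 * moved / len(keys)))
```

A naive `hash(key) % len(nodes)` scheme, by contrast, remaps almost every key when the node count changes, which is the kind of near-total reshuffle being reported for the Ironic code.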

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [neutron] New meeting rotation starting next week

2014-09-03 Thread Sukhdev Kapur
+1

It will be very useful to have Neutron-specific meetings on iCal,
considering that they will be moving around...

-Sukhdev



On Mon, Sep 1, 2014 at 6:58 PM, Kevin Benton blak...@gmail.com wrote:

 Unfortunately the master iCal has so many meetings that it's not useful to
 have it displayed as part of a normal calendar.
 I was hoping for a Neutron-specific one similar to TripleO's.


 On Mon, Sep 1, 2014 at 6:52 PM, Anne Gentle a...@openstack.org wrote:

 Look on https://wiki.openstack.org/wiki/Meetings for a link to an iCal
 feed of all OpenStack meetings.


 https://www.google.com/calendar/ical/bj05mroquq28jhud58esggq...@group.calendar.google.com/public/basic.ics





 On Mon, Sep 1, 2014 at 8:26 PM, Kevin Benton blak...@gmail.com wrote:

 Is it possible to put an iCal on the wiki so we can automatically see
 when meetings are updated/cancelled/moved?
  On Sep 1, 2014 6:23 PM, Kyle Mestery mest...@mestery.com wrote:

 Per discussion again today in the Neutron meeting, next week we'll
 start rotating the meeting. This will mean next week we'll meet on
 Tuesday (9-9-2014) at 1400 UTC in #openstack-meeting-alt.

 I've updated the Neutron meeting page [1] as well as the meeting wiki
 page [2] with the new details on the meeting page.

 Please add any agenda items to the page.

 Looking forward to seeing some new faces who can't normally join us at
 the 2100UTC slot!

 Thanks,
 Kyle

 [1] https://wiki.openstack.org/wiki/Network/Meetings
 [2] https://wiki.openstack.org/wiki/Meetings#Neutron_team_meeting





 --
 Kevin Benton



Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno

2014-09-03 Thread Joe Gordon
On Wed, Sep 3, 2014 at 2:50 AM, Nikola Đipanov ndipa...@redhat.com wrote:

 On 09/02/2014 09:23 PM, Michael Still wrote:
  On Tue, Sep 2, 2014 at 1:40 PM, Nikola Đipanov ndipa...@redhat.com
 wrote:
  On 09/02/2014 08:16 PM, Michael Still wrote:
  Hi.
 
  We're soon to hit feature freeze, as discussed in Thierry's recent
  email. I'd like to outline the process for requesting a freeze
  exception:
 
  * your code must already be up for review
  * your blueprint must have an approved spec
  * you need three (3) sponsoring cores for an exception to be
 granted
 
  Can core reviewers who have features up for review have this number
  lowered to two (2) sponsoring cores, as they in reality then need four
  (4) cores (since they themselves are one (1) core but cannot really
  vote) making it an order of magnitude more difficult for them to hit
  this checkbox?
 
  That's a lot of numbers in that there paragraph.
 
  Let me re-phrase your question... Can a core sponsor an exception they
  themselves propose? I don't have a problem with someone doing that,
  but you need to remember that does reduce the number of people who
  have agreed to review the code for that exception.
 

 Michael has correctly picked up on a hint of snark in my email, so let
 me explain where I was going with that:

 The reason many features including my own may not make the FF is not
 because there was not enough buy in from the core team (let's be
 completely honest - I have 3+ other core members working for the same
 company that are by nature of things easier to convince), but because of
 any of the following:


I find the statement about having multiple cores at the same company very
concerning. To quote Mark McLoughlin: "It is assumed that all core team
members are wearing their upstream hat and aren't there merely to
represent their employers' interests" [0]. Your statement appears to be in
direct conflict with Mark's idea of what a core reviewer is, an idea that
IMHO is one of the basic tenets of OpenStack development.

[0] http://lists.openstack.org/pipermail/openstack-dev/2013-July/012073.html




 * Crippling technical debt in some of the key parts of the code
 * that we have not been acknowledging as such for a long time
 * which leads to proposed code being arbitrarily delayed once it makes
 the glaring flaws in the underlying infra apparent
 * and that specs process has been completely and utterly useless in
 helping uncover (not that process itself is useless, it is very useful
 for other things)

 I am almost positive we can turn this rather dire situation around
 easily in a matter of months, but we need to start doing it! It will not
 happen through pinning arbitrary numbers to arbitrary processes.


Nova is big and complex enough that I don't think any one person is able to
identify what we need to work on to make things better. That is one of the
reasons why I have the project priorities patch [1] up. I would like to see
nova as a team discuss and come up with what we think we need to focus on
to get us back on track.


[1] https://review.openstack.org/#/c/112733/



 I will follow up with a more detailed email about what I believe we are
 missing, once the FF settles and I have applied some soothing creme to
 my burnout wounds, but currently my sentiment is:

 Contributing features to Nova nowadays SUCKS!!1 (even as a core
 reviewer) We _have_ to change that!


Yes, I can agree with you on this part, things in nova land are not good.



 N.

  Michael
 
  * exceptions must be granted before midnight, Friday this week
  (September 5) UTC
  * the exception is valid until midnight Friday next week
  (September 12) UTC when all exceptions expire
 
  For reference, our rc1 drops on approximately 25 September, so the
  exception period needs to be short to maximise stabilization time.
 
  John Garbutt and I will both be granting exceptions, to maximise our
  timezone coverage. We will grant exceptions as they come in and gather
  the required number of cores, although I have also carved some time
  out in the nova IRC meeting this week for people to discuss specific
  exception requests.
 
  Michael
 
 
 


Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno

2014-09-03 Thread Joe Gordon
On Wed, Sep 3, 2014 at 8:57 AM, Solly Ross sr...@redhat.com wrote:

  I will follow up with a more detailed email about what I believe we are
  missing, once the FF settles and I have applied some soothing creme to
  my burnout wounds, but currently my sentiment is:
 
  Contributing features to Nova nowadays SUCKS!!1 (even as a core
  reviewer) We _have_ to change that!

 I think this is *very* important.

 <rant>
 For instance, I have/had two patch series
 up. One is of length 2 and is relatively small.  It's basically sitting
 there
 with one +2 on each patch.  I will now most likely have to apply for a
 FFE
 to get it merged, not because there's more changes to be made before it
 can get merged
 (there was one small nit posted yesterday) or because it's a huge patch
 that needs a lot
 of time to review, but because it just took a while to get reviewed by
 cores,
 and still only appears to have been looked at by one core.

 For the other patch series (which is admittedly much bigger), it was hard
 just to
 get reviews (and it was something where I actually *really* wanted several
 opinions,
 because the patch series touched a couple of things in a very significant
 way).

 Now, this is not my first contribution to OpenStack, or to Nova, for that
 matter.  I
 know things don't always get in.  It's frustrating, however, when it seems
 like the
 reason something didn't get in wasn't because it was fundamentally flawed,
 but instead
 because it didn't get reviews until it was too late to actually take that
 feedback into
 account, or because it just didn't get much attention review-wise at all.
 If I were a
 new contributor to Nova who had successfully gotten a major blueprint
 approved and
 the implemented, only to see it get rejected like this, I might get turned
 off of Nova,
 and go to work on one of the other OpenStack projects that seemed to move
 a bit faster.
 </rant>

 So, it's silly to rant without actually providing any ideas on how to fix
 it.
 One suggestion would be, for each approved blueprint, to have one or two
 cores
 explicitly marked as being responsible for providing at least some
 feedback on
 that patch.  This proposal has issues, since we have a lot of blueprints
 and only
 twenty cores, who also have their own stuff to work on.  However, I think
 the
 general idea of having guaranteed reviewers is not unsound by itself.
 Perhaps
 we should have a loose tier of reviewers between core and everybody
 else.
 These reviewers would be known good reviewers who would follow the
 implementation
 of particular blueprints if a core did not have the time.  Then, when
 those reviewers
 gave the +1 to all the patches in a series, they could ping a core, who
 could feel
 more comfortable giving a +2 without doing a deep inspection of the code.

 That's just one suggestion, though.  Whatever the solution may be, this is
 a
 problem that we need to fix.  While I enjoyed going through the blueprint
 process
 this cycle (not sarcastic -- I actually enjoyed the whole structured
 feedback thing),
 the follow up to that was not the most pleasant.

 One final note: the specs referenced above didn't get approved until Spec
 Freeze, which
 seemed to leave me with less time to implement things.  In fact, it seemed
 that a lot
 of specs didn't get approved until spec freeze.  Perhaps if we had more
 staggered
 approval of specs, we'd have more staggered submission of patches, and
 thus less of a
 sudden influx of patches in the couple weeks before feature proposal
 freeze.



While you raise some good points, albeit not new ones, just long-standing
issues that we really need to address, Nikola appears not to be commenting
on the shortage of reviews but rather on the amount of technical debt Nova
has.


 Best Regards,
 Solly Ross

 - Original Message -
  From: Nikola Đipanov ndipa...@redhat.com
  To: openstack-dev@lists.openstack.org
  Sent: Wednesday, September 3, 2014 5:50:09 AM
  Subject: Re: [openstack-dev] [Nova] Feature Freeze Exception process for
 Juno
 
  On 09/02/2014 09:23 PM, Michael Still wrote:
   On Tue, Sep 2, 2014 at 1:40 PM, Nikola Đipanov ndipa...@redhat.com
 wrote:
   On 09/02/2014 08:16 PM, Michael Still wrote:
   Hi.
  
   We're soon to hit feature freeze, as discussed in Thierry's recent
   email. I'd like to outline the process for requesting a freeze
   exception:
  
   * your code must already be up for review
   * your blueprint must have an approved spec
   * you need three (3) sponsoring cores for an exception to be
 granted
  
   Can core reviewers who have features up for review have this number
   lowered to two (2) sponsoring cores, as they in reality then need four
   (4) cores (since they themselves are one (1) core but cannot really
   vote) making it an order of magnitude more difficult for them to hit
   this checkbox?
  
   That's a lot of numbers in that there paragraph.
  
   Let me re-phrase your question... Can a core sponsor an exception they
   

Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno

2014-09-03 Thread Boris Pavlovic
Joe,

Nova is big and complex enough that I don't think any one person is able to
 identify what we need to work on to make things better.


Oh, this is really bad... if there is no person who understands what is
happening in the project and where it should move, IMHO the project can't
evolve without a clear direction.

So it's probably time to think about how to make Nova simpler (refactoring
and so on)?

Best regards,
Boris Pavlovic


On Thu, Sep 4, 2014 at 4:16 AM, Joe Gordon joe.gord...@gmail.com wrote:




 On Wed, Sep 3, 2014 at 8:57 AM, Solly Ross sr...@redhat.com wrote:

  I will follow up with a more detailed email about what I believe we are
  missing, once the FF settles and I have applied some soothing creme to
  my burnout wounds, but currently my sentiment is:
 
  Contributing features to Nova nowadays SUCKS!!1 (even as a core
  reviewer) We _have_ to change that!

 I think this is *very* important.

 [...]



 While you raise some good points, albeit not new ones, just long-standing
 issues that we really need to address, Nikola appears not to be commenting
 on the shortage of reviews but rather on the amount of technical debt Nova
 has.


 Best Regards,
 Solly Ross

 - Original Message -
  From: Nikola Đipanov ndipa...@redhat.com
  To: openstack-dev@lists.openstack.org
  Sent: Wednesday, September 3, 2014 5:50:09 AM
  Subject: Re: [openstack-dev] [Nova] Feature Freeze Exception process
 for Juno
 
  On 09/02/2014 09:23 PM, Michael Still wrote:
   On Tue, Sep 2, 2014 at 1:40 PM, Nikola Đipanov ndipa...@redhat.com
 wrote:
   On 09/02/2014 08:16 PM, Michael Still wrote:
   Hi.
  
   We're soon to hit feature freeze, as discussed in Thierry's recent
   email. I'd like to outline the process for requesting a freeze
   exception:
  
   * your code must already be up for review
   * your blueprint must have an approved spec
   * you 
