Re: [openstack-dev] [Ironic] Exceptional approval request for Cisco Driver Blueprint

2014-08-07 Thread Jay Faulkner
Hey,

I agree with Dmitry that this spec has a huge scope. If you resubmitted one 
with only the power interface, that could be considered for an exception.

A few specific reasons:

1) Auto-enrollment -- this should probably be held off for now
 - This is something that was talked about extensively at the mid-cycle meetup and 
will be a topic of much debate in Ironic for Kilo. Whatever comes out of that 
debate, if it ends up being considered within scope, would be what your spec 
would want to integrate with.
 - I'd suggest you come into IRC, say hello, and work with us as we go into Kilo 
figuring out if auto-enrollment belongs in Ironic and, if so, how your hardware 
could integrate with that system.

2) Power driver
 - If you split this out into another spec and resubmitted, it'd be at least a 
small enough scope to be considered. Just as a note, though: Ironic has very 
specific priorities for Juno, the top of which is getting graduated. This means 
some new features have fallen aside in favor of graduation requirements.

Thanks,
Jay

From: Dmitry Tantsur dtant...@redhat.com
Sent: Thursday, August 07, 2014 4:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic] Exceptional approval request for Cisco 
Driver Blueprint

Hi!

I didn't read the spec thoroughly, but I'm concerned by its huge scope.
It's actually several specs squashed into one (not too detailed). My
vote is to split it into a chain of specs (at least 3: power driver,
discovery, other configurations) and seek an exception for each separately.
Actually, I'm +1 on making an exception for the power driver, but -0 on the
others until I see a separate spec for them.

Dmitry.

On Thu, 2014-08-07 at 09:30 +0530, GopiKrishna Saripuri wrote:
 Hi,


 I've submitted the Ironic Cisco driver blueprint after the proposal freeze
 date. This driver is critical for Cisco and a few customers to test as
 part of their private cloud expansion. The driver implementation is
 ready along with unit tests. I will submit the code for review once the
 blueprint is accepted.


 The Blueprint review link: https://review.openstack.org/#/c/110217/


 Please let me know if it's possible to include this in the Juno release.



 Regards
 GopiKrishna S
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





[openstack-dev] [devstack] Core team proposals

2014-08-07 Thread Dean Troyer
I want to nominate Ian Wienand (IRC: ianw) to the DevStack core team.  Ian
has been a consistent contributor and reviewer for some time now.  He also
manages the Red Hat CI that runs tests on Fedora, RHEL and CentOS so those
platforms have been a particular point of interest for him.  Ian has also
been active in the config and devstack-gate projects among others.

Reviews: https://review.openstack.org/#/q/reviewer:%22Ian+Wienand+%22,n,z

Stackalytics:
http://stackalytics.com/?user_id=iwienand&metric=marks&module=devstack&release=all

I also want to (finally?) remove long-standing core team members Vish
Ishaya and Jesse Andrews, who between them were responsible for instigating
the whole 'build a stack script' back in the day.

Please respond in the usual manner, +1 or concerns.

Thanks
dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Joe Gordon
On Tue, Aug 5, 2014 at 9:03 AM, Thierry Carrez thie...@openstack.org
wrote:

 Hi everyone,

 With the incredible growth of OpenStack, our development community is
 facing complex challenges. How we handle those might determine the
 ultimate success or failure of OpenStack.

 With this cycle we hit new limits in our processes, tools and cultural
 setup. This resulted in new limiting factors on our overall velocity,
 which is frustrating for developers. This resulted in the burnout of key
 firefighting resources. This resulted in tension between people who try
 to get specific work done and people who try to keep a handle on the big
 picture.

 It all boils down to an imbalance between strategic and tactical
 contributions. At the beginning of this project, we had a strong inner
 group of people dedicated to fixing all loose ends. Then a lot of
 companies got interested in OpenStack and there was a surge in tactical,
 short-term contributions. We put on a call for more resources to be
 dedicated to strategic contributions like critical bugfixing,
 vulnerability management, QA, infrastructure... and that call was
 answered by a lot of companies that are now key members of the OpenStack
 Foundation, and all was fine again. But OpenStack contributors kept on
 growing, and we grew the narrowly-focused population way faster than the
 cross-project population.


 At the same time, we kept on adding new projects to incubation and to
 the integrated release, which is great... but the new developers you get
 on board with this are much more likely to be tactical than strategic
 contributors. This also contributed to the imbalance. The penalty for
 that imbalance is twofold: we don't have enough resources available to
 solve old, known OpenStack-wide issues; but we also don't have enough
 resources to identify and fix new issues.

 We have several efforts under way, like calling for new strategic
 contributors, driving towards in-project functional testing, making
 solving rare issues a more attractive endeavor, or hiring resources
 directly at the Foundation level to help address those. But there is a
 topic we haven't raised yet: should we concentrate on fixing what is
 currently in the integrated release rather than adding new projects ?


TL;DR: Our development model is having growing pains. Until we sort out those
growing pains, adding more projects spreads us too thin.

In addition to the issues mentioned above, with the scale of OpenStack
today we have many major cross project issues to address and no good place
to discuss them.



 We seem to be unable to address some key issues in the software we
 produce, and part of it is due to strategic contributors (and core
 reviewers) being overwhelmed just trying to stay afloat of what's
 happening. For such projects, is it time for a pause ? Is it time to
 define key cycle goals and defer everything else ?



I really like this idea. As Michael and others alluded to above, we are
attempting to set cycle goals for Kilo in Nova, but I think it is worth
doing for all of OpenStack. We would like to make a list of key goals
before the summit so that we can plan our summit sessions around the goals.
At a really high level, one way to look at this is that in Kilo we need to pay
down our technical debt.

The slots/runway idea is somewhat separate from defining key cycle goals;
we can approve blueprints based on key cycle goals without doing slots.
But with so many concurrent blueprints up for review at any given time,
the review teams are doing a lot of multitasking, and humans are not very
good at multitasking. Hopefully slots can help address this issue and
allow us to actually merge more blueprints in a given cycle.



 On the integrated release side, more projects means stretching our
 limited strategic resources more. Is it time for the Technical Committee
 to more aggressively define what is in and what is out ? If we go
 through such a redefinition, shall we push currently-integrated projects
 that fail to match that definition out of the integrated release inner
 circle ?

 The TC discussion on what the integrated release should or should not
 include has always been informally going on. Some people would like to
 strictly limit to end-user-facing projects. Some others suggest that
 OpenStack should just be about integrating/exposing/scaling smart
 functionality that lives in specialized external projects, rather than
 trying to outsmart those by writing our own implementation. Some others
 are advocates of carefully moving up the stack, and of resisting
 further IaaS+ services until we complete the pure IaaS
 space in a satisfactory manner. Some others would like to build a
 roadmap based on AWS services. Some others would just add anything that
 fits the incubation/integration requirements.


 On one side this is a long-term discussion, but on the other we also
 need to make quick decisions. With 4 incubated projects, and 2 new ones
 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Eoghan Glynn


 Multidisciplinary training rules! As an architect with field experience
 building roads, sidewalks, and roofs, and in city planning (plus training
 in lean manufacturing and services), I think I can have a say ;)
 
  You're not really introducing a successful Kanban here, you're just
  clarifying that there should be a set number of workstations.
 
 Right, and to clarify I'm really thinking kanban here, expanding on the
 few lines Mikal used to explain the 'slots' concept.
 
  Our current system is like a gigantic open space with hundreds of
  half-finished pieces, and a dozen workers keep on going from one to
  another with no strong pattern. The proposed system is to limit the
  number of half-finished pieces fighting for the workers' attention at any
  given time, by setting a clear number of workstations.
 
 Correct, and I think we should add a limit to the amount of WIP, too. So
 we have a visible limit to people, workstations and Work In Progress.
 This way we can *immediately*, at any given time, identify problems.
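The quoted point -- that an explicit, visible WIP limit surfaces overload the moment it happens rather than at the end of the cycle -- can be sketched as a toy kanban gate. The stage names and limits below are purely illustrative assumptions, not any real tooling:

```python
# Toy kanban gate: an explicit WIP limit makes overload visible the
# moment it happens, instead of at the end of the release cycle.
# Stage names and limits here are illustrative assumptions.

class WipLimitExceeded(Exception):
    pass

class Stage:
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.items = []

    def pull(self, item):
        if len(self.items) >= self.wip_limit:
            # Signal immediately -- this is the "red line" becoming visible.
            raise WipLimitExceeded(
                "%s is at its WIP limit of %d" % (self.name, self.wip_limit))
        self.items.append(item)

review = Stage('in-review', wip_limit=2)
review.pull('patch-1')
review.pull('patch-2')
try:
    review.pull('patch-3')
except WipLimitExceeded as exc:
    print(exc)  # overload is flagged at pull time, not months later
```

The design choice is simply that exceeding the limit is an error raised at the point of entry, so nobody has to go looking for the backlog -- it announces itself.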

If there's a trend in the replies from folks with experience
of running manufacturing or construction pipelines out in the
wild, it seems to be:

  extend the reach of the back-pressure further up the funnel 

This makes logical sense, but IMO simply doesn't apply in our
case, given the lack of direct command & control over the stuff
that contributors actually want to work on.

If we try to limit the number of WIP slots, then surely aspiring
contributors will simply work around that restriction by preparing
the code they're interested in on their own private branches, or
in their github forks?

OK, some pragmatic contributors will adjust their priorities to
align with the available slots. And some companies employing
large numbers of contributors will enforce policies to align
their developers' effort with the gatekeepers' priorities.

But I suspect we'd also have a good number who would take the
risk that their code never lands and work on it anyway. Given
that such efforts would really be flying beneath the radar and
may never see the light of day, that would seem like true waste
to me.

I don't have a good solution, just wanted to point out that
aspect.

Cheers,
Eoghan

  A true Kanban would be an interface between developers and reviewers,
  where reviewers define what type of change they have to review to
  complete production objectives, *and* developers would strive to produce
  enough to keep the kanban above the red line, but not too much (which
  would be piling up waste).
 
 Exactly what I'm aiming at: reducing waste, which we already have but
 nobody (or few, at different times) sees. By switching to a pull 'Just In
 Time' mode we'd see waste accumulate much earlier than we do now.
 
  Without that discipline, Kanbans are useless. Unless the developers
  adapt what they work on based on release objectives, you don't really
  reduce waste/inventory at all, it just piles up waiting for available
  runway slots. As I said in my original email, the main issue here is
  the imbalance between too many people proposing changes and not enough
  people caring about the project itself enough to be trusted with core
  reviewers rights.
 
 I agree with you. Right now we're accumulating waste in the form of code
 proposals (raw pieces that need to be processed) but reviewers and core
 reviewers' attention span is limited (the number of 'workstations' is
 finite but we don't have such limit exposed) and nobody sees the
 accumulation of backlog until it's very late, at the end of the release
 cycle.
 
 A lot of the complaints I hear, and the worsening time to merge patches, seem
 to indicate that we're over capacity and didn't notice.
 
  The only way to be truly pull-based is
  to define a set of production objectives and have those objectives
  trickle up to the developers so that they don't work on something else.
 
 Yeah, don't we try to do that with blueprints/specs and priorities? But we
 don't set a limit; it's almost a free-for-all: send your patches in and
 someone will evaluate them. Except there is a limit to what we can produce.
 
 I think fundamentally we need to admit that there are 24 hours in a day
 and that core reviewers have to sleep, sometimes. There is a finite
 amount of patches that can be processed in a given time interval.
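The finite-throughput point above can be made concrete with Little's law from queueing theory (average WIP = throughput x average time in system). The numbers in this sketch are purely illustrative, not measurements of any real project:

```python
# Little's law: avg_wip = throughput * avg_time_in_system, so
# avg_time_in_system = avg_wip / throughput. If reviewers can merge
# ~50 patches/week and 400 proposals are open, the average patch waits
# 8 weeks -- regardless of anyone's goodwill.
# (Numbers are illustrative assumptions, not real measurements.)

def average_wait_weeks(open_patches, merges_per_week):
    return open_patches / float(merges_per_week)

print(average_wait_weeks(400, 50))  # -> 8.0
```

The point of the arithmetic: without limiting open proposals (WIP) or raising reviewer throughput, wait times grow linearly with the backlog.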
 
 It's about finding a way to keep producing OpenStack at the highest
 speed possible, keeping quality, listening to 'downstream' first.
 
  The solution is about setting release cycle goals and strongly
  communicating that everything out of those goals is clearly priority 2.
 
 I don't think there is any 'proposal' just yet, only a half-baked idea
 thrown out there by the nova team during a meeting and fluffed up by me
 on the list. Still only a half-baked idea.
 
 I realized this is a digression from the original thread though. I'll
 talk to Russell and Nikola off-list (since they sent interesting
 comments, too) and John and Dan to see if they're still 

[openstack-dev] [sahara] Backports for 2014.1.2

2014-08-07 Thread Sergey Lukjanov
Hey sahara folks,

I'm going to push 2014.1.2 tag to stable/icehouse branch next week,
so, please, propose backports before the weekend and ping us to
backport some sensitive fixes.

Thank you!
-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



[openstack-dev] [sahara] team meeting minutes Aug 7

2014-08-07 Thread Sergey Lukjanov
Thanks to everyone who joined the Sahara meeting.

Here are the logs from the meeting:

http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-08-07-18.02.html
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-08-07-18.02.log.html

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Chris Friesen

On 08/07/2014 12:32 PM, Eoghan Glynn wrote:


If we try to limit the number of WIP slots, then surely aspiring
contributors will simply work around that restriction by preparing
the code they're interested in on their own private branches, or
in their github forks?

OK, some pragmatic contributors will adjust their priorities to
align with the available slots. And some companies employing
large numbers of contributors will enforce policies to align
their developers' effort with the gatekeepers' priorities.

But I suspect we'd also have a good number who would take the
risk that their code never lands and work on it anyway. Given
that such efforts would really be flying beneath the radar and
may never see the light of day, that would seem like true waste
to me.


Is that a problem?  If such developers are going to work on their pet 
project anyway, it's really up to the core team whether or not they 
think it makes sense to merge the changes upstream.


If the core team doesn't think they're worth merging (given the 
constraints on reviewer/approver time) then so be it.  At that point 
either we accept that we're going to leave possible contributions by the 
wayside or else we increase the core team (and infrastructure, and other 
strategic resources)  to be able to handle the load.


Chris



[openstack-dev] [oslo] oslo.concurrency repo review

2014-08-07 Thread Yuriy Taraday
Hello, oslo cores.

I've finished polishing up the oslo.concurrency repo at [0] - please take a
look at it. I used my new version of graduate.sh [1] to generate it, so the
history looks a bit different from what you might be used to.

I've made as few changes as possible, so there are still some steps left
that should be done after the new repo is created:
- fix PEP8 errors H405 and E126;
- use strutils from oslo.utils;
- remove the eventlet dependency (along with random sleeps), but proper testing
with eventlet should remain;
- the fix for bug [2] should be applied from [3] (although it needs some
improvements);
- oh, there's really no limit for this...

I'll finalize and publish the relevant change request to openstack-infra/config
soon.

Looking forward to any feedback!

[0] https://github.com/YorikSar/oslo.concurrency
[1] https://review.openstack.org/109779
[2] https://bugs.launchpad.net/oslo/+bug/1327946
[3] https://review.openstack.org/108954

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-07 Thread Doug Hellmann

On Aug 7, 2014, at 12:39 PM, Kevin L. Mitchell kevin.mitch...@rackspace.com 
wrote:

 On Thu, 2014-08-07 at 17:27 +0100, Matthew Booth wrote:
 On 07/08/14 16:27, Kevin L. Mitchell wrote:
 On Thu, 2014-08-07 at 12:15 +0100, Matthew Booth wrote:
 A (the?) solution is to register_opts() in foo before importing any
 modules which might also use oslo.config.
 
 Actually, I disagree.  The real problem here is the definition of
 bar_func().  The default value of the parameter arg will likely always
 be the default value of foo_opt, rather than the configured value,
 because CONF.foo_opt will be evaluated at module load time.  The way
 bar_func() should be defined would be:
 
def bar_func(arg=None):
if not arg:
arg = CONF.foo_opt
…
 
 That ensures that arg will be the configured value, and should also
 solve the import conflict.
 
 That's different behaviour, because you can no longer pass arg=None. The
 fix isn't to change the behaviour of the code.
 
 Well, the point is that the code as written is incorrect.  And if 'None'
 is an input you want to allow, then use an incantation like:
 
_unset = object()
 
def bar_func(arg=_unset):
if arg is _unset:
arg = CONF.foo_opt
…
 
 In any case, the operative point is that CONF.attribute must always be
 evaluated inside run-time code, never at module load time.

It would be even better to take the extra step of registering the option at 
runtime at the point it is about to be used, by calling register_opt() inside 
bar_func() instead of when bar is imported. That avoids import order concerns, 
and reinforces the idea that options should be declared local to the code that 
uses them and their values should be passed to other code, rather than having two 
modules tightly bound together through a global configuration value.
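A rough, self-contained illustration of the pattern discussed in this thread -- sentinel default plus call-time evaluation plus register-at-use. Note the Conf class here is a toy stand-in so the snippet runs on its own; the real code would use oslo.config's cfg.CONF and register_opt(), whose exact semantics are only approximated:

```python
# Stand-in for oslo.config's CONF, purely for illustration.
class Conf:
    """Toy config object: options must be registered before access."""
    def __init__(self):
        self._opts = {}

    def register_opt(self, name, default):
        # Registering the same option twice is a no-op, roughly
        # mirroring oslo.config's duplicate-registration behaviour.
        self._opts.setdefault(name, default)

    def __getattr__(self, name):
        try:
            return self._opts[name]
        except KeyError:
            raise AttributeError("option %s not registered" % name)

CONF = Conf()

_unset = object()  # sentinel, so callers may still pass None explicitly

def bar_func(arg=_unset):
    # Register at the point of use, not at import time: no import-order
    # concerns, and the option stays local to the code that needs it.
    CONF.register_opt('foo_opt', default='from-config')
    if arg is _unset:
        arg = CONF.foo_opt  # evaluated at call time, not module load time
    return arg

print(bar_func())      # falls back to the configured value
print(bar_func(None))  # None remains a legal explicit argument
```

Had bar_func been declared as `def bar_func(arg=CONF.foo_opt)`, the option would be read once at import time -- before the config file is parsed -- which is exactly the bug Kevin describes above.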

Doug




Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-07 Thread Kevin Benton
I meant 'side stepping' why GBP allows for the comment you made previously:
With the latter, a mapping driver could determine that communication
between these two hosts can be prevented by using an ACL on a router or a
switch, which doesn't violate the user's intent and buys a performance
improvement and works with ports that don't support security groups..

Neutron's current API is a logical abstraction and enforcement can be done
however one chooses to implement it. I'm really trying to understand at the
network level why GBP allows for these optimizations and performance
improvements you talked about.

You absolutely cannot enforce security groups on a firewall/router that
sits at the boundary between networks. If you try, you are lying to the
end-user because it's not enforced at the port level. The current neutron
APIs force you to decide where things like that are implemented. The higher
level abstractions give you the freedom to move the enforcement by allowing
the expression of broad connectivity requirements.

Why are you bringing up logging connections?

This was brought up as a feature proposal to FWaaS because this is a basic
firewall feature missing from OpenStack. However, this does not preclude a
FWaaS vendor from logging.

Personally, I think one could easily write up a very short document,
probably less than one page, with examples showing how the current
neutron API works, even without much of a networking background.

The difficulty of the API for establishing basic connectivity isn't really
the problem. It's when you have to compose a bunch of requirements and make
sure nothing is violating auditing and connectivity constraints that it
becomes a problem. We are arguing about the levels of abstraction. You
could also write up a short document explaining to novice programmers how
to use C to read and write database entries to an sqlite database, but that
doesn't mean it's the best level of abstraction for what the users are
trying to accomplish.

I'll let someone else explain the current GBP API because I'm not working
on that. I'm just trying to convince you of the value of declarative
network configuration.


On Thu, Aug 7, 2014 at 12:02 PM, Aaron Rosen aaronoro...@gmail.com wrote:




 On Thu, Aug 7, 2014 at 9:54 AM, Kevin Benton blak...@gmail.com wrote:

 You said you had no idea what group based policy was buying us so I tried
 to illustrate what the difference between declarative and imperative
 network configuration looks like. That's the major selling point of GBP so
 I'm not sure how that's 'side stepping' any points. It removes the need for
 the user to pick between implementation details like security
 groups/FWaaS/ACLs.


 I meant 'side stepping' why GBP allows for the comment you made previously:
 With the latter, a mapping driver could determine that communication
 between these two hosts can be prevented by using an ACL on a router or a
 switch, which doesn't violate the user's intent and buys a performance
 improvement and works with ports that don't support security groups..

 Neutron's current API is a logical abstraction and enforcement can be done
 however one chooses to implement it. I'm really trying to understand at the
 network level why GBP allows for these optimizations and performance
 improvements you talked about.



 So are you saying that GBP allows someone to configure an
 application that at the end of the day is equivalent to
 networks/router/FWaaS rules without understanding networking concepts?

 It's one thing to understand the ports an application leverages and
 another to understand the differences between configuring VM firewalls,
 security groups, FWaaS, and router ACLs.


 Sure, but how does group based policy solve this? Security Groups and
 FWaaS are just different places of enforcement. Say I want different
 security enforcement on my router than on my instances. One still needs to
 know enough to tell group based policy this, right? They need to know
 enough that there are different enforcement points. How does doing this with
 group based policy make it easier?



  I'm also curious how this GBP is really less error prone than the
 model we have today as it seems the user will basically have to tell
 neutron the same information about how he wants his networking to function.

 With GBP, the user just gives the desired end result (e.g. allow
 connectivity between endpoint groups via TCP port 22 with all connections
 logged). Without it, the user has to do the following:


 Why are you bringing up logging connections? Neutron has no concept of
 this at all today in its code base. Is logging something related to GBP?


1. create a network/subnet for each endpoint group
2. allow all traffic on the security groups since the logging would
need to be accomplished with FWaaS
3. create an FWaaS instance
4. attach the FWaaS to both networks

 Today FWaaS api is still incomplete as there is no real point of
 enforcement 

Re: [openstack-dev] [Octavia] Weekly meetings resuming + agenda

2014-08-07 Thread Brandon Logan
It's just my own preference.  Others like webex/hangouts because it can
be easier to talk about topics than in IRC, but with this many people
and the latency delays, it can become quite cumbersome.  Plus, it makes
it easier for meeting notes.  I'll deal with it while the majority
really prefer it.

Thanks,
Brandon

On Thu, 2014-08-07 at 01:28 -0700, Stephen Balukoff wrote:
 Hi Brandon,
 
 
 I don't think we've set a specific date to make the transition to IRC
 meetings. Is there a particular urgency about this that we should be
 aware of?
 
 
 Stephen
 
 
 On Wed, Aug 6, 2014 at 7:58 PM, Brandon Logan
 brandon.lo...@rackspace.com wrote:
 When is the plan to move the meeting to IRC?
 
 On Wed, 2014-08-06 at 15:30 -0700, Stephen Balukoff wrote:
   Action items from today's Octavia meeting:
  
   1. We're going to hold off for a couple of days on merging the
   constitution and preliminary road map to give people (and in
   particular Ebay) a chance to review and comment.
   2. Stephen is going to try to get Octavia v0.5 design docs into gerrit
   review by the end of the week, or early next week at the latest.
   3. If those with specific networking concerns could codify this and/or
   figure out a way to write these down and share with the list, that
   would be great. This is going to be important to ensure that our
   operator-grade load balancer solution can actually meet the needs of
   the operators developing it.
  
   Thanks,
  
   Stephen
 
 
 
 
 
 
 
 
   On Tue, Aug 5, 2014 at 2:34 PM, Stephen Balukoff
   sbaluk...@bluebox.net wrote:
    Hello!
  
    We plan on resuming weekly meetings to discuss things related to
    the Octavia project starting tomorrow: August 6th at 13:00 PDT
    (20:00 UTC). In order to facilitate high-bandwidth discussion as
    we bootstrap the project, we have decided to hold these meetings
    via webex, with the plan to eventually transition to IRC. Please
    contact me directly if you would like to get in on the webex.
  
    Tomorrow's meeting agenda is currently as follows:
  
    * Discuss the Octavia constitution and project direction documents
    currently under gerrit review:
    https://review.openstack.org/#/c/110563/
  
    * Discuss reviews of design proposals currently under gerrit
    review:
    https://review.openstack.org/#/c/111440/
    https://review.openstack.org/#/c/111445/
  
    * Discuss operator network topology requirements based on data
    currently being collected by HP, Rackspace and Blue Box. (Other
    operators are certainly welcome to collect and share their data as
    well! I'm looking at you, Ebay. ;) )
  
    Please feel free to respond with additional agenda items!
  
    Stephen
  
    --
    Stephen Balukoff
    Blue Box Group, LLC
    (800)613-4305 x807
 
 
 
 
 
 
 -- 
 Stephen Balukoff 
 Blue Box Group, LLC 
 (800)613-4305 x807



Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Doug Hellmann

On Aug 6, 2014, at 5:10 PM, Michael Still mi...@stillhq.com wrote:

 On Wed, Aug 6, 2014 at 2:03 AM, Thierry Carrez thie...@openstack.org wrote:
 
 We seem to be unable to address some key issues in the software we
 produce, and part of it is due to strategic contributors (and core
 reviewers) being overwhelmed just trying to stay afloat of what's
 happening. For such projects, is it time for a pause ? Is it time to
 define key cycle goals and defer everything else ?
 
 The nova team has been thinking about these issues recently too --
 especially at our mid cycle meetup last week. We are drawing similar
 conclusions to be honest.
 
 Two nova cores were going to go away and write up a proposal for how
 nova could handle a more focussed attempt to land code in Kilo, but
 they haven't had a chance to do that yet. To keep this conversation
 rolling, here's a quick summary of what they proposed:
 
 - we rate limit the total number of blueprints under code review at
 any one time to a fixed number of slots. I secretly prefer the term
 runway, so I am going to use that for the rest of this email. A
 suggested initial number of runways was proposed at ten.
 
 - the development process would be much like juno for a blueprint --
 you propose a spec, get it approved, write some code, and then you
 request a runway to land the code in. Depending on your relative
 priority compared to other code attempting to land, you queue until
 traffic control assigns you a runway.
 
 - code occupying a runway gets nova core review attention, with the
 expectation of fast iteration. If we find a blueprint has stalled in a
 runway, it is removed and put back onto the queue based on its
 priority (you don't get punished for being bumped).
 
 This proposal is limiting the number of simultaneous proposals a core
 needs to track, not the total number landed. The expectation is that
 the time taken on a runway is short, and then someone else will occupy
 it. It's mostly about focus -- instead of doing 100 core reviews on 100
 patches so that none of them ever land, we try to do those reviews on the 10
 patches so they all land.
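The runway mechanics described above can be sketched as a small fixed-slot priority queue. This is only a toy model of the proposal under stated assumptions (lower number = higher priority, slots filled greedily as they free up); the blueprint names and slot count are illustrative:

```python
import heapq

RUNWAYS = 10  # proposed initial number of slots

class RunwayQueue:
    """Toy model of the 'runways' process: blueprints queue by priority
    and only a fixed number occupy core-review slots at once."""
    def __init__(self, slots=RUNWAYS):
        self.slots = slots
        self.queue = []       # (priority, name) min-heap; lower = higher priority
        self.active = set()   # blueprints currently occupying a runway

    def request(self, name, priority):
        # "you queue until traffic control assigns you a runway"
        heapq.heappush(self.queue, (priority, name))
        self._fill()

    def _fill(self):
        # Assign free runways to the highest-priority queued blueprints.
        while self.queue and len(self.active) < self.slots:
            _, name = heapq.heappop(self.queue)
            self.active.add(name)

    def bump(self, name, priority):
        # A stalled blueprint is removed and re-queued at its old
        # priority -- no punishment for being bumped.
        self.active.discard(name)
        self.request(name, priority)

    def land(self, name):
        # Landing frees a runway for the next queued blueprint.
        self.active.discard(name)
        self._fill()
```

For example, with two slots, the first two requested blueprints occupy runways; when one lands, the highest-priority queued blueprint takes its place.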

I’ve been trying to highlight “review priorities” each week in the Oslo 
meeting, with moderate success. Our load is a lot lower than nova’s, but so is 
our review team. Perhaps having a more explicit cap on new feature work like 
this would work better.

I’m looking forward to seeing what you come up with as an approach.

Doug

 
 We also talked about tweaking the ratio of 'tech debt' runways vs.
 'feature' runways. So, perhaps every second release is focussed on
 burning down tech debt and stability, whilst the others are focussed
 on adding features. I would suggest if we do such a thing, Kilo should
 be a 'stability' release.
 
 Michael
 
 -- 
 Rackspace Australia
 


Re: [openstack-dev] [devstack] Core team proposals

2014-08-07 Thread Sean Dague
On 08/07/2014 02:09 PM, Dean Troyer wrote:
 I want to nominate Ian Wienand (IRC: ianw) to the DevStack core team.
  Ian has been a consistent contributor and reviewer for some time now.
  He also manages the Red Hat CI that runs tests on Fedora, RHEL and
 CentOS so those platforms have been a particular point of interest for
 him.  Ian has also been active in the config and devstack-gate projects
 among others.
 
 Reviews: https://review.openstack.org/#/q/reviewer:%22Ian+Wienand+%22,n,z
 
 Stackalytics:
http://stackalytics.com/?user_id=iwienand&metric=marks&module=devstack&release=all
 
 I also want to (finally?) remove long-standing core team members Vish
 Ishaya and Jesse Andrews, who between them were responsible for
 instigating the whole 'build a stack script' back in the day.
 
 Please respond in the usual manner, +1 or concerns.
 
 Thanks
 dt

+1, happy to have Ian on the team.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Weekly meetings resuming + agenda

2014-08-07 Thread Stefano Maffulli
On Thu 07 Aug 2014 12:12:26 PM PDT, Brandon Logan wrote:
 It's just my own preference.  Others like webex/hangouts because it can
 be easier to talk about topics than in IRC, but with this many people
 and the latency delays, it can become quite cumbersome.  Plus, it makes
 it easier for meeting notes.  I'll deal with it while the majority
 really prefer it.

Most of all, if you're interested in including people whose primary 
language is not English, IRC (or text-based communication) is a lot 
more accessible than voice.

Also, skimming through written logs from IRC is a lot easier/faster 
than listening to audio recordings for those that couldn't join in real 
time.

/stef

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Weekly meetings resuming + agenda

2014-08-07 Thread Brandon Logan
Those are definitely other big reasons, and probably the reason it is
planned to move to IRC in the future, no matter what.  I was just
wondering how soon, if soon at all.

On Thu, 2014-08-07 at 12:35 -0700, Stefano Maffulli wrote:
 On Thu 07 Aug 2014 12:12:26 PM PDT, Brandon Logan wrote:
  It's just my own preference.  Others like webex/hangouts because it can
  be easier to talk about topics than in IRC, but with this many people
  and the latency delays, it can become quite cumbersome.  Plus, it makes
  it easier for meeting notes.  I'll deal with it while the majority
  really prefer it.
 
 Most of all, if you're interested in including people whose primary 
 language is not English, IRC (or text-based communication) is a lot 
 more accessible than voice.
 
 Also, skimming through written logs from IRC is a lot easier/faster 
 than listening to audio recordings for those that couldn't join in real 
 time.
 
 /stef

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-07 Thread Yuriy Taraday
On Thu, Aug 7, 2014 at 10:58 PM, Yuriy Taraday yorik@gmail.com wrote:

 Hello, oslo cores.

 I've finished polishing up oslo.concurrency repo at [0] - please take a
 look at it. I used my new version of graduate.sh [1] to generate it, so
 history looks a bit different from what you might be used to.

 I've made as few changes as possible, so there are still some steps left
 that should be done after the new repo is created:
 - fix PEP8 errors H405 and E126;
 - use strutils from oslo.utils;
 - remove eventlet dependency (along with random sleeps), but proper
 testing with eventlet should remain;
 - fix for bug [2] should be applied from [3] (although it needs some
 improvements);
 - oh, there's really no limit for this...

 I'll finalize and publish relevant change request to
 openstack-infra/config soon.


Here it is: https://review.openstack.org/112666

Looking forward to any feedback!

 [0] https://github.com/YorikSar/oslo.concurrency
 [1] https://review.openstack.org/109779
 [2] https://bugs.launchpad.net/oslo/+bug/1327946
  [3] https://review.openstack.org/108954

 --

 Kind regards, Yuriy.




-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Weekly meetings resuming + agenda

2014-08-07 Thread Doug Wiegley
Personally, I prefer IRC for general meeting stuff, with separate
breakouts to voice for topics that warrant it.

Doug


On 8/7/14, 2:28 AM, Stephen Balukoff sbaluk...@bluebox.net wrote:

Hi Brandon,


I don't think we've set a specific date to make the transition to IRC
meetings. Is there a particular urgency about this that we should be
aware of?


Stephen



On Wed, Aug 6, 2014 at 7:58 PM, Brandon Logan
brandon.lo...@rackspace.com wrote:

When is the plan to move the meeting to IRC?

On Wed, 2014-08-06 at 15:30 -0700, Stephen Balukoff wrote:
 Action items from today's Octavia meeting:


 1. We're going to hold off for a couple days on merging the
 constitution and preliminary road map to give people (and in
 particular Ebay) a chance to review and comment.
 2. Stephen is going to try to get Octavia v0.5 design docs into gerrit
 review by the end of the week, or early next week at the latest.

 3. If those with specific networking concerns could codify this and/or
 figure out a way to write these down and share with the list, that
 would be great. This is going to be important to ensure that our
 operator-grade load balancer solution can actually meet the needs of
 the operators developing it.

 Thanks,

 Stephen








 On Tue, Aug 5, 2014 at 2:34 PM, Stephen Balukoff
 sbaluk...@bluebox.net wrote:
 Hello!


 We plan on resuming weekly meetings to discuss things related
 to the Octavia project starting tomorrow: August 6th at
 13:00PDT (20:00UTC). In order to facilitate high-bandwidth
 discussion as we bootstrap the project, we have decided to
 hold these meetings via webex, with the plan to eventually
 transition to IRC. Please contact me directly if you would
 like to get in on the webex.


 Tomorrow's meeting agenda is currently as follows:


 * Discuss Octavia constitution and project direction documents
 currently under gerrit review:
 https://review.openstack.org/#/c/110563/



 * Discuss reviews of design proposals currently under gerrit
 review:
 https://review.openstack.org/#/c/111440/
 https://review.openstack.org/#/c/111445/


 * Discuss operator network topology requirements based on data
 currently being collected by HP, Rackspace and Blue Box.
 (Other operators are certainly welcome to collect and share
 their data as well! I'm looking at you, Ebay. ;) )


 Please feel free to respond with additional agenda items!


 Stephen


 --
 Stephen Balukoff
 Blue Box Group, LLC
 (800)613-4305 x807




 --
 Stephen Balukoff
 Blue Box Group, LLC
 (800)613-4305 x807


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev









-- 
Stephen Balukoff 
Blue Box Group, LLC
(800)613-4305 x807 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-07 Thread Ronak Shah
Hi,
Following a very interesting and vocal thread on GBP for the last couple of
days, and the GBP meeting today, the GBP sub-team proposes the following name
changes to the resources:


policy-point for endpoint
policy-group for endpointgroup (epg)

Please reply with a reason and suggestion if you feel that it is not OK.

I hope that it won't be another 150-message thread :)

Ronak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Eoghan Glynn

  If we try to limit the number of WIP slots, then surely aspiring
  contributors will simply work around that restriction by preparing
  the code they're interested in on their own private branches, or
  in their github forks?
 
  OK, some pragmatic contributors will adjust their priorities to
  align with the available slots. And some companies employing
  large numbers of contributors will enforce policies to align
  their developers' effort with the gatekeepers' priorities.
 
  But I suspect we'd also have a good number who would take the
  risk that their code never lands and work on it anyway. Given
  that such efforts would really be flying beneath the radar and
  may never see the light of day, that would seem like true waste
  to me.
 
 Is that a problem? 

Well I guess it wouldn't be, if we're willing to tolerate waste.

But IIUC the motivation behind applying the ideas of kanban is
to minimize waste piling up at bottlenecks in the pipeline.

My point was simply that we don't have direct control over the
contributors' activities, so that limiting WIP slots wouldn't
cut out the waste, rather it would force it underground.

This seems worse to me because either:

 (a) lots of good ideas end up being lost, as a critical mass
 of other contributors don't get to see them

and/or:

 (b) contributors figure out ways to by-pass the rate-limiting
 on gerrit and share their code in other ways

Just a thought ...

Cheers,
Eoghan


 If such developers are going to work on their pet
 project anyway, it's really up to the core team whether or not they
 think it makes sense to merge the changes upstream.
 
 If the core team doesn't think they're worth merging (given the
 constraints on reviewer/approver time) then so be it.  At that point
 either we accept that we're going to leave possible contributions by the
 wayside or else we increase the core team (and infrastructure, and other
 strategic resources)  to be able to handle the load.
 
 Chris
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-07 Thread Andrew Mann
Can you include the definition/description of what each is here as well?  I
think there was a description in the 100+ thread of doom, but I don't want
to go back there :)


On Thu, Aug 7, 2014 at 3:17 PM, Ronak Shah ronak.malav.s...@gmail.com
wrote:

 Hi,
 Following a very interesting and vocal thread on GBP for last couple of
 days and the GBP meeting today, GBP sub-team proposes following name
 changes to the resource.


 policy-point for endpoint
 policy-group for endpointgroup (epg)

 Please reply if you feel that it is not ok with reason and suggestion.

 I hope that it wont be another 150 messages thread :)

 Ronak

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Andrew Mann
DivvyCloud Inc.
www.divvycloud.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-07 Thread Edgar Magana
I am sorry that I could not attend the GBP meeting.
Is there any reason why the IETF standard is not considered?
http://tools.ietf.org/html/rfc3198

I would like to understand the argument why we are creating new names instead 
of using the standard ones.

Edgar

From: Ronak Shah ronak.malav.s...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, August 7, 2014 at 1:17 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

Hi,
Following a very interesting and vocal thread on GBP for last couple of days 
and the GBP meeting today, GBP sub-team proposes following name changes to the 
resource.


policy-point for endpoint
policy-group for endpointgroup (epg)

Please reply if you feel that it is not ok with reason and suggestion.

I hope that it wont be another 150 messages thread :)

Ronak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Sahara] Swift trust authentication, status and concerns

2014-08-07 Thread Michael McCune
hi Sahara folks,

This serves as a detailed status update for the Swift trust authentication 
spec[1], and to bring up concerns about integration for the Juno cycle.

So far I have pushed a few reviews that start to lay the groundwork for the 
infrastructure needed to complete this blueprint. I have tried to keep the 
changes as low impact as possible so as not to create incompatible commits. I 
will continue this for as long as makes sense, but we are approaching the point 
at which disruptive changes will be introduced.

Currently, I am working on delegating and revoking trusts for job executions. 
The next steps will be to finish the periodic updater that will distribute 
authentication tokens to cluster instances. After this I plan to start 
integrating the job binaries to use the authentication tokens as this will all 
be contained within Sahara.
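The lifecycle being built here -- delegate a trust when a job execution starts, let the periodic updater mint short-lived tokens from it, and revoke the trust when the job finishes -- can be sketched with an in-memory stand-in. TrustStore and its methods are hypothetical names for illustration only; the real implementation delegates actual Keystone trusts rather than this toy bookkeeping:

```python
# Minimal in-memory model of the trust lifecycle for job executions:
# instances only ever receive short-lived tokens minted from a trust,
# never the user's credentials, and revoking the trust cuts off any
# further token renewal.
import uuid


class TrustStore:
    def __init__(self):
        self.trusts = {}  # trust_id -> job_execution_id

    def delegate(self, job_execution_id):
        """Create a trust scoped to a single job execution."""
        trust_id = uuid.uuid4().hex
        self.trusts[trust_id] = job_execution_id
        return trust_id

    def issue_token(self, trust_id):
        """Periodic updater: mint a fresh token from a live trust."""
        if trust_id not in self.trusts:
            raise LookupError("trust revoked or unknown")
        return uuid.uuid4().hex

    def revoke(self, trust_id):
        """Job finished: revoke the trust; tokens stop being renewed."""
        self.trusts.pop(trust_id, None)
```

The point of the design is visible even in the toy: once `revoke` runs, `issue_token` fails, so a cluster instance holding an expired token has nothing durable to leak.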

Once these pieces are done I will focus on the Swift-Hadoop component and 
finalize the workflow creation to support the new Swift references. I will hold 
these changes until we are ready to switch to this new style of authentication 
as this will be disruptive to our current deployments. I would like to get some 
assistance understanding the Swift-Hadoop component, any guidance would be 
greatly appreciated.

That's the status update, I'm confident that over the next few weeks much of 
this will be implemented and getting ready for review.

I do have concerns around how we will integrate and release this update. Once 
the trust authentication is in place we will be changing the way Swift 
information is distributed to the cluster instances. This means that existing 
vm images will need to be updated with the new Swift-Hadoop component. We will 
need to create new public images for all plugins that use Hadoop and Swift. We 
will also need to update the publicly available versions of the Swift-Hadoop 
component to ensure that Sahara-image-elements continues to work.

We will also need to upgrade the gate testing machines to incorporate these 
changes and most likely I will need to be able to run these tests on a local 
cluster I can control before I push them for review. I am soliciting any advice 
about how I could run the gate tests from my local machine or cluster.

For the new Swift-Hadoop component I propose that we bump the version to 2.0 to 
indicate the incompatibility between it and the 1.0 version.

regards,
mike


[1]: 
https://blueprints.launchpad.net/sahara/+spec/edp-swift-trust-authentication

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-07 Thread Ben Nemec
LGTM.  Plenty of things I could add to your list, but they're all
post-import. :-)

-Ben

On 08/07/2014 01:58 PM, Yuriy Taraday wrote:
 Hello, oslo cores.
 
 I've finished polishing up oslo.concurrency repo at [0] - please take a
 look at it. I used my new version of graduate.sh [1] to generate it, so
 history looks a bit different from what you might be used to.
 
 I've made as few changes as possible, so there are still some steps left
 that should be done after the new repo is created:
 - fix PEP8 errors H405 and E126;
 - use strutils from oslo.utils;
 - remove eventlet dependency (along with random sleeps), but proper testing
 with eventlet should remain;
 - fix for bug [2] should be applied from [3] (although it needs some
 improvements);
 - oh, there's really no limit for this...
 
 I'll finalize and publish relevant change request to openstack-infra/config
 soon.
 
 Looking forward to any feedback!
 
 [0] https://github.com/YorikSar/oslo.concurrency
 [1] https://review.openstack.org/109779
 [2] https://bugs.launchpad.net/oslo/+bug/1327946
 [3] https://review.openstack.org/108954
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Win The Enterprise Work Group Update

2014-08-07 Thread Anne Gentle
Hi Carol, thanks for the summary presentation. I listened in to the board
meeting for this portion. More below.


On Wed, Aug 6, 2014 at 4:55 PM, Barrett, Carol L carol.l.barr...@intel.com
wrote:

  I want to provide the community an update on the Win The Enterprise work
 group that came together in a BoF session in Atlanta.

 The work group led a discussion with the OpenStack Board at their 7/22
 meeting on the findings of our analysis of Enterprise IT requirements gaps.
 A summary of the presentation and next steps can be found here:
 https://drive.google.com/file/d/0BxtM4AiszlEySmJwMHpDTGFDZHc/edit?usp=sharing

 Based upon the analysis and discussion, the actions for the work group are:

 1. Form a Deployment team to take on the Deployment oriented
    requirements that came up from the different teams. This team will have
    both Technical and Marketing members. *Please let me know if you’re
    interested in joining*

 2. Form a Monitoring team to take on the Monitoring oriented
    requirements that came up from the different teams. This team will have
    both Technical and Marketing members. *Please let me know if you’re
    interested in joining*

 3. For Technical gaps, we need to assess final accepted Juno blueprints
    versus requirements and develop additional blueprints through community
    participation and implementation support to bring into the Kilo Design
    Summit.

 4. For Documentation gaps, we need to work with either existing
    documentation teams or the Marketing team to create them.



Yes, I'd love to work with you on this. There are definitely marketing
deliverables that do not belong in the docs program, but there are also
docs that exist in the docs program already. Looks like the enterprise
group identified:

- Security Guide: http://docs.openstack.org/security-guide/content/
- High Availability Guide: http://docs.openstack.org/high-availability-guide/content/
- Upgrades: http://docs.openstack.org/openstack-ops/content/ch_ops_upgrades.html

The newest is the Architecture Design Guide
http://docs.openstack.org/arch-design/content/ - just a few weeks old. I'd
like to get some technical reviewers to take a look at that guide. Ideally
we can repurpose that content for marketing deliverables or enhance it in
place.

I can go on and on, so what's the best way for me to work with you on
priorities and expectations?

Let me know - perhaps a phone call is best for starters.
Thanks,
Anne




 5. For Marketing Perceptions, we need to create a content and
    collateral plan with owners and execute.


 Our goals are:

1. Prepare and intercept the Kilo Design Summit pre-plannning and
sessions in Paris with new BPs that implement the requirements
2. Intercept Paris Summit Analyst and Press outreach plans with
content addressing top perception issues
3. Complete the needed documentation/collateral ahead of the Paris
summit
4. Target the Enterprise IT Strategy track in Paris on the key
Enterprise IT requirements to address documentation gaps, and provide
how-to info for deployments.


 *Call to Action:* Please let me know if you want to be involved in any of
 the work group activities. Lots of opportunities for you to help advance
 OpenStack adoption in this segment!

 If you have any questions or want more info, pls get in touch.
 Carol Barrett
 Intel Corp
 +1 503 712 7623




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-07 Thread Ryan Moats
Edgar-

I can't speak for anyone else, but in my mind at least (and having been
involved in the work that led up to 3198),
the members of the groups being discussed here are not PEPs.   As 3198
states, being a PEP implies running COPS
and I don't see that as necessary for membership in GBP groups.

Ryan Moats

Edgar Magana edgar.mag...@workday.com wrote on 08/07/2014 04:02:43 PM:

 From: Edgar Magana edgar.mag...@workday.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 08/07/2014 04:03 PM
 Subject: Re: [openstack-dev] [Neutron][policy] Group Based Policy -
Renaming

 I am sorry that I could not attend the GBP meeting.
 Is there any reason why the IETF standard is not considered?
 http://tools.ietf.org/html/rfc3198

 I would like to understand the argument why we are creating new
 names instead of using the standard ones.

 Edgar

 From: Ronak Shah ronak.malav.s...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)

 openstack-dev@lists.openstack.org
 Date: Thursday, August 7, 2014 at 1:17 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

 Hi,
 Following a very interesting and vocal thread on GBP for last couple
 of days and the GBP meeting today, GBP sub-team proposes following
 name changes to the resource.


 policy-point for endpoint
 policy-group for endpointgroup (epg)

 Please reply if you feel that it is not ok with reason and suggestion.

 I hope that it wont be another 150 messages thread :)

 Ronak
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-07 Thread Sumit Naiksatam
Ryan, point well taken. I am paraphrasing the discussion from today's
GBP sub team meeting on the options considered and the eventual
proposal for policy-point and policy-group:

18:36:50 SumitNaiksatam_ so regarding the endpoint terminology
18:36:53 SumitNaiksatam_ any suggestions?
18:36:56 arosen ivar-lazzaro:  If you are expressing your intent of
doing enforcement at both points you do care then.
18:37:09 rockyg regXboi: Edgar Magana suggested using the IETF
phrasing -- enforcement point
18:37:31 mscohen i was thinking “edgar point” would be good.  and we
won’t have to change our slides from EP.
18:37:44 arosen ivar-lazzaro:  would be great to see an example
using the CLI how one sets something up that in GBP that does
enforcement at the instance and router.
18:37:44 rockyg mschoen ++
18:37:55 SumitNaiksatam_ rockyg: although enforcement point tends to
be used in a slightly different context
18:38:02 rockyg mscohen ++
18:38:04 regXboi I was involved in the early IETF policy days, and
I'm not a big from of ep
18:38:04 SumitNaiksatam_ mscohen: we dont want to overload the terminology
18:38:13 SumitNaiksatam_ regXboi: +1
18:38:17 rkukura I’m not entirely sure “enforcement point” is the
same as our usage of endpoint
18:38:25 SumitNaiksatam_ rkukura: exactly
18:38:28 mscohen SumitNaiksatam: i am joking of course
18:38:42 SumitNaiksatam_ mscohen: :-)
18:38:54 rockyg Yeah.  that's the problem with endpoint.  It's right
for networking, but it already has another definition in
virtualization world.
18:38:54 SumitNaiksatam_ how about network-endpoint (someone else
suggested that)?
18:38:55 rkukura I think enforcement point is more like the SG or
FWaaS that is used to render the intent
18:39:07 SumitNaiksatam_ rkukura: agree
18:39:09 regXboi so... let's hit the thesaurus
18:39:16 rockyg Rkukara, agree
18:39:38 rkukura I had always throught endpoint was the right word
for both our usage and for keystone, with similar meanings, but
different meta-levels
18:40:01 regXboi rkukura: if we can find something different, let's
consider it
18:40:11 regXboi there is enough of a hill to climb
18:40:35 regXboi how about terminus?
18:40:52 * regXboi keeps reading synonyms
18:41:06 rms_13 network-endpoint?
18:41:12 regXboi um... no
18:41:27 regXboi I think that won't help
18:41:58 LouisF policy-point/policy groups?
18:42:07 rkukura group member?
18:42:14 mscohen termination-point, gbp-id, policy point maybe
18:42:18 SumitNaiksatam sorry i dropped off again!
18:42:23 regXboi I think member
18:42:31 regXboi unless that's already used somewhere
18:42:33 SumitNaiksatam i was saying earlier, what about policy-point?
18:42:36 s3wong #chair SumitNaiksatam
18:42:37 openstack Current chairs: SumitNaiksatam SumitNaiksatam_
banix rkukura s3wong
18:42:41 rkukura regXboi: Just “member” and “group”?
18:42:44 SumitNaiksatam s3wong: :-)
18:43:04 s3wong SumitNaiksatam: so now either way works for you :-)
18:43:09 regXboi rkurkura: too general I think...
18:43:15 nbouthors policy-provider, policy-consumer
18:43:16 regXboi er rkukura ... sorry
18:43:17 yyywu i still like endpoint better.
18:43:23 rockyg bourn or bourne 1  (bɔːn)
18:43:23 rockyg
18:43:23 rockyg — n
18:43:23 rockyg 1.  a destination; goal
18:43:23 rockyg 2.  a boundary
18:43:25 regXboi I think policy-point and policy-group
18:43:27 SumitNaiksatam yyywu: :-)
18:43:34 rockyg Bourne-point?
18:43:40 SumitNaiksatam rockyg: :-)
18:44:04 SumitNaiksatam more in favor of policy-point and policy-group?
18:44:36 SumitNaiksatam i thnk LouisF suggested as well
18:44:49 mscohen +1 to policy-point
18:44:50 rms_13 +1 to policy-point and policy-group
18:44:55 yyywu +1
18:44:56 nbouthors SumitNaiksatam: +1 too
18:45:07 rockyg +1
18:45:08 rms_13 FINALLY... YEAH
18:45:18 SumitNaiksatam okay so how about we float this in the ML?
18:45:21 s3wong +1
18:45:31 prasadv +1
18:45:35 rms_13 Yes... lets do that
18:45:37 rkukura +1
18:45:44 SumitNaiksatam so that we dont end up picking up an
overlapping terminology again
18:45:55 SumitNaiksatam who wants to do it? as in send to the ML?
18:46:07 * SumitNaiksatam waiting to hand out an AI :-P
18:46:16 SumitNaiksatam regXboi: ?
18:46:17 rms_13 I can do it
18:46:26 regXboi hmm?
18:46:31 SumitNaiksatam rms_13: ah you put your hand up first
18:46:36 * regXboi apologies - bouncing between multiple IRC meetings
18:46:47 hemanthravi policy-endpoint ?
18:46:57 SumitNaiksatam #action rms_13 to send “policy-point”
“policy-group” suggestion to mailing list

On Thu, Aug 7, 2014 at 2:18 PM, Ryan Moats rmo...@us.ibm.com wrote:
 Edgar-

 I can't speak for anyone else, but in my mind at least (and having been
 involved in the work that led up to 3198),
 the members of the groups being discussed here are not PEPs.   As 3198
 states, being a PEP implies running COPS
 and I don't see that as necessary for membership in GBP groups.

 Ryan Moats

 Edgar Magana edgar.mag...@workday.com wrote on 08/07/2014 04:02:43 PM:

 From: Edgar Magana edgar.mag...@workday.com


 To: OpenStack 

Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-07 Thread Edgar Magana
Ryan,

COPS implies a common protocol to communicate with PEPs, which basically implies 
the same communication mechanism.
So, are you implying that endpoints in GBP will use a different protocol to 
communicate with decision entities?

If that is the case... well, it sounds very complex for a simple initial GBP 
project. Then the discussion will be at a different level.

Edgar

From: Ryan Moats rmo...@us.ibm.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, August 7, 2014 at 2:18 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming


Edgar-

I can't speak for anyone else, but in my mind at least (and having been 
involved in the work that led up to 3198),
the members of the groups being discussed here are not PEPs.   As 3198 states, 
being a PEP implies running COPS
and I don't see that as necessary for membership in GBP groups.

Ryan Moats

Edgar Magana edgar.mag...@workday.com wrote 
on 08/07/2014 04:02:43 PM:

 From: Edgar Magana edgar.mag...@workday.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 08/07/2014 04:03 PM
 Subject: Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

 I am sorry that I could not attend the GBP meeting.
 Is there any reason why the IETF standard is not considered?
 http://tools.ietf.org/html/rfc3198

 I would like to understand the argument why we are creating new
 names instead of using the standard ones.

 Edgar

 From: Ronak Shah 
 ronak.malav.s...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Thursday, August 7, 2014 at 1:17 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

 Hi,
 Following a very interesting and vocal thread on GBP for last couple
 of days and the GBP meeting today, GBP sub-team proposes following
 name changes to the resource.


 policy-point for endpoint
 policy-group for endpointgroup (epg)

 Please reply if you feel that it is not ok with reason and suggestion.

 I hope that it wont be another 150 messages thread :)

 Ronak
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-07 Thread Edgar Magana
Thanks for sharing this Sumit.
Again, my apologies for not attending the meeting; I just couldn’t.

It seems you had a good discussion about the naming and I do respect the
decision.

Cheers,

Edgar


On 8/7/14, 2:32 PM, Sumit Naiksatam sumitnaiksa...@gmail.com wrote:

Ryan, point well taken. I am paraphrasing the discussion from today's
GBP sub team meeting on the options considered and the eventual
proposal for policy-point and policy-group:

18:36:50 SumitNaiksatam_ so regarding the endpoint terminology
18:36:53 SumitNaiksatam_ any suggestions?
18:36:56 arosen ivar-lazzaro:  If you are expressing your intent of
doing enforcement at both points you do care then.
18:37:09 rockyg regXboi: Edgar Magana suggested using the IETF
phrasing -- enforcement point
18:37:31 mscohen i was thinking “edgar point” would be good.  and we
won’t have to change our slides from EP.
18:37:44 arosen ivar-lazzaro:  would be great to see an example
using the CLI how one sets something up that in GBP that does
enforcement at the instance and router.
18:37:44 rockyg mschoen ++
18:37:55 SumitNaiksatam_ rockyg: although enforcement point tends to
be used in a slightly different context
18:38:02 rockyg mscohen ++
18:38:04 regXboi I was involved in the early IETF policy days, and
I'm not a big from of ep
18:38:04 SumitNaiksatam_ mscohen: we dont want to overload the
terminology
18:38:13 SumitNaiksatam_ regXboi: +1
18:38:17 rkukura I’m not entirely sure “enforcement point” is the
same as our usage of endpoint
18:38:25 SumitNaiksatam_ rkukura: exactly
18:38:28 mscohen SumitNaiksatam: i am joking of course
18:38:42 SumitNaiksatam_ mscohen: :-)
18:38:54 rockyg Yeah.  that's the problem with endpoint.  It's right
for networking, but it already has another definition in
virtualization world.
18:38:54 SumitNaiksatam_ how about network-endpoint (someone else
suggested that)?
18:38:55 rkukura I think enforcement point is more like the SG or
FWaaS that is used to render the intent
18:39:07 SumitNaiksatam_ rkukura: agree
18:39:09 regXboi so... let's hit the thesaurus
18:39:16 rockyg Rkukara, agree
18:39:38 rkukura I had always throught endpoint was the right word
for both our usage and for keystone, with similar meanings, but
different meta-levels
18:40:01 regXboi rkukura: if we can find something different, let's
consider it
18:40:11 regXboi there is enough of a hill to climb
18:40:35 regXboi how about terminus?
18:40:52 * regXboi keeps reading synonyms
18:41:06 rms_13 network-endpoint?
18:41:12 regXboi um... no
18:41:27 regXboi I think that won't help
18:41:58 LouisF policy-point/policy groups?
18:42:07 rkukura group member?
18:42:14 mscohen termination-point, gbp-id, policy point maybe
18:42:18 SumitNaiksatam sorry i dropped off again!
18:42:23 regXboi I think member
18:42:31 regXboi unless that's already used somewhere
18:42:33 SumitNaiksatam i was saying earlier, what about policy-point?
18:42:36 s3wong #chair SumitNaiksatam
18:42:37 openstack Current chairs: SumitNaiksatam SumitNaiksatam_
banix rkukura s3wong
18:42:41 rkukura regXboi: Just “member” and “group”?
18:42:44 SumitNaiksatam s3wong: :-)
18:43:04 s3wong SumitNaiksatam: so now either way works for you :-)
18:43:09 regXboi rkurkura: too general I think...
18:43:15 nbouthors policy-provider, policy-consumer
18:43:16 regXboi er rkukura ... sorry
18:43:17 yyywu i still like endpoint better.
18:43:23 rockyg bourn or bourne 1  (bɔːn)
18:43:23 rockyg
18:43:23 rockyg — n
18:43:23 rockyg 1.  a destination; goal
18:43:23 rockyg 2.  a boundary
18:43:25 regXboi I think policy-point and policy-group
18:43:27 SumitNaiksatam yyywu: :-)
18:43:34 rockyg Bourne-point?
18:43:40 SumitNaiksatam rockyg: :-)
18:44:04 SumitNaiksatam more in favor of policy-point and policy-group?
18:44:36 SumitNaiksatam i thnk LouisF suggested as well
18:44:49 mscohen +1 to policy-point
18:44:50 rms_13 +1 to policy-point and policy-group
18:44:55 yyywu +1
18:44:56 nbouthors SumitNaiksatam: +1 too
18:45:07 rockyg +1
18:45:08 rms_13 FINALLY... YEAH
18:45:18 SumitNaiksatam okay so how about we float this in the ML?
18:45:21 s3wong +1
18:45:31 prasadv +1
18:45:35 rms_13 Yes... lets do that
18:45:37 rkukura +1
18:45:44 SumitNaiksatam so that we dont end up picking up an
overlapping terminology again
18:45:55 SumitNaiksatam who wants to do it? as in send to the ML?
18:46:07 * SumitNaiksatam waiting to hand out an AI :-P
18:46:16 SumitNaiksatam regXboi: ?
18:46:17 rms_13 I can do it
18:46:26 regXboi hmm?
18:46:31 SumitNaiksatam rms_13: ah you put your hand up first
18:46:36 * regXboi apologies - bouncing between multiple IRC meetings
18:46:47 hemanthravi policy-endpoint ?
18:46:57 SumitNaiksatam #action rms_13 to send “policy-point”
“policy-group” suggestion to mailing list

On Thu, Aug 7, 2014 at 2:18 PM, Ryan Moats rmo...@us.ibm.com wrote:
 Edgar-

 I can't speak for anyone else, but in my mind at least (and having been
 involved in the work that led up to 3198),
 the members of the groups being discussed here are not 

Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-07 Thread Ryan Moats


Edgar Magana edgar.mag...@workday.com wrote on 08/07/2014 04:37:39 PM:

 From: Edgar Magana edgar.mag...@workday.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 08/07/2014 04:40 PM
 Subject: Re: [openstack-dev] [Neutron][policy] Group Based Policy -
Renaming

 Ryan,

 COPS implies a common protocol to communicate with PEPs, which
 implies the same communication mechanism basically.
 So, you are implying that “endpoints” in GBP will use “different”
 protocol to communicate with “decisions” entities?

Nope, I'm saying that the members of groups are not *required* to do
enforcement.
They *could* (based on the implementation), but calling them PEPs means
they would *have* to.

Ryan___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-07 Thread Edgar Magana
That I understand it!
Thanks for the clarification.

Edgar

From: Ryan Moats rmo...@us.ibm.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, August 7, 2014 at 2:45 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming


Edgar Magana edgar.mag...@workday.com wrote 
on 08/07/2014 04:37:39 PM:

 From: Edgar Magana edgar.mag...@workday.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 08/07/2014 04:40 PM
 Subject: Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

 Ryan,

 COPS implies a common protocol to communicate with PEPs, which
 implies the same communication mechanism basically.
 So, you are implying that endpoints in GBP will use different
 protocol to communicate with decisions entities?

Nope, I'm saying that the members of groups are not *required* to do 
enforcement.
They *could* (based on the implementation), but calling them PEPs means they 
would *have* to.

Ryan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-07 Thread Aaron Rosen
On Thu, Aug 7, 2014 at 12:08 PM, Kevin Benton blak...@gmail.com wrote:

 I meant 'side stepping' why GBP allows for the comment you made
 previously: With the latter, a mapping driver could determine that
 communication between these two hosts can be prevented by using an ACL on a
 router or a switch, which doesn't violate the user's intent and buys a
 performance improvement and works with ports that don't support security
 groups..

 Neutron's current API is a logical abstraction and enforcement can be
 done however one chooses to implement it. I'm really trying to understand
 at the network level why GBP allows for these optimizations and performance
 improvements you talked about.

 You absolutely cannot enforce security groups on a firewall/router that
 sits at the boundary between networks. If you try, you are lying to the
 end-user because it's not enforced at the port level. The current neutron
 APIs force you to decide where things like that are implemented.


The current Neutron APIs are just logical abstractions. Where and how
things are actually enforced is 100% an implementation detail of a vendor's
system. Anyway, moving the discussion to the etherpad...



 The higher level abstractions give you the freedom to move the enforcement
 by allowing the expression of broad connectivity requirements.

Why are you bringing up logging connections?

 This was brought up as a feature proposal to FWaaS because this is a basic
 firewall feature missing from OpenStack. However, this does not preclude a
 FWaaS vendor from logging.

 Personally, I think one could easily write up a very short document
 probably less than one page with examples showing/exampling how the current
 neutron API works even without a much networking background.

 The difficulty of the API for establishing basic connectivity isn't really
 the problem. It's when you have to compose a bunch of requirements and make
 sure nothing is violating auditing and connectivity constraints that it
 becomes a problem. We are arguing about the levels of abstraction. You
 could also write up a short document explaining to novice programmers how
 to use C to read and write database entries to an sqlite database, but that
 doesn't mean it's the best level of abstraction for what the users are
 trying to accomplish.

 I'll let someone else explain the current GBP API because I'm not working
 on that. I'm just trying to convince you of the value of declarative
 network configuration.


 On Thu, Aug 7, 2014 at 12:02 PM, Aaron Rosen aaronoro...@gmail.com
 wrote:




 On Thu, Aug 7, 2014 at 9:54 AM, Kevin Benton blak...@gmail.com wrote:

 You said you had no idea what group based policy was buying us so I
 tried to illustrate what the difference between declarative and imperative
 network configuration looks like. That's the major selling point of GBP so
 I'm not sure how that's 'side stepping' any points. It removes the need for
 the user to pick between implementation details like security
 groups/FWaaS/ACLs.


 I meant 'side stepping' why GBP allows for the comment you made
 previously: With the latter, a mapping driver could determine that
 communication between these two hosts can be prevented by using an ACL on a
 router or a switch, which doesn't violate the user's intent and buys a
 performance improvement and works with ports that don't support security
 groups..

 Neutron's current API is a logical abstraction and enforcement can be
 done however one chooses to implement it. I'm really trying to understand
 at the network level why GBP allows for these optimizations and performance
 improvements you talked about.



 So are you saying that GBP allows someone to be able to configure an
 application that at the end of the day is equivalent  to
 networks/router/FWaaS rules without understanding networking concepts?

 It's one thing to understand the ports an application leverages and
 another to understand the differences between configuring VM firewalls,
 security groups, FWaaS, and router ACLs.


 Sure, but how does group based policy solve this. Security Groups and
 FWaaS are just different places of enforcement. Say I want different
 security enforcement on my router than on my instances. One still needs to
 know enough to tell group based policy this right?  They need to know
 enough that there are different enforcement points? How is doing this with
 Group based policy make it easier?



  I'm also curious how this GBP is really less error prone than the
 model we have today as it seems the user will basically have to tell
 neutron the same information about how he wants his networking to function.

 With GBP, the user just gives the desired end result (e.g. allow
 connectivity between endpoint groups via TCP port 22 with all connections
 logged). Without it, the user has to do the following:


 Why are you bringing up logging connections? Neutron has no concept of
 this at all today in its code base. Is logging something related to GBP?



Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-07 Thread Mohammad Banikazemi


Thierry Carrez thie...@openstack.org wrote on 08/07/2014 06:23:56 AM:

 From: Thierry Carrez thie...@openstack.org
 To: openstack-dev@lists.openstack.org
 Date: 08/07/2014 06:25 AM
 Subject: Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy
 and the way forward

 Armando M. wrote:
  This thread is moving so fast I can't keep up!
 
  The fact that troubles me is that I am unable to grasp how we move
  forward, which was the point of this thread to start with. It seems we
  have 2 options:
 
  - We make GBP to merge as is, in the Neutron tree, with some minor
  revision (e.g. naming?);
  - We make GBP a stackforge project, that integrates with Neutron in
some
  shape or form;
 
  Another option, might be something in between, where GBP is in tree,
but
  in some sort of experimental staging area (even though I am not sure
how
  well baked this idea is).
 
  Now, as a community we all need make a decision; arguing about the fact
  that the blueprint was approved is pointless.

 I agree with you: it is possible to change your mind on a topic and
 revisit past decisions.
 In past OpenStack history we did revert merged
 commits and remove existing functionality because we felt it wasn't that
 much of a great idea after all. Here we are talking about making the
 right decision *before* the final merging and shipping into a release,
 which is kind of an improvement. The spec system was supposed to help
 limit such cases, but it's not bullet-proof.

 In the end, if there is no consensus on that question within the Neutron
 project (and I hear both sides have good arguments), our governance
 gives the elected Neutron PTL the power to make the final call. If the
 disagreement is between projects (like if Nova disagreed with the
 Neutron decision), then the issue could be escalated to the TC.


It is good to know that the OpenStack governance provides a way to resolve
these issues but I really hope that we can reach a consensus.

Best,

Mohammad



 Regards,

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] introducing cyclops

2014-08-07 Thread Eoghan Glynn



 Dear All,
 
 Let me use my first post to this list to introduce Cyclops and initiate a
 discussion on the possibility of this platform becoming a future incubated
 project in OpenStack.
 
 We at the Zurich University of Applied Sciences have a Python project in open
 source (Apache 2 licensing) that aims to provide a platform to do
 rating-charging-billing over Ceilometer. We call it Cyclops (A Charging
 platform for OPenStack CLouds).
 
 The initial proof of concept code can be accessed here:
 https://github.com/icclab/cyclops-web 
 https://github.com/icclab/cyclops-tmanager
 
 Disclaimer: This is not the best code out there, but will be refined and
 documented properly very soon!
 
 A demo video from really early days of the project is here:
 https://www.youtube.com/watch?v=ZIwwVxqCio0 and since this video was made,
 several bug fixes and features were added.
 
 The idea presentation was done at Swiss Open Cloud Day at Bern and the talk
 slides can be accessed here:
 http://piyush-harsh.info/content/ocd-bern2014.pdf , and more recently the
 research paper on the idea was published in 2014 World Congress in Computer
 Science (Las Vegas), which can be accessed here:
 http://piyush-harsh.info/content/GCA2014-rcb.pdf
 
 I was wondering, if our effort is something that OpenStack
 Ceilometer/Telemetry release team would be interested in?
 
 I do understand that initially rating-charging-billing service may have been
 left out by choice as they would need to be tightly coupled with existing
 CRM/Billing systems, but Cyclops design (intended) is distributed, service
 oriented architecture with each component allowing for possible integration
 with external software via REST APIs. And therefore Cyclops by design is
 CRM/Billing platform agnostic. Although Cyclops PoC implementation does
 include a basic bill generation module.
 
 We in our team are committed to this development effort and we will have
 resources (interns, students, researchers) work on features and improve the
 code-base for a foreseeable number of years to come.
 
 Do you see a chance if our efforts could make in as an incubated project in
 OpenStack within Ceilometer?

Hi Piyush,

Thanks for bringing this up!

I should preface my remarks by setting out a little OpenStack
history, in terms of the original decision not to include the
rating and billing stages of the pipeline under the ambit of
the ceilometer project.

IIUC, the logic was that such rating/billing policies were very
likely to be:

  (a) commercially sensitive for competing cloud operators

and:

  (b) already built-out via existing custom/proprietary systems

The folks who were directly involved at the outset of ceilometer
can correct me if I've misrepresented the thinking that pertained
at the time.

While that logic seems to still apply, I would be happy to learn
more about the work you've done already on this, and would be
open to hearing arguments for and against. Are you planning to
attend the Kilo summit in Paris (Nov 3-7)? If so, it would be a
good opportunity to discuss further in person.

In the meantime, stackforge provides a low-bar-to-entry for
projects in the OpenStack ecosystem that may, or may not, end up
as incubated projects or as dependencies taken by graduated
projects. So you might consider moving your code there?

Cheers,
Eoghan


 
 I really would like to hear back from you, comments, suggestions, etc.
 
 Kind regards,
 Piyush.
 ___
 Dr. Piyush Harsh, Ph.D.
 Researcher, InIT Cloud Computing Lab
 Zurich University of Applied Sciences (ZHAW)
 [Site] http://piyush-harsh.info
 [Research Lab] http://www.cloudcomp.ch/
 Fax: +41(0)58.935.7403 GPG Keyid: 9C5A8838
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Stefano Maffulli
On 08/07/2014 01:41 PM, Eoghan Glynn wrote:
 My point was simply that we don't have direct control over the
 contributors' activities

This is not correct and I've seen it repeated too often to let it go
uncorrected: we (the OpenStack project as a whole) have a lot of control
over contributors to OpenStack. There is a Technical Committee and a
Board of Directors, corporate members and sponsors... all of these can
do a lot to make things happen. For example, the Platinum members of the
Foundation are required at the moment to have at least 'two full time
equivalents' and I don't see why the board couldn't change that
requirement, make it more specific.

OpenStack is not an amateurish project done by volunteers in their free
time.  We have lots of leverage we can apply to get things done.

/stef

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-07 Thread Brant Knudson
On Thu, Aug 7, 2014 at 12:54 PM, Kevin L. Mitchell 
kevin.mitch...@rackspace.com wrote:

 On Thu, 2014-08-07 at 17:46 +0100, Matthew Booth wrote:
   In any case, the operative point is that CONF.attribute must
  always be
   evaluated inside run-time code, never at module load time.
 
  ...unless you call register_opts() safely, which is what I'm
  proposing.

 No, calling register_opts() at a different point only fixes the import
 issue you originally complained about; it does not fix the problem that
 the configuration option is evaluated at the wrong time.  The example
 code you included in your original email evaluates the configuration
 option at module load time, BEFORE the configuration has been loaded,
 which means that the argument default will be the default of the
 configuration option, rather than the configured value of the
 configuration option.  Configuration options must be evaluated at
 RUN-TIME, after configuration is loaded; they must not be evaluated at
 LOAD-TIME, which is what your original code does.
 --
 Kevin L. Mitchell kevin.mitch...@rackspace.com
 Rackspace


We had this problem in Keystone[1]. There were some config parameters
passed to a function decorator (it was the cache timeout time). You'd
change the value in the config file and it would have no effect... the
default was still used. Luckily the cache decorator also took a function so
it was an easy fix, just pass `lambda: CONF.foo`. The mistaken code was
made possible because the config options were registered at import time.
Keystone now registers its config options at run-time so using CONF.foo at
import-time fails with an error that the option isn't registered.

[1] https://bugs.launchpad.net/keystone/+bug/1265670
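The load-time vs. run-time trap described in this thread boils down to a few
lines. The following is only a sketch -- it substitutes a plain namespace
object for oslo.config's CONF and invents a `cache_time` option -- but the
failure mode is the same one as in the Keystone bug above:

```python
import types

# Stand-in for oslo.config's global CONF (assumption: real code would use
# oslo_config.cfg.CONF with a registered 'cache_time' option).
CONF = types.SimpleNamespace(cache_time=600)  # the option's default

def cached_broken(timeout=CONF.cache_time):
    # The default was evaluated at import time, BEFORE any config file
    # could be loaded, so it is frozen at 600 forever.
    return timeout

def cached_fixed(timeout=lambda: CONF.cache_time):
    # Deferring with a lambda reads the option at run time instead.
    return timeout()

# Later, loading the config file overrides the default:
CONF.cache_time = 30

print(cached_broken())  # 600 -- stale default; the bug
print(cached_fixed())   # 30  -- picks up the configured value
```

The deferral is the same reason Keystone's fix passed `lambda: CONF.foo` into
the cache decorator rather than the bare option value.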

- Brant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Eoghan Glynn


 On 08/07/2014 01:41 PM, Eoghan Glynn wrote:
  My point was simply that we don't have direct control over the
  contributors' activities
 
 This is not correct and I've seen it repeated too often to let it go
 uncorrected: we (the OpenStack project as a whole) have a lot of control
 over contributors to OpenStack. There is a Technical Committee and a
 Board of Directors, corporate members and sponsors... all of these can
 do a lot to make things happen. For example, the Platinum members of the
 Foundation are required at the moment to have at least 'two full time
 equivalents' and I don't see why the board couldn't change that
 requirement, make it more specific.
 
 OpenStack is not an amateurish project done by volunteers in their free
 time.  We have lots of leverage we can apply to get things done.

There was no suggestion of amateurish-ness, or even volunteerism,
in my post.

Simply a recognition of the reality that we are not operating in
a traditional command & control environment.

TBH I'm surprised such an assertion would be considered controversial.

But I'd be happy to hear how you envisage rate-limiting WIP would
play out in practice?

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-07 Thread Yuriy Taraday
On Thu, Aug 7, 2014 at 10:28 AM, Chris Friesen chris.frie...@windriver.com
wrote:

 On 08/06/2014 05:41 PM, Zane Bitter wrote:

 On 06/08/14 18:12, Yuriy Taraday wrote:

 Well, as per the Git author, that's how you should work with not-CVS. You
 have cheap merges - use them instead of erasing parts of history.


 This is just not true.

 http://www.mail-archive.com/dri-devel@lists.sourceforge.net/msg39091.html

 Choice quotes from the author of Git:

 * 'People can (and probably should) rebase their _private_ trees'
 * 'you can go wild on the git rebase thing'
 * 'we use git rebase etc while we work on our problems.'
 * 'git rebase is not wrong.'


 Also relevant:

 ...you must never pull into a branch that isn't already
 in good shape.

 Don't merge upstream code at random points.

 keep your own history clean


And in the very same thread he says "I don't like how you always rebased
patches", and that none of these rules should be absolutely black-and-white.
But let's not get driven into a discussion of what Linus said (or I'll have
to rewatch his ages-old talk at Google to get proper quotes).
In no way do I want to promote exposing private trees with all those
intermediate changes. And my proposal is not against rebasing (although we
could use -R option for git-review more often to publish what we've tested
and to let reviewers see diffs between patchsets). It is for letting people
keep history of their work towards giving you a crystal-clean change
request series.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] [Infra] Devstack and Testing for ironic-python-agent

2014-08-07 Thread Jay Faulkner
Hi all,


At the recent Ironic mid-cycle meetup, we got the first version of the 
ironic-python-agent (IPA) driver merged. There are a few reviews we need merged 
(and their dependencies) across a few other projects in order to begin testing 
it automatically. We would like to eventually gate IPA and Ironic with tempest 
testing similar to what the pxe driver does today.


For IPA to work in devstack (openstack-dev/devstack repo):

 - https://review.openstack.org/#/c/112095 Adds swift temp URL support to 
Devstack

 - https://review.openstack.org/#/c/108457 Adds IPA support to Devstack



Docs on running IPA in devstack (openstack/ironic repo):

 - https://review.openstack.org/#/c/112136/



For IPA to work in the devstack-gate environment (openstack-infra/config & 
openstack-infra/devstack-gate repos):

 - https://review.openstack.org/#/c/112143 Add IPA support to devstack-gate

 - https://review.openstack.org/#/c/112134 Consolidate and rename Ironic jobs

 - https://review.openstack.org/#/c/112693 Add check job for IPA + tempest


Once these are all merged, we'll have IPA testing via a nonvoting check job, 
using the IPA-CoreOS deploy ramdisk, in both the ironic and ironic-python-agent 
projects. This will be promoted to voting once proven stable.


However, this is only one of many possible IPA deploy ramdisk images. We're 
currently publishing a CoreOS ramdisk, but we also have an effort to create a 
ramdisk with diskimage-builder (https://review.openstack.org/#/c/110487/), as 
well as plans for an ISO image (for use with things like iLo). As we gain 
additional images, we'd like to run those images through the same suite of 
tests prior to publishing them, so that images which would break IPA's gate 
wouldn't get published. The final state testing matrix should look something 
like this, with check and gate jobs in each project covering the variations 
unique to that project, and one representative test in each consuming project's 
test pipelines.


IPA:

 - tempest runs against Ironic+agent_ssh with CoreOS ramdisk

 - tempest runs against Ironic+agent_ssh with DIB ramdisk

 - (other IPA tests)



IPA would then, as a post job, generate and publish the images, as we currently 
do with IPA-CoreOS ( 
http://tarballs.openstack.org/ironic-python-agent/coreos/ipa-coreos.tar.gz ). 
Because IPA would gate on tempest tests against each image, we'd avoid ever 
publishing a bad deploy ramdisk.


Ironic:

 - tempest runs against Ironic+agent_ssh with the most suitable ramdisk (due to 
significantly decreased RAM requirements, this will likely be an image created 
by DIB once it exists)

 - tempest runs against Ironic+pxe_ssh

 - (what ever else Ironic runs)



Nova and other integrated projects will continue to run a single job, using 
Ironic with its default deploy driver (currently pxe_ssh).





Using this testing matrix, we'll ensure that there is coverage of each 
cross-project dependency, without bloating each project's test matrix 
unnecessarily. If, for instance, a change in Nova passes the Ironic pxe_ssh job 
and lands, but then breaks the agent_ssh job and thus blocks Ironic's gate, 
this would indicate a layering violation between Ironic and its deploy drivers 
(from Nova's perspective, nothing should change between those drivers). 
Similarly, if IPA tests failed against the CoreOS image (due to Ironic OR Nova 
change), but the DIB image passed in both Ironic and Nova tests, then it's 
almost certainly an *IPA* bug.


Thanks so much for your time, and to the OpenStack Ironic community for being 
welcoming as we have worked on this alternate deploy driver; we look forward to 
improving it even further as Kilo opens.


--

Jay Faulkner
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-07 Thread Yuriy Taraday
On Thu, Aug 7, 2014 at 7:36 PM, Ben Nemec openst...@nemebean.com wrote:

 On 08/06/2014 05:35 PM, Yuriy Taraday wrote:
  On Wed, Aug 6, 2014 at 11:00 PM, Ben Nemec openst...@nemebean.com
 wrote:
  You keep mentioning detached HEAD and reflog.  I have never had to deal
  with either when doing a rebase, so I think there's a disconnect here.
  The only time I see a detached HEAD is when I check out a change from
  Gerrit (and I immediately stick it in a local branch, so it's a
  transitive state), and the reflog is basically a safety net for when I
  horribly botch something, not a standard tool that I use on a daily
 basis.
 
 
  It usually takes some time for me to build trust in utility that does a
 lot
  of different things at once while I need only one small piece of that.
 So I
  usually do smth like:
  $ git checkout HEAD~2
  $ vim
  $ git commit
  $ git checkout mybranch
  $ git rebase --onto HEAD@{1} HEAD~2
  instead of almost the same workflow with interactive rebase.

 I'm sorry, but "I don't trust the well-tested, widely used tool that Git
 provides to make this easier so I'm going to reimplement essentially the
 same thing in a messier way myself" is a non-starter for me.
 surprised you dislike rebases if you're doing this, but it's a solved
 problem.  Use git rebase -i.


I'm sorry, I must've misled you by using the word 'trust' in that sentence.
It's more like understanding. I like to understand how things work. I don't
like treating tools as black boxes. And I also don't like it when a tool does
a lot of things at once with no way back. So yes, I decompose 'rebase -i' a
bit and get a slightly (1 command, really) longer workflow. But at least I
can stop at any point and think about whether I'm really finished at this
step. And sometimes interactive rebase works better for me than this,
sometimes it doesn't. It all depends on the situation.
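Purely as an illustration, the decomposed sequence above can be exercised
end-to-end in a throwaway repository (the file names and commit messages here
are invented):

```shell
# Three commits, then insert a fix after the oldest one without
# interactive rebase, as in the workflow quoted above.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev

for n in 1 2 3; do
    echo "$n" > "f$n"
    git add "f$n"
    git commit -qm "commit $n"
done

git checkout -q HEAD~2                  # detached HEAD at "commit 1"
echo fixed > f1
git commit -qam "fix for commit 1"      # new commit on top of "commit 1"
git checkout -q -                       # back to the branch tip
git rebase -q --onto "HEAD@{1}" HEAD~2  # replay commits 2..3 onto the fix
git log --format=%s
```

The log now reads: commit 3, commit 2, fix for commit 1, commit 1 -- the same
history an interactive rebase that pauses to insert a commit would produce.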

I don't dislike rebases just because I sometimes use a slightly longer
version of them. I would be glad to avoid them, though, because they destroy
history that can help me later.

I think I've said all I'm going to say on this.


I hope you don't think that this thread was about rebases vs merges. It's
about keeping track of your changes without impact on review process.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Passing a list of ResourceGroup's attributes back to its members

2014-08-07 Thread Zane Bitter

On 07/08/14 13:22, Tomas Sedovic wrote:

Hi all,

I have a ResourceGroup which wraps a custom resource defined in another
template:

 servers:
   type: OS::Heat::ResourceGroup
   properties:
     count: 10
     resource_def:
       type: my_custom_server
       properties:
         prop_1: ...
         prop_2: ...
         ...

And a corresponding provider template and environment file.

Now I can get, say, the list of IP addresses or any custom value of each
server from the ResourceGroup by using `{get_attr: [servers,
ip_address]}` and outputs defined in the provider template.

But I can't figure out how to pass that list back to each server in the
group.
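For illustration, the retrieval direction that does work can be written as a
group-level output (a sketch; it assumes the provider template exposes an
`ip_address` output, as in the example above):

```yaml
outputs:
  server_ips:
    description: ip_address of every member of the group
    # One entry per member, in index order. Feeding this list back into
    # resource_def is what creates the circular dependency.
    value: {get_attr: [servers, ip_address]}
```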

This is something we use in TripleO for things like building a MySQL
cluster, where each node in the cluster (the ResourceGroup) needs the
addresses of all the other nodes.


Yeah, this is kind of the perpetual problem with clusters. I've been 
hoping that DNSaaS will show up in OpenStack soon and that that will be 
a way to fix this issue.


The other option is to have the cluster members discover each other 
somehow (mDNS?), but people seem loath to do that.



Right now, we have the servers ungrouped in the top-level template so we
can build this list manually. But if we move to ResourceGroups (or any
other scaling mechanism, I think), this is no longer possible.


So I believe the current solution is to abuse a Launch Config resource 
as a store for the data, and then later retrieve it somehow? Possibly 
you could do something along similar lines, but it's unclear how the 
'later retrieval' part would work... presumably it would have to involve 
something outside of Heat closing the loop :(



We can't pass the list to ResourceGroup's `resource_def` section because
that causes a circular dependency.

And I'm not aware of a way to attach a SoftwareConfig to a
ResourceGroup. SoftwareDeployment only allows attaching a config to a
single server.


Yeah, and that would be a tricky thing to implement well, because a 
resource group may not be a group of servers (but in many cases it may 
be a group of nested stacks that each contain one or more servers, and 
you'd want to be able to handle that too).



Is there a way to do this that I'm missing? And if there isn't, is this
something we could add to Heat? E.g. extending a SoftwareDeployment to
accept ResourceGroups or adding another resource for that purpose.

Thanks,
Tomas



Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-07 Thread Chris Friesen

On 08/07/2014 04:52 PM, Yuriy Taraday wrote:


I hope you don't think that this thread was about rebases vs merges.
It's about keeping track of your changes without impact on review process.


But if you rebase, what is stopping you from keeping whatever private 
history you want and then rebase the desired changes onto the version 
that the current review tools are using?


Chris




Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-07 Thread Yuriy Taraday
On Fri, Aug 8, 2014 at 3:03 AM, Chris Friesen chris.frie...@windriver.com
wrote:

 On 08/07/2014 04:52 PM, Yuriy Taraday wrote:

  I hope you don't think that this thread was about rebases vs merges.
 It's about keeping track of your changes without impact on review process.


 But if you rebase, what is stopping you from keeping whatever private
 history you want and then rebase the desired changes onto the version that
 the current review tools are using?


That's almost what my proposal is about - allowing developer to keep
private history and store uploaded changes separately.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Michael Still
On Thu, Aug 7, 2014 at 11:20 PM, Russell Bryant rbry...@redhat.com wrote:
 On 08/07/2014 09:07 AM, Sean Dague wrote:
 I think the difference is
 slot selection would just be Nova drivers. I
 think there is an assumption in the old system that everyone in Nova
 core wants to prioritize the blueprints. I think there are a bunch of
 folks in Nova core that are happy having signaling from Nova drivers on
 high priority things to review. (I know I'm in that camp.)

 Lacking that we all have picking algorithms to hack away at the 500 open
 reviews. Which basically means it's a giant random queue.

 Having a few blueprints that *everyone* is looking at also has the
 advantage that the context for the bits in question will tend to be
 loaded into multiple people's heads at the same time, so is something
 that's discussable.

 Will it fix the issue, not sure, but it's an idea.

 OK, got it.  So, success critically depends on nova-core being willing
 to take review direction and priority setting from nova-drivers.  That
 sort of assumption is part of why I think agile processes typically
 don't work in open source.  We don't have the ability to direct people
 with consistent and reliable results.

 I'm afraid if people doing the review are not directly involved in at
 least ACKing the selection and committing to review something, putting
 stuff in slots seems futile.

I think some of this discussion is because I haven't had a chance to
write a summary of the meetup yet for the public mailing list. That's
something I will try and do today.

We talked about having a regular discussion in our weekly meeting of
what reviews were strategic at a given point in time. In my mind if we
do the runway thing, then that list of reviews would be important bug
fixes and slot occupying features. I think an implied side effect of
the runway system is that nova-drivers would -2 blueprint reviews
which were not occupying a slot.

(If we start doing more -2's I think we will need to explore how to
not block on someone with -2's taking a vacation. Some sort of role
account perhaps).

I think at the moment nova is lost in the tactical, instead of trying
to rise above that to solve strategic problems. That's a big risk to
the project, because it's not how we handle the big issues our users
actually care about.

Michael

-- 
Rackspace Australia



Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-08-07 Thread Michael Still
It seems to me that the tension here is that there are groups who
would really like to use features in newer libvirts that we don't CI
on in the gate. Is it naive to think that a possible solution here is
to do the following:

 - revert the libvirt version_cap flag
 - instead implement a third party CI with the latest available
libvirt release [1]
 - document clearly in the release notes the versions of dependencies
that we tested against in a given release: hypervisor versions (gate
and third party), etc etc
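
For context, the version_cap flag under discussion gates feature use on the running libvirt version. A rough sketch of that comparison logic (hypothetical helper names, purely illustrative — not the actual nova code):

```python
# Sketch of a libvirt-style version cap check. A feature is used only
# if the running libvirt satisfies its minimum version AND the feature
# is not newer than a configured cap.

def parse_version(ver_str):
    """Turn '1.2.7' into a comparable tuple (1, 2, 7)."""
    return tuple(int(p) for p in ver_str.split('.'))

def feature_enabled(running, feature_min, version_cap=None):
    running_t = parse_version(running)
    if running_t < parse_version(feature_min):
        return False  # libvirt too old for this feature
    if version_cap and running_t > parse_version(version_cap):
        # With a cap set, behave as if we were running the capped
        # version: features introduced after the cap stay disabled.
        return parse_version(feature_min) <= parse_version(version_cap)
    return True

print(feature_enabled('1.2.7', '1.2.2'))           # True
print(feature_enabled('0.9.8', '1.2.2'))           # False
print(feature_enabled('1.2.7', '1.2.5', '1.2.2'))  # False: capped below min
```

With a cap in place, a deployer on a newer libvirt is steered away from code paths the CI has never exercised, which is the workaround value mentioned later in the thread.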

Michael

1: I think that ultimately this should live in infra as part of check, but
I'd be ok with it starting as a third party if that delivers us
something faster. I'd be happy enough to donate resources to get that
going if we decide to go with this plan.

On Fri, Aug 8, 2014 at 12:38 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:


 On 7/18/2014 2:55 AM, Daniel P. Berrange wrote:

 On Thu, Jul 17, 2014 at 12:13:13PM -0700, Johannes Erdfelt wrote:

 On Thu, Jul 17, 2014, Russell Bryant rbry...@redhat.com wrote:

 On 07/17/2014 02:31 PM, Johannes Erdfelt wrote:

 It kind of helps. It's still implicit in that you need to look at what
 features are enabled at what version and determine if it is being
 tested.

 But the behavior is still broken since code is still getting merged
 that
 isn't tested. Saying that is by design doesn't help the fact that
 potentially broken code exists.


 Well, it may not be tested in our CI yet, but that doesn't mean it's not
 tested some other way, at least.


 I'm skeptical. Unless it's tested continuously, it'll likely break at
 some time.

 We seem to be selectively choosing the continuous part of CI. I'd
 understand if it was reluctantly because of immediate problems but
 this reads like it's acceptable long-term too.

 I think there are some good ideas in other parts of this thread to look
 at how we can more regularly rev libvirt in the gate to mitigate this.

 There's also been work going on to get Fedora enabled in the gate, which
 is a distro that regularly carries a much more recent version of libvirt
 (among other things), so that's another angle that may help.


 That's an improvement, but I'm still not sure I understand what the
 workflow will be for developers.


 That's exactly why we want to have the CI system using newer libvirt
 than it does today. The patch to cap the version doesn't change what
 is tested - it just avoids users hitting untested paths by default
 so they're not exposed to any potential instability until we actually
 get a more updated CI system.

 Do they need to now wait for Fedora to ship a new version of libvirt?
 Fedora is likely to help the problem because of how quickly it generally
 ships new packages and their release schedule but it would still hold
 back some features?


 Fedora has an add-on repository (virt-preview) which contains the
 latest QEMU + libvirt RPMs for current stable release - this is lags
 upstream by a matter of days, so there would be no appreciable delay
 in getting access to newest possible releases.

 Also, this explanation doesn't answer my question about what happens
 when the gate finally gets around to actually testing those potentially
 broken code paths.


 I think we would just test out the bump and make sure it's working fine
 before it's enabled for every job.  That would keep potential breakage
 localized to people working on debugging/fixing it until it's ready to
 go.


 The downside is that new features for libvirt could be held back by
 needing to fix other unrelated features. This is certainly not a bigger
 problem than users potentially running untested code simply because they
 are on a newer version of libvirt.

 I understand we have an immediate problem and I see the short-term value
 in the libvirt version cap.

 I try to look at the long-term and unless it's clear to me that a
 solution is proposed to be short-term and there are some understood
 trade-offs then I'll question the long-term implications of it.


 Once the CI system is regularly tracking upstream releases within a matter of
 days, then the version cap is a total non-issue from a feature
 availability POV. It is none the less useful in the long term, for
 example,
 if there were a problem we miss in testing, which a deployer then hits in
 the field, the version cap would allow them to get their deployment to
 avoid use of the newer libvirt feature, which could be a useful workaround
 for them until a fix is available.

 Regards,
 Daniel


 FYI, there is a proposed revert of the libvirt version cap change mentioned
 previously in this thread [1].

 Just bringing it up again here since the discussion should happen in the ML
 rather than gerrit.

 [1] https://review.openstack.org/#/c/110754/

 --

 Thanks,

 Matt Riedemann






-- 
Rackspace 

[openstack-dev] [Ironic] Multi-ironic-conductor issue

2014-08-07 Thread Jander lu
Hi, all

If I have more than one Ironic conductor, does each conductor need its own
PXE server and DHCP namespace, or do they share one centralized PXE server
or DHCP server? And if they share one centralized PXE and DHCP server, how
do they support HA?


[openstack-dev] [oslo.db]A proposal for DB read/write separation

2014-08-07 Thread Li Ma
Getting a massive amount of information from data storage to be displayed is 
where most of the activity happens in OpenStack. The two activities of reading 
data and writing (creating, updating and deleting) data are fundamentally 
different.

The optimization for these two opposite database activities can be done by 
physically separating the databases that service these two different 
activities. All the writes go to database servers, which then replicate the 
written data to the database server(s) dedicated to servicing the reads.

Currently, AFAIK, many OpenStack deployments in production try to take 
advantage of MySQL (including Percona or MariaDB) multi-master Galera clusters. 
It is possible to design and implement a read/write separation schema 
for such a DB cluster.

Actually, OpenStack has a method for read scalability via defining 
master_connection and slave_connection in configuration, but this method 
lacks flexibility because the choice of master or slave is made in the 
logical context (code). It's not transparent to the application developer. 
As a result, it is not widely used across the OpenStack projects.

So, I'd like to propose a transparent read/write separation method 
for oslo.db that every project can happily take advantage of 
without any code modification.
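
A minimal sketch of what such transparent dispatch could look like (hypothetical API, purely illustrative — not oslo.db code): statements are routed to a master or slave engine based on whether they read or write, so the calling code does not change:

```python
# Sketch of transparent read/write dispatch between DB engines.

class DispatchingSession:
    def __init__(self, master_engine, slave_engine):
        self.master = master_engine
        self.slave = slave_engine

    def execute(self, statement, *args, **kwargs):
        # Naive heuristic: SELECTs are reads. A real implementation
        # would also pin a session to the master inside write
        # transactions to avoid reading stale data from an
        # asynchronously-replicated slave.
        is_read = statement.lstrip().upper().startswith('SELECT')
        engine = self.slave if is_read else self.master
        return engine.execute(statement, *args, **kwargs)

# Toy engines standing in for real SQLAlchemy engines:
class FakeEngine:
    def __init__(self, name):
        self.name = name
    def execute(self, statement, *args, **kwargs):
        return self.name  # report which engine handled the statement

session = DispatchingSession(FakeEngine('master'), FakeEngine('slave'))
print(session.execute('SELECT * FROM instances'))   # routed to slave
print(session.execute('UPDATE instances SET ...'))  # routed to master
```

The hard part, as the replication-lag caveat in the comment suggests, is deciding when a "read" must still go to the master; that is exactly the logic that today leaks into application code via slave_connection.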

Moreover, I'd like to raise it on the mailing list in advance to 
make sure it is acceptable for oslo.db.

I'd appreciate any comments.

br.
Li Ma




Re: [openstack-dev] Proposal for instance-level snapshots in Nova

2014-08-07 Thread Preston L. Bannister
Did this ever go anywhere?

http://lists.openstack.org/pipermail/openstack-dev/2014-January/024315.html

Looking at what is needed to get backup working in OpenStack, and this
seems the most recent reference.


Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Kashyap Chamarthy
On Thu, Aug 07, 2014 at 03:56:04AM -0700, Jay Pipes wrote:
 On 08/07/2014 02:12 AM, Kashyap Chamarthy wrote:

[. . .]

 
 Excellent suggestion. I've wondered multiple times whether we could
 dedicate a good chunk (or the whole) of a specific release to heads-down
 bug fixing/stabilization. As has been stated elsewhere on this list:
 there's no pressing need for a whole lot of new code submissions; rather,
 we should focus on fixing issues that affect _existing_ users/operators.
 
 There's a whole world of GBP/NFV/VPN/DVR/TLA folks that would beg to differ
 on that viewpoint. :)

Sure. New code submissions might be exciting, and people might not find it
an unalloyed joy to fix someone *else*'s bugs. People can differ, as long
as there's a clear indication of commitment to stand by when bugs occur
and help investigate cross-project issues involving their work -- to me
this shows that they care about the project in the long term, and it gets
you more karma. Not just throw some half-assed code over the wall (not
implying they do) and go about their ways, while users/operators have to
find out the hard way that it's a pain in the neck to even set up, or so
fragile that you sneeze and it all falls apart.

I like Nikola's response[1] and the 'snippet' he posted, which sets the
expectations in a crystal clear language.

 That said, I entirely agree with you and wish efforts to stabilize would
 take precedence over feature work.


  [1] http://lists.openstack.org/pipermail/openstack-dev/2014-August/042299.html

-- 
/kashyap



Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-07 Thread Robert Collins
On 8 August 2014 10:52, Yuriy Taraday yorik@gmail.com wrote:

 I don't dislike rebases because I sometimes use a bit longer version of it.
 I would be glad to avoid them because they destroy history that can help me
 later.

rebase doesn't destroy any history. gc destroys history.

See git reflog - you can recover all of your history in high fidelity
(and there are options to let you control just how deep the rabbit
hole goes).

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



[openstack-dev] [cinder] Bug#1231298 - size parameter for volume creation

2014-08-07 Thread Ganapathy, Sandhya
Hi,

This is to discuss Bug #1231298 - https://bugs.launchpad.net/cinder/+bug/1231298

Bug description : When one creates a volume from a snapshot or another volume, 
the size argument is calculated automatically. In the case of an image it needs 
to be specified though, for something larger than the image min_disk attribute. 
It would be nice to automatically get that size if it's not passed.

That is the current behavior of the Cinder API.

The conclusion reached in this bug is that we need to modify the cinder client 
to accept an optional size parameter (as Cinder's API allows) and 
calculate the size automatically during volume creation from an image.
There is also an opinion that size should not be an optional parameter during 
volume creation - does this mean Cinder's API should be changed to 
make size a mandatory parameter?
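
As an illustration of the automatic calculation the first option implies (hypothetical helper; assumes the image size is reported in bytes and min_disk in GB, matching Glance's units):

```python
# Sketch: default the volume size from image metadata when the user
# omits the size argument.

import math

def default_volume_size(image_size_bytes, image_min_disk_gb):
    """Smallest whole-GB size that holds the image and honors min_disk."""
    size_gb = int(math.ceil(image_size_bytes / float(1024 ** 3)))
    # The volume must hold the image AND satisfy min_disk; never 0 GB.
    return max(size_gb, image_min_disk_gb, 1)

print(default_volume_size(3 * 1024 ** 3, 5))  # 5: min_disk dominates
print(default_volume_size(7 * 1024 ** 3, 5))  # 7: image size dominates
```

Whichever direction is taken, rounding up to whole gigabytes and honoring min_disk are the two constraints the bug description implies.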

Which direction should I take to fix this bug?

Thanks,
Sandhya.


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-08-07 Thread Luke Gorrie
On 8 August 2014 02:06, Michael Still mi...@stillhq.com wrote:

 1: I think that ultimately should live in infra as part of check, but
 I'd be ok with it starting as a third party if that delivers us
 something faster. I'd be happy enough to donate resources to get that
 going if we decide to go with this plan.


Can we cooperate somehow?

We are already working on bringing up a third party CI covering QEMU 2.1
and Libvirt 1.2.7. The intention of this CI is to test the software
configuration that we are recommending for NFV deployments (including
vhost-user feature which appeared in those releases), and to provide CI
cover for the code we are offering for Neutron.

Michele Paolino is working on this and the relevant nova/devstack changes.


Re: [openstack-dev] [Neutron][Nova] API design and usability

2014-08-07 Thread Robert Collins
On 7 August 2014 15:31, Christopher Yeoh cbky...@gmail.com wrote:
 On Thu, 7 Aug 2014 11:58:43 +1200
 Robert Collins robe...@robertcollins.net wrote:
...
 At the moment when cleaning up we don't know if a port was autocreated
 by Nova or was passed to us initially through the API.

That seems like a very small patch to fix - record the source, use
that info on cleanup.

 And there can be
 a long period of time between the initial server creation request and
 failure/cleanup - the API layer responds to the client well before the
 server has successfully started or failed.

Right.

 I think this sort of logic is much better handled above the REST API
 layer- which doesn't have to mean duplicated code in multiple clients

It doesn't? So we'll build a stateful client side datastore, and
provide language bindings to it from Python, Ruby, Java, etc?

 - it can for example be handled by client libraries such as
 python-novaclient or openstackclient and neutron related errors more
 directly returned to the client rather than having them proxied
 all the way through Nova.

--Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-07 Thread Chris Friesen

On 08/06/2014 05:41 PM, Zane Bitter wrote:

On 06/08/14 18:12, Yuriy Taraday wrote:

2. since hacking takes tremendous amount of time (you're doing a Cool
Feature (tm), nothing less) you need to update some code from
master, so
you're just merging master in to your branch (i.e. using Git as you'd
use it normally);



This is not how I'd use Git normally.


Well, as per Git author, that's how you should do with not-CVS. You have
cheap merges - use them instead of erasing parts of history.


This is just not true.

http://www.mail-archive.com/dri-devel@lists.sourceforge.net/msg39091.html

Choice quotes from the author of Git:

* 'People can (and probably should) rebase their _private_ trees'
* 'you can go wild on the git rebase thing'
* 'we use git rebase etc while we work on our problems.'
* 'git rebase is not wrong.'


Also relevant:

...you must never pull into a branch that isn't already
in good shape.

Don't merge upstream code at random points.

keep your own history clean

Chris



Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-07 Thread Subrahmanyam Ongole
I am one of the developers on the project, so I have a strong preference
for option (1).

I think a 3rd option is also possible, which offers a scaled-down version of
the GBP APIs. Contracts could be added in Kilo. Provide EndPoints,
EndPointGroups, Rules and Policies. This is the simpler approach suggested
in the GBP document, where you have a policy with a single rule (classifier +
action) applied between 2 EPGs. This approach minimizes complexity and
therefore saves precious reviewer time. It requires some code reorg,
which may not be preferable to other developers on the project.

Alternatively, contracts could be added as optional vendor extensions in
Juno.



On Wed, Aug 6, 2014 at 8:50 PM, Alan Kavanagh alan.kavan...@ericsson.com
wrote:

 +1
 I believe Pedro has a very valid point here, and that is that the
 community approved the spec and that decision should be respected. It
 makes sense to again clearly denote the process and governance and have
 this noted on the thread Stefano started earlier today.

 /Alan

 -Original Message-
 From: Pedro Marques [mailto:pedro.r.marq...@gmail.com]
 Sent: August-06-14 4:52 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the
 way forward


 On Aug 6, 2014, at 1:27 PM, Jay Pipes jaypi...@gmail.com wrote:
 
  However, it seems to me that the end-goal of the GBP effort is
 *actually* to provide a higher-layer API to Neutron that would essentially
 enable proprietary vendors to entirely swap out all of Neutron core for a
 new set of drivers that spoke proprietary device APIs.
 
  If this is the end-goal, it should be stated more clearly, IMO.

 I believe that people should be considered innocent until proven
 otherwise. Is there a reason to believe there is some hidden reason behind
 this proposal ? It seems to me that this is uncalled for.

 Neutron allows vendors to speak to proprietary device APIs, it was
 designed to do so, AFAIK. It is also possibly to entirely swap out all of
 the Neutron core... the proponents of the group based policy didn't have
 to go through so much trouble if that was their intent. As far as i know
 most plugins talk to a proprietary API.

 I happen to disagree technically with a couple of choices made by this
 proposal; but the blueprint was approved. Which means that i lost the
 argument, or didn't raise it on time, or didn't argue convincingly...
 regardless of the reason, the time to argue about the goal has passed. The
 decision of the community was to approve the spec and that decision should
 be respected.

   Pedro.




-- 

Thanks
OSM
(Subrahmanyam Ongole)


Re: [openstack-dev] [nova] Manage multiple clusters using a single nova service

2014-08-07 Thread Gary Kotton
Hi,
Sorry for taking such a long time to chime in, but these mails were sadly
missed. Please see my inline comments below. My original concerns for the
revert of the service were as follows:

1. What do we do about existing installation. This support was added at
the end of Havana and it is in production.
2. I had concerns regarding the way in which the image cache would be
maintained - that is each compute node has its own cache directory. So
this may have had datastore issues.

Over the last few weeks I have encountered some serious problems with the
multi VC support. This is causing production setups to break
(https://review.openstack.org/108225 is an example - this is due to
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L3368
). This is due to the fact that the node may be updated at random places
in the nova manager code (these may be bugs - but it does not work well
with the multi cluster support). There are too many edge cases here and
the code is not robust enough.

If we do decide to go ahead with dropping the support, then we need to do
the following:
1. Upgrade path: we need to have a well defined upgrade path that will
enable an existing setup to upgrade from I to J (I do not think that we
should leave this till K as there are too many pinpoints with the node
management).
2. We need to make a few tweaks to the image cache path. My original
concern was that each compute node has its own cache directory. After
giving it some though this will be ok as long as we have each compute host
using the same cache directory. The reason for this is that the locking
for image handling is done external on the file system
(https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/vmops.py
#L319). So if we have multiple compute processes running on the same host
then we are good. In addition to this we can make use of a shared files
system and then we can have all compute nodes use the shared file system
for the locking - win win :). If anyone gets to this stage in the thread
then please see a fix for object support and aging
(https://review.openstack.org/111996 - the object updates made earlier in
the cycle caused a few problems - but I guess that the gate does not wait
24 hours to purge instances).
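
The external locking described above can be sketched as a file-system lock shared by every compute process on the host (or on a shared file system). This is a hypothetical helper using POSIX flock, not the actual nova/lockutils code:

```python
# Sketch: interprocess lock via a lock file. Multiple nova-compute
# processes using the same lock_dir serialize on the file, so image
# cache operations don't race even across processes.

import fcntl
import os
from contextlib import contextmanager

@contextmanager
def external_lock(lock_dir, name):
    os.makedirs(lock_dir, exist_ok=True)
    path = os.path.join(lock_dir, name + '.lock')
    fd = os.open(path, os.O_CREAT | os.O_RDWR)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)  # blocks until this process wins
        yield path
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)

# Any process pointing at the same lock_dir serializes image fetches:
with external_lock('/tmp/image-cache-locks', 'image-1234') as lock_path:
    print('holding', lock_path)
    # fetch/verify the cached image here
```

Pointing lock_dir at a shared file system is what extends the same serialization across compute hosts, which is the "win win" Gary describes.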

In short I am in favor of removing the multi cluster support but we need
to do the following:
1. Upgrade path
2. Investigate memory issues with nova compute
3. Tweak image cache path


Thanks
Gary

On 7/15/14, 11:36 AM, Matthew Booth mbo...@redhat.com wrote:

On 14/07/14 09:34, Vaddi, Kiran Kumar wrote:
 Hi,
 
  
 
 In the Juno summit, it was discussed that the existing approach of
 managing multiple VMware Clusters using a single nova compute service is
 not preferred and the approach of one nova compute service representing
 one cluster should be looked into.
 
  
 
 We would like to retain the existing approach (till we have resolved the
 issues) for the following reasons:
 
  
 
 1.   Even though a single service is managing all the clusters,
 logically it is still one compute per cluster. To the scheduler each
 cluster is represented as individual computes. Even in the driver each
 cluster is represented separately.

This is something that would not change with dropping the multi cluster
support.
The only change here is that additional processes will be running (please
see below).

 
  
 
  2.   Since ESXi, unlike KVM, does not allow running the nova-compute
  service on the hypervisor, the service has to be run externally on a
  different server. It's easier from an administration perspective to
  manage a single service than multiple services.

Yes, you have a good point here, but I think that at the end of the day we
need a robust service and that service will be managed by external tools,
for example chef, puppet etc. Unless it is a very small cloud.

  
 
 3.   Every connection to vCenter uses up ~140MB in the driver. If we
 were to manage each cluster by an individual service the memory consumed
 for 32 clusters will be high (~4GB). The newer versions support 64
clusters!

I think that this is a bug and it needs to be fixed. I understand that
this may affect a decision from today to tomorrow but it is not an
architectural issue and can be resolved (and really should be resolved
ASAP). I think that we need to open a bug for this and we should start to
investigate - fixing this will enable whoever is running a service uses
those resources elsewhere :)

 
  
 
 4.   There are existing customer installations that use the existing
  approach, and therefore we should not enforce the new approach until it is simple
 to manage and not resource intensive.
 
  
 
 If the admin wants to use one service per cluster, it can be done with
 the existing driver. In the conf the admin has to specify a single
  cluster instead of a list of clusters. Therefore it's better to give the
 admins the choice rather than enforcing one type of deployment.

This is a real pain point which we should address. I think that we 

Re: [openstack-dev] OpenStack Heat installation guide and Heat utilisation manual

2014-08-07 Thread Qiming Teng

This is good work.  However, I would suggest you check with some
deployment tools such as devstack to understand additional steps needed
for configuring Heat.  For example:

http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/heat#n215

There you can see the role creation work and domain setup steps.
Without these operations, you will run into many weird problems
later on.


Regards,
  - Qiming

On Wed, Aug 06, 2014 at 12:10:47AM +0200, marwen mechtri wrote:
 Hi all,
 
 I want to present our OpenStack Heat installation guide for the Icehouse
 release.
 
 https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/OpenStack-Heat-Installation.rst
 
 A well described manual with illustrative pictures for Heat utilisation and
 HOT template creation is available here:
 
 https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/Create-your-first-stack-with-Heat.rst
 
 Please let us know your opinion about it.
 
 Enjoy!
 
 Marouen Mechtri



Re: [openstack-dev] Which program for Rally

2014-08-07 Thread marc

Hi John,

see below.

Zitat von John Griffith john.griff...@solidfire.com:


I have to agree with Duncan here.  I also don't know if I fully understand
the limit in options.  Stress test seems like it could/should be different


This is correct; Rally and the Tempest stress tests have a different focus. The
stress test framework doesn't do any measurements of performance. This was
done on purpose, since it is quite hard to measure performance with
asynchronous requests all over the place and using polling to measure actions.
Anyway, I see that Rally already has an integration to run Tempest test
cases as a load profile, but it doesn't have a Jenkins job like the stress
test has. In general I think in that area we could benefit from working
closer together and deciding together whether it makes sense to move to Tempest
or keep it completely inside of Rally.

[snip]

Honestly I think who better to write tests for a project than the folks
building and contributing to the project.  At some point IMO the QA team
isn't going to scale.  I wonder if maybe we should be thinking about
proposals for delineating responsibility and goals in terms of functional
testing?


I think we are a bit off-topic now ;) Anyway I do think that moving test
cases closer to the project is a good idea.


Regards
Marc



On Wed, Aug 6, 2014 at 12:25 PM, Duncan Thomas duncan.tho...@gmail.com
wrote:


I'm not following here - you complain about rally being monolithic,
then suggest that parts of it should be baked into tempest - a tool
that is already huge and difficult to get into. I'd rather see tools
that do one thing well and some overlap than one tool to rule them
all.

On 6 August 2014 14:44, Sean Dague s...@dague.net wrote:
 On 08/06/2014 09:11 AM, Russell Bryant wrote:
 On 08/06/2014 06:30 AM, Thierry Carrez wrote:
 Hi everyone,

 At the TC meeting yesterday we discussed Rally program request and
 incubation request. We quickly dismissed the incubation request, as
 Rally appears to be able to live happily on top of OpenStack and would
 benefit from having a release cycle decoupled from the OpenStack
 integrated release.

 That leaves the question of the program. OpenStack programs are created
 by the Technical Committee, to bless existing efforts and teams that
are
 considered *essential* to the production of the OpenStack integrated
 release and the completion of the OpenStack project mission. There are
3
 ways to look at Rally and official programs at this point:

 1. Rally as an essential QA tool
 Performance testing (and especially performance regression testing) is
 an essential QA function, and a feature that Rally provides. If the QA
 team is happy to use Rally to fill that function, then Rally can
 obviously be adopted by the (already-existing) QA program. That said,
 that would put Rally under the authority of the QA PTL, and that raises
 a few questions due to the current architecture of Rally, which is more
 product-oriented. There needs to be further discussion between the QA
 core team and the Rally team to see how that could work and if that
 option would be acceptable for both sides.

 2. Rally as an essential operator tool
 Regular benchmarking of OpenStack deployments is a best practice for
 cloud operators, and a feature that Rally provides. With a bit of a
 stretch, we could consider that benchmarking is essential to the
 completion of the OpenStack project mission. That program could one day
 evolve to include more such operations best practices tools. In
 addition to the slight stretch already mentioned, one concern here is
 that we still want to have performance testing in QA (which is clearly
 essential to the production of OpenStack). Letting Rally primarily be
 an operational tool might make that outcome more difficult.

 3. Let Rally be a product on top of OpenStack
 The last option is to not have Rally in any program, and not consider
it
 *essential* to the production of the OpenStack integrated release or
 the completion of the OpenStack project mission. Rally can happily
exist
 as an operator tool on top of OpenStack. It is built as a monolithic
 product: that approach works very well for external complementary
 solutions... Also, being more integrated in OpenStack or part of the
 OpenStack programs might come at a cost (slicing some functionality out
 of Rally to make it more of a framework and less of a product) that might not
 be what its authors want.

 Let's explore each option to see which ones are viable, and the pros
and
 cons of each.

 My feeling right now is that Rally is trying to accomplish too much at
 the start (both #1 and #2).  I would rather see the project focus on
 doing one of them as best as it can before increasing scope.

 It's my opinion that #1 is the most important thing that Rally can be
 doing to help ensure the success of OpenStack, so I'd like to explore
 the Rally as a QA tool in more detail to start with.

 I want to clarify some things. I don't think that rally in its current
 form belongs 

Re: [openstack-dev] [horizon] Package python-django-pyscss dependencies on CentOS

2014-08-07 Thread Matthias Runge

On 06/08/14 14:01, Timur Sufiev wrote:

Hi!

Here is the link: http://koji.fedoraproject.org/koji/rpminfo?rpmID=5239113

The question is whether the python-pillow package is really needed for
properly compiling CSS from SCSS in Horizon, or whether it is an optional
requirement which can be safely dropped. The problem with
python-pillow is that it pulls in a lot of unneeded deps (like tk, qt,
etc.), which are better avoided.

If you look at the spec [1], you'll see it's a test requirement, not 
a runtime requirement.



Matthias

[1] 
http://pkgs.fedoraproject.org/cgit/python-django-pyscss.git/tree/python-django-pyscss.spec


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Deprecating CONF.block_device_allocate_retries_interval

2014-08-07 Thread Liyi Meng
Hi Michael, 

Not sure if I am getting you right. I think your proposal wouldn't perform 
well in reality. 

Firstly, it is difficult to guess a good timeout that fixes all problems, 
unless you wait forever. Just take the volume creation in my bugfix as an example 
(https://review.openstack.org/#/c/104876/). If a couple of large volumes are 
requested at the same time toward a fast storage backend, each one would 
take a long time to create. It is quite normal to see it take more than 
an hour to create a volume from a 60G image. That is why I propose in the 
bugfix that we guess a total timeout based on the image size.  

Secondly, are you suggesting Eventlet sleep for 15 minutes and then check the 
result of the operation, without doing anything in between? IMHO, this would be 
a very bad experience for the end user, because they would ALWAYS have to wait 
15 minutes to move on, regardless of what operation they have issued. 

BR/Liyi 

 

From: mikalst...@gmail.com [mikalst...@gmail.com] on behalf of Michael Still 
[mi...@stillhq.com]
Sent: Wednesday, 06 August 2014 10:57 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Liyi Meng
Subject: Re: [openstack-dev] [nova] Deprecating 
CONF.block_device_allocate_retries_interval

Maybe we should change how we wait?

I get that we don't want to sit around forever, but perhaps we should
specify a total maximum time to wait instead of a number of iterations
of a loop? Something like 15 minutes should be long enough for
anyone!. Eventlet sleeps are also pretty cheap, so having a bigger
sleep time inside them just means that we overshoot more than we would
otherwise.

Michael
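The total-maximum-wait idea Michael describes could be sketched roughly like this (a hedged illustration, not Nova code; the helper name, the 15-minute default, and plain `time.sleep` standing in for an eventlet sleep are all assumptions):

```python
import time

def wait_for(check_done, max_wait=15 * 60, interval=2):
    """Poll check_done() until it returns True or max_wait seconds pass.

    Returns True on success, False once the total time budget is exhausted.
    Keeping the polling interval short means success is noticed quickly,
    so a caller never sits out the full budget needlessly.
    """
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        if check_done():
            return True
        time.sleep(interval)  # eventlet would monkey-patch this sleep
    return False
```

This also speaks to Liyi's objection: the user waits 15 minutes only in the failure case; a successful operation returns as soon as the next short poll sees it complete.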

On Thu, Aug 7, 2014 at 3:54 AM, Jay Pipes jaypi...@gmail.com wrote:
 Hi Stackers!

 So, Liyi Meng has an interesting patch up for Nova:

 https://review.openstack.org/#/c/104876

 that changes the way that the interval and number of retries is calculated
 for a piece of code that waits around for a block device mapping to become
 active.

 Namely, the patch discards the value of the following configuration options
 *if the volume size is not 0* (which is a typical case):

 * CONF.block_device_allocate_retries_interval
 * CONF.block_device_allocate_retries

 and in their place, instead uses a hard-coded 60 max number of retries and
 calculates a more appropriate interval by looking at the size of the
 volume to be created. The algorithm uses the sensible idea that it will take
 longer to allocate larger volumes than smaller volumes, and therefore the
 interval time for larger volumes should be longer than smaller ones.

 So... here's the question: since this code essentially makes the above two
 configuration options obsolete for the majority of cases (where volume size
 is not 0), should we do one of the following?

 1) We should just deprecate both the options, with a note in the option help
 text that these options are not used when volume size is not 0, and that the
 interval is calculated based on volume size

 or

 2) We should deprecate the CONF.block_device_allocate_retries_interval
 option only, and keep the CONF.block_device_allocate_retries configuration
 option as-is, changing the help text to read something like Max number of
 retries. We calculate the interval of the retry based on the size of the
 volume.

 I bring this up on the mailing list because I think Liyi's patch offers an
 interesting future direction to the way that we think about our retry
 approach in Nova. Instead of having hard-coded or configurable interval
 times, I think Liyi's approach of calculating the interval length based on
 some input values is a good direction to take.

 Thoughts?

 -jay
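The size-based interval idea described above could be sketched roughly as follows (a hypothetical illustration only; the 60 max retries matches the description above, but the 60 s/GB rate and the 1-120 second bounds are invented for the example):

```python
MAX_RETRIES = 60  # hard-coded cap on the number of polls, per the thread


def allocate_retry_interval(volume_size_gb, secs_per_gb=60,
                            min_interval=1, max_interval=120):
    """Return the number of seconds to sleep between allocation polls.

    Larger volumes get a longer interval, so the total wait budget
    (MAX_RETRIES * interval) grows with the expected allocation time.
    """
    if volume_size_gb <= 0:
        # Size-0 volumes would keep using the configured interval instead.
        return min_interval
    estimated_total = volume_size_gb * secs_per_gb  # rough allocation time
    interval = int(round(estimated_total / float(MAX_RETRIES))) or 1
    return max(min_interval, min(max_interval, interval))
```

With these invented numbers a 60 GB volume is polled every 60 seconds for a total budget of one hour (60 retries x 60 s), matching the rough timescale Liyi mentions for a 60G image, while tiny volumes fall back to the 1-second floor.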

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and theway forward

2014-08-07 Thread Sumit Naiksatam
And while we are on this, just wanted to remind all those interested
to attend the weekly GBP meeting later today:
https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy

On Wed, Aug 6, 2014 at 8:12 PM, Mike Cohen co...@noironetworks.com wrote:
 It's good to see such a lively debate about this topic.  With the disclaimer
 of someone who has worked on this project, I have a strong preference
 towards Option 1 as well (i.e. merging it in the tree).  We’ve actually
 already heard from users on this thread who want to use this ([1] and [2]),
 and others who have at least expressed some interest ([3]).  Making it easier
 for them to consume it is very much worth the effort.

 You’ll also see a strong willingness from our team to compromise on things
 like naming conventions (endpoints can certainly become something else to
 avoid confusion for example) and labels the community wants to place on this
 in terms of support (maybe a “beta” or “preview” disclaimer) so it does not
 send the wrong message to users.

 From our group’s perspective, we’re happy to see the discussion occur so
 everyone can weigh in but we also are seeking *closure* on this topic,
 especially considering we have operators asking for it and we have limited
 time to actually merge it in Juno-3.  Hopefully we can achieve this closure
 asap so we can move forward with our work (both on this project and other
 Neutron projects).

 Thanks,
 Mike

 [1]
 http://lists.openstack.org/pipermail/openstack-dev/2014-August/042036.html
 [2]
 http://lists.openstack.org/pipermail/openstack-dev/2014-August/042043.html
 [3]
 http://lists.openstack.org/pipermail/openstack-dev/2014-August/042180.html


 From: Stephen Wong stephen.kf.w...@gmail.com

 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Wednesday, August 6, 2014 at 9:03 PM

 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the
 way forward

 Hi,

 Thanks to Armando for steering the discussion back to the original
 intent.


 On Wed, Aug 6, 2014 at 3:56 PM, Armando M. arma...@gmail.com wrote:


 On 6 August 2014 15:47, Kevin Benton blak...@gmail.com wrote:

 I think we should merge it and just prefix the API for now with
 '/your_application_will_break_after_juno_if_you_use_this/'


 And you make your call based on what pros and cons exactly, if I may ask?

 Let me start:

 Option 1:
   - pros
 - immediate delivery vehicle for consumption by operators


 Buried inside these 100+ posts are posts from two OpenStack users
 pleading for their desire to use GBP APIs for their Juno deployments. While
 that is a small sample size, it does prove that there is legitimate
 interest from our user base in getting their hands on this feature.

 User feedback is the best way to evolve the APIs moving forward - as
 long as these APIs/implementation do not jeopardize the stability of
 Neutron. And as many folks in the thread had pointed out, the GBP
 implementation currently has really gone the extra mile to ensure it does
 NOT do that.



   - cons
 - code is a burden from a number of standpoints (review, test, etc)


 This is a legitimate concern - that said, if you take a look at the
 first patch:

 https://review.openstack.org/#/c/95900/

 there are 30 human reviewers (non-CI) signed up to review the patch at
 this time, and among them 9 Neutron core members (8 if you don't count
 Sumit, who is the author), as well as a Nova core. From the reception, I
 believe the community does not generally treat reviewing GBP related patches
 as a burden, but likely as an item of interest. Additionally, with such a
 broad and strong community base willing to get involved in reviewing the code, I
 think these many eyes will hopefully help lessen the burden on
 Neutron cores to review and merge this set of patches.




 Option 2:
   - pros
 - enable a small set of Illuminati to iterate faster



  As a subgroup, we have already iterated the model and APIs for about a
  year, with around 40 IRC meetings for community discussions, a PoC demo that
  was presented to an audience of about 300 back at J-Summit, and actual
  implementations in gerrit for months now. Indeed, with about 35+ people
  responding to this thread, I have yet to see anyone claim that the GBP
  model and APIs as they are now are crap and we have to scrap them and rethink. I
  would like to think that we are at a point where we will do phase-by-phase
  enhancements - as should practically any other APIs in OpenStack - rather
  than rapid iterations within a cycle. While we already have some user
  feedback, we would love to get more user and developer community
  feedback to evolve GBP to better fit their needs, and stackforge
  unfortunately does not serve that purpose well.



   - cons
 - integration burden with other OpenStack projects 

Re: [openstack-dev] [Octavia] Weekly meetings resuming + agenda

2014-08-07 Thread Stephen Balukoff
Hi Brandon,

I don't think we've set a specific date to make the transition to IRC
meetings. Is there a particular urgency about this that we should be aware
of?

Stephen


On Wed, Aug 6, 2014 at 7:58 PM, Brandon Logan brandon.lo...@rackspace.com
wrote:

 When is the plan to move the meeting to IRC?

 On Wed, 2014-08-06 at 15:30 -0700, Stephen Balukoff wrote:
  Action items from today's Octavia meeting:
 
 
  1. We're going to hold off for a couple days on merging the
  constitution and preliminary road map to give people (and in
  particular Ebay) a chance to review and comment.
  2. Stephen is going to try to get Octavia v0.5 design docs into gerrit
  review by the end of the week, or early next week at the latest.
 
  3. If those with specific networking concerns could codify this and/or
  figure out a way to write these down and share with the list, that
  would be great. This is going to be important to ensure that our
  operator-grade load balancer solution can actually meet the needs of
  the operators developing it.
 
  Thanks,
 
  Stephen
 
 
 
 
 
 
 
 
  On Tue, Aug 5, 2014 at 2:34 PM, Stephen Balukoff
  sbaluk...@bluebox.net wrote:
  Hello!
 
 
  We plan on resuming weekly meetings to discuss things related
  to the Octavia project starting tomorrow: August 6th at
  13:00PDT (20:00UTC). In order to facilitate high-bandwidth
  discussion as we bootstrap the project, we have decided to
  hold these meetings via webex, with the plan to eventually
  transition to IRC. Please contact me directly if you would
  like to get in on the webex.
 
 
  Tomorrow's meeting agenda is currently as follows:
 
 
  * Discuss Octavia constitution and project direction documents
  currently under gerrit review:
  https://review.openstack.org/#/c/110563/
 
 
 
  * Discuss reviews of design proposals currently under gerrit
  review:
  https://review.openstack.org/#/c/111440/
  https://review.openstack.org/#/c/111445/
 
 
  * Discuss operator network topology requirements based on data
  currently being collected by HP, Rackspace and Blue Box.
  (Other operators are certainly welcome to collect and share
  their data as well! I'm looking at you, Ebay. ;) )
 
 
  Please feel free to respond with additional agenda items!
 
 
  Stephen
 
 
  --
  Stephen Balukoff
  Blue Box Group, LLC
  (800)613-4305 x807
 
 
 
 
  --
  Stephen Balukoff
  Blue Box Group, LLC
  (800)613-4305 x807
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Minutes from 8/6/2014 meeting

2014-08-07 Thread Stephen Balukoff
Wow, Trevor! Thanks for capturing all that!


On Wed, Aug 6, 2014 at 9:47 PM, Trevor Vardeman 
trevor.varde...@rackspace.com wrote:

 Agenda items are numbered, and topics, as discussed, are described beneath
 in list format.

 1) Octavia Constitution and Project Direction Documents (Road map)
 a) Constitution and Road map will potentially be adopted after another
 couple days; providing those who were busy more time to review the
 information

 2) Octavia Design Proposals
 a) Difference between version 0.5 and 1.0 isn't huge
 b) Version 2 has many network topology changes and Layer 4 routing
 + This includes N node Active-Active
 + Would like to avoid Layer 2 connectivity with Load Balancers
 (included in version 1 however)
 + Layer router driver
 + Layer router controller
 + Long term solution
 c) After refining Version 1 document (with some scrutiny) all changes
 will be propagated to the Version 2 document
 d) Version 0.5 is unpublished
 e) All control layer, anything connected to the intermediate message
 bus in version 1, will be collapsed down to 1 daemon.
 + No scale-able control, but scale-able service delivery
 + Version 1 will be the first large operator compatible version,
 that will have both scale-able control and scale-able service delivery
 + 0.5 will be a good start
 - laying out ground work
 - rough topology for the end users
 - must be approved by the networking teams for each
 contributing company
 f) The portions under control of neutron lbaas are the User API and the
 driver (for neutron lbaas)
 g) If neutron LBaaS is a sufficient front-end (user API doesn't suck),
 then Octavia will be kept as a vendor driver
 h) Potentially including a REST API on top of Octavia
 + Octavia is initially just a vendor driver, no real desire for
 another API in front of Octavia
 + If someone wants it, the work is trivial and can be done in
 another project at another time
 i) Octavia should have a loose coupling with Neutron; use a shim for
 network connectivity (one specifically for Neutron communication in the
 start)
 + This is going to hold any dirty hacks that would be required
 to get something done, keeping Octavia clean
 - Example: changing the mac address on a port

 3) Operator Network Topology Requirements
 a) One requirement is floating IPs.
 b) IPv6 is in demand, but is currently not supported reliably on
 Neutron
 + IPv6 would be represented as a different load balancer entity,
 and possibly include co-location with another Load Balancer
 c) Network interface plug-ability (potentially)
 d) Sections concerning front-end connectivity should be forwarded to
 each company's network specialists for review
 + Share findings in the mailing list, and dissect the proposals
 with the information and comment what requirements are needing added etc.

 4) HA/Failover Options/Solutions
 a) Rackspace may have a solution to this, but the conversation will be
 pushed off to the next meeting (at least)
 + Will gather more information from another member in Rackspace to
 provide to the ML for initial discussions
 b) One option for HA:  Spare pool option (similar to Libra)
 + Poor recovery time is a big problem
 c) Another option for HA:  Active/Passive
 + Bluebox uses one active and one passive configuration, and has
 sub-second fail over.  However, it is not resource-efficient

 Questions:
 Q:  What is the expectation for a release time-frame
 A:  Wishful thinking; Octavia version 0.5 beta for Juno (probably not, but
 would be awesome to push for that)

 Notes:
  + We need to pressure the Neutron core reviewers to review the Neutron
 LBaaS changes to get merges.
  + Version 2 front-end topology is different than the Version 1.  Please
 review them individually, and thoroughly


 PS.  I re-wrote most of the information from the recording (thanks again
 Doug).  I have one question for everyone: should I just email this out
 after each meeting to the Octavia mailing list, or should I also add it to
 a page in an Octavia wiki for Meeting Notes/Minutes or something for review
 by anyone?  What are your thoughts?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Minutes from 8/6/2014 meeting

2014-08-07 Thread Stephen Balukoff
On where to capture notes like this long-term:  I would say the wiki is
more searchable for now. When we make the transition to IRC meetings, then
the meeting bots will capture minutes and transcripts in the usual way and
we can link to these from the wiki.


On Thu, Aug 7, 2014 at 1:29 AM, Stephen Balukoff sbaluk...@bluebox.net
wrote:

 Wow, Trevor! Thanks for capturing all that!


 On Wed, Aug 6, 2014 at 9:47 PM, Trevor Vardeman 
 trevor.varde...@rackspace.com wrote:

 Agenda items are numbered, and topics, as discussed, are described
 beneath in list format.

 1) Octavia Constitution and Project Direction Documents (Road map)
 a) Constitution and Road map will potentially be adopted after
 another couple days; providing those who were busy more time to review the
 information

 2) Octavia Design Proposals
 a) Difference between version 0.5 and 1.0 isn't huge
 b) Version 2 has many network topology changes and Layer 4 routing
 + This includes N node Active-Active
 + Would like to avoid Layer 2 connectivity with Load Balancers
 (included in version 1 however)
 + Layer router driver
 + Layer router controller
 + Long term solution
 c) After refining Version 1 document (with some scrutiny) all changes
 will be propagated to the Version 2 document
 d) Version 0.5 is unpublished
 e) All control layer, anything connected to the intermediate message
 bus in version 1, will be collapsed down to 1 daemon.
 + No scale-able control, but scale-able service delivery
 + Version 1 will be the first large operator compatible version,
 that will have both scale-able control and scale-able service delivery
 + 0.5 will be a good start
 - laying out ground work
 - rough topology for the end users
 - must be approved by the networking teams for each
 contributing company
 f) The portions under control of neutron lbaas is the User API and
 the driver (for neutron lbaas)
 g) If neutron LBaaS is a sufficient front-end (user API doesn't
 suck), then Octavia will be kept as a vendor driver
 h) Potentially including a REST API on top of Octavia
 + Octavia is initially just a vendor driver, no real desire for
 another API in front of Octavia
 + If someone wants it, the work is trivial and can be done in
 another project at another time
 i) Octavia should have a loose coupling with Neutron; use a shim for
 network connectivity (one specifically for Neutron communication in the
 start)
 + This is going to hold any dirty hacks that would be required
 to get something done, keeping Octavia clean
 - Example: changing the mac address on a port

 3) Operator Network Topology Requirements
 a) One requirement is floating IPs.
 b) IPv6 is in demand, but is currently not supported reliably on
 Neutron
 + IPv6 would be represented as a different load balancer entity,
 and possibly include co-location with another Load Balancer
 c) Network interface plug-ability (potentially)
 d) Sections concerning front-end connectivity should be forwarded to
 each company's network specialists for review
 + Share findings in the mailing list, and dissect the proposals
 with the information and comment what requirements are needing added etc.

 4) HA/Failover Options/Solutions
 a) Rackspace may have a solution to this, but the conversation will
 be pushed off to the next meeting (at least)
 + Will gather more information from another member in Rackspace
 to provide to the ML for initial discussions
 b) One option for HA:  Spare pool option (similar to Libra)
 + Poor recovery time is a big problem
 c) Another option for HA:  Active/Passive
 + Bluebox uses one active and one passive configuration, and has
 sub-second fail over.  However is not resource-sufficient

 Questions:
 Q:  What is the expectation for a release time-frame
 A:  Wishful thinking; Octavia version 0.5 beta for Juno (probably not,
 but would be awesome to push for that)

 Notes:
  + We need to pressure the Neutron core reviewers to review the Neutron
 LBaaS changes to get merges.
  + Version 2 front-end topology is different than the Version 1.  Please
 review them individually, and thoroughly


 PS.  I re-wrote most of the information from the recording (thanks again
 Doug).  I have one question for everyone: should I just email this out
 after each meeting to the Octavia mailing list, or should I also add it to
 a page in an Octavia wiki for Meeting Notes/Minutes or something for review
 by anyone?  What are your thoughts?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Stephen Balukoff
 Blue Box Group, LLC
 (800)613-4305 x807




-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807

Re: [openstack-dev] [Neutron][QA] Enabling full neutron Job

2014-08-07 Thread Salvatore Orlando
I had to put the patch back on WIP because yesterday a bug causing a 100%
failure rate slipped in.
It should be an easy fix, and I'm already working on it.
Situations like this, exemplified by [1], are a bit frustrating for all the
people working on improving neutron quality.
Now, if you allow me a little rant: as Neutron is receiving a lot of
attention for all the ongoing discussion regarding this group policy stuff,
would it be possible for us to receive a bit of attention to ensure both
the full job and the grenade one are switched to voting before the juno-3
review crunch?

We've already had the attention of the QA team; it would probably be good if
we could get the attention of the infra core team to ensure:
1) the jobs are also deemed by them stable enough to be switched to voting
2) the relevant patches for openstack-infra/config are reviewed

Regards,
Salvatore

[1]
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwie3UnbWVzc2FnZSc6IHUnRmxvYXRpbmcgaXAgcG9vbCBub3QgZm91bmQuJywgdSdjb2RlJzogNDAwfVwiIEFORCBidWlsZF9uYW1lOlwiY2hlY2stdGVtcGVzdC1kc3ZtLW5ldXRyb24tZnVsbFwiIEFORCBidWlsZF9icmFuY2g6XCJtYXN0ZXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiMTcyODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNzQwMDExMDIwNywibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==


On 23 July 2014 14:59, Matthew Treinish mtrein...@kortar.org wrote:

 On Wed, Jul 23, 2014 at 02:40:02PM +0200, Salvatore Orlando wrote:
  Here I am again bothering you with the state of the full job for Neutron.
 
  The patch for fixing an issue in nova's server external events extension
  merged yesterday [1]
  We do not yet have enough data points to make a reliable assessment, but
  out of 37 runs since the patch merged we had only 5 failures, which puts
  the failure rate at about 13%
 
  This is ugly compared with the current failure rate of the smoketest
 (3%).
  However, I think it is good enough to start making the full job voting at
  least for neutron patches.
  Once we'll be able to bring down failure rate to anything around 5%, we
 can
  then enable the job everywhere.

 I think that sounds like a good plan. I'm also curious how the failure
 rates compare to the other non-neutron jobs; that might be a useful
 comparison too for deciding when to flip the switch everywhere.

 
  As much as I hate asymmetric gating, I think this is a good compromise
 for
  avoiding developers working on other projects are badly affected by the
  higher failure rate in the neutron full job.

 So we discussed this during the project meeting a couple of weeks ago [3]
 and
 there was a general agreement that doing it asymmetrically at first would
 be
 better. Everyone should be wary of the potential harms with doing it
 asymmetrically and I think priority will be given to fixing issues that
 block
 the neutron gate should they arise.

  I will therefore resume work on [2] and remove the WIP status as soon as
 I
  can confirm a failure rate below 15% with more data points.
 

 Thanks for keeping on top of this Salvatore. It'll be good to finally be at
 least partially gating with a parallel job.

 -Matt Treinish

 
  [1] https://review.openstack.org/#/c/103865/
  [2] https://review.openstack.org/#/c/88289/
 [3]
 http://eavesdrop.openstack.org/meetings/project/2014/project.2014-07-08-21.03.log.html#l-28

 
 
  On 10 July 2014 11:49, Salvatore Orlando sorla...@nicira.com wrote:
 
  
  
  
   On 10 July 2014 11:27, Ihar Hrachyshka ihrac...@redhat.com wrote:
  
   -BEGIN PGP SIGNED MESSAGE-
   Hash: SHA512
  
   On 10/07/14 11:07, Salvatore Orlando wrote:
The patch for bug 1329564 [1] merged about 11 hours ago. From [2]
it seems there has been an improvement on the failure rate, which
seem to have dropped to 25% from over 40%. Still, since the patch
merged there have been 11 failures already in the full job out of
42 jobs executed in total. Of these 11 failures: - 3 were due to
problems in the patches being tested - 1 had the same root cause as
bug 1329564. Indeed the related job started before the patch merged
but finished after. So this failure doesn't count. - 1 was for an
issue introduced about a week ago which actually causing a lot of
failures in the full job [3]. Fix should be easy for it; however
given the nature of the test we might even skip it while it's
fixed. - 3 were for bug 1333654 [4]; for this bug discussion is
going on on gerrit regarding the most suitable approach. - 3 were
for lock wait timeout errors. Several people in the community are
already working on them. I hope this will raise the profile of this
issue (maybe some might think it's just a corner case as it rarely
causes failures in smoke jobs, whereas the truth is that error
occurs but it does not cause job failure because the job isn't
parallel).
  
   Can you give directions on where to find those lock timeout failures?
   I'd like to check logs to see 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Thierry Carrez
Stefano Maffulli wrote:
 On Wed 06 Aug 2014 02:10:23 PM PDT, Michael Still wrote:
  - we rate limit the total number of blueprints under code review at
 any one time to a fixed number of slots. I secretly prefer the term
 runway, so I am going to use that for the rest of this email. A
 suggested initial number of runways was proposed at ten.
 
 oh, I like the 'slots/runway' model. Sounds to me like kanban (in 
 the Toyota sense, not the hipster developer sense).
 
 A light in my head just went on.
 
 Let me translate what you're thinking about in other terms: the 
 slot/runway model would switch what is now a push model into a pull 
 model. Currently we have patches coming in, pushed up for review. We 
 have then on gerrit reviewers and core reviewers shuffling through 
 these changesets, doing work and approve/comment. The reviewers have 
 little to no way to notice when they're overloaded and managers have no 
 way either. There is no way to identify when the process is suffering, 
 slowing down or not satisfying demand, if not when the backlog blows 
 up. As recent discussions demonstrate, this model is failing under our 
 growth.
 
 By switching to a model where we have a set of slots/runways (buckets, 
 in Toyota's terminology) reviewers would have a clear way to *pull* new 
 reviews into their workstations to be processed. It's as simple as a 
 supermarket aisle: when there is no more pasta on the shelf, a clerk 
 goes to the back room and gets more pasta to restock the shelf. There 
 is no sophisticated algorithm to predict demand: it's the demand of 
 pasta that drives new pull requests (of pasta or changes to review).
 
 This pull mechanism would help make it very visible where the 
 bottlenecks are. At Toyota, for example, the amount of kanbans is the 
 visible way to understand the capacity of the plant. The amount of 
 slots/runways would probably give us a similar overview of the capacity 
 of each project and give us tools to solve bottlenecks before they 
 become emergencies.

As an ex factory IT manager, I feel compelled to comment on that :)
You're not really introducing a successful Kanban here, you're just
clarifying that there should be a set number of workstations.

Our current system is like a gigantic open space with hundreds of
half-finished pieces, and a dozen workers keep on going from one to
another with no strong pattern. The proposed system is to limit the
number of half-finished pieces fighting for the workers' attention at any
given time, by setting a clear number of workstations.

A true Kanban would be an interface between developers and reviewers,
where reviewers define what type of change they have to review to
complete production objectives, *and* developers would strive to produce
enough to keep the kanban above the red line, but not too much (which
would be piling up waste).

Without that discipline, Kanbans are useless. Unless the developers
adapt what they work on based on release objectives, you don't really
reduce waste/inventory at all, it just piles up waiting for available
runway slots. As I said in my original email, the main issue here is
the imbalance between too many people proposing changes and not enough
people caring about the project itself enough to be trusted with core
reviewer rights.

This proposal is not solving that, so it is not the miracle cure that
will end all developers frustration, nor is it turning our push-based
model into a sane pull-based one. The only way to be truly pull-based is
to define a set of production objectives and have those objectives
trickle up to the developers so that they don't work on something else.
The solution is about setting release cycle goals and strongly
communicating that everything out of those goals is clearly priority 2.

Now I'm not saying this is a bad idea. Having too many reviews to
consider at the same time dilutes review attention to the point where we
don't finalize anything. Having runway slots makes sure there is a focus
on a limited set of features at a time, which increases the chances that
those get finalized.
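The slot mechanics being debated here can be sketched as a bounded work-in-progress queue (a toy model, purely illustrative, not a proposal for actual tooling):

```python
from collections import deque

# Toy model of the "runway slots" proposal: at most `slots` blueprints are
# under active review at once, and a new one is pulled from the backlog
# only when a slot frees up -- a pull system that bounds work in progress.
def process(backlog, slots):
    queue = deque(backlog)
    active, finished = [], []
    max_wip = 0
    while queue or active:
        # Pull from the backlog into free runways.
        while queue and len(active) < slots:
            active.append(queue.popleft())
        max_wip = max(max_wip, len(active))
        # One review completes, freeing a runway.
        finished.append(active.pop(0))
    return finished, max_wip

done, wip = process(['bp1', 'bp2', 'bp3', 'bp4'], slots=2)
print(done, wip)  # -> ['bp1', 'bp2', 'bp3', 'bp4'] 2
```

Whatever the slot count, review attention is never spread over more than `slots` items at once, which is the focusing effect described above.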

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] nova-network stuck at get semaphores lock when startup

2014-08-07 Thread Alex Xu

When I start up nova-network, it gets stuck trying to acquire the lock for ebtables.

@utils.synchronized('ebtables', external=True)
def ensure_ebtables_rules(rules, table='filter'):
.

Checking the code, I found that when utils.synchronized is invoked without the
lock_path parameter, it falls back to a POSIX semaphore.

But a POSIX semaphore isn't released even if the holding process crashes.
Should we fix this? I see a lot of calls to synchronized without lock_path.

Thanks
Alex
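For context on why the file-backed variant behaves differently: a lock taken via lock_path is a file lock, which the kernel drops automatically when the holding process exits, even uncleanly, whereas a POSIX semaphore's count persists. A minimal POSIX-only sketch (illustrative, not Nova's code):

```python
import fcntl
import os
import tempfile

lock_path = os.path.join(tempfile.gettempdir(), 'demo-ebtables.lock')

pid = os.fork()
if pid == 0:
    # Child: take the file lock, then die abruptly without releasing it.
    fd = os.open(lock_path, os.O_CREAT | os.O_RDWR)
    fcntl.flock(fd, fcntl.LOCK_EX)
    os._exit(1)  # simulate a crash; no explicit unlock

os.waitpid(pid, 0)

# Parent: the lock is free again, because the kernel released it when the
# child's file descriptor was closed on exit. A crashed holder of a POSIX
# semaphore would instead leave the semaphore taken indefinitely.
fd = os.open(lock_path, os.O_CREAT | os.O_RDWR)
fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)  # would raise if still held
print('lock acquired after holder crashed')
```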


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Package python-django-pyscss dependencies on CentOS

2014-08-07 Thread Timur Sufiev
Thanks,

now it is clear that this requirement can be safely dropped.

On Thu, Aug 7, 2014 at 11:33 AM, Matthias Runge mru...@redhat.com wrote:
 On 06/08/14 14:01, Timur Sufiev wrote:

 Hi!

 Here is the link: http://koji.fedoraproject.org/koji/rpminfo?rpmID=5239113

 The question is whether the python-pillow package is really needed for
 properly compiling CSS from SCSS in Horizon, or is it an optional
 requirement that can be safely dropped? The problem with
 python-pillow is that it pulls in a lot of unneeded deps (like tk, qt,
 etc.), which are better avoided.

 If you're looking at the spec[1], you'd see it's a test requirement, not a
 runtime requirement.


 Matthias

 [1]
 http://pkgs.fedoraproject.org/cgit/python-django-pyscss.git/tree/python-django-pyscss.spec

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Timur Sufiev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Kashyap Chamarthy
On Thu, Aug 07, 2014 at 07:10:23AM +1000, Michael Still wrote:
 On Wed, Aug 6, 2014 at 2:03 AM, Thierry Carrez thie...@openstack.org wrote:
 
  We seem to be unable to address some key issues in the software we
  produce, and part of it is due to strategic contributors (and core
  reviewers) being overwhelmed just trying to stay afloat of what's
  happening. For such projects, is it time for a pause ? Is it time to
  define key cycle goals and defer everything else ?

[. . .]

 We also talked about tweaking the ratio of 'tech debt' runways vs
 'feature' runways. So, perhaps every second release is focussed on
 burning down tech debt and stability, whilst the others are focussed
 on adding features.

 I would suggest if we do such a thing, Kilo should be a 'stability'
 release.

Excellent suggestion. I've wondered multiple times whether we could
dedicate a good chunk (or the whole) of a specific release to heads-down
bug fixing/stabilization. As has been stated elsewhere on this list:
there's no pressing need for a whole lot of new code submissions; rather,
we should focus on fixing issues that affect _existing_ users/operators.
 
-- 
/kashyap

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-network stuck at get semaphores lock when startup

2014-08-07 Thread Chen CH Ji
Just to clarify: I think your case is that you ran nova-network, then ^C'd or
abnormally shut it down, and it may have been holding a semaphore at the time
without releasing it, right?

I guess all components other than nova have this problem too, so maybe
removing this [nova] tag can get more input ...


Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Alex Xu x...@linux.vnet.ibm.com
To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org,
Date:   08/07/2014 04:54 PM
Subject:[openstack-dev] [nova] nova-network stuck at get semaphores
lockwhen startup



When I startup nova-network, it stuck at trying get lock for ebtables.

@utils.synchronized('ebtables', external=True)
def ensure_ebtables_rules(rules, table='filter'):
 .

Checking the code found that invoke utils.synchronized without parameter
lock_path, the code will try to use
posix semaphore.

But posix semaphore won't release even the process crashed. Should we
fix it? I saw a lot of call for synchronized
without lock_path.

Thanks
Alex


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-network stuck at get semaphores lock when startup

2014-08-07 Thread Alex Xu

On 2014年08月07日 17:13, Chen CH Ji wrote:


Just to clarify , I think your case would be run nova-network ,then ^C 
or abnormally shutdown it
and it might be during  the period of holding a semaphore without 
releasing it, right?



yes, you are right. Thanks for the clarification.

guess all component other than nova have this problem ? so maybe 
remove this [nova] can get more input ...



yes




Best Regards!

Kevin (Chen) Ji ? ?

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian 
District, Beijing 100193, PRC




From: Alex Xu x...@linux.vnet.ibm.com
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org,

Date: 08/07/2014 04:54 PM
Subject: [openstack-dev] [nova] nova-network stuck at get semaphores 
lock when startup






When I startup nova-network, it stuck at trying get lock for ebtables.

@utils.synchronized('ebtables', external=True)
def ensure_ebtables_rules(rules, table='filter'):
.

Checking the code found that invoke utils.synchronized without parameter
lock_path, the code will try to use
posix semaphore.

But posix semaphore won't release even the process crashed. Should we
fix it? I saw a lot of call for synchronized
without lock_path.

Thanks
Alex


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] introducing cyclops

2014-08-07 Thread Piyush Harsh
Dear All,

Let me use my first post to this list to introduce Cyclops and initiate a
discussion towards possibility of this platform as a future incubated
project in OpenStack.

We at Zurich University of Applied Sciences have an open-source Python
project (Apache 2 license) that aims to provide a platform for
rating-charging-billing over Ceilometer. We call it Cyclops (A Charging
platform for OPenStack CLouds).

The initial proof of concept code can be accessed here:
https://github.com/icclab/cyclops-web 
https://github.com/icclab/cyclops-tmanager

*Disclaimer: This is not the best code out there, but will be refined and
documented properly very soon!*

A demo video from really early days of the project is here:
https://www.youtube.com/watch?v=ZIwwVxqCio0 and since this video was made,
several bug fixes and features were added.

The idea presentation was done at Swiss Open Cloud Day at Bern and the talk
slides can be accessed here:
http://piyush-harsh.info/content/ocd-bern2014.pdf, and more recently the
research paper on the idea was published in 2014 World Congress in Computer
Science (Las Vegas), which can be accessed here:
http://piyush-harsh.info/content/GCA2014-rcb.pdf

I was wondering if our effort is something that the OpenStack
Ceilometer/Telemetry release team would be interested in?

I do understand that rating-charging-billing services may initially have
been left out by choice, as they would need to be tightly coupled with
existing CRM/billing systems, but Cyclops' (intended) design is a distributed,
service-oriented architecture, with each component allowing for possible
integration with external software via REST APIs. Cyclops is therefore
CRM/billing-platform agnostic by design, although the Cyclops PoC
implementation does include a basic bill generation module.

We in our team are committed to this development effort and we will have
resources (interns, students, researchers) work on features and improve the
code-base for a foreseeable number of years to come.

Do you see a chance that our efforts could make it in as an incubated project
in OpenStack within Ceilometer?

I really would like to hear back from you, comments, suggestions, etc.

Kind regards,
Piyush.
___
Dr. Piyush Harsh, Ph.D.
Researcher, InIT Cloud Computing Lab
Zurich University of Applied Sciences (ZHAW)
[Site] http://piyush-harsh.info
[Research Lab] http://www.cloudcomp.ch/
Fax: +41(0)58.935.7403 GPG Keyid: 9C5A8838
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] call for operator-focused docs

2014-08-07 Thread Vinay B S (vinbs)
Hi Devananda,

I have been working on the documentation of some of the areas you listed. I 
have updated the bug https://bugs.launchpad.net/ironic/+bug/1323589 by 
including the other requirements which I haven't documented yet.

Regards,
Vinay

-Original Message-
From: Devananda van der Veen [mailto:devananda@gmail.com] 
Sent: Thursday, August 07, 2014 2:02 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Ironic] call for operator-focused docs

Hi all!

Short version: if you have operational knowledge setting up Ironic (either the 
Icehouse release or current trunk), you can help out a lot right now by sharing 
that knowledge.

Long version...

I've seen an influx of folks interested in deploying Ironic over the past few 
months, which is fantastic and awesome and also somewhat scary -- there are 
clearly people using Ironic who are not also part of its developer community! 
It has also become increasingly apparent that, while our developer docs are 
good (or at least good enough), our operational docs leave a lot to be 
desired. Some folks even went back to the old Nova Baremetal wiki, which is a 
terrible thing because most of what it says is similar enough to Ironic to look 
right, but wrong. (I have updated that page to state its deprecated status in 
bolder text, and will archive it once that driver is actually removed from Nova.)

During the Icehouse cycle, the core review team waited until close to the 
release to write docs. While we were focused on developer docs, we also put 
together some operational docs (kudos to the folks who contributed!). That 
process worked well since it was our first release, and as developers, it's 
easy for us to iterate on the developer docs. However, hindsight being what it 
is, I don't think we knew enough about what users and operators would need, and 
now I think we will be doing our community a disservice if we don't provide 
more operator-focused docs soon.

The areas where I'm currently seeing a lot of questions from operators are:

- building the deploy kernel & ramdisk pair // configuring ironic to use them
- how to enroll nodes // what information needs to be supplied
- relationship between ironic and nova scheduler (flavors, capabilities, etc)
- example/suggested neutron configuration for provisioning physical machines
- recommended deployment topology and rationale (service co-location or 
isolation)
- how to run the nova.virt.ironic driver alongside a traditional hypervisor 
driver

A lot of this is done by the automation tooling we use every day (devstack and 
tripleo). However, neither of these is a replacement for a human-readable set 
of instructions to help a smart person figure out what they're supposed to do, 
especially if they just want to start using Ironic with their existing 
OpenStack deployment.

So -- if you're running Ironic (outside of devstack or tripleo), or are in the 
process of figuring that out (and maybe already asking questions in IRC), 
please consider proposing something to the in-tree doc pages, found here:

http://git.openstack.org/cgit/openstack/ironic/tree/doc/source/deploy


Thanks in advance,
Devananda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-07 Thread Thierry Carrez
Armando M. wrote:
 This thread is moving so fast I can't keep up!
 
 The fact that troubles me is that I am unable to grasp how we move
 forward, which was the point of this thread to start with. It seems we
 have 2 options:
 
 - We make GBP to merge as is, in the Neutron tree, with some minor
 revision (e.g. naming?);
 - We make GBP a stackforge project, that integrates with Neutron in some
 shape or form;
 
 Another option, might be something in between, where GBP is in tree, but
 in some sort of experimental staging area (even though I am not sure how
 well baked this idea is).
 
 Now, as a community we all need make a decision; arguing about the fact
 that the blueprint was approved is pointless.

I agree with you: it is possible to change your mind on a topic and
revisit past decisions. In past OpenStack history we did revert merged
commits and remove existing functionality because we felt it wasn't that
much of a great idea after all. Here we are talking about making the
right decision *before* the final merging and shipping into a release,
which is kind of an improvement. The spec system was supposed to help
limit such cases, but it's not bullet-proof.

In the end, if there is no consensus on that question within the Neutron
project (and I hear both sides have good arguments), our governance
gives the elected Neutron PTL the power to make the final call. If the
disagreement is between projects (like if Nova disagreed with the
Neutron decision), then the issue could be escalated to the TC.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-07 Thread Nachi Ueno
Hi folks


I think this thread is still mixing topics. I feel we could reach 1000 mails :P
so let me name each topic and write my thoughts on it.


[Topic1] Nova parity priority

I do understand the concern, and this is the highest priority.
However, the group based policy effort won't slow this effort down.


Because highly skilled developers such as Mark, Oleg and Carl are
working hard to make this happen.
I'm also trying to POC non-DVR approach such as
migrating nova-network directly to the neutron. (Sorry, this is off
topic, but if you are interested in this,
https://docs.google.com/presentation/d/12w28HhNpLltSpA6pJvKWBiqeEusaI-NM-OZo8HNyH2w/edit#slide=id.p
is my thought )

In any case, the code for GBP and the Nova parity work is really independent,
so they won't conflict.
Note that if you have any task item related to the nova parity work, please ping me.

[Topic2] Neutron community decision making process

I'm a super lazy guy, so I'll be very disappointed if the code I write is rejected..
( http://openstackreactions.enovance.com/2013/09/when-my-patch-get-2/ )

We have been discussing this spec for a couple of releases now…
IMO, we should have a voting process such as the IETF's, or let the PTL
make the final decision, as ttx says.

[Topic3] Group based policy specs

I haven't jumped into this one; I have already shared my thoughts on it.
This is still an extension proposal, so we should let users choose whether
to use it or not.
If it proves widely useful, then we can discuss promoting it to the core API.

I'm also working on another extension spec. It was deferred to the next
release, 
(http://openstackreactions.enovance.com/2014/04/getting-told-that-this-feature-will-not-be-accepted-before-next-release/
)

but if you have any interest on this plz ping me

http://docs-draft.openstack.org/12/93112/16/check/gate-neutron-specs-docs/afb346d/doc/build/html/specs/juno/security_group_action.html
http://docs-draft.openstack.org/12/93112/16/check/gate-neutron-specs-docs/afb346d/doc/build/html/specs/juno/security_group_for_network.html

[Topic4] Where we develop the group based policy stuff and, generally,
service-related stuff.

I do remember the LBaaS new-project meeting at the summit. I remember
someone saying I don't trust neutron to make this feature work
in the Juno timeframe..  I guess he was right..

We should have a new project for service-related stuff.
( 
http://openstackreactions.enovance.com/2013/11/when-a-new-openstack-project-announce-itself/
)


Best
Nachi

2014-08-07 19:23 GMT+09:00 Thierry Carrez thie...@openstack.org:
 Armando M. wrote:
 This thread is moving so fast I can't keep up!

 The fact that troubles me is that I am unable to grasp how we move
 forward, which was the point of this thread to start with. It seems we
 have 2 options:

 - We make GBP to merge as is, in the Neutron tree, with some minor
 revision (e.g. naming?);
 - We make GBP a stackforge project, that integrates with Neutron in some
 shape or form;

 Another option, might be something in between, where GBP is in tree, but
 in some sort of experimental staging area (even though I am not sure how
 well baked this idea is).

 Now, as a community we all need make a decision; arguing about the fact
 that the blueprint was approved is pointless.

 I agree with you: it is possible to change your mind on a topic and
 revisit past decisions. In past OpenStack history we did revert merged
 commits and remove existing functionality because we felt it wasn't that
 much of a great idea after all. Here we are talking about making the
 right decision *before* the final merging and shipping into a release,
 which is kind of an improvement. The spec system was supposed to help
 limit such cases, but it's not bullet-proof.

 In the end, if there is no consensus on that question within the Neutron
 project (and I hear both sides have good arguments), our governance
 gives the elected Neutron PTL the power to make the final call. If the
 disagreement is between projects (like if Nova disagreed with the
 Neutron decision), then the issue could be escalated to the TC.

 Regards,

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Jay Pipes

On 08/07/2014 02:12 AM, Kashyap Chamarthy wrote:

On Thu, Aug 07, 2014 at 07:10:23AM +1000, Michael Still wrote:

On Wed, Aug 6, 2014 at 2:03 AM, Thierry Carrez thie...@openstack.org wrote:


We seem to be unable to address some key issues in the software we
produce, and part of it is due to strategic contributors (and core
reviewers) being overwhelmed just trying to stay afloat of what's
happening. For such projects, is it time for a pause ? Is it time to
define key cycle goals and defer everything else ?


[. . .]


We also talked about tweaking the ratio of tech debt runways vs
'feature runways. So, perhaps every second release is focussed on
burning down tech debt and stability, whilst the others are focussed
on adding features.



I would suggest if we do such a thing, Kilo should be a stability'
release.


Excellent suggestion. I've wondered multiple times that if we could
dedicate a good chunk (or whole) of a specific release for heads down
bug fixing/stabilization. As it has been stated elsewhere on this list:
there's no pressing need for a whole lot of new code submissions, rather
we focusing on fixing issues that affect _existing_ users/operators.


There's a whole world of GBP/NFV/VPN/DVR/TLA folks that would beg to 
differ on that viewpoint. :)


That said, I entirely agree with you and wish efforts to stabilize would 
take precedence over feature work.


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Nova] API design and usability

2014-08-07 Thread Jay Pipes

On 08/06/2014 11:08 PM, Robert Collins wrote:

On 7 August 2014 15:31, Christopher Yeoh cbky...@gmail.com wrote:

On Thu, 7 Aug 2014 11:58:43 +1200
Robert Collins robe...@robertcollins.net wrote:

...

At the moment when cleaning up we don't know if a port was autocreated
by Nova or was passed to us initially through the API.


That seems like a very small patch to fix - record the source, use
that info on cleanup.


It isn't a particularly small patch, but it's already been done by Aaron 
and has been in review for a while now.


https://review.openstack.org/#/c/77043/


And there can be
a long period of time between the initial server creation request and
failure/cleanup - the API layer responds to the client well before the
server has successfully started or failed.


Right.


I think this sort of logic is much better handled above the REST API
layer- which doesn't have to mean duplicated code in multiple clients


It doesn't? So we'll build a stateful client side datastore, and
provide language bindings to it from Python, Ruby, Java, etc?


- it can for example be handled by client libraries such as
python-novaclient or openstackclient and neutron related errors more
directly returned to the client rather than having them proxied
all the way through Nova.


I disagree that the REST API is the place for this to happen. I think a 
behind-the-REST-API notification/eventing piece is a better idea.


Best,
-jay
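The "record the source, use that info on cleanup" idea from earlier in the thread reduces to something like this (hypothetical names, not the actual patch under review):

```python
# Track whether each port attached to an instance was auto-created by the
# compute layer or passed in by the user, so that cleanup after a failed
# boot deletes only the ports we created ourselves.
class PortTracker:
    def __init__(self):
        self.ports = {}  # port_id -> was_autocreated

    def add(self, port_id, autocreated):
        self.ports[port_id] = autocreated

    def cleanup(self, delete_fn):
        for port_id, auto in self.ports.items():
            if auto:
                delete_fn(port_id)

deleted = []
tracker = PortTracker()
tracker.add('user-port', autocreated=False)  # supplied via the boot request
tracker.add('auto-port', autocreated=True)   # created by the compute layer
tracker.cleanup(deleted.append)
print(deleted)  # -> ['auto-port']
```

The open question in the thread is not whether this bookkeeping is possible, but which layer should own it.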


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-07 Thread Matthew Booth
I'm sure this is well known, but I recently encountered this problem for
the second time.

---
foo:
from oslo.config import cfg

import bar

CONF = cfg.CONF
CONF.register_opts([cfg.StrOpt('foo_opt')])

---
bar:
from oslo.config import cfg

CONF = cfg.CONF

def bar_func(arg=CONF.foo_opt):
    pass
---

importing foo results in an error in bar because CONF.foo_opt doesn't
exist yet. This is because bar is imported before register_opts() runs.
CONF.import_opt() fails in the same way, because it just imports foo and
hits the exact same problem when foo imports bar.

A (the?) solution is to call register_opts() in foo before importing any
modules which might also use oslo.config. This also allows import_opt()
to work in bar, which you should use to remove any dependency on import
order:

---
foo:
from oslo.config import cfg

CONF = cfg.CONF
CONF.register_opts([cfg.StrOpt('foo_opt')])

import bar

---
bar:
from oslo.config import cfg

CONF = cfg.CONF
CONF.import_opt('foo_opt', 'foo')

def bar_func(arg=CONF.foo_opt):
    pass
---

Even if it's old news it's worth a refresher because it was a bit of a
headscratcher.
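Note that even with registration ordered correctly, using a CONF value as a default argument has a second gotcha: Python evaluates defaults once, at function definition time, not at call time. A pure-Python illustration (no oslo needed; FakeConf is just a stand-in):

```python
class FakeConf:
    foo_opt = 'old'

CONF = FakeConf()

def bar_func(arg=CONF.foo_opt):  # 'old' is captured here, at import time
    return arg

CONF.foo_opt = 'new'  # later (re)configuration has no effect on the default

def bar_func_safe(arg=None):  # look the value up at call time instead
    return CONF.foo_opt if arg is None else arg

print(bar_func(), bar_func_safe())  # -> old new
```

Looking the option up inside the function body sidesteps both the import-order and the stale-default problem.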

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] dns for overcloud nodes

2014-08-07 Thread Jan Provaznik

Hi,
by default we don't set a nameserver when setting up the neutron subnet used 
by overcloud nodes; the nameserver then points to the machine where the 
undercloud's dnsmasq is running.


I wonder if we should change the *default* devtest setup to allow DNS 
resolution not only for the local network but also for the internet. Proper 
DNS resolution is handy e.g. for the package update scenario.


This would mean:

a) set explicitly nameserver when configuring neutron subnet (as it's 
done for network in overcloud [1])


or

b) set forwarding dns server for dnsmasq [2]

Any thoughts?

Thanks, Jan


[1]: 
https://github.com/openstack/tripleo-incubator/blob/master/scripts/setup-neutron#L53

[2]: https://github.com/openstack/neutron/blob/master/etc/dhcp_agent.ini#L67

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Which program for Rally

2014-08-07 Thread Rohan Kanade

 Date: Wed, 06 Aug 2014 09:44:12 -0400
 From: Sean Dague s...@dague.net
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Which program for Rally
 Message-ID: 53e2312c.8000...@dague.net
 Content-Type: text/plain; charset=utf-8

 Like the fact that right now the rally team is proposing gate jobs which
 have some overlap to the existing largeops jobs. Did they start a
 conversation about it? Nope. They just went off to do their thing
 instead. https://review.openstack.org/#/c/112251/


Hi Sean,

I appreciate your analysis.
Here is a comparison of the tempest large-ops job and the similar job in Rally.

What the large-ops job provides as of now:
running hard-coded, pre-configured benchmarks (in the gates) taken from the
tempest repo,
e.g. boot 100 VMs with one request. The end result is a +1 or -1, which doesn't
really reflect much in terms of performance stats or regressions in
performance.


What Rally job provides:
(example in glance:
https://github.com/openstack/glance/tree/master/rally-scenarios)

1) Projects can specify which benchmarks to run:
https://github.com/openstack/glance/blob/master/rally-scenarios/glance.yaml

2) Projects can specify passing conditions and inputs to benchmarks
(e.g. no benchmark iteration failed, and the average duration of an
iteration is less than X)
https://github.com/stackforge/rally/blob/master/rally-scenarios/rally.yaml#L11-L12


3) Projects can create any number of benchmarks inside their source tree
(so they don't need to merge anything to rally)
https://github.com/openstack/glance/tree/master/rally-scenarios/plugins

4) Users are getting automated reports of all benchmarks:
http://logs.openstack.org/81/112181/2/check/gate-rally-dsvm-rally/78b1146/rally-plot/results.html.gz

5) Users can easily install Rally (with this script
https://github.com/stackforge/rally/blob/master/install_rally.sh)
and run benchmarks locally, using the same benchmark configuration as in
the gate.

6) Rally jobs (benchmarks) give you the ability to check SLAs in your own
gates, which helps immensely in gauging the impact of a proposed change
on the current code in terms of performance and SLA.

Basically, with a Rally job one can benchmark changes in the gate and
compare them with master, using the approach below:

1) Push patch set 1, which changes the Rally benchmark configuration and
probably adds some benchmark.
Get base results.

2) Push patch set 2, which includes point 1 plus the changes that fix the
issue.
Get new results.

3) Compare the results; if the new results are better, push patch set 3,
which removes the changes in the task, and merge it.
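The pass/fail side of this workflow can be sketched in a few lines of Python. This is an illustrative toy, not Rally's actual API: the function names, the SLA threshold, and the duration numbers below are all made up.

```python
# Toy model of the gate workflow above: an SLA gate (point 2's "no failed
# iterations, average duration under X") plus a baseline-vs-patch comparison.
# All names and numbers here are invented for illustration.

def sla_passes(durations, failures, max_avg=10.0):
    """Rally-style SLA: no failed iterations and average duration under a cap."""
    if failures > 0:
        return False
    return sum(durations) / len(durations) < max_avg

def compare_runs(baseline, patched):
    """Relative change in average iteration duration (negative is faster)."""
    base_avg = sum(baseline) / len(baseline)
    new_avg = sum(patched) / len(patched)
    return (new_avg - base_avg) / base_avg

baseline = [9.8, 10.1, 9.9, 10.2]  # patch set 1: base results
patched = [7.9, 8.1, 8.0, 8.2]     # patch set 2: with the candidate fix
assert sla_passes(patched, failures=0)
print("avg duration change: %+.1f%%" % (compare_runs(baseline, patched) * 100))
# prints: avg duration change: -19.5%
```

In a real Rally job the SLA conditions live in the project's YAML task file; the point here is only that both the gate criterion and the baseline comparison are simple aggregates over per-iteration timings.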




 So now we're going to run 2 jobs that do very similar things, with
 different teams adjusting the test loads. Which I think is basically
 madness.

 -Sean

 --
 Sean Dague
 http://dague.net


Rally jobs allow every project to choose which benchmarks to run in its
gates (as plugins in its source tree).

Rally is trying to be as open as possible by helping projects define and
set their own benchmarks in their gates, which they have full control over.

I think this is a very important point, as it greatly simplifies work on
performance issues. Hopefully we can discuss these issues on IRC or
someplace else so that we are all on the same page about what Rally does
and what it doesn't do.

Rohan Kanade
Senior Software Engineer, Red Hat
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QA] Enabling full neutron Job

2014-08-07 Thread Salvatore Orlando
Patch [1] will solve the observed issue. It has already passed Jenkins
tests.
As it is a nova patch, the neutron full job did not run for it.
To check the neutron full job outcome with [1], please check [2].

Salvatore

[1] https://review.openstack.org/#/c/112541/
[2] https://review.openstack.org/#/c/98441/


On 7 August 2014 10:34, Salvatore Orlando sorla...@nicira.com wrote:

 I had to put the patch back on WIP because yesterday a bug causing a 100%
 failure rate slipped in.
 It should be an easy fix, and I'm already working on it.
 Situations like this, exemplified by [1], are a bit frustrating for all the
 people working on improving neutron quality.
 Now, if you allow me a little rant: as Neutron is receiving a lot of
 attention for all the ongoing discussion regarding this group policy stuff,
 would it be possible for us to receive a bit of attention to ensure both
 the full job and the grenade one are switched to voting before the juno-3
 review crunch?

 We've already had the attention of the QA team; it would probably be good if
 we could get the attention of the infra core team to ensure:
 1) the jobs are also deemed by them stable enough to be switched to voting
 2) the relevant patches for openstack-infra/config are reviewed

 Regards,
 Salvatore

 [1]
 http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwie3UnbWVzc2FnZSc6IHUnRmxvYXRpbmcgaXAgcG9vbCBub3QgZm91bmQuJywgdSdjb2RlJzogNDAwfVwiIEFORCBidWlsZF9uYW1lOlwiY2hlY2stdGVtcGVzdC1kc3ZtLW5ldXRyb24tZnVsbFwiIEFORCBidWlsZF9icmFuY2g6XCJtYXN0ZXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiMTcyODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNzQwMDExMDIwNywibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==


 On 23 July 2014 14:59, Matthew Treinish mtrein...@kortar.org wrote:

 On Wed, Jul 23, 2014 at 02:40:02PM +0200, Salvatore Orlando wrote:
  Here I am again bothering you with the state of the full job for
 Neutron.
 
  The patch for fixing an issue in nova's server external events extension
  merged yesterday [1]
  We do not yet have enough data points to make a reliable assessment,
  but out of 37 runs since the patch merged we had only 5 failures, which
  puts the failure rate at about 13%.
 
  This is ugly compared with the current failure rate of the smoketest
 (3%).
  However, I think it is good enough to start making the full job voting,
  at least for neutron patches.
  Once we're able to bring the failure rate down to around 5%, we can
  then enable the job everywhere.

 I think that sounds like a good plan. I'm also curious how the failure
 rates
 compare to the other non-neutron jobs, that might be a useful comparison
 too
 for deciding when to flip the switch everywhere.

 
  As much as I hate asymmetric gating, I think this is a good compromise
  for avoiding that developers working on other projects are badly
  affected by the higher failure rate in the neutron full job.

 So we discussed this during the project meeting a couple of weeks ago [3]
 and
 there was a general agreement that doing it asymmetrically at first would
 be
 better. Everyone should be wary of the potential harms with doing it
 asymmetrically and I think priority will be given to fixing issues that
 block
 the neutron gate should they arise.

  I will therefore resume work on [2] and remove the WIP status as soon
 as I
  can confirm a failure rate below 15% with more data points.
 

 Thanks for keeping on top of this Salvatore. It'll be good to finally be
 at
 least partially gating with a parallel job.

 -Matt Treinish

 
  [1] https://review.openstack.org/#/c/103865/
  [2] https://review.openstack.org/#/c/88289/
 [3]
 http://eavesdrop.openstack.org/meetings/project/2014/project.2014-07-08-21.03.log.html#l-28

 
 
  On 10 July 2014 11:49, Salvatore Orlando sorla...@nicira.com wrote:
 
  
  
  
   On 10 July 2014 11:27, Ihar Hrachyshka ihrac...@redhat.com wrote:
  
   -BEGIN PGP SIGNED MESSAGE-
   Hash: SHA512
  
   On 10/07/14 11:07, Salvatore Orlando wrote:
The patch for bug 1329564 [1] merged about 11 hours ago. From [2]
it seems there has been an improvement in the failure rate, which
seems to have dropped to 25% from over 40%. Still, since the patch
merged there have been 11 failures already in the full job out of
42 jobs executed in total. Of these 11 failures:
- 3 were due to problems in the patches being tested
- 1 had the same root cause as bug 1329564. Indeed the related job
started before the patch merged but finished after, so this
failure doesn't count.
- 1 was for an issue introduced about a week ago which is actually
causing a lot of failures in the full job [3]. The fix should be
easy; however, given the nature of the test we might even skip it
while it's fixed.
- 3 were for bug 1333654 [4]; for this bug discussion is going on
on gerrit regarding the most suitable approach.
- 3 were for lock wait timeout errors. Several people in the community are
   

Re: [openstack-dev] [Ironic] Exceptional approval request for Cisco Driver Blueprint

2014-08-07 Thread Dmitry Tantsur
Hi!

I didn't read the spec thoroughly, but I'm concerned by its huge scope.
It's actually several specs squashed into one (and not too detailed). My
vote is to split it into a chain of specs (at least 3: power driver,
discovery, other configurations) and seek exceptions separately.
Actually, I'm +1 on making an exception for the power driver, but -0 on
the others until I see a separate spec for them.

Dmitry.

On Thu, 2014-08-07 at 09:30 +0530, GopiKrishna Saripuri wrote:
 Hi,
 
 
 I've submitted the Ironic Cisco driver blueprint after the proposal
 freeze date. This driver is critical for Cisco and a few customers to
 test as part of their private cloud expansion. The driver implementation
 is ready, along with unit tests. I will submit the code for review once
 the blueprint is accepted.
 
 
 The blueprint review link: https://review.openstack.org/#/c/110217/
 
 
 Please let me know if it's possible to include this in the Juno release.
 
 
 
 Regards
 GopiKrishna S


Re: [openstack-dev] [Neutron][Nova] API design and usability

2014-08-07 Thread Mathieu Gagné

On 2014-08-06 7:58 PM, Robert Collins wrote:


I'm astounded by this proposal - it doesn't remove the garbage
collection complexity at all - it transfers it from our code - Nova -
onto end users. So rather than one tested and consolidated
implementation, we'll have one implementation in saltstack, one
implementation in heat, one implementation in Juju, one implementation
in foreman etc.

In what possible way is that an improvement ?



I agree with Robert. It is not an improvement.

For various reasons, in some parts of our systems, we have to manually
create ports beforehand, and it has always been a mess.


Instance creation often fails for all sorts of reasons, and it's really
annoying to have to garbage collect orphan ports once in a while. The
typical user does not use the API and does not care about the
underlying details.


In other parts of our systems, we do rely on port auto-creation. It
might have its flaws, but when we use it, it works like a charm and we
like it. We really appreciate the orchestration and automation done by Nova.


IMO, moving the burden of such orchestration (and garbage collection) to
the end users would be a mistake. It's not good UX at all.


I could say that removing auto-creation is like having to create your
volume (from an image) before booting from it. Before BDMv2, that's what
we had to do, and it wasn't cool at all. We had to implement logic that
waited for the volume to be 'available' before booting from it, otherwise
Nova would complain about the volume not being available. Now that we
have BDMv2, it's a much better UX.
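For concreteness, here is a rough sketch of the sort of wait-for-'available' loop users had to hand-roll before BDMv2. The `cinder` client object and its `volumes.get` call are stand-ins loosely modelled on a cinderclient; treat every name and the timeout values here as assumptions, not a real client API.

```python
# Hedged sketch of the pre-BDMv2 dance described above: after creating a
# volume from an image, poll it until it reaches 'available' before asking
# Nova to boot from it. The client interface is assumed for illustration.

import time

def wait_for_available(cinder, volume_id, timeout=300, interval=5):
    """Poll a volume until its status is 'available', or raise on error/timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = cinder.volumes.get(volume_id).status
        if status == "available":
            return
        if status == "error":
            raise RuntimeError("volume %s went into error state" % volume_id)
        time.sleep(interval)
    raise RuntimeError("timed out waiting for volume %s" % volume_id)
```

Every consumer of the API ended up writing some variant of this loop, which is exactly the duplicated-garbage-collection problem Robert describes.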


I want to be able to run this command and not worry about pre-steps:

  nova boot --num-instances=50 [...] app.example.org

--
Mathieu



Re: [openstack-dev] Which program for Rally

2014-08-07 Thread Angus Salkeld
On Wed, 2014-08-06 at 15:48 -0600, John Griffith wrote:
 I have to agree with Duncan here.  I also don't know if I fully
 understand the limit in options.  Stress test seems like it
 could/should be different (again overlap isn't a horrible thing) and I
 don't see it as siphoning off resources so not sure of the issue.
  We've become quite wrapped up in projects, programs and the like
 lately and it seems to hinder forward progress more than anything
 else.
 
 I'm also not convinced that Tempest is where all things belong, in
 fact I've been thinking more and more that a good bit of what Tempest
 does today should fall more on the responsibility of the projects
 themselves.  For example functional testing of features etc, ideally
 I'd love to have more of that fall on the projects and their
 respective teams.  That might even be something as simple to start as
 saying if you contribute a new feature, you have to also provide a
 link to a contribution to the Tempest test-suite that checks it.
  Sort of like we do for unit tests, cross-project tracking is
 difficult of course, but it's a start.  The other idea is maybe
 functional test harnesses live in their respective projects.
 

Couldn't we reduce the scope of tempest (and rally): make tempest the
API verification tool and rally the scenario/performance tester? Make each
tool do less, but better. My point being to split the projects by
functionality so there is less need to share code and stomp on each
other's toes.

 
 
 Honestly I think who better to write tests for a project than the
 folks building and contributing to the project.  At some point IMO the
 QA team isn't going to scale.  I wonder if maybe we should be thinking
 about proposals for delineating responsibility and goals in terms of
 functional testing?
 

This is planned, I believe.

-Angus

 
 
 
 
 
 On Wed, Aug 6, 2014 at 12:25 PM, Duncan Thomas
 duncan.tho...@gmail.com wrote:
 I'm not following here - you complain about rally being
 monolithic,
 then suggest that parts of it should be baked into tempest - a
 tool
 that is already huge and difficult to get into. I'd rather see
 tools
 that do one thing well and some overlap than one tool to rule
 them
 all.

+1

 
 On 6 August 2014 14:44, Sean Dague s...@dague.net wrote:
  On 08/06/2014 09:11 AM, Russell Bryant wrote:
  On 08/06/2014 06:30 AM, Thierry Carrez wrote:
  Hi everyone,
 
  At the TC meeting yesterday we discussed Rally program
 request and
  incubation request. We quickly dismissed the incubation
 request, as
  Rally appears to be able to live happily on top of
 OpenStack and would
  benefit from having a release cycle decoupled from the
 OpenStack
  integrated release.
 
  That leaves the question of the program. OpenStack
 programs are created
  by the Technical Committee, to bless existing efforts and
 teams that are
  considered *essential* to the production of the
 OpenStack integrated
  release and the completion of the OpenStack project
 mission. There are 3
  ways to look at Rally and official programs at this point:
 
  1. Rally as an essential QA tool
  Performance testing (and especially performance regression
 testing) is
  an essential QA function, and a feature that Rally
 provides. If the QA
  team is happy to use Rally to fill that function, then
 Rally can
  obviously be adopted by the (already-existing) QA program.
 That said,
  that would put Rally under the authority of the QA PTL,
 and that raises
  a few questions due to the current architecture of Rally,
 which is more
  product-oriented. There needs to be further discussion
 between the QA
  core team and the Rally team to see how that could work
 and if that
  option would be acceptable for both sides.
 
  2. Rally as an essential operator tool
  Regular benchmarking of OpenStack deployments is a best
 practice for
  cloud operators, and a feature that Rally provides. With a
 bit of a
  stretch, we could consider that benchmarking is essential
 to the
  completion of the OpenStack project mission. That program
 could one day
  evolve to include more such operations best practices
 tools. In
  addition to the slight stretch already mentioned, one
 concern here is
  that we still want to have performance testing in QA
 (which is clearly
  essential to the production of OpenStack). Letting Rally
 primarily be
  an operational tool might make that outcome more
 difficult.
 
 

[openstack-dev] [NFV] Meeting summary 2014-08-06

2014-08-07 Thread Steve Gordon
Meeting Summary (HTML): 
http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-08-06-14.00.html
Meeting Log (HTML): 
http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-08-06-14.00.log.html
Meeting Summary (TXT): 
http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-08-06-14.00.txt
Meeting Log (TXT): 
http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-08-06-14.00.log.txt

Action items:
* ACTION: sgordon to email list about alternating schedule (sgordon, 14:11:31)
* ACTION: sgordon to remove extensible resource tracker from nfv list (for 
now...) (sgordon, 14:33:27)
* ACTION: sgordon to update wiki to reflect on track for juno vs outstanding 
(sgordon, 14:48:16)

NB: Code reviews for Juno are now updating @ http://nfv.russellbryant.net/ again



Re: [openstack-dev] Which program for Rally

2014-08-07 Thread Sean Dague
On 08/06/2014 05:48 PM, John Griffith wrote:
 I have to agree with Duncan here.  I also don't know if I fully
 understand the limit in options.  Stress test seems like it could/should
 be different (again overlap isn't a horrible thing) and I don't see it
 as siphoning off resources so not sure of the issue.  We've become quite
 wrapped up in projects, programs and the like lately and it seems to
 hinder forward progress more than anything else.

Today we have 2 debug domains that developers have to deal with when
tests fail:

 * project level domain (unit tests)
 * cross project (Tempest)

Even 2 debug domains are considered too much for most people, as we get
people who understand one or the other, and who just throw up their hands
when presented with a failure outside their familiar debug domain.

So if Rally was just taken in as a whole, as it exists now, it would
create a 3rd debug domain. It would include running a bunch of tests
that we already run in the cross-project and project-level domains, yet
again, written in a different way. And when it fails, this will be
another debug domain.

I think a 3rd debug domain isn't going to help any of the OpenStack
developers or Operators.

Moving the test payload into Tempest hopefully means getting a more
consistent model for all these tests so when things fail, there is some
common pattern people are familiar with to get to the bottom of things.
As opaque as Tempest runs feel to people, there has been substantial
effort in providing first failure dumps to get as much information about
what's wrong as possible. I agree things could be better, but you will
be starting that work all over from scratch with Rally again.

It also means we could potentially take advantage of the 20,000 Tempest
runs we do every week. We're actually generating a ton of data now that
is not being used for analysis. We're at a point in Tempest development
where, to make data-based decisions on which tests need extra
attention and which should probably be dropped, we need this anyway.
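As a hedged illustration of the kind of analysis that data could feed (not anything Tempest actually ships), one could flag tests whose run-to-run timing varies too much for a single number to be meaningful. The data layout and the 0.25 threshold below are invented for the example.

```python
# Sketch: given per-test durations collected across many gate runs, flag
# tests with high run-to-run variability using the coefficient of variation.
# Test names, timings, and the threshold are all made up for illustration.

from statistics import mean, stdev

runs = {
    "test_boot_server":  [12.1, 11.8, 12.4, 30.2, 12.0],  # one outlier run
    "test_list_flavors": [0.4, 0.5, 0.4, 0.5, 0.4],
}

for test, durations in sorted(runs.items()):
    cv = stdev(durations) / mean(durations)  # relative spread across runs
    flag = "UNSTABLE" if cv > 0.25 else "stable"
    print("%-18s mean=%6.2fs cv=%.2f %s" % (test, mean(durations), cv, flag))
```

On this toy data, `test_boot_server` comes out UNSTABLE because of the single slow run, which is precisely the per-run variability problem mentioned above: a pretty average-duration chart would hide it.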

 I'm also not convinced that Tempest is where all things belong, in fact
 I've been thinking more and more that a good bit of what Tempest does
 today should fall more on the responsibility of the projects themselves.
  For example functional testing of features etc, ideally I'd love to
 have more of that fall on the projects and their respective teams.  That
 might even be something as simple to start as saying if you contribute
 a new feature, you have to also provide a link to a contribution to the
 Tempest test-suite that checks it.  Sort of like we do for unit tests,
 cross-project tracking is difficult of course, but it's a start.  The
 other idea is maybe functional test harnesses live in their respective
 projects.
 
 Honestly I think who better to write tests for a project than the folks
 building and contributing to the project.  At some point IMO the QA team
 isn't going to scale.  I wonder if maybe we should be thinking about
 proposals for delineating responsibility and goals in terms of
 functional testing?

I 100% agree in getting some of Tempest existing content out and into
functional tests. Honestly I imagine a Tempest that's 1/2 the # of tests
a year away. Mostly it's going to be about ensuring that projects have
the coverage before we delete the safety nets.

And I 100% agree on getting some better idea on functional boundaries.
But I think that's something we need some practical experience on first.
Setting a policy without figuring out what in practice works is
something I expect wouldn't work so well. My expectation is this is
something we're going to take a few stabs at post J3, and bring into
summit for discussion.

...

So the question is do we think there should be 2 or 3 debug domains for
developers and operators on tests? My feeling is 2 puts us in a much
better place as a community.

The question is should Tempest provide data analysis on its test runs,
or should that be done in a completely separate program. Doing so in
another program means that all the deficiencies of the existing data get
completely ignored (like variability per run, interactions between
tests, between tests and periodic jobs, difficulty in time accounting of
async ops) to produce some pretty pictures that miss the point, because
they aren't measuring a thing that's real.

And the final question is should Tempest have an easier-to-understand
starting point than a tox command, like an actual CLI for running
things. I think it's probably clear that it should. It would probably
also make Tempest less big and scary for people.

Because I do think 'do one job and do it well' is completely consistent
with 'run tests across OpenStack projects and present that data in a
consumable way'.

The question basically is whether it's believed that collecting timing
analysis of test results is a separate concern from collecting
correctness results. The Rally team would argue that they
are. I'd argue that they 

Re: [openstack-dev] [Ironic] Proposal for slight change in our spec process

2014-08-07 Thread Dmitry Tantsur
Hi!

On Tue, 2014-08-05 at 12:33 -0700, Devananda van der Veen wrote:
 Hi all!
 
 
 The following idea came out of last week's midcycle for how to improve
 our spec process and tracking on launchpad. I think most of us liked
 it, but of course, not everyone was there, so I'll attempt to write
 out what I recall.
 
 
 This would apply to new specs proposed for Kilo (since the new spec
 proposal deadline has already passed for Juno).
 
 
 
 
 First, create a blueprint in launchpad and populate it with your
 spec's heading. Then, propose a spec with just the heading (containing
 a link to the BP), Problem Description, and first paragraph outlining
 your Proposed change. 
 
 
 This will be given an initial, high-level review to determine whether
 it is in scope and in alignment with project direction, which will be
 reflected on the review comments, and, if affirmed, by setting the
 blueprint's Direction field to Approved.

How will we formally track it in Gerrit? By having several +1's by spec
cores? Or will it be done by you (I guess only you can update
Direction in LP)?

 
 
 At this point, if affirmed, you should proceed with filling out the
 entire spec, and the remainder of the process will continue as it was
 during Juno. Once the spec is approved, update launchpad to set the
 specification URL to the spec's location on
 https://specs.openstack.org/openstack/ironic-specs/ and a member of
 the team (probably me) will update the release target, priority, and
 status.
 
 
 
 
 I believe this provides two benefits. First, it should give quicker
 initial feedback to proposer if their change is going to be in/out of
 scope, which can save considerable time if the proposal is out of
 scope. Second, it allows us to track well-aligned specs on Launchpad
 before they are completely approved. We observed that several specs
 were approved at nearly the same time as the code was approved. Due to
 the way we were using LP this cycle, it meant that LP did not reflect
 the project's direction in advance of landing code, which is not what
 we intended. This may have been confusing, and I think this will help
 next cycle. FWIW, several other projects have observed a similar
 problem with spec-launchpad interaction, and are adopting similar
 practices for Kilo.
 
 
 
 
 Comments/discussion welcome!

I'm +1 to the idea, just some concerns about the implementation:
1. We don't have any pre-approved state in Gerrit - need agreement on
when to continue (see above)
2. We'll need to speed up spec reviews, because we're adding one more
blocker on the way to the code being merged :) Maybe it's no longer a
problem actually, we're doing it faster now.

 
 
 
 -Deva
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which program for Rally

2014-08-07 Thread Sean Dague
On 08/07/2014 07:58 AM, Angus Salkeld wrote:
 On Wed, 2014-08-06 at 15:48 -0600, John Griffith wrote:
 I have to agree with Duncan here.  I also don't know if I fully
 understand the limit in options.  Stress test seems like it
 could/should be different (again overlap isn't a horrible thing) and I
 don't see it as siphoning off resources so not sure of the issue.
  We've become quite wrapped up in projects, programs and the like
 lately and it seems to hinder forward progress more than anything
 else.

 I'm also not convinced that Tempest is where all things belong, in
 fact I've been thinking more and more that a good bit of what Tempest
 does today should fall more on the responsibility of the projects
 themselves.  For example functional testing of features etc, ideally
 I'd love to have more of that fall on the projects and their
 respective teams.  That might even be something as simple to start as
 saying if you contribute a new feature, you have to also provide a
 link to a contribution to the Tempest test-suite that checks it.
  Sort of like we do for unit tests, cross-project tracking is
 difficult of course, but it's a start.  The other idea is maybe
 functional test harnesses live in their respective projects.

 
 Couldn't we reduce the scope of tempest (and rally) : make tempest the
 API verification tool and rally the scenario/performance tester? Make each
 tool do less, but better. My point being to split the projects by
 functionality so there is less need to share code and stomp on each
 other's toes.

Who is going to propose the split? Who is going to manage the
coordination of the split? What happens when there is disagreement about
the location of something like booting and listing a server -
https://github.com/stackforge/rally/blob/master/rally/benchmark/scenarios/nova/servers.py#L44-L64

Because today we've got fundamental disagreements between the teams on
scope, long standing (as seen in these threads), so this won't
organically solve itself.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [nova] Deprecating CONF.block_device_allocate_retries_interval

2014-08-07 Thread John Garbutt
On 6 August 2014 18:54, Jay Pipes jaypi...@gmail.com wrote:
 So, Liyi Meng has an interesting patch up for Nova:

 https://review.openstack.org/#/c/104876

 1) We should just deprecate both the options, with a note in the option help
 text that these options are not used when volume size is not 0, and that the
 interval is calculated based on volume size

This feels bad.

 2) We should deprecate the CONF.block_device_allocate_retries_interval
 option only, and keep the CONF.block_device_allocate_retries configuration
 option as-is, changing the help text to read something like Max number of
 retries. We calculate the interval of the retry based on the size of the
 volume.

What about a slight modification to (2)...

3) CONF.block_device_allocate_retries_interval=-1 means calculate
using volume size, and we make it the default, so people can still
override it if they want to. But we also deprecate the option with a
view to removing it during Kilo? Keep
CONF.block_device_allocate_retries as the max retries.
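A minimal sketch of what option (3) could look like. The scaling constants, clamping range, and function name below are pure guesses for illustration, not Nova's implementation; only the "-1 means derive from volume size" semantics come from the proposal above.

```python
# Hedged sketch of option (3): a sentinel of -1 on the interval option
# selects a size-based polling interval; any other value is used as-is.
# All numeric tuning here is invented for the example.

CONF_INTERVAL = -1     # stand-in for CONF.block_device_allocate_retries_interval
CONF_MAX_RETRIES = 60  # stand-in for CONF.block_device_allocate_retries

def allocate_retry_interval(volume_size_gb, configured=CONF_INTERVAL):
    """Seconds to wait between volume-status polls during allocation.

    -1 means derive the interval from volume size, so larger volumes are
    polled less aggressively; any other value is an explicit override.
    """
    if configured != -1:
        return configured
    # Assume ~3s of creation time per GB, spread over ~10 polls, clamped
    # to a sane range. Pure guesswork standing in for real tuning.
    return min(max(volume_size_gb * 3 // 10, 1), 30)

assert allocate_retry_interval(1) == 1                 # tiny volume: poll every second
assert allocate_retry_interval(100) == 30              # big volume: capped at 30s
assert allocate_retry_interval(50, configured=5) == 5  # operator override wins
```

As John notes below, any fixed scaling constant is backend-dependent (CoW backends create volumes almost instantly regardless of size), which is why the override path matters.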

 I bring this up on the mailing list because I think Liyi's patch offers an
 interesting future direction to the way that we think about our retry
 approach in Nova. Instead of having hard-coded or configurable interval
 times, I think Liyi's approach of calculating the interval length based on
 some input values is a good direction to take.

Seems like the right direction.

But I do worry that it's quite dependent on the storage backend.
Sometimes the volume create is almost free regardless of the volume
size (with certain types of CoW). So maybe we end up needing some kind
of scaling factor on the weights. I kinda hope I am overthinking
that, and in reality it all works fine. I suspect that is the case.

Thanks,
John



Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Russell Bryant
On 08/07/2014 04:49 AM, Thierry Carrez wrote:
 Stefano Maffulli wrote:
 On Wed 06 Aug 2014 02:10:23 PM PDT, Michael Still wrote:
  - we rate limit the total number of blueprints under code review at
 any one time to a fixed number of slots. I secretly prefer the term
 runway, so I am going to use that for the rest of this email. A
 suggested initial number of runways was proposed at ten.

 oh, I like the 'slots/runway' model. Sounds to me like kanban (in
 the Toyota sense, not the hipster developer sense).

 A light in my head just went on.

 Let me translate what you're thinking about in other terms: the
 slot/runway model would switch what is now a push model into a pull
 model. Currently we have patches coming in, pushed up for review. We
 then have reviewers and core reviewers on gerrit shuffling through
 these changesets, doing work and approving/commenting. The reviewers have
 little to no way to notice when they're overloaded, and managers have no
 way either. There is no way to identify when the process is suffering,
 slowing down or not satisfying demand, other than when the backlog blows
 up. As recent discussions demonstrate, this model is failing under our
 growth.

 By switching to a model where we have a set of slots/runways (buckets,
 in Toyota's terminology), reviewers would have a clear way to *pull* new
 reviews into their workstations to be processed. It's as simple as a
 supermarket aisle: when there is no more pasta on the shelf, a clerk
 goes to the back and gets more pasta to restock the shelf. There
 is no sophisticated algorithm to predict demand: it's the demand for
 pasta that drives new pull requests (of pasta, or of changes to review).

 This pull mechanism would help make it very visible where the
 bottlenecks are. At Toyota, for example, the number of kanbans is the
 visible way to understand the capacity of the plant. The number of
 slots/runways would probably give us a similar overview of the capacity
 of each project and give us tools to solve bottlenecks before they
 become emergencies.
 As an ex factory IT manager, I feel compelled to comment on that :)
 You're not really introducing a successful Kanban here, you're just
 clarifying that there should be a set number of workstations.
 
 Our current system is like a gigantic open space with hundreds of
 half-finished pieces, and a dozen workers keep on going from one to
 another with no strong pattern. The proposed system is to limit the
 number of half-finished pieces fighting for the workers attention at any
 given time, by setting a clear number of workstations.
 
 A true Kanban would be an interface between developers and reviewers,
 where reviewers define what type of change they have to review to
 complete production objectives, *and* developers would strive to produce
 enough to keep the kanban above the red line, but not too much (which
 would be piling up waste).
 
 Without that discipline, Kanbans are useless. Unless the developers
 adapt what they work on based on release objectives, you don't really
 reduce waste/inventory at all, it just piles up waiting for available
 runway slots. As I said in my original email, the main issue here is
 the imbalance between too many people proposing changes and not enough
 people caring about the project itself enough to be trusted with core
 reviewer rights.
 
 This proposal is not solving that, so it is not the miracle cure that
 will end all developers frustration, nor is it turning our push-based
 model into a sane pull-based one. The only way to be truly pull-based is
 to define a set of production objectives and have those objectives
 trickle up to the developers so that they don't work on something else.
 The solution is about setting release cycle goals and strongly
 communicating that everything out of those goals is clearly priority 2.
 
 Now I'm not saying this is a bad idea. Having too many reviews to
 consider at the same time dilutes review attention to the point where we
 don't finalize anything. Having runway slots makes sure there is a focus
 on a limited set of features at a time, which increases the chances that
 those get finalized.
 

I found this response to be very insightful, thank you.

I feel like this idea is essentially trying to figure out how to apply
an agile process to nova.  Lots and lots of people have tried to figure
out how to make it work for open source, and there are several reasons
that it just doesn't.  This came up in a thread last year here:

http://lists.openstack.org/pipermail/openstack-dev/2013-April/007872.html/

With that said, I really do appreciate the hunger to find new and better
ways to manage our work.  It's certainly needed and I hope to
continuously improve.

It seems one of the biggest benefits of this sort of proposal is rate
limiting how often we say yes so that we have more confidence that we
can follow up on things we say yes to.  That is indeed an improvement.
 We made a pass at trying some 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Sean Dague
On 08/07/2014 08:54 AM, Russell Bryant wrote:
 On 08/07/2014 04:49 AM, Thierry Carrez wrote:
 Stefano Maffulli wrote:
 On Wed 06 Aug 2014 02:10:23 PM PDT, Michael Still wrote:
  - we rate limit the total number of blueprints under code review at
 any one time to a fixed number of slots. I secretly prefer the term
 runway, so I am going to use that for the rest of this email. A
 suggested initial number of runways was proposed at ten.

 oh, I like the 'slots/runway model'. Sounds to me like kanban (in 
 the Toyota sense, not the hipster developer sense).

 A light in my head just went on.

 Let me translate what you're thinking about in other terms: the 
 slot/runway model would switch what is now a push model into a pull 
 model. Currently we have patches coming in, pushed up for review. We 
 have then on gerrit reviewers and core reviewers shuffling through 
 these changesets, doing work and approve/comment. The reviewers have 
 little to no way to notice when they're overloaded and managers have no 
 way either. There is no way to identify when the process is suffering, 
 slowing down or not satisfying demand, other than when the backlog blows 
 up. As recent discussions demonstrate, this model is failing under our 
 growth.

 By switching to a model where we have a set of slots/runways (buckets, 
 in Toyota's terminology) reviewers would have a clear way to *pull* new 
 reviews into their workstations to be processed. It's as simple as a 
 supermarket aisle: when there is no more pasta on the shelf, a clerk 
 goes to the back room and gets more pasta to restock the shelf. There 
 is no sophisticated algorithm to predict demand: it's the demand of 
 pasta that drives new pull requests (of pasta or changes to review).

 This pull mechanism would help make it very visible where the 
 bottlenecks are. At Toyota, for example, the number of kanbans is the 
 visible way to understand the capacity of the plant. The number of 
 slots/runways would probably give us a similar overview of the capacity 
 of each project and give us tools to solve bottlenecks before they 
 become emergencies.

 As an ex factory IT manager, I feel compelled to comment on that :)
 You're not really introducing a successful Kanban here, you're just
 clarifying that there should be a set number of workstations.

 Our current system is like a gigantic open space with hundreds of
 half-finished pieces, and a dozen workers keep on going from one to
 another with no strong pattern. The proposed system is to limit the
 number of half-finished pieces fighting for the workers' attention at any
 given time, by setting a clear number of workstations.

 A true Kanban would be an interface between developers and reviewers,
 where reviewers define what type of change they have to review to
 complete production objectives, *and* developers would strive to produce
 enough to keep the kanban above the red line, but not too much (which
 would be piling up waste).

 Without that discipline, Kanbans are useless. Unless the developers
 adapt what they work on based on release objectives, you don't really
 reduce waste/inventory at all, it just piles up waiting for available
 runway slots. As I said in my original email, the main issue here is
 the imbalance between too many people proposing changes and not enough
 people caring about the project itself enough to be trusted with core
 reviewer rights.

 This proposal is not solving that, so it is not the miracle cure that
 will end all developers' frustration, nor is it turning our push-based
 model into a sane pull-based one. The only way to be truly pull-based is
 to define a set of production objectives and have those objectives
 trickle up to the developers so that they don't work on something else.
 The solution is about setting release cycle goals and strongly
 communicating that everything out of those goals is clearly priority 2.

 Now I'm not saying this is a bad idea. Having too many reviews to
 consider at the same time dilutes review attention to the point where we
 don't finalize anything. Having runway slots makes sure there is a focus
 on a limited set of features at a time, which increases the chances that
 those get finalized.
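
Mechanically, the runway proposal being debated here is just a work-in-progress limit on review intake. A toy sketch of slot-limited intake — the class and names are hypothetical, not any real Nova tooling:

```python
from collections import deque

class Runways:
    """Toy WIP limiter: at most `slots` blueprints under active review."""
    def __init__(self, slots=10):
        self.slots = slots
        self.active = set()
        self.backlog = deque()

    def propose(self, bp):
        # New work enters a runway only when a slot is free (pull);
        # otherwise it waits in the backlog, making push pressure visible.
        if len(self.active) < self.slots:
            self.active.add(bp)
        else:
            self.backlog.append(bp)

    def land(self, bp):
        # Finishing a review frees a slot and pulls the next item in.
        self.active.discard(bp)
        if self.backlog and len(self.active) < self.slots:
            self.active.add(self.backlog.popleft())

r = Runways(slots=2)
for bp in ["cells-v2", "nfv", "cisco-driver"]:
    r.propose(bp)
print(sorted(r.active), list(r.backlog))  # two active, one queued
r.land("nfv")
print(sorted(r.active))                   # freed slot pulled the queued item
```

The point of the sketch is the one Thierry makes: the limit alone only bounds the active set; the backlog still piles up unless producers adapt to it.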

 
 I found this response to be very insightful, thank you.
 
 I feel like this idea is essentially trying to figure out how to apply
 an agile process to nova.  Lots and lots of people have tried to figure
 out how to make it work for open source, and there are several reasons
 that it just doesn't.  This came up in a thread last year here:
 
 http://lists.openstack.org/pipermail/openstack-dev/2013-April/007872.html/
 
 With that said, I really do appreciate the hunger to find new and better
 ways to manage our work.  It's certainly needed and I hope to
 continuously improve.
 
 It seems one of the biggest benefits of this sort of proposal is rate
 limiting how often we say yes so that we have more confidence that we
 can follow up on things we say yes to.  That is indeed an improvement.

Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Russell Bryant
On 08/07/2014 09:07 AM, Sean Dague wrote:
 I think the difference is slot selection would just be Nova drivers. I
 think there is an assumption in the old system that everyone in Nova
 core wants to prioritize the blueprints. I think there are a bunch of
 folks in Nova core that are happy having signaling from Nova drivers on
 high priority things to review. (I know I'm in that camp.)
 
 Lacking that we all have picking algorithms to hack away at the 500 open
 reviews. Which basically means it's a giant random queue.
 
 Having a few blueprints that *everyone* is looking at also has the
 advantage that the context for the bits in question will tend to be
 loaded into multiple people's heads at the same time, so is something
 that's discussable.
 
 Will it fix the issue, not sure, but it's an idea.

OK, got it.  So, success critically depends on nova-core being willing
to take review direction and priority setting from nova-drivers.  That
sort of assumption is part of why I think agile processes typically
don't work in open source.  We don't have the ability to direct people
with consistent and reliable results.

I'm afraid if people doing the review are not directly involved in at
least ACKing the selection and committing to review something, putting
stuff in slots seems futile.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Sean Dague
On 08/06/2014 11:51 AM, Eoghan Glynn wrote:
 
 
 Hi everyone,

 With the incredible growth of OpenStack, our development community is
 facing complex challenges. How we handle those might determine the
 ultimate success or failure of OpenStack.

 With this cycle we hit new limits in our processes, tools and cultural
 setup. This resulted in new limiting factors on our overall velocity,
 which is frustrating for developers. This resulted in the burnout of key
 firefighting resources. This resulted in tension between people who try
 to get specific work done and people who try to keep a handle on the big
 picture.

 It all boils down to an imbalance between strategic and tactical
 contributions. At the beginning of this project, we had a strong inner
 group of people dedicated to fixing all loose ends. Then a lot of
 companies got interested in OpenStack and there was a surge in tactical,
 short-term contributions. We put on a call for more resources to be
 dedicated to strategic contributions like critical bugfixing,
 vulnerability management, QA, infrastructure... and that call was
 answered by a lot of companies that are now key members of the OpenStack
 Foundation, and all was fine again. But OpenStack contributors kept on
 growing, and we grew the narrowly-focused population way faster than the
 cross-project population.

 At the same time, we kept on adding new projects to incubation and to
 the integrated release, which is great... but the new developers you get
 on board with this are much more likely to be tactical than strategic
 contributors. This also contributed to the imbalance. The penalty for
 that imbalance is twofold: we don't have enough resources available to
 solve old, known OpenStack-wide issues; but we also don't have enough
 resources to identify and fix new issues.

 We have several efforts under way, like calling for new strategic
 contributors, driving towards in-project functional testing, making
 solving rare issues a more attractive endeavor, or hiring resources
 directly at the Foundation level to help address those. But there is a
 topic we haven't raised yet: should we concentrate on fixing what is
 currently in the integrated release rather than adding new projects ?

 We seem to be unable to address some key issues in the software we
 produce, and part of it is due to strategic contributors (and core
 reviewers) being overwhelmed just trying to stay on top of what's 
 happening. For such projects, is it time for a pause ? Is it time to
 define key cycle goals and defer everything else ?

 On the integrated release side, more projects means stretching our
 limited strategic resources more. Is it time for the Technical Committee
 to more aggressively define what is in and what is out ? If we go
 through such a redefinition, shall we push currently-integrated projects
 that fail to match that definition out of the integrated release inner
 circle ?

 The TC discussion on what the integrated release should or should not
 include has always been informally going on. Some people would like to
 strictly limit to end-user-facing projects. Some others suggest that
 OpenStack should just be about integrating/exposing/scaling smart
 functionality that lives in specialized external projects, rather than
 trying to outsmart those by writing our own implementation. Some others
 are advocates of carefully moving up the stack, resisting further 
 work on IaaS+ services until we complete the pure IaaS 
 space in a satisfactory manner. Some others would like to build a
 roadmap based on AWS services. Some others would just add anything that
 fits the incubation/integration requirements.

 On one side this is a long-term discussion, but on the other we also
 need to make quick decisions. With 4 incubated projects, and 2 new ones
 currently being proposed, there are a lot of people knocking at the door.

 Thanks for reading this braindump this far. I hope this will trigger the
 open discussions we need to have, as an open source project, to reach
 the next level.
 
 
 Thanks Thierry, for this timely post.
 
 You've touched on multiple trains-of-thought that could indeed
 justify separate threads of their own.
 
 I agree with your read on the diverging growth rates in the
 strategic versus the tactical elements of the community.
 
 I would also be supportive of the notion of taking a cycle out to
 fully concentrate on solving existing quality/scaling/performance
 issues, if that's what you meant by pausing to define key cycle
 goals while deferring everything else.
 
 Though FWIW I think scaling back the set of currently integrated
 projects is not the appropriate solution to the problem of over-
 stretched strategic resources on the QA/infra side of the house.
 
 Rather, I think the proposed move to in-project functional
 testing, in place of throwing the kitchen sink into Tempest,
 is far more likely to pay dividends in terms of making the job
 facing the QA Trojans more tractable 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Anne Gentle
On Thu, Aug 7, 2014 at 8:20 AM, Russell Bryant rbry...@redhat.com wrote:

 On 08/07/2014 09:07 AM, Sean Dague wrote:
  I think the difference is slot selection would just be Nova drivers. I
  think there is an assumption in the old system that everyone in Nova
  core wants to prioritize the blueprints. I think there are a bunch of
  folks in Nova core that are happy having signaling from Nova drivers on
  high priority things to review. (I know I'm in that camp.)
 
  Lacking that we all have picking algorithms to hack away at the 500 open
  reviews. Which basically means it's a giant random queue.
 
  Having a few blueprints that *everyone* is looking at also has the
  advantage that the context for the bits in question will tend to be
  loaded into multiple people's heads at the same time, so is something
  that's discussable.
 
  Will it fix the issue, not sure, but it's an idea.

 OK, got it.  So, success critically depends on nova-core being willing
 to take review direction and priority setting from nova-drivers.  That
 sort of assumption is part of why I think agile processes typically
 don't work in open source.  We don't have the ability to direct people
 with consistent and reliable results.

 I'm afraid if people doing the review are not directly involved in at
 least ACKing the selection and committing to review something, putting
 stuff in slots seems futile.


My original thinking was I'd set aside a meeting time to review specs
especially for doc issues and API designs. What I found quickly was that
the 400+ queue in one project alone was not only daunting but felt like I
wasn't going to make a dent as a single person, try as I may.

I did my best but would appreciate any change in process to help with
prioritization. I'm pretty sure it will help someone like me, looking at
cross-project queues of specs, to know what to review first, second, third,
and what to circle back on.


 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread John Griffith
On Thu, Aug 7, 2014 at 7:33 AM, Anne Gentle a...@openstack.org wrote:




 On Thu, Aug 7, 2014 at 8:20 AM, Russell Bryant rbry...@redhat.com wrote:

 On 08/07/2014 09:07 AM, Sean Dague wrote:
  I think the difference is slot selection would just be Nova drivers. I
  think there is an assumption in the old system that everyone in Nova
  core wants to prioritize the blueprints. I think there are a bunch of
  folks in Nova core that are happy having signaling from Nova drivers on
  high priority things to review. (I know I'm in that camp.)
 
  Lacking that we all have picking algorithms to hack away at the 500 open
  reviews. Which basically means it's a giant random queue.
 
  Having a few blueprints that *everyone* is looking at also has the
  advantage that the context for the bits in question will tend to be
  loaded into multiple people's heads at the same time, so is something
  that's discussable.
 
  Will it fix the issue, not sure, but it's an idea.

 OK, got it.  So, success critically depends on nova-core being willing
 to take review direction and priority setting from nova-drivers.  That
 sort of assumption is part of why I think agile processes typically
 don't work in open source.  We don't have the ability to direct people
 with consistent and reliable results.

 I'm afraid if people doing the review are not directly involved in at
 least ACKing the selection and committing to review something, putting
 stuff in slots seems futile.


 My original thinking was I'd set aside a meeting time to review specs
 especially for doc issues and API designs. What I found quickly was that
 the 400+ queue in one project alone was not only daunting but felt like I
 wasn't going to make a dent as a single person, try as I may.

 I did my best but would appreciate any change in process to help with
 prioritization. I'm pretty sure it will help someone like me, looking at
 cross-project queues of specs, to know what to review first, second, third,
 and what to circle back on.


 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Seems everybody who's been around a while has noticed issues this
release and has talked about it; thanks Thierry for putting it together so
well and kicking off the ML thread here.

I'd agree with everything you stated. I've also floated the idea this
past week with a few members of the core Cinder team of accepting new
driver submissions only every other release (I'm expecting this to be a
HUGELY popular proposal [note sarcastic tone]).

There are three things that have just crushed productivity and motivation
in Cinder this release (IMO):
1. Overwhelming number of drivers (tactical contributions)
2. Overwhelming amount of churn, literally hundreds of little changes to
modify docstrings, comments etc but no real improvements to code
3. A new sense of pride in hitting the -1 button on reviews.  A large
number of reviews these days seem to be -1 due to punctuation or
misspelling in comments and docstrings.  There's also a lot of "my way of
writing this method is better because it's *clever*" taking place.

In Cinder's case I don't think new features are the problem; in fact we can't
seem to get new features worked on and released because of all the other
distractions.  That being said, a maintenance- or hardening-only
release is for sure good with me.

Anyway, I've had some plans to talk about how we might fix some of this in
Cinder at next week's sprint.  If there's a broader community effort along
these lines that's even better.

Thanks,
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-network stuck at get semaphores lock when startup

2014-08-07 Thread Ben Nemec
Unfortunately this is a known issue.  We're working on a fix:
https://bugs.launchpad.net/oslo/+bug/1327946

On 08/07/2014 03:57 AM, Alex Xu wrote:
 When I start up nova-network, it gets stuck trying to get the lock for ebtables.
 
 @utils.synchronized('ebtables', external=True)
 def ensure_ebtables_rules(rules, table='filter'):
  .
 
 Checking the code, I found that when utils.synchronized is invoked without the 
 lock_path parameter, the code will try to use a 
 POSIX semaphore.
 
 But a POSIX semaphore won't be released even if the process crashes. Should we 
 fix it? I saw a lot of calls to synchronized
 without lock_path.
 
 Thanks
 Alex
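
The crash behavior described here is exactly what file-based locks avoid: the kernel drops an flock when its holder's file descriptors go away. A minimal sketch of that property — illustrative only, not oslo's actual lockutils implementation (and POSIX-only, since it uses fork):

```python
import fcntl
import os
import tempfile

lock_path = os.path.join(tempfile.gettempdir(), "ebtables-demo.lock")

def acquire(path):
    # Open (or create) the lock file and take an exclusive flock on it.
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
    fcntl.flock(fd, fcntl.LOCK_EX)
    return fd

pid = os.fork()
if pid == 0:
    # Child: grab the lock, then "crash" without ever releasing it.
    acquire(lock_path)
    os._exit(1)

os.waitpid(pid, 0)

# Because an flock dies with its holder's file descriptors, the parent
# can still take the lock; a leaked POSIX semaphore would block here forever.
fd = acquire(lock_path)
print("acquired after child crash")
fcntl.flock(fd, fcntl.LOCK_UN)
os.close(fd)
```

This is why specifying an external lock_path (file lock) avoids the hang, while the semaphore path does not.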
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-network stuck at get semaphores lock when startup

2014-08-07 Thread Alex Xu

Oops, thanks

On 2014-08-07 22:08, Ben Nemec wrote:

Unfortunately this is a known issue.  We're working on a fix:
https://bugs.launchpad.net/oslo/+bug/1327946

On 08/07/2014 03:57 AM, Alex Xu wrote:

When I start up nova-network, it gets stuck trying to get the lock for ebtables.

@utils.synchronized('ebtables', external=True)
def ensure_ebtables_rules(rules, table='filter'):
  .

Checking the code, I found that when utils.synchronized is invoked without the
lock_path parameter, the code will try to use a
POSIX semaphore.

But a POSIX semaphore won't be released even if the process crashes. Should we
fix it? I saw a lot of calls to synchronized
without lock_path.

Thanks
Alex


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Bug squashing day

2014-08-07 Thread Eugene Nikanorov
Hi neutron folks,

Today should have been 'Bug squashing day' where we go over existing bugs
filed for the project and triage/prioritize/comment on them.

I've created an etherpad with (hopefully) full list of neutron bugs:
https://etherpad.openstack.org/p/neutron-bug-squashing-day-2014-08-07

I was able to walk through a couple hundred of the almost one thousand bugs we have.
My target was to reduce the number of open bugs, so I moved some of them to
incomplete/invalid/won't-fix state (not many, though); then I tried to reduce the
number of high-importance bugs, especially ones that have been hanging for too long.

As you can see, bugs in the etherpad are sorted by importance.
Some of my observations include:
- almost all bugs with High priority really seem like issues we should be
fixing.
In many cases the submitter or initial contributor abandoned their work on the
bug...
- there are a couple of important bugs related to DVR where previously
working stuff
is broken, but in all cases there are DVR subteam members working on those,
so we're good here so far.

I also briefly described the resolution for each bug, where 'n/a' means that
bug just needs to be fixed/work should be continued without any change to
state.
I'm planning to continue to go over this list and expect more bugs will go
away which previously have been marked as medium/low or wishlist.

If anyone is willing to help - you're welcome!

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Package python-django-pyscss dependencies on CentOS

2014-08-07 Thread Matthias Runge

On 07/08/14 11:11, Timur Sufiev wrote:

Thanks,

now it is clear that this requirement can be safely dropped.


As I said, it's required at build time, if you execute the tests
during the build.
It's not a runtime dependency; the page you were referring to is from 
the build system.


Matthias

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-08-07 Thread Matt Riedemann



On 7/18/2014 2:55 AM, Daniel P. Berrange wrote:

On Thu, Jul 17, 2014 at 12:13:13PM -0700, Johannes Erdfelt wrote:

On Thu, Jul 17, 2014, Russell Bryant rbry...@redhat.com wrote:

On 07/17/2014 02:31 PM, Johannes Erdfelt wrote:

It kind of helps. It's still implicit in that you need to look at what
features are enabled at what version and determine if it is being
tested.

But the behavior is still broken since code is still getting merged that
isn't tested. Saying that is by design doesn't help the fact that
potentially broken code exists.


Well, it may not be tested in our CI yet, but that doesn't mean it's not
tested some other way, at least.


I'm skeptical. Unless it's tested continuously, it'll likely break at
some point.

We seem to be selectively choosing the continuous part of CI. I'd
understand if it was reluctantly because of immediate problems but
this reads like it's acceptable long-term too.


I think there are some good ideas in other parts of this thread to look
at how we can more regularly rev libvirt in the gate to mitigate this.

There's also been work going on to get Fedora enabled in the gate, which
is a distro that regularly carries a much more recent version of libvirt
(among other things), so that's another angle that may help.


That's an improvement, but I'm still not sure I understand what the
workflow will be for developers.


That's exactly why we want to have the CI system using newer libvirt
than it does today. The patch to cap the version doesn't change what
is tested - it just avoids users hitting untested paths by default
so they're not exposed to any potential instability until we actually
get a more updated CI system.


Do they need to now wait for Fedora to ship a new version of libvirt?
Fedora is likely to help the problem because of how quickly it generally
ships new packages and their release schedule but it would still hold
back some features?


Fedora has an add-on repository (virt-preview) which contains the
latest QEMU + libvirt RPMs for the current stable release - this lags
upstream by a matter of days, so there would be no appreciable delay
in getting access to newest possible releases.


Also, this explanation doesn't answer my question about what happens
when the gate finally gets around to actually testing those potentially
broken code paths.


I think we would just test out the bump and make sure it's working fine
before it's enabled for every job.  That would keep potential breakage
localized to people working on debugging/fixing it until it's ready to go.


The downside is that new features for libvirt could be held back by
needing to fix other unrelated features. This is certainly not a bigger
problem than users potentially running untested code simply because they
are on a newer version of libvirt.

I understand we have an immediate problem and I see the short-term value
in the libvirt version cap.

I try to look at the long-term and unless it's clear to me that a
solution is proposed to be short-term and there are some understood
trade-offs then I'll question the long-term implications of it.


Once the CI system is regularly tracking upstream releases within a matter
of days, the version cap is a total non-issue from a feature availability
POV. It is nonetheless useful in the long term; for example,
if there were a problem we miss in testing, which a deployer then hits in
the field, the version cap would allow them to get their deployment to
avoid use of the newer libvirt feature, which could be a useful workaround
for them until a fix is available.

Regards,
Daniel
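
The deployer workaround Daniel describes boils down to clamping the advertised library version to an operator-set ceiling, so newer (untested) feature paths stay off by default. A sketch with illustrative names — not Nova's real configuration options:

```python
# Sketch of a deploy-time version cap: features gated on a library version
# above the cap stay disabled even when the installed library is newer.

def parse_version(s):
    """Turn '1.2.7' into a comparable tuple (1, 2, 7)."""
    return tuple(int(p) for p in s.split("."))

INSTALLED = parse_version("1.2.7")   # what the host actually has
CAP = parse_version("1.2.2")        # operator-configured ceiling (or None)

def effective_version(installed, cap):
    # The driver behaves as if the older of the two were installed.
    return min(installed, cap) if cap else installed

def has_feature(min_version):
    """A feature is enabled only if the *capped* version supports it."""
    return effective_version(INSTALLED, CAP) >= parse_version(min_version)

print(has_feature("1.2.2"))  # True  - within the tested, capped range
print(has_feature("1.2.5"))  # False - newer feature is held back by the cap
```

Clearing the cap (setting it to None) restores normal behavior, which is what makes it a useful field workaround rather than a permanent limit.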



FYI, there is a proposed revert of the libvirt version cap change 
mentioned previously in this thread [1].


Just bringing it up again here since the discussion should happen in the 
ML rather than gerrit.


[1] https://review.openstack.org/#/c/110754/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

