Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-22 Thread Rochelle.RochelleGrober
Ok, this is funny to some of us in the community.  The general populace of this 
community is so against the idea of management that they will use the term for 
a despotic dictator as a position name rather than manager.  Sorry, but this 
needed to be said.

Specific comments in line:

Thierry Carrez wrote:
 Hi everyone,
 We all know being a project PTL is an extremely busy job. That's because
 in our structure the PTL is responsible for almost everything in a project:
 - Release management contact
 - Work prioritization
 - Keeping bugs under control
 - Communicate about work being planned or done
 - Make sure the gate is not broken
 - Team logistics (run meetings, organize sprints)
 - ...

Point of clarification:  I've heard PTL=Project Technical Lead and PTL=Program 
Technical Lead. Which is it?  It is kind of important as OpenStack grows, 
because the first is responsible for *a* project, and the second is responsible 
for all projects within a program.

I'd also like to set out as an example of a Program that is growing to 
encompass multiple projects, the Neutron Program.  Look at how it is expanding:

Multiple sub-teams for:  LBAAS, DNAAS, GBP, etc.  This model could be extended 
such that:  
- the subteam is responsible for code reviews, including the first +2 for 
design, architecture and code of the sub-project, always also keeping an eye 
out that the sub-project code continues to both integrate well with the 
program, and that the program continues to provide the needed code bits, 
architecture modifications and improvements, etc. to support the sub-project.
- the final +2/A would be from the Program reviewers to ensure that all 
integrate nicely together into a single, cohesive program.  
- This would allow sub-projects to have core reviewers along with the program, 
and would be a good separation of duties.  It would also help to increase the 
number of reviews moving to merged code.
- Taken to a logical stepping stone, you would have project technical leads for 
each project, and they would make up a program council, with the program 
technical lead being the chair of the council.

This is a way to offload a good chunk of PTL tactical responsibilities and help 
them focus more on the strategic.

 They end up being completely drowned in those day-to-day operational
 duties, miss the big picture, can't help in development that much
 anymore, get burnt out. Since you're either the PTL or not the PTL,
 you're very alone and succession planning is not working that great.
 There have been a number of experiments to solve that problem. John
 Garbutt has done an incredible job at helping successive Nova PTLs
 handling the release management aspect. Tracy Jones took over Nova bug
 management. Doug Hellmann successfully introduced the concept of Oslo
 liaisons to get clear point of contacts for Oslo library adoption in
 projects. It may be time to generalize that solution.
 The issue is one of responsibility: the PTL is ultimately responsible
 for everything in a project. If we can more formally delegate that
 responsibility, we can avoid escalating everything up to the PTL, and we
 can rely on a team of people rather than just one person.
 Enter the Czar system: each project should have a number of liaisons /
 official contacts / delegates that are fully responsible to cover one
 aspect of the project. We need to have Bugs czars, which are responsible
 for getting bugs under control. We need to have Oslo czars, which serve
 as liaisons for the Oslo program but also as active project-local oslo
 advocates. We need Security czars, which the VMT can go to to progress
 quickly on plugging vulnerabilities. We need release management czars,
 to handle the communication and process with that painful OpenStack
 release manager. We need Gate czars to serve as first-line-of-contact
 getting gate issues fixed... You get the idea.
Let's call spades, spades here.  Czar is not only overkill, but the wrong word.

Each position suggested here exists in corporate development projects:
- Bug czar == bug manager/administrator/QA engineer/whatever - someone in 
charge of making sure bugs get triaged, assigned and completed
- Oslo czar == systems engineers/project managers who make sure that the 
project is in line with the rest of the projects that together make an 
integrated release.  This position needs to stretch beyond just Oslo to 
encompass all the cross-project requirements and will likely be its own 
position.
- Gate Czar == integration engineer(manager)/QA engineer(manager)/build-release 
engineer.  This position would also likely be a liaison to Infra.
- Security Czar == security guru (that name takes me back ;-)
- Release management Czar == Project release manager
- Doc Czar == tech editor
- Tempest Czar == QA engineer(manager)

Yes, programs are now mostly big enough to require coordination and management. 
The roles are long defined, so why invent new names for them?

Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-25 Thread Rochelle.RochelleGrober

Zane Bitter wrote:
 On 22/08/14 21:02, Anne Gentle wrote:
  I'm with Rocky on the anti-czar-as-a-word camp. We all like clever names
  to shed the corporate stigma but this word ain't it. Liaison or lead?
 +1. The only time you hear the word 'czar' in regular life (outside of
 references to pre-revolutionary Russia) it means that the government is
 looking for a cheap PR win that doesn't require actually doing/changing anything.
 Liaison or Contact would be fine choices IMHO.

Or, how about Secretary?  Such as part of a cabinet?  Secretary of Bugs is 
kinda cool in that they collect info and report, troubleshoot, etc., but final 
decisions are directed by the PTL.



OpenStack-dev mailing list

Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-25 Thread Rochelle.RochelleGrober
Zane Bitter [August 25, 2014 1:38 PM] wrote:

. . .


 I'd say we've done fairly well, but I would attribute that at least in
 part to the fact that we've treated the PTL as effectively the
 release management contact more than the guy who will resolve
 disputes for us. In other words, despite rather than because of the
 requirement to have a PTL.
 I do think that having a PTL with no actual responsibilities runs the
 risk of having it be seen as a trophy rather than a service to the
 community. The good PTLs - which so far is all of them - are likely to
 respond by volunteering for just as many things as they were doing
 before, so that will tend to counteract the goal of preventing burnout.

I don't think anyone is saying or even really believes that distributing the 
workload would result in a PTL with no actual responsibilities.  There is just 
so much to do as a PTL for the larger projects that even having a team focused 
on ensuring the tactical activities happen will still leave the PTL with a 
superhuman workload: planning, coordinating, correcting, driving, regrouping, 
focusing, liaising, etc.

As I said in my previous mail (it got lost in the conversation about what to 
call these team members): to keep growing quality, communications, contributions 
and the health of the projects and OpenStack as an ecosystem, the PTLs must not 
only be strategic thinkers, but strategic actors.  And it's pretty darn hard to 
be strategic when you're down in the tactical, day-to-day weeds.  All of it is 
important, but the only way to keep OpenStack growing *and* healthy is to start 
to specialize in organizational areas, not just code areas.

Look at the organic growth of technical projects in general.  When you start 
with just a few people, communication is easy.  Everyone knows the whole 
project and it's easy to stay on the same page.  New people come on board and 
now you need to document design, operation and organizational lore.  More 
people come on board and you need to track bugs, maybe features, and maybe 
split into groups, which means a leader needs to arise in each group such that 
the multiple groups can stay in sync and integrate their components.  And it 
continues to grow.  Some OpenStack projects have reached the state where each 
is really multiple projects, each with a lead.  Neutron and TripleO
both address this situation, empowering the internal projects and project 
leads, with the PTLs becoming more strategic and more focused on the ecosystem 
of the subprojects.

I bet Kyle Mestery, Jay Pipes and Robert Collins would be happy to disabuse you 
of the idea that they don't have any responsibilities ;-)


  I'm open to the alternative solution (which would be for programs that
  are not interested in having a PTL to just not have one). But then if
  things go badly wrong, you require the TC to step in with threats of
  removal from OpenStack and/or to force an election/vote in the middle of
  the crisis. I'm really failing to see how that would result, in those
  hypothetical crisis scenarios, in a better outcome.
 I don't think there are any good scenarios if you get to that crisis
 point. Imagine a scenario where the community is more or less evenly
 split and neither side is willing to back down even after seeking
 guidance from the TC, the PTL breaks the deadlock by fiat in lieu of
 consensus, followed by an unusually high number of spelling mistakes
 corrections in the source code, a single-issue election, potentially a
 reversal of the decision and possibly a fork that will force the TC to
 step in and choose a side. (Note: not choosing is also a choice.)
 pretty much an unmitigated disaster too.
 Our goal must be to avoid reaching the crisis point, and it seems to me
 that it is actually helpful to make clear to projects that their options are:
 Option A: reach consensus
 Option B: there is no Option B

Re: [openstack-dev] [all] The future of the integrated release

2014-08-26 Thread Rochelle.RochelleGrober

On August 26, 2014, Anne Gentle wrote:
On Mon, Aug 25, 2014 at 8:36 AM, Sean Dague wrote:
On 08/20/2014 12:37 PM, Zane Bitter wrote:
 On 11/08/14 05:24, Thierry Carrez wrote:
 So the idea that being (and remaining) in the integrated release should
 also be judged on technical merit is a slightly different effort. It's
 always been a factor in our choices, but like Devananda says, it's more
 difficult than just checking a number of QA/integration checkboxes. In
 some cases, blessing one project in a problem space stifles competition,
 innovation and alternate approaches. In some other cases, we reinvent
 domain-specific solutions rather than standing on the shoulders of
 domain-specific giants in neighboring open source projects.

 I totally agree that these are the things we need to be vigilant about.

 Stifling competition is a big worry, but it appears to me that a lot of
 the stifling is happening even before incubation. Everyone's time is
 limited, so if you happen to notice a new project on the incubation
 trajectory doing things in what you think is the Wrong Way, you're most
 likely to either leave some drive-by feedback or to just ignore it and
 carry on with your life. What you're most likely *not* to do is to start
 a competing project to prove them wrong, or to jump in full time to the
 existing project and show them the light. It's really hard to argue
 against the domain experts too - when you're acutely aware of how
 shallow your knowledge is in a particular area it's very hard to know
 how hard to push. (Perhaps ironically, since becoming a PTL I feel I
 have to be much more cautious in what I say too, because people are
 inclined to read too much into my opinion - I wonder if TC members feel
 the same pressure.) I speak from first-hand instances of guilt here -
 for example, I gave some feedback to the Mistral folks just before the
 last design summit[1], but I haven't had time to follow it up at all. I
 wouldn't be a bit surprised if they showed up with an incubation
 request, a largely-unchanged user interface and an expectation that I
 would support it.

 The result is that projects often don't hear the feedback they need
 until far too late - often when they get to the incubation review (maybe
 not even their first incubation review). In the particularly unfortunate
 case of Marconi, it wasn't until the graduation review. (More about that
 in a second.) My best advice to new projects here is that you must be
 like a ferret up the pant-leg of any negative feedback. Grab hold of any
 criticism and don't let go until you have either converted the person
 giving it into your biggest supporter, been converted by them, or
 provoked them to start a competing project. (Any of those is a win as
 far as the community is concerned.)

 Perhaps we could consider a space like a separate mailing list
 (openstack-future?) reserved just for announcements of Related projects,
 their architectural principles, and discussions of the same?  They
 certainly tend to get drowned out amidst the noise of openstack-dev.
 (Project management, meeting announcements, and internal project
 discussion would all be out of scope for this list.)

 As for reinventing domain-specific solutions, I'm not sure that happens
 as often as is being made out. IMO the defining feature of IaaS that
 makes the cloud the cloud is on-demand (i.e. real-time) self-service.
 Everything else more or less falls out of that requirement, but the very
 first thing to fall out is multi-tenancy and there just aren't that many
 multi-tenant services floating around out there. There are a couple of
 obvious strategies to deal with that: one is to run existing software
 within a tenant-local resource provisioned by OpenStack (Trove and
 Sahara are examples of this), and the other is to wrap a multi-tenancy
 framework around an existing piece of software (Nova and Cinder are
 examples of this). (BTW the former is usually inherently less
 satisfying, because it scales at a much coarser granularity.) The answer
 to a question of the form:

 Why do we need OpenStack project $X, when open source project $Y
 already exists?

 is almost always:

 Because $Y is not multi-tenant aware; we need to wrap it with a
 multi-tenancy layer with OpenStack-native authentication, metering and
 quota management. That even allows us to set up an abstraction layer so
 that you can substitute $Z as the back end too.

 This is completely uncontroversial when you substitute X, Y, Z = Nova,
 libvirt, Xen. However, when you instead substitute X, Y, Z =
 Zaqar/Marconi, Qpid, MongoDB it suddenly becomes *highly* controversial.
 I'm all in favour of a healthy scepticism, but I think we've passed that
 point now. (How would *you* make an AMQP bus multi-tenant?)

 To be clear, Marconi did make a mistake. The Marconi API presented
 semantics to the user that excluded many otherwise-obvious choices of
 back-end plugin (i.e. Qpid/RabbitMQ). It 
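
The "wrap a multi-tenancy framework around an existing piece of software" strategy described above can be sketched in a few lines. Everything here is hypothetical (the class names, the fictional single-tenant queue backend standing in for $Y); it only illustrates the pattern of scoping every call to a tenant and enforcing quota at the wrapper layer, not any project's real API.

```python
# Illustrative sketch of the multi-tenancy wrapper pattern: a thin layer
# adds tenancy, quota enforcement, and an abstraction point for swappable
# back ends. All names are hypothetical.

class QuotaExceeded(Exception):
    pass

class SingleTenantBackend:
    """Stands in for $Y: software with no notion of tenants."""
    def __init__(self):
        self._queues = {}
    def create_queue(self, name):
        self._queues[name] = []
    def list_queues(self):
        return sorted(self._queues)

class MultiTenantWrapper:
    """Stands in for $X: scopes every call to a tenant and enforces quota."""
    def __init__(self, backend_factory, queues_per_tenant=2):
        self._factory = backend_factory
        self._tenants = {}          # tenant_id -> backend instance
        self._quota = queues_per_tenant

    def _backend(self, tenant_id):
        # One backend namespace per tenant ("tenant-local resource").
        if tenant_id not in self._tenants:
            self._tenants[tenant_id] = self._factory()
        return self._tenants[tenant_id]

    def create_queue(self, tenant_id, name):
        backend = self._backend(tenant_id)
        if len(backend.list_queues()) >= self._quota:
            raise QuotaExceeded(tenant_id)
        backend.create_queue(name)

    def list_queues(self, tenant_id):
        # A tenant can only ever see its own queues.
        return self._backend(tenant_id).list_queues()

svc = MultiTenantWrapper(SingleTenantBackend)
svc.create_queue("tenant-a", "jobs")
svc.create_queue("tenant-b", "jobs")   # same name, different tenant: fine
print(svc.list_queues("tenant-a"))     # ['jobs']
```

Substituting the backend class is exactly the "$Z as the back end too" abstraction point the quote describes.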

Re: [openstack-dev] [all] Design Summit reloaded

2014-08-27 Thread Rochelle.RochelleGrober
On August 27, 2014 3:26 PM Clint Byrum wrote: 
Excerpts from Thierry Carrez's message of 2014-08-27 05:51:55:
 Hi everyone,
 I've been thinking about what changes we can bring to the Design Summit
 format to make it more productive. I've heard the feedback from the
 mid-cycle meetups and would like to apply some of those ideas for Paris,
 within the constraints we have (already booked space and time). Here is
 something we could do:
 Day 1. Cross-project sessions / incubated projects / other projects
 I think that worked well last time. 3 parallel rooms where we can
 address top cross-project questions, discuss the results of the various
 experiments we conducted during juno. Don't hesitate to schedule 2 slots
 for discussions, so that we have time to come to the bottom of those
 issues. Incubated projects (and maybe other projects, if space allows)
 occupy the remaining space on day 1, and could occupy pods on the
 other days.

I like it. The only thing I would add is that it would be quite useful if
the use of pods were at least partially enhanced by an unconference style
interest list.  What I mean is, on day 1 have people suggest topics and
vote on suggested topics to discuss at the pods, and from then on the pods
can host these topics. This is for the other things that aren't well
defined until the summit and don't have their own rooms for days 2 and 3.

[Rocky Grober] +100.  The only thing I would add is that each morning, the 
unconference could vote for that day (or half-day, for that matter).  That way, 
if a session or sessions from the day before generated greater interest in 
something either not listed or with low votes, the morning vote could shift 
priorities toward the now higher-interest topic.


This is driven by the fact that the pods in Atlanta were almost always
busy doing something other than whatever the track that owned them
wanted. A few projects' pods grew to 30-40 people a few times, eating up
all the chairs for the surrounding pods. TripleO often sat at the Heat
pod because of this, for instance.

I don't think they should be fully scheduled. They're also just great
places to gather and have a good discussion, but it would be useful to
plan for topic flexibility and help coalesce interested parties, rather
than have them be silos that get taken over randomly. Especially since
there is a temptation to push the other topics to them already.


Re: [openstack-dev] [all] Design Summit reloaded

2014-08-27 Thread Rochelle.RochelleGrober

From: Chris Jones [] 

Hi Anita

Your impromptu infra-clue-dissemination talks sound interesting (I'd like to 
see the elastic recheck fingerprint one, for example). Would it make sense to 
amplify your reach, by making some short screencasts of these sorts of things?

Chris Jones

[Rocky Grober] +1 or a session at Paris that is recorded?

 On 27 Aug 2014, at 21:48, Anita Kuno wrote:
 On 08/27/2014 02:46 PM, John Griffith wrote:
 On Wed, Aug 27, 2014 at 9:25 AM, Flavio Percoco wrote:
 On 08/27/2014 03:26 PM, Sean Dague wrote:
 On 08/27/2014 08:51 AM, Thierry Carrez wrote:
 Hi everyone,
 I've been thinking about what changes we can bring to the Design Summit
 format to make it more productive. I've heard the feedback from the
 mid-cycle meetups and would like to apply some of those ideas for Paris,
 within the constraints we have (already booked space and time). Here is
 something we could do:
 Day 1. Cross-project sessions / incubated projects / other projects
 I think that worked well last time. 3 parallel rooms where we can
 address top cross-project questions, discuss the results of the various
 experiments we conducted during juno. Don't hesitate to schedule 2 slots
 for discussions, so that we have time to come to the bottom of those
 issues. Incubated projects (and maybe other projects, if space allows)
 occupy the remaining space on day 1, and could occupy pods on the
 other days.
 Day 2 and Day 3. Scheduled sessions for various programs
 That's our traditional scheduled space. We'll have 33% fewer slots
 available. So, rather than trying to cover all the scope, the idea would
 be to focus those sessions on specific issues which really require
 face-to-face discussion (which can't be solved on the ML or using spec
 discussion) *or* require a lot of user feedback. That way, appearing in
 the general schedule is very helpful. This will require us to be a lot
 stricter on what we accept there and what we don't -- we won't have
 space for courtesy sessions anymore, and traditional/unnecessary
 sessions (like my traditional release schedule one) should just move
 to the mailing-list.
 Day 4. Contributors meetups
 On the last day, we could try to split the space so that we can conduct
 parallel midcycle-meetup-like contributors gatherings, with no time
 boundaries and an open agenda. Large projects could get a full day,
 smaller projects would get half a day (but could continue the discussion
 in a local bar). Ideally that meetup would end with some alignment on
 release goals, but the idea is to make the best of that time together to
 solve the issues you have. Friday would finish with the design summit
 feedback session, for those who are still around.
 I think this proposal makes the best use of our setup: discuss clear
 cross-project issues, address key specific topics which need
 face-to-face time and broader attendance, then try to replicate the
 success of midcycle meetup-like open unscheduled time to discuss
 whatever is hot at this point.
 There are still details to work out (is it possible to split the space,
 should we use the usual design summit CFP website to organize the
 scheduled time...), but I would first like to have your feedback on
 this format. Also if you have alternative proposals that would make a
 better use of our 4 days, let me know.
 I definitely like this approach. I think it will be really interesting
 to collect feedback from people about the value they got from days 2 & 3
 vs. Day 4.
 I also wonder if we should lose a slot from days 1 - 3 and expand the
 hallway time. Hallway track is always pretty interesting, and honestly
 a lot of interesting ideas spring up there. The 10 minute transitions often
 seem to feel like you are rushing between places too quickly sometimes.
 Last summit, it was basically impossible to do any hallway talking and
 even meet some folks face-2-face.
 Other than that, I think the proposal is great and makes sense to me.
 Flavio Percoco
 Sounds like a great idea to me.
 I think this is a great direction.
 Here is my dilemma and it might just affect me. I attended 3 mid-cycles
 this release: one of Neutron's (there were 2), QA/Infra and Cinder. The
 Neutron and Cinder ones were mostly in pursuit of figuring out third
 party and exchanging information surrounding that (which I feel was
 successful). The QA/Infra one was, well even though I feel like I have
 been awol, I still consider this my home.
 From my perspective and check with Neutron 

Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-10 Thread Rochelle.RochelleGrober
-Original Message-
From: Daniel P. Berrange [] 
Sent: Wednesday, September 10, 2014 1:45 AM

On Tue, Sep 09, 2014 at 05:14:43PM -0700, Stefano Maffulli wrote:
  To me, this means you don't really want a sin bin where you dump
  drivers and tell them not to come out until they're fit to be
  reviewed by the core; You want a trusted driver community which does
  its own reviews and means the core doesn't have to review them.
 I think we're going somewhere here, based on your comment and other's:
 we may achieve some result if we empower a new set of people to manage
 drivers, keeping them in the same repositories where they are now. This
 new set of people may not be the current core reviewers but other with
 different skillsets and more capable of understanding the driver's
 ecosystem, needs, motivations, etc.
 I have the impression this idea has been circling around for a while but
 for some reason or another (like lack of capabilities in gerrit and
 other reasons) we never tried to implement it. Maybe it's time to think
 about an implementation. We have been thinking about mentors, maybe that's a way to go?
 Sub-team with +1.5 scoring capabilities?

I think that setting up subteams is necessary to stop us imploding but
I don't think it is enough. As long as we have one repo we're forever
going to have conflict & contention in deciding which features to accept,
which is a big factor in problems today. I favour the strong split of the
drivers into separate repositories to remove the contention between the
teams as much as is practical.

[Rocky Grober]  

There is a huge benefit to getting the drivers into separate repositories.  
Once the APIs/interfaces in Nova are clean enough to support the move, they 
will stay cleaner than if the drivers are in the same repository.  And the 
subteams will ensure that the drivers are held to their level of quality.  The 
CI system will be easier to manage with third-party CIs for each of the 
drivers.  And to get changes into Nova core, the subteams will need to 
cooperate, as any core change that affects one driver will most likely affect 
others, so it will be in the subteams' best interests to keep the driver/core 
APIs clean and free of special cases.
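
As a minimal sketch of what a "clean" core/driver API boundary looks like in practice: core code depends only on a small abstract interface, and each driver, in its own repository, implements it. The interface below is entirely hypothetical, not Nova's actual virt driver API.

```python
# Hypothetical sketch of a clean core/driver boundary. The core tree
# depends only on this abstract contract; out-of-tree driver repos
# implement it. Not Nova's real virt driver interface.
from abc import ABC, abstractmethod

class ComputeDriver(ABC):
    """The contract the core tree exposes to out-of-tree drivers."""

    @abstractmethod
    def spawn(self, instance_id, image_ref):
        """Create and boot a new instance."""

    @abstractmethod
    def destroy(self, instance_id):
        """Tear down an instance and its resources."""

class FakeDriver(ComputeDriver):
    """A trivial in-memory driver, e.g. for gating the core tree in CI."""
    def __init__(self):
        self.instances = {}
    def spawn(self, instance_id, image_ref):
        self.instances[instance_id] = image_ref
    def destroy(self, instance_id):
        self.instances.pop(instance_id, None)

driver = FakeDriver()
driver.spawn("inst-1", "cirros")
print("inst-1" in driver.instances)   # True
```

A fake driver like this also shows why the split helps CI: the core repository can gate on the fake, while each driver repository runs its own third-party CI against the same contract.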

From Kyle's later mail:
I think that is absolutely the case: sub-team leaders need to be vetted based 
on their upstream communication skills. I also think what we're looking at in 
Neutron is giving sub-teams a shelf-life, and spinning them down rather than 
letting them live long-term, lose focus, and wander aimlessly.

This is also a very important point that I'd like to expand on.  The subteams 
really should form a drivers team composed of each subteam's PTL.  This 
drivers team would be the interface to Nova core and would need those upstream 
communications skills.  This team could also be the place where Nova 
core/driver API changes get discussed and finalized from the drivers' 
perspective.  Maybe the drivers PTL team should even start by electing a Nova 
core from its PTLs as the drivers team lead.  This team would also be the 
perfect place for the Nova PTL and team to work with the drivers teams to 
collaborate on specs and issues.

Unlike in Neutron, the subteams wouldn't roll back into the Nova core, as their 
charter/purpose will continue to develop as hypervisors, containers, bare metal 
and other new virtual control planes develop.  Getting these teams right will 
mean more agility, higher quality and better consistency within the Nova 
ecosystem.  The drivers team should become strong partners with Nova core in 
allowing Nova to innovate more quickly while addressing technical debt to 
increase quality around the Nova/drivers interactions.

[/Rocky Grober]



Re: [openstack-dev] [all] OpenStack bootstrapping hour - Friday Sept 19th - 3pm EST

2014-09-15 Thread Rochelle.RochelleGrober
This is *great*.  Not only for newbies, but refreshers, learning different 
approaches, putting faces to the signatures, etc.  And Mock best practices is a 
brilliant starting place for developers.

I'd like to vote for a few others:
- Development environment (different ones: PyCharm, Eclipse, IDE for Docs, etc.)
- Tracking down a bug: log searching, back tracing, etc.
- Fixing a bug:  From assigning in Launchpad through clone, fix, git review, 
and merge.
- Writing an integrated test: setup, data recording/collection/clean tear down.

Sorry to have such a big wish list, but for people who learn experientially, 
this will be immensely useful.


-Original Message-
From: Sean Dague [] 
Sent: Monday, September 15, 2014 3:56 PM
Subject: [openstack-dev] [all] OpenStack bootstrapping hour - Friday Sept 19th 
- 3pm EST

A few of us have decided to pull together a regular (cadence to be
determined) video series taking on deep dives inside of OpenStack,
looking at code, explaining why things work that way, and fielding
questions from anyone interested.

For lack of a better title, I've declared it OpenStack Bootstrapping Hour.

Episode 0 - Mock best practices will kick off this Friday, Sept 19th,
from 3pm - 4pm EST. Our experts for this will be Jay Pipes and Dan
Smith. It will be done as a Google Hangout on Air, which means there
will be a live youtube stream while it's on, and a recorded youtube
video that's publicly accessible once we're done.

We'll be using an etherpad during the broadcast to provide links to the
content people are looking at, as well as capture questions. That will
be our backchannel, and audience participation forum, with the advantage
that it creates a nice concise document at the end of the broadcast that
pairs well with the video. (Also: the tech test showed that while code
examples are perfectly viewable during in the final video, during the
live stream they are a little hard to read, etherpad links will help
people follow along at home).

Assuming this turns out to be useful, we're thinking about lots of other
deep dives. The intent is that these are in-depth dives. We as a
community have learned so many things over the last 4 years, but as
OpenStack has gotten so large, being familiar with more than a narrow
slice is hard. This is hopefully a part of the solution to address that.
As I've told others, if nothing else, I'm looking forward to learning a
ton in the process.

Final links for the hangout + etherpad will be posted a little later in
the week. Mostly wanted to make people aware it was coming.


Sean Dague


[openstack-dev] Log Rationalization -- Bring it on!

2014-09-17 Thread Rochelle.RochelleGrober
TL;DR:  I consider the poor state of log consistency a major impediment to 
more widespread adoption of OpenStack and would like to volunteer to own this 
cross-functional process to begin to unify and standardize logging messages and 
attributes for Kilo while dealing with the most egregious issues as the 
community identifies them.

Recap from some mail threads:

From Sean Dague on Kilo cycle goals:

2. Consistency in southbound interfaces (Logging first)

Logging and notifications are south bound interfaces from OpenStack providing 
information to people, or machines, about what is going on.

There is also a third proposed southbound interface, osprofiler.

For Kilo: I think it's reasonable to complete the logging standards and 
implement them. I expect notifications (which haven't quite kicked off) are 
going to take 2 cycles.

I'd honestly *really* love to see a unification path for all the southbound 
parts, logging, osprofiler, notifications, because there is quite a bit of 
overlap in the instrumentation/annotation inside the main code for all of these.

And from Doug Hellmann:
1. Sean has done a lot of analysis and started a spec on standardizing logging 
guidelines where he is gathering input from developers, deployers, and 
operators [1]. Because it is far enough along for us to see real progress, it's 
a good place for us to start experimenting with how to drive cross-project 
initiatives involving code and policy changes from outside of a single project. 
We have a couple of potentially related specs in Oslo as part of the oslo.log 
graduation work [2] [3], but I think most of the work will be within the 


And from James Blair:

1) Improve log correlation and utility

If we're going to improve the stability of OpenStack, we have to be able to 
understand what's going on when it breaks.  That's both true as developers when 
we're trying to diagnose a failure in an integration test, and it's true for 
operators who are all too often diagnosing the same failure in a real 
deployment.  Consistency in logging across projects as well as a cross-project 
request token would go a long way toward this.

While I am not currently managing an OpenStack deployment, writing tests or 
code, or debugging the stack, I have spent many years doing just that.  Through 
QA, Ops and Customer support, I have come to revel in good logging and log 
messages and curse the holes and vagaries in many systems.

Defining/refining logs to be useful and usable is a cross-functional effort 
that needs to include:

- Operators
- QA
- End Users
- Community managers
- Tech Pubs
- Translators
- Developers
- TC (which provides the forum and impetus for all the projects to 
cooperate on this)

At the moment, I think this effort may best work under the auspices of Oslo 
(oslo.log), but I'd love to hear other proposals.

Here is the beginning of my proposal for how to attack and subdue the painful 
state of logs:

- Post this email to the MLs (dev, ops, enduser) to get feedback, garner 
support and participants in the process
- In parallel:
  - Collect up problems, issues, ideas, solutions on an etherpad where anyone 
in the communities can post.
  - Categorize reported log issues into classes (already identified classes):
    - Format consistency across projects
    - Log level definition and categorization across classes
    - Time syncing entries across tens of logfiles
    - Relevancy/usefulness of information provided within messages
    - Etc. (missing a lot here, but I'm sure folks will speak up)
  - Analyze existing log message formats, standards across integrated projects
  - File bugs where issues identified are actual project bugs
  - Build a session outline for a F2F working session at the Paris Design Summit
- At the Paris Design Summit, use a session and/or pod discussions to set 
priorities, recruit contributors, and start and/or flesh out specs and 
blueprints
- Proceed according to priorities, specs, blueprints, contributions and 
changes as needed as the work progresses.
- Keep an active and open rapport and reporting process for the user 
community to comment and participate in the processes.
Measures of success:

- Log messages provide enough consistency of format for productive 
mining through operator-writable scripts
- Problem debugging is simplified through the ability to trust 
timestamps across all OpenStack logs (and use scripts to get to the time you 
want in any/all of the logfiles)
- Standards for format, content, levels and translations have been 
proposed and agreed to be adopted across all projects
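
To make the first measure of success concrete, here is a minimal sketch of the 
kind of operator-writable mining script a consistent format would enable. The 
line layout assumed here (timestamp, level, request id, module, message) is 
purely illustrative, not an agreed standard:

```python
import re

# Assumed consistent line layout (hypothetical, for illustration):
# 2014-09-24 18:44:49.240 DEBUG [req-abc123] nova.compute.manager: message
LINE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<level>[A-Z]+) \[req-(?P<req>\w+)\] (?P<mod>[\w.]+): (?P<msg>.*)"
)

def trace_request(lines, request_id):
    """Collect every entry for one request id across any number of logfiles."""
    hits = []
    for line in lines:
        m = LINE.match(line)
        if m and m.group("req") == request_id:
            hits.append((m.group("ts"), m.group("mod"), m.group("msg")))
    # the fixed-width timestamp sorts correctly as plain text
    return sorted(hits)

merged = [
    "2014-09-24 18:44:49.250 INFO [req-abc] neutron.agent: port wired",
    "2014-09-24 18:44:49.300 ERROR [req-xyz] cinder.volume: unrelated",
    "2014-09-24 18:44:49.240 DEBUG [req-abc] nova.compute.manager: start",
]
for ts, mod, msg in trace_request(merged, "abc"):
    print(ts, mod, msg)
```

The whole point of the consistency/timestamp goals above is that a ten-line 
script like this works across every service's logs at once.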

Re: [openstack-dev] [Nova] [All] API standards working group (was: Some ideas for micro-version implementation)

2014-09-23 Thread Rochelle.RochelleGrober
Jay Pipes on Tuesday, September 23, 2014 9:09 AM wrote:


I'd like to say finally that I think there should be an OpenStack API
working group whose job it is to both pull together a set of OpenStack
API practices as well as evaluate new REST APIs proposed in the
OpenStack ecosystem to provide guidance to new projects or new
subprojects wishing to add resources to an existing REST API.


[Rocky Grober] ++

Jay, are you volunteering to head up the working group? Or at least be an 
active member?  I'll certainly follow with interest, but I think I have my 
hands full with the log rationalization working group.

OpenStack-dev mailing list

Re: [openstack-dev] [Fwd: Change in openstack/neutron[master]: Kill dnsmasq and ns-meta softly]

2014-01-28 Thread Rochelle.RochelleGrober
+1 anyway.  Sometimes I feel like we've lost the humor in our work, but at 
least I see it on IRC and now here.

Thanks for the humanity check!


From: Salvatore Orlando []
Sent: Tuesday, January 28, 2014 1:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Fwd: Change in openstack/neutron[master]: Kill 
dnsmasq and ns-meta softly]

It might be creative, but it's a shame that it did not serve the purpose.
At least it confirmed the kernel bug was related to process termination in 
network namespaces but was not due to SIGKILL exclusively, as it occurred with 
SIGTERM as well.

On the bright side, Mark has now pushed another patch which greatly reduces the 
occurrence of bug 1273386 [1]
We are also working with the ubuntu kernel team to assess whether a kernel fix 
is needed.



On 28 January 2014 18:13, Edgar Magana wrote:
No doubt about it!!!


On 1/28/14 8:45 AM, Jay Pipes 

This might just be the most creative commit message of the year.


Re: [openstack-dev] [PTL] Designating required use upstream code

2014-02-05 Thread Rochelle.RochelleGrober
On Wed, Feb 5, 2014 at 12:05 PM, Russell Bryant wrote:
On 02/05/2014 11:22 AM, Thierry Carrez wrote:
 (This email is mostly directed to PTLs for programs that include one
 integrated project)

 The DefCore subcommittee from the OpenStack board of directors asked the
 Technical Committee yesterday about which code sections in each
 integrated project should be designated sections in the sense of [1]
 (code you're actually required to run or include to be allowed to use the
 trademark). That determines where you can run alternate code (think:
 substitute your own private hypervisor driver) and still be able to call
 the result openstack.


 PTLs and their teams are obviously the best placed to define this, so it
 seems like the process should be: PTLs propose designated sections to
 the TC, which blesses them, combines them and forwards the result to the
 DefCore committee. We could certainly leverage part of the governance
 repo to make sure the lists are kept up to date.

 Comments, thoughts ?

The process you suggest is what I would prefer.  (PTLs writing proposals
for TC to approve)

Using the governance repo makes sense as a means for the PTLs to post
their proposals for review and approval of the TC.



Who gets final say if there's strong disagreement between a PTL and the
TC?  Hopefully this won't matter, but it may be useful to go ahead and
clear this up front.

The Board has some say in this, too, right? The proposal [1] is for a set of 
tests to be proposed and for the Board to approve (section 8).

What is the relationship between that test suite and the designated core areas? 
It seems that anything being tested would need to be designated as core. What 
about the inverse?

The test suite should validate that the core 
capabilities/behaviors/functionality behave as expected (positive and negative 
testing in an integrated environment).  So, the test suites would need to be 
reviewed for applicability.  Maybe, like Gerrit, there would be voting and 
nonvoting parts of tests based on whether something outside of core gets 
exercised in the process of running some tests.  Whatever the case, I doubt 
that the tests would generate a simple yes/no, but rather a score.  A 
discussion of one of the subsets of capabilities for Nova might start with the 
capabilities highlighted on this page:

The test suite would need to exercise the capabilities in these sorts of 
matrices and might produce the A/B/C grades as the rest of the page elucidates.




Russell Bryant


Re: [openstack-dev] [Nova] RFC: Generate API sample files from API schemas

2014-02-06 Thread Rochelle.RochelleGrober

Really lots more than just +1

This leads to so many more efficiencies and increases in effectiveness.


-Original Message-
From: Vishvananda Ishaya [] 
Sent: Thursday, February 06, 2014 10:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] RFC: Generate API sample files from API 

On Feb 6, 2014, at 5:38 AM, Kenichi Oomichi wrote:

 I'd like to propose one idea that autogenerates API sample files from 
 API schema for Nova v3 API.
 We are working on API validation for v3 API, the works require API 
 schema which is defined with JSONSchema for each API. On the other 
 hand, API sample files of v3 API are autogenerated from the template 
 files of v3 API under nova/tests/integrated/v3/api_samples, as api_samples's 
 The API schema files are similar to the template files, because both 
 represent the API parameter structures and each API name.
 For example, the template file of keypairs is
   {
       "keypair": {
           "name": "%(keypair_name)s"
       }
   }
 and the API schema file is
   create = {
       'type': 'object',
       'properties': {
           'keypair': {
               'type': 'object',
               'properties': {
                   'name': {
                       'type': 'string', 'minLength': 1, 'maxLength': 255,
                       'pattern': '^[a-zA-Z0-9 _-]+$'
                   },
                   'public_key': {'type': 'string'}
               },
               'required': ['name'],
               'additionalProperties': False
           }
       },
       'required': ['keypair'],
       'additionalProperties': False
   }
 When implementing new v3 API, we need to write/review both files and 
 that would be hard work. To reduce the workload, I'd like to 
 propose one idea[2] that autogenerates API sample files from API 
 schema instead of template files. We would not need to write a template file 
 of a request.
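
The core of the proposal can be sketched in a few lines: walk the JSON Schema 
and emit a placeholder value for each declared property. This is a simplified 
illustration of the idea, not the actual Nova implementation:

```python
def sample_from_schema(schema):
    """Build a sample request body from a JSON Schema fragment."""
    kind = schema.get("type")
    if kind == "object":
        # recurse into each declared property
        return {name: sample_from_schema(sub)
                for name, sub in schema.get("properties", {}).items()}
    if kind == "array":
        return [sample_from_schema(schema.get("items", {}))]
    if kind == "integer":
        return 1
    if kind == "string":
        return "sample"
    return None

# the keypairs create schema from the email, trimmed for brevity
create = {
    'type': 'object',
    'properties': {
        'keypair': {
            'type': 'object',
            'properties': {
                'name': {'type': 'string', 'minLength': 1, 'maxLength': 255},
                'public_key': {'type': 'string'},
            },
            'required': ['name'],
        },
    },
    'required': ['keypair'],
}

print(sample_from_schema(create))
# {'keypair': {'name': 'sample', 'public_key': 'sample'}}
```

A real generator would also honor 'pattern' and 'format' keywords so the 
placeholder values pass the same validation the API enforces, but the shape of 
the walk is the same.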


The template files were there because we didn't have a clear schema defined.

It would be awesome to get rid of the templates.


 The XML support is dropped from Nova v3 API, and the decision could 
 make this implementation easier. The NOTE is that we still need 
 response template files even if implementing this idea, because API 
 schema files of response don't exist.
 Any comments are welcome.
 Ken'ichi Ohmichi

Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-20 Thread Rochelle.RochelleGrober

 -Original Message-
 From: Malini Kamalambal []
 Sent: Thursday, March 20, 2014 12:13 PM
 'project specific functional testing' in the Marconi context is
 Marconi as a complete system, making Marconi API calls and verifying the
 response - just like an end user would, but without keystone. If one of
 these tests fails, it is because there is a bug in the Marconi code,
 not because its interaction with Keystone caused it to fail.
 That being said there are certain cases where having a project
 functional test makes sense. For example, swift has a functional test job
 that starts swift in devstack. But those things are normally handled on a
 case-by-case basis. In general, if the project is meant to be part of the larger
 ecosystem then Tempest is the place to put functional testing. That way
 you know
 it works with all of the other components. The thing is, in openstack,
 something like a project-isolated functional test almost always involves
 another project in real use cases (for example keystone auth with api requests).
 One of the concerns we heard in the review was 'having the functional
 tests elsewhere (I.e within the project itself) does not count and they
 have to be in Tempest'.
 This has made us as a team wonder if we should migrate all our
 tests to Tempest.
 But from Matt's response, I think it is reasonable to continue in our
 current path and have the functional tests in Marconi coexist along with
 the tests in Tempest.

I think that what is being asked, really is that the functional tests could be 
a single set of tests that would become a part of the tempest repository and 
that these tests would have an ENV variable as part of the configuration that 
would allow either no Keystone or Keystone or some such, if that is the 
only configuration issue that separates running the tests isolated vs. 
integrated.  The functional tests need to be as much as possible a single set 
of tests to reduce duplication and remove the likelihood of two sets getting 
out of sync with each other/development.  If they only run in the integrated 
environment, that's ok, but if you want to run them isolated to make debugging 
easier, then it should be a configuration option and a separate test job.
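
A minimal sketch of that single-configuration-switch idea (the variable name 
and helper are hypothetical, chosen just to illustrate the point): the same 
functional test code builds its request headers differently depending on one 
setting, so the isolated and integrated jobs share one set of tests:

```python
import os

def auth_headers(get_token):
    """Headers for a functional-test API call, with auth toggled by config.

    USE_KEYSTONE is a hypothetical switch: the integrated gate job would set
    it and supply a real token fetcher; the isolated job leaves it unset and
    the same tests run with no auth at all.
    """
    if os.environ.get("USE_KEYSTONE", "false").lower() == "true":
        return {"X-Auth-Token": get_token()}
    return {}

os.environ["USE_KEYSTONE"] = "false"   # isolated run
print(auth_headers(lambda: "secret"))  # {}

os.environ["USE_KEYSTONE"] = "true"    # integrated run
print(auth_headers(lambda: "secret"))  # {'X-Auth-Token': 'secret'}
```

Everything else in the test - the API calls and the response assertions - stays 
identical between the two jobs, which is exactly what keeps the two modes from 
drifting apart.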

So, if my assumptions are correct, QA only requires functional tests for 
integrated runs, but if the project QAs/Devs want to run isolated for dev and 
devtest purposes, more power to them.  Just keep it a single set of functional 
tests and put them in the Tempest repository so that if a failure happens, 
anyone can find the test and do the debug work without digging into a separate 
project repository.

Hopefully, the tests as designed could easily take a new configuration 
directive and a short bit of work with OS QA will get the integrated FTs 
working as well as the isolated ones.


 On 3/20/14 1:59 PM, Matthew Treinish wrote:
 On Thu, Mar 20, 2014 at 11:35:15AM +, Malini Kamalambal wrote:
  Hello all,
  I have been working on adding tests in Tempest for Marconi, for the
 last few months.
  While there are many amazing people to work with, the process has
 been more difficult than I expected.
  Couple of pain-points and suggestions to make the process easier for
 myself and future contributors.
  1. The QA requirements for a project to graduate need more details beyond
 'the Project must have a *basic* devstack-gate job set up'
  2. The scope of Tempest needs clarification - what tests should be in
 Tempest vs. in the individual projects? Or should they be in both
 tempest and the project?
  See details below.
  1. There is little documentation on graduation requirements from a QA
 perspective beyond 'Project must have a basic devstack-gate job set up'.
  As a result, I hear different interpretations on what a basic devstack-
 gate job is.
  This topic was discussed in one of the QA meetings a few weeks back
  Based on the discussion there, having a basic job - such as one that
 will let us know 'if a keystone change broke marconi' was good
  My efforts in getting Marconi to meet graduation requirements w.r.t.
 Tempest were based on the above discussion.
  However, my conversations with the TC during Marconi's graduation
 review led me to believe that these requirements aren't yet
  We were told that we needed to have more test coverage in tempest, and
 having them elsewhere (i.e. functional tests in the Marconi project
 itself) was not good enough.
 So having only looked at the Marconi ML thread and not the actual TC
 minutes I might be missing the whole picture. But, from what I saw
 when I looked
 at both a marconi commit and a tempest commit is that there is no
 devstack-gate job on marconi commits. It's only non-voting in the
 Additionally, there isn't a non-voting job on tempest or 

Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-21 Thread Rochelle.RochelleGrober

 From: Malini Kamalambal []
 We are talking about different levels of testing,
 1. Unit tests - which everybody agrees should be in the individual projects
 2. System Tests - 'System' referring to (and limited to) all the components
 that make up the project. These are also the functional tests for the project.
 3. Integration Tests - This is to verify that the OS components integrate
 well and don't break other components - Keystone being the most obvious
 example. This is where I see getting the maximum mileage out of Tempest.
 I see value in projects taking ownership of the System Tests - because if
 the project is not 'functionally ready', it is not ready to integrate with
 other components of Openstack.
 But for this approach to be successful, projects should have diversity in
 the team composition - we need more testers who focus on creating these tests.
 This will keep the teams honest in their quality standards.

+1000  I love your approach to this.  You are right.  Functional tests for the 
project, that exist in an environment, but that exercise the intricacies of 
just the project aren't there for most projects, but really should be.  And 
these tests should be exercised against new code before the code enters the 
gerrit/Jenkins stream. But, as Malini points out, it's at most a dream for most 
projects as the test developers just aren't part of most projects.

 As long as individual projects cannot guarantee functional test coverage,
 we will need more tests in Tempest.
 But that will shift focus away from Integration Testing, which can be done
 ONLY in Tempest.

+1  This is also an important point.  If functional testing belonged to the 
projects, then most of these tests would be run before a tempest test was ever 
run and would not need to be part of the integrated tests, except as a subset 
that demonstrate the functioning integration with other projects.

 Regardless of whatever we end up deciding, it will be good to have
 discussions sooner than later.
 This will help at least the new projects to move in the right direction.

Maybe a summit topic?  How do we push functional testing into the project level?



Re: [openstack-dev] [RFC] Tempest without branches

2014-04-04 Thread Rochelle.RochelleGrober
(easier to insert my questions at top of discussion as they are more general)

How would test deprecations work in a branchless Tempest?  Right now, there is 
the discussion on removing the XML tests from Tempest, yet they are still valid 
for Havana and Icehouse.  If they get removed, will they still be accessible 
and runnable for Havana version tests?  I can see running from a tagged version 
for Havana, but if you are *not* running from the tag, then the files would be 
gone.  So, I'm wondering how this would work for Refstack, testing backported 
bugfixes, etc.

Another related question arises from the discussion of Nova API versions.  
Tempest tests are being enhanced to do validation, and the newer API versions  
(2.1, 3.n, etc., when the approach is decided) will do validation, etc.  How 
will these backward incompatible tests be handled if the test that works for 
Havana gets modified to work for Juno and starts failing Havana code base?

With the discussion of project functional tests that could be maintained in one 
place, but run in two (maintenance location undecided, run locale local and 
Tempest/Integrated), how would this cross project effort be affected by a 
branchless Tempest?

Maybe we need some use cases to ferret out the corner cases of a branchless 
Tempest implementation?  I think we need to get more into some of the details 
to understand what would be needed to be added/modified/ removed to make this 
design proposal work.


From: David Kranz []
Sent: Friday, April 04, 2014 6:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [RFC] Tempest without branches

On 04/04/2014 07:37 AM, Sean Dague wrote:

An interesting conversation has cropped up over the last few days in -qa
and -infra which I want to bring to the wider OpenStack community. When
discussing the use of Tempest as part of the Defcore validation we came
to an interesting question:

Why does Tempest have stable/* branches? Does it need them?

Historically the Tempest project has created a stable/foo tag the week
of release to lock the version of Tempest that will be tested against
stable branches. The reason we did that is until this cycle we had
really limited knobs in tempest to control which features were tested.
stable/havana means - test everything we know how to test in havana. So
when, for instance, a new API extension landed upstream in icehouse,
we'd just add the tests to Tempest. It wouldn't impact stable/havana,
because we wouldn't backport changes.

But is this really required?

For instance, we don't branch openstack clients. They are supposed to
work against multiple server versions. Tempest, at some level, is
another client. So there is some sense there.

Tempest now also has flags on features, and tests are skippable if
services, or even extensions, aren't enabled (all explicitly settable in
the tempest.conf). This is a much better control mechanism than the
coarse-grained selection of stable/foo.

If we decided not to set a stable/icehouse branch in 2 weeks, the gate
would change as follows:

Project masters: no change
Project stable/icehouse: would be gated against Tempest master
Tempest master: would double the gate jobs, gate on project master and
project stable/icehouse on every commit.

(That last one needs infra changes to work right, those are all in
flight right now to assess doability.)

Some interesting effects this would have:

 * Tempest test enhancements would immediately apply on stable/icehouse *

... giving us more confidence. A large amount of tests added to master
in every release are enhanced checking for existing function.

 * Tempest test changes would need server changes in master and
stable/icehouse *

In trying tempest master against stable/havana we found a number of
behavior changes in projects that there had been a 2-step change in the
Tempest tests to support. But this actually means that stable/havana and
stable/icehouse for the same API version are different. Going forward
this would require master + stable changes on the projects + Tempest
changes. Which would provide much more friction in changing these sorts
of things by accident.

 * Much more stable testing *

If every Tempest change is gating on stable/icehouse, the week-long
"stable/havana can't pass tests" situation won't happen. There will be
much more urgency to keep stable branches functioning.

If we got rid of branches in Tempest the path would be:

 * infrastructure to support this in infra - in process, probably
landing today
 * don't set stable/icehouse - decision needed by Apr 17th
 * changes to d-g/devstack to be extra explicit about what features
stable/icehouse should support in tempest.conf
 * see if we can make master work with stable/havana to remove the
stable/havana Tempest branch (if this is doable in a month, great, if
not just wait for havana to age out).
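
The feature flags Sean mentions look roughly like the following tempest.conf 
fragment. Section and option names here are illustrative from memory, so check 
the generated sample config for the exact spellings:

```ini
[service_available]
# skip whole test groups when a service isn't deployed
nova = True
neutron = True
heat = False

[compute-feature-enabled]
# per-feature knobs replace the coarse stable/* branch selection
resize = True
live_migration = False
```

Under the branchless model, a stable/icehouse job would simply carry a 
tempest.conf that switches off everything icehouse doesn't ship, while master 
jobs switch everything on.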

Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-07-30 Thread Rochelle.RochelleGrober
 From: Gary Kotton [] 
 On 7/30/14, 8:22 PM, Kevin L. Mitchell
 On Wed, 2014-07-30 at 09:01 +0200, Flavio Percoco wrote:
  As a stable-maint, I'm always hesitant to review patches I've no
  understanding on, hence I end up just checking how big is the patch,
  whether it adds/removes new configuration options etc but, the real
  review has to be done by someone with good understanding of the
  Something I've done in the past is adding the folks that had
  the patch on master to the stable/maint review. They should know
  code already, which means it shouldn't take them long to review it.
  the sanity checks should've been done already.
  With all that said, I'd be happy to give *-core approval permissions
  stable branches, but I still think we need a dedicated team that has
  final (or at least relevant) word on the patches.
 Maybe what we need to do is give *-core permission to +2 the patches,
 but only stable/maint team has *approval* permission.  Then, the cores
 can review the code, and stable/maint only has to verify applicability
 to the stable branch...


This approach guarantees final say by the stable/maint team, but lets any core 
validate that the patch is appropriate from the project's technical 
perspective.  It keeps the balance but broadens the validation pool.
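
In Gerrit terms, that split can be expressed directly in a project's ACLs. This 
is a hypothetical project.config fragment (the group names are invented), not 
the actual openstack configuration:

```ini
[access "refs/heads/stable/*"]
  # any core may review and +2 for technical correctness...
  label-Code-Review = -2..+2 group nova-core
  # ...but only the stable team can approve the merge
  label-Workflow = -1..+1 group stable-maint-core
```

The Workflow/approval label stays with stable-maint, so backport applicability 
is still judged by the dedicated team even though the review pool widens.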


 Kevin L. Mitchell

Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-18 Thread Rochelle.RochelleGrober
+1 for the discussion

Remember, a cloud does not always have all its backend co-located.  There are 
sometimes AZs and often other hidden network hops.  

And, to ask the obvious, what do you think the response is when you whisper 
NSA in a crowded Google data center?


-Original Message-
From: Jorge Miramontes [] 
Sent: Friday, April 18, 2014 2:13 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario 

+1 for German's use cases. We need SSL re-encryption for decisions the
load balancer needs to make at the l7 layer as well. Thanks Clint, for
your thorough explanation from a security standpoint.


On 4/18/14 1:38 PM, Clint Byrum wrote:

Excerpts from Stephen Balukoff's message of 2014-04-18 10:36:11 -0700:
 Dang.  I was hoping this wasn't the case.  (I personally think it's a
 little silly not to trust your service provider to secure a network when
 they have root access to all the machines powering your cloud... but I

No one person or even group of people on the operator's network will have
full access to everything. Security is best when it comes in layers. Area
51 doesn't just have a guard shack and then you drive right into the
hangars with the UFO's and alien autopsies. There are sensors, mobile
guards, secondary checkpoints, locks on the outer doors, and locks on
the inner doors. And perhaps most importantly, the MP who approves your
entry into the first gate, does not even have access to the next one.

Your SSL terminator is a gate. What happens once an attacker (whoever
that may be, your disgruntled sysadmin, or rogue hackers) is behind that
gate _may_ be important.

 Part of the reason I was hoping this wasn't the case, isn't just
because it
 consumes a lot more CPU on the load balancers, but because now we
 potentially have to manage client certificates and CA certificates (for
 authenticating from the proxy to back-end app servers). And we also
have to
 decide whether we allow the proxy to use a different client cert / CA
 pool, or per member.
 Yes, I realize one could potentially use no client cert or CA (ie.
 encryption but no auth)...  but that actually provides almost no extra
 security over the unencrypted case:  If you can sniff the traffic
 proxy and back-end server, it's not much more of a stretch to assume you
 can figure out how to be a man-in-the-middle.

A passive attack where the MITM does not have to witness the initial
handshake or decrypt/reencrypt to sniff things is quite a bit easier to
pull off and would be harder to detect. So "almost no extra security"
is not really accurate. But this is just one point of data for risk
assessment.
 Do any of you have a use case where some back-end members require SSL
 authentication from the proxy and some don't? (Again, deciding whether
 client cert / CA usage should attach to a pool or to a member.)
 It's a bit of a rabbit hole, eh.

Security turns into an endless rat hole when you just look at it as a
product, such as "a secure load balancer".

If, however, you consider that it is really just a process of risk
assessment and mitigation, then you can find a sweet spot that works
in your business model. How much does it cost to mitigate the risk
of unencrypted backend traffic from the load balancer?  What is the
potential loss if the traffic is sniffed? How likely is it that it will
be sniffed? .. Those are ongoing questions that need to be asked and
then reevaluated, but they don't have a fruitless stream of what-if's
that have to be baked in like the product discussion. It's just part of
your process, and processes go on until they aren't needed anymore.

IMO a large part of operating a cloud is decoupling the ability to setup
a system from the ability to enable your business with a system. So
if you can communicate the risks of doing without backend encryption,
and charge the users appropriately when they choose that the risk is
worth the added cost, then I think it is worth it to automate the setup
of CA's and client certs and put that behind an API. Luckily, you will
likely find many in the OpenStack community who can turn that into a
business opportunity and will help.
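
For a sense of what backend re-encryption with mutual auth looks like 
operationally, here is a hedged HAProxy sketch (addresses and file paths are 
invented for illustration): the load balancer re-encrypts to the member, 
verifies the member's certificate against a pool CA, and presents its own 
client certificate:

```
backend app_servers
    mode http
    # re-encrypt to the member; verify its cert and present our client cert
    server web1 10.0.0.11:443 ssl verify required ca-file /etc/ssl/backend-ca.pem crt /etc/ssl/client.pem
```

Whether the ca-file/crt pair attaches per pool or per member is exactly the API 
design question Stephen raises above.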


Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta

2014-04-30 Thread Rochelle.RochelleGrober
+1 but I don't get in until late Sunday :-(  Any chance you could do this 
sometime Monday?  I'd like to meet the people behind the IRC names and email 
addresses.


-Original Message-
From: Ken'ichi Ohmichi [] 
Sent: Wednesday, April 30, 2014 6:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta

2014-04-30 19:11 GMT+09:00 Koderer, Marc
 Hi folks,

 last time we met one day before the Summit started for a short meet-up.
 Should we do the same this time?

 I will arrive Saturday to recover from the jet lag ;) So Sunday 11th would be 
 fine for me.

I may be in the jet lag Sunday, but the meet-up would be nice for me;-)

Ken'ichi Ohmichi


Re: [openstack-dev] [Ironic] handling drivers that will not be third-party tested

2014-05-21 Thread Rochelle.RochelleGrober
+1 for community contribs and a common place for them to be sourced.

From: Devananda van der Veen []
Sent: Wednesday, May 21, 2014 5:03 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Ironic] handling drivers that will not be third-party 

I'd like to bring up the topic of drivers which, for one reason or another, are 
probably never going to have third party CI testing.

Take for example the iBoot driver proposed here:

I would like to encourage this type of driver as it enables individual 
contributors, who may be using off-the-shelf or home-built systems, to benefit 
from Ironic's ability to provision hardware, even if that hardware does not 
have IPMI or another enterprise-grade out-of-band management interface. 
However, I also don't expect the author to provide a full third-party CI 
environment, and as such, we should not claim the same level of test coverage 
and consistency as we would like to have with drivers in the gate.

As it is, Ironic already supports out-of-tree drivers. A python module that 
registers itself with the appropriate entrypoint will be made available if the 
ironic-conductor service is configured to load that driver. For what it's 
worth, I recall Nova going through a very similar discussion over the last few 

So, why not just put the driver in a separate library on github or stackforge?


Re: [openstack-dev] [oslo] logging around olso lockutils

2014-09-25 Thread Rochelle.RochelleGrober

Exactly what I was thinking.  Semaphore races and deadlocks are important to be 
able to trace, but the normal production cloud doesn't want to see those 
messages.

What might be even better would be to also put a counter on the semaphores so 
that if they ever go above 1 or below 0 they report an error on normal log levels.  I'm 
assuming it would be an error.  I can't see why it would be just a warn or 
info, but, I don't know the guts of the code here.


-Original Message-
From: Joshua Harlow [] 
Sent: Thursday, September 25, 2014 12:23 PM
To:; OpenStack Development Mailing List (not for usage 
Subject: Re: [openstack-dev] [oslo] logging around olso lockutils

Or how about we add in a new log level?

A few libraries I have come across support the log level 5 (which is less than 
debug (10) but greater than notset (0))...

One usage of this is in the multiprocessing library in python itself @

Kazoo calls it the 'BLATHER' level @

Since these messages can be actually useful for lock_utils developers it could 
be useful to keep them[1]?

Just a thought...

[1] One man's DEBUG is another man's garbage, ha.
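
Joshua's suggestion maps directly onto the stdlib. A sketch of the idea - the 
level name follows Kazoo's BLATHER, and the helper method is my assumption 
about how it might be wired up, not existing oslo code:

```python
import logging

BLATHER = 5  # below DEBUG (10), above NOTSET (0)
logging.addLevelName(BLATHER, "BLATHER")

def blather(self, msg, *args, **kwargs):
    """Log lock chatter below DEBUG so normal debug runs stay readable."""
    if self.isEnabledFor(BLATHER):
        self._log(BLATHER, msg, args, **kwargs)

logging.Logger.blather = blather

log = logging.getLogger("lockutils.demo")
log.addHandler(logging.StreamHandler())

log.setLevel(logging.DEBUG)   # ordinary debugging: semaphore noise hidden
log.blather('Created new semaphore "iptables"')   # not emitted

log.setLevel(BLATHER)         # lock_utils developers opt in
log.blather('Acquired semaphore "iptables"')      # emitted
```

With the semaphore messages at BLATHER, the DEBUG stream Sean quotes above 
shrinks to the records that actually describe work being done.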

On Sep 25, 2014, at 12:06 PM, Ben Nemec wrote:

 On 09/25/2014 07:49 AM, Sean Dague wrote:
 Spending a ton of time reading logs, oslo locking ends up basically
 creating a ton of output at DEBUG that you have to mentally filter to
 find problems:
 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Created new semaphore iptables internal_lock
 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Acquired semaphore iptables lock
 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Attempting to grab external lock iptables external_lock
 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Got file lock /opt/stack/data/nova/nova-iptables acquire
 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Got semaphore / lock _do_refresh_provider_fw_rules inner
 2014-09-24 18:44:49.244 DEBUG nova.compute.manager
 DeleteServersAdminTestXML-469708524] [instance:
 98eb8e6e-088b-4dda-ada5-7b2b79f62506] terminating bdm
 _cleanup_volumes /opt/stack/new/nova/nova/compute/
 2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Released file lock /opt/stack/data/nova/nova-iptables release
 2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Releasing semaphore iptables lock
 2014-09-24 18:44:49.249 DEBUG nova.openstack.common.lockutils
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Semaphore / lock released _do_refresh_provider_fw_rules inner
 Also readable here:
 (Yes, it's kind of ugly)
 What occured to me is that in debugging locking issues what we actually
 care about is 2 things semantically:
 #1 - tried to get a lock, but someone else has 
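The semantic split sketched above (an uncontended acquire is noise, contention is signal) could be expressed directly in plain Python. A hypothetical sketch using `threading.Lock` rather than oslo's lockutils; the class name and log messages are made up for illustration:

```python
import logging
import threading
import time

LOG = logging.getLogger(__name__)


class ContentionLock:
    """Lock wrapper that only emits DEBUG lines when acquisition
    actually blocks, instead of logging every acquire/release."""

    def __init__(self, name):
        self.name = name
        self._lock = threading.Lock()

    def __enter__(self):
        # Fast path: nobody holds the lock, so say nothing.
        if self._lock.acquire(blocking=False):
            return self
        # Slow path: someone else has it -- this is the semantically
        # interesting event worth a log line.
        start = time.monotonic()
        LOG.debug("lock %s: held by another thread, waiting", self.name)
        self._lock.acquire()
        LOG.debug("lock %s: acquired after %.3fs wait",
                  self.name, time.monotonic() - start)
        return self

    def __exit__(self, *exc):
        self._lock.release()
        return False
```

With something like this, a quiet test run produces no lock chatter at all, and any lock line that does appear points straight at contention.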

Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-26 Thread Rochelle.RochelleGrober
Robert Collins on Friday, September 26, 2014 3:33 PM wrote:
 On 27 September 2014 09:43, Jay Pipes wrote:
  Hi James, thanks for the corrections/explanations. A comment inline (and a
  further question) :)
  Oh, good to know. Sorry, my information about Triple-O's undercloud setup
  is clearly outdated. I thought that the undercloud was built from source
  repositories devstack-style. Thanks for hitting me with a cluestick.
 That's one installation strategy :).
  Even when not installing via a distribution, and either directly from
  trunk or from the integrated release tarballs, I don't know that any
  TripleO opinion enters into it. TripleO uses the integrated projects
  of OpenStack to deploy an overcloud. In an overcloud, you may see
  support for some incubated projects, depending on if there's demand
  from the community for that support.
  OK, interesting. So, in summary, Triple-O really doesn't offer much of a
  "this is production-ready" stamp to anything based on whether it deploys a
  project or not. So, operators who deploy with Triple-O would be in the
  "you're on your own" camp from the bulleted list above. Would that be a
  fair characterization?
 TripleO upstream doesn't offer a production-ready stamp for the
 workload clouds; for deploy clouds we do - simply because you can't
 use non-production-ready services to do your deploy... some of our
 stamps have substantial caveats today (e.g. Heat) - but they are being
 worked on.
 But then Nova upstream doesn't offer production-ready stamps either.
 Are cells production ready? Instance groups? Or generally any new
 feature?
 *distributions* of TripleO offer production-ready stamps.
 RDO offers one
 HP Helion offers one.
 In exactly the same way that distributions offer production stamps
 about Nova, distributions that use TripleO offer production stamps
 about Nova :).
 And I think this is James's point. Your category 2 above saying that
 TripleO is different is just confused: TripleO is a deployment
 architecture [evolving into a set of such], *not* a distribution
 channel. 1 and 3 are distribution channels.

[Rocky] I'd like to make a couple of points. First: how many commercial 
deployers/operators would consider any of OpenStack production ready, and if 
they do, what would that subset actually be? (a little snarky, but not really)

Second, I'd like to point out that DefCore is attempting to provide guidance 
along these lines, though it may be considered a bit stricter than a 
"Production Ready" label.  Then again, it may be less strict, depending on 
test coverage ;-)

Check out the scoring criteria here:

In principle, OpenStack functionality has to have been production tested with 
a fairly large distribution base, along with meeting a number of other 
criteria.  As defined, it would definitely be a subset of "production ready", 
but it would be a reasonably safe subset that could then be built upon.

Beyond the DefCore criteria, RefStack is heading towards the point where any 
operator could run all of Tempest or any selection of Tempest tests against 
his/her installed cloud and view all the results.  They could even save them to 
do trend analysis.
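That kind of trend analysis could be as simple as persisting each run's 
per-test outcomes and comparing pass rates across runs. A hypothetical sketch 
(not RefStack's actual storage format or API; function names are invented):

```python
import json
from pathlib import Path


def record_run(results_dir, run_id, outcomes):
    """Persist one Tempest run's per-test outcomes,
    e.g. {"tempest.api.compute...": "pass" or "fail"}."""
    path = Path(results_dir) / f"{run_id}.json"
    path.write_text(json.dumps(outcomes, sort_keys=True))


def pass_rate_trend(results_dir):
    """Return [(run_id, pass_rate), ...] ordered by run id,
    suitable for plotting a stability trend over time."""
    trend = []
    for path in sorted(Path(results_dir).glob("*.json")):
        outcomes = json.loads(path.read_text())
        passed = sum(1 for v in outcomes.values() if v == "pass")
        trend.append((path.stem, passed / len(outcomes)))
    return trend
```

The point is only that once results are saved per run, the operator can watch 
whether a given selection of tests is getting more or less stable release over 
release.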

But this still begs the question of what "production ready" means, especially 
for open source code.

Perhaps we need the release notes to be very specific about which APIs, 
features and/or projects are new or radically altered in a specific release. 
This could allow operators to install the Juno release but stick to Icehouse 
functionality, which would theoretically be much better than installing the 
Icehouse release itself. The latter is what a lot of shops would do, 
reasoning that Icehouse has been out six months, so the gotchas are known and 
patches have been released to fix the worst issues. Whereas I think most 
people in QA and/or deep in the various projects would say that the 
functionality that was released in Icehouse and is also in Juno would be more 
performant and less buggy in the Juno release.

So the question might really be: how do you get deployers who want greater 
stability to actually get it, by deploying the current release restricted to a 
past release's functionality subset? And how do you communicate that some of 
the past release's functionality is still not production ready?

Hard problem.  

 Robert Collins
 Distinguished Technologist
 HP Converged Cloud
 OpenStack-dev mailing list

Re: [openstack-dev] [tempest] [devstack] Generic scripts for Tempest configuration

2014-10-24 Thread Rochelle.RochelleGrober
Hi, Timur.

Check out [1].  Boris Pavlovic has been working towards what you want for 
more than a full release cycle.  There are still major issues to be 
conquered, but having something that gets us part of the way there, and can 
identify what can't be determined so that the humans have only a subset to 
work out, would be a great first step.

There are also other reviews out there that need to come together to really 
make this work, and projects that would be the better for it (RefStack and 
Rally).  These are: [2] allowing Tempest tests to run as non-admin, [3] 
making Tempest pluggable, and [4] refactoring the client manager to be more 
flexible.

I think some others may have merged already.  The bottom line is to refactor 
Tempest such that there is a test server with the necessary tools and 
components to make it work, and a tempest-lib such that writing tests can 
benefit from common procedures.

Enjoy the reading.



From: Timur Nurlygayanov []
Sent: Friday, October 24, 2014 4:05 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [tempest] [devstack] Generic scripts for Tempest configuration

Hi all,
we are using Tempest tests to verify every change in different OpenStack 
components, and we have scripts in devstack which allow configuring Tempest.
We want to use Tempest tests to verify different clouds, not only clouds 
installed with devstack, and to do this we need to configure Tempest manually 
(or with some non-generic scripts which configure Tempest for a specific lab).
Looks like we can improve the Tempest configuration scripts which we have in 
the devstack repository now and create generic scripts for Tempest, which can 
be used by devstack scripts or manually, to configure Tempest for any 
private/public OpenStack cloud. These scripts should make it easy to 
configure Tempest: the user should provide only the Keystone endpoint and 
logins/passwords; other parameters can be optional and configured 
automatically.

The idea is to have generic scripts which make it possible to configure 
Tempest out of the box, without deep inspection of the lab configuration (but 
with the ability to change optional parameters too, if required).
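As a rough illustration of the "endpoint plus credentials in, tempest.conf 
out" idea, here is a hypothetical sketch. `build_tempest_conf` is a made-up 
name, not the actual devstack tooling, and the `service_available` defaults 
stand in for values that a real script would discover by querying the 
Keystone service catalog:

```python
import configparser


def build_tempest_conf(keystone_url, username, password, project,
                       out_path="tempest.conf"):
    """Write a minimal tempest.conf from just a Keystone endpoint and
    credentials; all other settings would be auto-discovered (omitted
    here for brevity)."""
    conf = configparser.ConfigParser()
    conf["identity"] = {
        "uri_v3": keystone_url,
        "username": username,
        "password": password,
        "project_name": project,
    }
    # In a real script, these flags would come from the service
    # catalog rather than being hard-coded defaults.
    conf["service_available"] = {"nova": "true", "neutron": "true"}
    with open(out_path, "w") as fh:
        conf.write(fh)
    return out_path
```

The operator supplies only the mandatory identity values; everything else is 
optional and filled in automatically, which is exactly the behavior the 
proposal asks for.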


Senior QA Engineer
OpenStack Projects
Mirantis Inc