Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-08-11 Thread Mark McLoughlin
On Fri, 2014-08-08 at 09:06 -0400, Russell Bryant wrote:
 On 08/07/2014 08:06 PM, Michael Still wrote:
  It seems to me that the tension here is that there are groups who
  would really like to use features in newer libvirts that we don't CI
  on in the gate. Is it naive to think that a possible solution here is
  to do the following:
  
   - revert the libvirt version_cap flag
 
 I don't feel strongly either way on this.  It seemed useful at the time
 for being able to decouple upgrading libvirt and enabling features that
 come with that.

Right, I suggested the flag as a more deliberate way of avoiding the
issue that was previously seen in the gate with live snapshots. I still
think it's a pretty elegant and useful little feature, and don't think
we need to use it as a proxy battle over testing requirements for new
libvirt features.

Mark.




Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-08-12 Thread Mark McLoughlin
On Mon, 2014-08-11 at 15:25 -0700, Joe Gordon wrote:
 
 
 
 On Sun, Aug 10, 2014 at 11:59 PM, Mark McLoughlin mar...@redhat.com
 wrote:
 On Fri, 2014-08-08 at 09:06 -0400, Russell Bryant wrote:
  On 08/07/2014 08:06 PM, Michael Still wrote:
   It seems to me that the tension here is that there are
 groups who
   would really like to use features in newer libvirts that
 we don't CI
   on in the gate. Is it naive to think that a possible
 solution here is
   to do the following:
  
- revert the libvirt version_cap flag
 
  I don't feel strongly either way on this.  It seemed useful
 at the time
  for being able to decouple upgrading libvirt and enabling
 features that
  come with that.
 
 
 Right, I suggested the flag as a more deliberate way of
 avoiding the
 issue that was previously seen in the gate with live
 snapshots. I still
 think it's a pretty elegant and useful little feature, and
 don't think
  we need to use it as a proxy battle over testing requirements
 for new
 libvirt features.
 
 
 Mark,
 
 
 I am not sure if I follow.  The gate issue with live snapshots has
 been worked around by turning it off [0], so presumably this patch is
 forward facing.  I fail to see how this patch is needed to help the
 gate in the future.

On the live snapshot issue specifically, we disabled it by requiring
1.3.0 for the feature. With the version cap set to 1.2.2, we won't
automatically enable this code path again if we update to 1.3.0. No
question that's a bit of a mess, though.

The point was a more general one - we learned from the live snapshot
issue that having a libvirt upgrade immediately enable new code paths
was a bad idea. The patch is a simple, elegant way of avoiding that.

  Wouldn't it just delay the issues until we change the version_cap?

Yes, that's the idea. Rather than having to scramble when the new
devstack-gate image shows up, we'd be able to work on any issues in the
context of a patch series to bump the version_cap.
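
To make that mechanism concrete, here's a minimal sketch (not the actual
Nova code - the class and helper names are illustrative) of how a
configured cap can keep a host libvirt upgrade from silently enabling an
untested code path until the cap is deliberately bumped:

    # Illustrative sketch only -- not the real Nova implementation.
    def version_tuple(version_str):
        # '1.2.2' -> (1, 2, 2) for simple comparisons
        return tuple(int(part) for part in version_str.split('.'))

    class CappedDriver(object):
        # Version the live snapshot path needs, per the gate discussion.
        MIN_LIVE_SNAPSHOT = (1, 3, 0)

        def __init__(self, detected_version, version_cap=None):
            self.detected = version_tuple(detected_version)
            self.cap = version_tuple(version_cap) if version_cap else None

        def _effective_version(self):
            # Treat the host libvirt as no newer than the configured cap.
            return min(self.detected, self.cap) if self.cap else self.detected

        def can_live_snapshot(self):
            return self._effective_version() >= self.MIN_LIVE_SNAPSHOT

    # Host upgraded to 1.3.0 but the cap is still 1.2.2: the new path
    # stays off. Raising or removing the cap is the explicit opt-in step.
    assert not CappedDriver('1.3.0', version_cap='1.2.2').can_live_snapshot()
    assert CappedDriver('1.3.0').can_live_snapshot()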

 The issue I see with the libvirt version_cap [1] is best captured in
 its commit message: "The end user can override the limit if they wish
 to opt-in to use of untested features via the 'version_cap' setting in
 the 'libvirt' group." This goes against the very direction nova has
 been moving in for some time now. We have been moving away from
 merging untested (re: no integration testing) features.  This patch
 changes the very direction the project is going in over testing
 without so much as a discussion. While I think it may be time that we
 revisited this discussion, the discussion needs to happen before any
 patches are merged.

You put it well - some apparently see us moving towards a zero-tolerance
policy of not having any code which isn't functionally tested in the
gate. That obviously is not the case right now.

The sentiment is great, but any zero-tolerance policy is dangerous. I'm
very much in favor of discussing this further. We should have some
principles and goals around this, but rather than argue this in the
abstract we should be open to discussing the tradeoffs involved with
individual patches.

 I am less concerned about the contents of this patch, and more
 concerned with how such a big de facto change in nova policy (we
 accept untested code sometimes) was made without any discussion or consensus.
 In your comment on the revert [2], you say the 'whether not-CI-tested
 features should be allowed to be merged' debate is 'clearly
 unresolved.' How did you get to that conclusion? This was never
 brought up in the mid-cycles as an unresolved topic to be discussed. In
 our specs template we say "Is this untestable in gate given current
 limitations (specific hardware / software configurations available)?
 If so, are there mitigation plans (3rd party testing, gate
 enhancements, etc)" [3].  We have been blocking untested features for
 some time now.

Asking "is this tested" in a spec template makes a tonne of sense.
Requiring some thought to be put into mitigation where a feature is
untestable in the gate makes sense. Requiring that the code is tested
where possible makes sense. It's a zero-tolerance "get your code
functionally tested or GTFO" policy that I'm concerned about.

 I am further perplexed by what Daniel Berrange, the patch author,
 meant when he commented [2] "Regardless of the outcome of the testing
 discussion we believe this is a useful feature to have." Who is 'we'?
 Because I don't see how that can be nova-core or even nova-specs-core,
 especially considering how many members of those groups are +2 on the
 revert. So if 'we' is neither of those groups then who is 'we'?

That's for Dan to answer, but I think you're either nitpicking or have a
very serious concern.

If nitpicking, Dan could just be using the Royal 'We' :) Or he could
just mean

[openstack-dev] [nova] Retrospective veto revert policy

2014-08-12 Thread Mark McLoughlin
Hey

(Terrible name for a policy, I know)

From the version_cap saga here:

  https://review.openstack.org/110754

I think we need a better understanding of how to approach situations
like this.

Here's my attempt at documenting what I think we're expecting the
procedure to be:

  https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy

If it sounds reasonably sane, I can propose its addition to the
Development policies doc.

Mark.




Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-08-12 Thread Mark McLoughlin
On Wed, 2014-07-30 at 15:34 -0700, Clark Boylan wrote:
 On Wed, Jul 30, 2014, at 03:23 PM, Jeremy Stanley wrote:
  On 2014-07-30 13:21:10 -0700 (-0700), Joe Gordon wrote:
   While forcing people to move to a newer version of libvirt is
   doable on most environments, do we want to do that now? What is
   the benefit of doing so?
  [...]
  
  The only dog I have in this fight is that using the split-out
  libvirt-python on PyPI means we finally get to run Nova unit tests
  in virtualenvs which aren't built with system-site-packages enabled.
  It's been a long-running headache which I'd like to see eradicated
  everywhere we can. I understand though if we have to go about it
  more slowly, I'm just excited to see it finally within our grasp.
  -- 
  Jeremy Stanley
 
 We aren't quite forcing people to move to newer versions. Only those
 installing nova test-requirements need newer libvirt.

Yeah, I'm a bit confused about the problem here. Is it that people want
to satisfy test-requirements through packages rather than using a
virtualenv?

(i.e. if people just use virtualenvs for unit tests, there's no problem
right?)

If so, is it possible/easy to create new, alternate packages of the
libvirt python bindings (from PyPI) on their own separately from the
libvirt.so and libvirtd packages?

Mark.




Re: [openstack-dev] [Nova] Nominating Jay Pipes for nova-core

2014-08-12 Thread Mark McLoughlin
On Wed, 2014-07-30 at 14:02 -0700, Michael Still wrote:
 Greetings,
 
 I would like to nominate Jay Pipes for the nova-core team.
 
 Jay has been involved with nova for a long time now.  He's previously
 been a nova core, as well as a glance core (and PTL). He's been around
 so long that there are probably other types of core status I have
 missed.
 
 Please respond with +1s or any concerns.

Was away, but +1 for the record. Would have been happy to see this some
time ago.

Mark.




Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Mark McLoughlin
On Tue, 2014-08-05 at 18:03 +0200, Thierry Carrez wrote:
 Hi everyone,
 
 With the incredible growth of OpenStack, our development community is
 facing complex challenges. How we handle those might determine the
 ultimate success or failure of OpenStack.
 
 With this cycle we hit new limits in our processes, tools and cultural
 setup. This resulted in new limiting factors on our overall velocity,
 which is frustrating for developers. This resulted in the burnout of key
 firefighting resources. This resulted in tension between people who try
 to get specific work done and people who try to keep a handle on the big
 picture.

Always fun catching up on threads like this after being away ... :)

I think the thread has revolved around three distinct areas:

  1) The per-project review backlog, its implications for per-project 
 velocity, and ideas for new workflows or tooling

  2) Cross-project scaling issues that get worse as we add more 
 integrated projects

  3) The factors that go into deciding whether a project belongs in the 
 integrated release - including the appropriateness of its scope,
 the soundness of its architecture and how production ready it is.

The first is important - hugely important - but I don't think it has any
bearing on the makeup, scope or contents of the integrated release, but
certainly will have a huge bearing on the success of the release and the
project more generally.

The third strikes me as a part of the natural evolution around how we
think about the integrated release. I don't think there's any particular
crisis or massive urgency here. As the TC considers proposals to
integrate (or de-integrate) projects, we'll continue to work through
this. These debates are contentious enough that we should avoid adding
unnecessary drama to them by conflating the issues with more pressing,
urgent issues.

I think the second area is where we should focus. We're concerned that
we're hitting a breaking point with some cross-project issues - like
release management, the gate, a high level of non-deterministic test
failures, insufficient cross-project collaboration on technical debt
(e.g. via Oslo), difficulty in reaching consensus on new cross-project
initiatives (Sean gave the examples of Group Based Policy and Rally) -
such that drastic measures are required. Like maybe we should not accept
any new integrated projects in this cycle while we work through those
issues.

Digging deeper into that means itemizing these cross-project scaling
issues, figuring out which of them need drastic intervention, discussing
what the intervention might be and the realistic overall effects of
those interventions.

AFAICT, the closest we've come in the thread to that level of detail is
Sean's email here:

  http://lists.openstack.org/pipermail/openstack-dev/2014-August/042277.html

Mark.




Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Mark McLoughlin
On Thu, 2014-08-07 at 09:30 -0400, Sean Dague wrote:

 While I definitely think re-balancing our quality responsibilities back
 into the projects will provide an overall better release, I think it's
 going to take a long time before it lightens our load to the point where
 we get more breathing room again.

I'd love to hear more about this re-balancing idea. It sounds like we
have some concrete ideas here and we're saying they're not relevant to
this thread because they won't be an immediate solution?

 This isn't just QA issues, it's a coordination issue on overall
 consistency across projects. Something that worked fine at 5 integrated
 projects, got strained at 9, and I think is completely untenable at 15.

I can certainly relate to that from experience with Oslo.

But if you take a concrete example - as more new projects emerged, it
became harder to get them all using oslo.messaging and using it in
consistent ways. That's become a lot better with Doug's idea of Oslo
project delegates.

But if we had not added those projects to the release, the only reason
that the problem would be more manageable is that the use of
oslo.messaging would effectively become a requirement for integration.
So, projects requesting integration have to take cross-project
responsibilities more seriously for fear their application would be
denied.

That's a very sad conclusion. Our only tool for encouraging people to
take this cross-project issue seriously is being accepted into the
release and, once achieved, the cross-project responsibilities aren't
taken so seriously?

I don't think it's so bleak as that - given the proper support,
direction and tracking I think we're seeing in Oslo how projects will
play their part in getting to cross-project consistency.

 I think one of the big issues with a large number of projects is that
 implications of implementation of one project impact others, but people
 don't always realize. Locally correct decisions for each project may not
 be globally correct for OpenStack. The GBP discussion, the Rally
 discussion, all are flavors of this.

I think we need two things here - good examples of how these
cross-project initiatives can succeed so people can learn from them, and
for the initiatives themselves to be patiently led by those whose goal
is a cross-project solution.

It's hard work, absolutely no doubt. The point again, though, is that it
is possible to do this type of work in such a way that once a small
number of projects adopt the approach, most of the others will follow
quite naturally.

If I was trying to get a consistent cross-project approach in a
particular area, the least of my concerns would be whether Ironic,
Marconi, Barbican or Designate would be willing to fall in line behind a
cross-project consensus.

 People are frustrated in infra load, for instance. It's probably worth
 noting that the 'config' repo currently has more commits landed than any
 other project in OpenStack besides 'nova' in this release. It has 30%
 the core team size as Nova (http://stackalytics.com/?metric=commits).

Yes, infra is an extremely busy project. I'm not sure I'd compare
infra/config commits to Nova commits in order to illustrate that,
though.

Infra is a massive endeavor; it's as critical a part of the project as
any project in the integrated release, and like other strategic
efforts struggles to attract contributors from as diverse a number of
companies as the integrated projects.

 So I do think we need to really think about what *must* be in OpenStack
 for it to be successful, and ensure that story is well thought out, and
 that the pieces which provide those features in OpenStack are clearly
 best of breed, so they are deployed in all OpenStack deployments, and
 can be counted on by users of OpenStack.

I do think we try hard to think this through, but no doubt we need to do
better. Is this conversation concrete enough to really move our thinking
along sufficiently, though?

 Because if every version of
 OpenStack deploys with a different Auth API (an example that's current
 but going away), we can't grow an ecosystem of tools around it.

There's a nice concrete example, but it's going away? What's the best
current example to talk through?

 This is organic definition of OpenStack through feedback with operators
 and developers on what's minimum needed and currently working well
 enough that people are happy to maintain it. And make that solid.
 
 Having a TC that is independently selected separate from the PTLs allows
 that group to try to make some holistic calls here.
 
 At the end of the day, that's probably going to mean saying No to more
 things. Everytime I turn around everyone wants the TC to say No to
 things, just not to their particular thing. :) Which is human nature.
 But I think if we don't start saying No to more things we're going to
 end up with a pile of mud that no one is happy with.

That we're being so abstract about all of this is frustrating. I get
that no-one wants to start a 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Mark McLoughlin
On Fri, 2014-08-08 at 15:36 -0700, Devananda van der Veen wrote:
 On Tue, Aug 5, 2014 at 10:02 AM, Monty Taylor mord...@inaugust.com wrote:

  Yes.
 
  Additionally, and I think we've been getting better at this in the 2 cycles
  that we've had an all-elected TC, I think we need to learn how to say no on
  technical merit - and we need to learn how to say "thank you for your
  effort, but this isn't working out". Breaking up with someone is hard to do,
  but sometimes it's best for everyone involved.
 
 
 I agree.
 
 The challenge is scaling the technical assessment of projects. We're
 all busy, and digging deeply enough into a new project to make an
 accurate assessment of it is time consuming. Some times, there are
 impartial subject-matter experts who can spot problems very quickly,
 but how do we actually gauge fitness?

Yes, it's important the TC does this and it's obvious we need to get a
lot better at it.

The Marconi architecture threads are an example of us trying harder (and
kudos to you for taking the time), but it's a little disappointing how
it has turned out. On the one hand there's what seems like a "this
doesn't make any sense" gut feeling and on the other hand an earnest,
but hardly bite-sized justification for how the API was chosen and how
it led to the architecture. Frustrating that it appears to not be
resulting in either improved shared understanding, or improved
architecture. Yet everyone is trying really hard.

 Letting the industry field-test a project and feed their experience
 back into the community is a slow process, but that is the best
 measure of a project's success. I seem to recall this being an
 implicit expectation a few years ago, but haven't seen it discussed in
 a while.

I think I recall us discussing a "must have feedback that it's
successfully deployed" requirement in the last cycle, but we recognized
that deployers often wait until a project is integrated.

 I'm not suggesting we make a policy of it, but if, after a
 few cycles, a project is still not meeting the needs of users, I think
 that's a very good reason to free up the hold on that role within the
 stack so other projects can try and fill it (assuming that is even a
 role we would want filled).

I'm certainly not against discussing de-integration proposals. But I
could imagine a case for de-integrating every single one of our
integrated projects. None of our software is perfect. How do we make
sure we approach this sanely, rather than run the risk of someone
starting a witch hunt because of a particular pet peeve?

I could imagine a really useful dashboard showing the current state of
projects along a bunch of different lines - summary of latest
deployments data from the user survey, links to known scalability
issues, limitations that operators should take into account, some
capturing of trends so we know whether things are improving. All of this
data would be useful to the TC, but also hugely useful to operators.

Mark.




Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Mark McLoughlin
On Tue, 2014-08-12 at 14:26 -0400, Eoghan Glynn wrote:
   It seems like this is exactly what the slots give us, though. The core review
  team picks a number of slots indicating how much work they think they can
  actually do (less than the available number of blueprints), and then
  blueprints queue up to get a slot based on priorities and turnaround time
  and other criteria that try to make slot allocation fair. By having the
  slots, not only is the review priority communicated to the review team, it
  is also communicated to anyone watching the project.
 
 One thing I'm not seeing shine through in this discussion of slots is
 whether any notion of individual cores, or small subsets of the core
 team with aligned interests, can champion blueprints that they have
 a particular interest in.
 
 For example it might address some pain-point they've encountered, or
 impact on some functional area that they themselves have worked on in
 the past, or line up with their thinking on some architectural point.
 
 But for whatever motivation, such small groups of cores currently have
 the freedom to self-organize in a fairly emergent way and champion
 individual BPs that are important to them, simply by *independently*
 giving those BPs review attention.
 
 Whereas under the slots initiative, presumably this power would be
 subsumed by the group will, as expressed by the prioritization
 applied to the holding pattern feeding the runways?
 
 I'm not saying this is good or bad, just pointing out a change that
 we should have our eyes open to.

Yeah, I'm really nervous about that aspect.

Say a contributor proposes a new feature, a couple of core reviewers
think it's important and exciting enough for them to champion it but somehow
the 'group will' is that it's not a high enough priority for this
release, even if everyone agrees that it is actually cool and useful.

What does imposing that 'group will' on the two core reviewers and
contributor achieve? That the contributor and reviewers will happily
turn their attention to some of the higher priority work? Or we lose a
contributor and two reviewers because they feel disenfranchised?
Probably somewhere in the middle.

On the other hand, what happens if work proceeds ahead even if not
deemed a high priority? I don't think we can say that the contributor
and two core reviewers were distracted from higher priority work,
because blocking this work is probably unlikely to shift their focus in
a productive way. Perhaps other reviewers are distracted because they
feel the work needs more oversight than just the two core reviewers? It
places more of a burden on the gate?

I dunno ... the consequences of imposing group will worry me more than
the consequences of allowing small groups to self-organize like this.

Mark.




Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Mark McLoughlin
On Tue, 2014-08-12 at 14:12 -0700, Joe Gordon wrote:


 Here is the full nova proposal on  Blueprint in Kilo: Runways and
 Project Priorities
  
 https://review.openstack.org/#/c/112733/
 http://docs-draft.openstack.org/33/112733/4/check/gate-nova-docs/5f38603/doc/build/html/devref/runways.html

Thanks again for doing this.

Four points in the discussion jump out at me. Let's see if I can
paraphrase without misrepresenting :)

  - ttx - we need tools to be able to visualize these runways

  - danpb - the real problem here is that we don't have good tools to 
help reviewers maintain a todo list which feeds, in part, off 
blueprint prioritization

  - eglynn - what are the implications for our current ability for 
groups within the project to self-organize?

  - russellb - why is different from reviewers sponsoring blueprints, 
how will it work better?


I've been struggling to articulate a tooling idea for a while now. Let
me try again based on the runways idea and the thoughts above ...


When a reviewer sits down to do some reviews, their goal should be to
work through the small number of runways they're signed up to and drive
the list of reviews that need their attention to zero.

Reviewers should be able to create their own runways and allow others
sign up to them.

The reviewers responsible for that runway are responsible for pulling
new reviews from explicitly defined feeder runways.

Some feeder runways could be automated; no more than a search query for,
say, new libvirt patches which aren't already in the libvirt driver
runway.

All of this activity should be visible to everyone. It should be
possible to look at all the runways, see what runways a patch is in,
understand the flow between runways, etc.
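
To make the shape of that a little more concrete, here's a purely
hypothetical sketch of the kind of data model I have in mind - the class,
the fields and the Gerrit query string are all illustrative, not a design:

    # Hypothetical sketch of the runway idea; names are made up.
    class Runway(object):
        def __init__(self, name, reviewers, feeder_query=None):
            self.name = name
            self.reviewers = set(reviewers)   # reviewers signed up to this runway
            self.feeder_query = feeder_query  # e.g. a Gerrit search string
            self.reviews = []                 # changes currently in the runway

        def pull_from_feeder(self, search):
            # 'search' is whatever runs a Gerrit query and returns change ids.
            if self.feeder_query:
                new = [c for c in search(self.feeder_query)
                       if c not in self.reviews]
                self.reviews.extend(new)

    # An automated feeder: open libvirt patches not already in the runway.
    libvirt_runway = Runway(
        name='libvirt-driver',
        reviewers=['reviewer-a', 'reviewer-b'],
        feeder_query='project:openstack/nova file:^nova/virt/libvirt/.* status:open')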


There's a lot of detail that would have to be worked out, but I'm pretty
convinced there's an opportunity to carve up the review backlog, empower
people to help out with managing the backlog, give reviewers manageable
queues for them to stay on top of, help ensure that project prioritization
is one of the drivers of reviewer activity and increase contributor
visibility into how decisions are made.

Mark.





Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-08-13 Thread Mark McLoughlin
On Wed, 2014-08-13 at 10:26 +0100, Daniel P. Berrange wrote:
 On Tue, Aug 12, 2014 at 10:09:52PM +0100, Mark McLoughlin wrote:
  On Wed, 2014-07-30 at 15:34 -0700, Clark Boylan wrote:
   On Wed, Jul 30, 2014, at 03:23 PM, Jeremy Stanley wrote:
On 2014-07-30 13:21:10 -0700 (-0700), Joe Gordon wrote:
 While forcing people to move to a newer version of libvirt is
 doable on most environments, do we want to do that now? What is
 the benefit of doing so?
[...]

The only dog I have in this fight is that using the split-out
libvirt-python on PyPI means we finally get to run Nova unit tests
in virtualenvs which aren't built with system-site-packages enabled.
It's been a long-running headache which I'd like to see eradicated
everywhere we can. I understand though if we have to go about it
more slowly, I'm just excited to see it finally within our grasp.
-- 
Jeremy Stanley
   
   We aren't quite forcing people to move to newer versions. Only those
   installing nova test-requirements need newer libvirt.
  
  Yeah, I'm a bit confused about the problem here. Is it that people want
  to satisfy test-requirements through packages rather than using a
  virtualenv?
  
  (i.e. if people just use virtualenvs for unit tests, there's no problem
  right?)
  
  If so, is it possible/easy to create new, alternate packages of the
  libvirt python bindings (from PyPI) on their own separately from the
  libvirt.so and libvirtd packages?
 
 The libvirt python API is (mostly) automatically generated from a
 description of the XML that is built from the C source files. In
  tree we have fakelibvirt which is a semi-crappy attempt to provide
 a pure python libvirt client API with the same signature. IIUC, what
 you are saying is that we should get a better fakelibvirt that is
  truly identical with the same API coverage/signatures as real libvirt?

No, I'm saying that people are installing packaged versions of recent
releases of python libraries. But they're skeptical about upgrading
their libvirt packages. With the work done to enable libvirt-python to be
uploaded to PyPI, can't the two be decoupled? Can't we have packaged versions of
the recent python bindings on PyPI that are independent of the base
packages containing libvirt.so and libvirtd?

(Or I could be completely misunderstanding the issue people are seeing)

Mark.




Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-08-13 Thread Mark McLoughlin
On Tue, 2014-07-29 at 14:04 +0200, Thierry Carrez wrote:
 Ihar Hrachyshka wrote:
  On 29/07/14 12:15, Daniel P. Berrange wrote:
  Looking at the current review backlog I think that we have to
  seriously question whether our stable branch review process in
  Nova is working to an acceptable level
  
  On Havana
  
- 43 patches pending
- 19 patches with a single +2
- 1 patch with a -1
    - 0 patches with a -2
- Stalest waiting 111 days since most recent patch upload
- Oldest waiting 250 days since first patch upload
- 26 patches waiting more than 1 month since most recent upload
- 40 patches waiting more than 1 month since first upload
  
  On Icehouse:
  
- 45 patches pending
- 17 patches with a single +2
- 4 patches with a -1
- 1 patch with a -2
- Stalest waiting 84 days since most recent patch upload
- Oldest waiting 88 days since first patch upload
- 10 patches waiting more than 1 month since most recent upload
- 29 patches waiting more than 1 month since first upload
  
  I think those stats paint a pretty poor picture of our stable branch
  review process, particularly Havana.
  
  It should not take us 250 days for our review team to figure out whether
  a patch is suitable material for a stable branch, nor should we have
  nearly all the patches waiting more than 1 month in Havana.
  
  These branches are not getting sufficient reviewer attention and we need
  to take steps to fix that.
  
  If I had to set a benchmark, assuming CI passes, I'd expect us to either
  approve or reject submissions for stable within a 2 week window in the
  common case, 1 month at the worst case.
  
  Totally agreed.
 
 A bit of history.
 
 At the dawn of time there were no OpenStack stable branches, each
 distribution was maintaining its own stable branches, duplicating the
 backporting work.

I'm not sure how much backporting was going on at the time of the Essex
summit. I'm sure Ubuntu had some backports, but that was probably about
it?

  At some point it was suggested (mostly by RedHat and
 Canonical folks) that there should be collaboration around that task,
 and the OpenStack project decided to set up official stable branches
 where all distributions could share the backporting work. The stable
 team group was seeded with package maintainers from all over the distro
 world.

During that first design summit session, it was mainly you, me and
Daviey discussing. Both you and Daviey saw this primarily about distros
collaborating, but I never saw it that way.

I don't see how any self-respecting open-source project can throw a
release over the wall and have no ability to address critical bugs with
that release until the next release 6 months later which will also
include a bunch of new feature work with new bugs. That's not a distro
maintainer point of view.

At that Essex summit, we were lamenting how many critical bugs in Nova
had been discovered shortly after the Diablo release. Our inability to
do a bugfix release of Nova for Diablo seemed like a huge problem to me.

 So these branches originally only exist as a convenient place to
 collaborate on backporting work. This is completely separate from
 development work, even if those days backports are often proposed by
 developers themselves. The stable branch team is separate from the rest
 of OpenStack teams. We have always been very clear that if the stable
 branches are no longer maintained (i.e. if the distributions don't see
 the value of those anymore), then we'll consider removing them. We, as a
 project, only signed up to support those as long as the distros wanted them.

You can certainly argue that the project never signed up for the
responsibility. I don't see it that way, but there was certainly always
a debate whether this was the project taking responsibility for bugfix
releases or whether it was just downstream distros collaborating.

The thing about branches going away if they're not maintained isn't
anything unusual. If *any* effort within the project becomes so
unmaintained due to a lack of interest such that we can't stand over it,
then we should consider retiring it.

 We have been adding new members to the stable branch teams recently, but
 those tend to come from development teams rather than downstream
 distributions, and that starts to bend the original landscape.
 Basically, the stable branch needs to be very conservative to be a
 source of safe updates -- downstream distributions understand the need
 to weigh the benefit of the patch vs. the disruption it may cause.
 Developers have another type of incentive, which is to get the fix they
 worked on into stable releases, without necessarily being very
 conservative. Adding more -core people to the stable team to compensate
 the absence of distro maintainers will ultimately kill those branches.

That's quite a leap to say that -core team members will be so incapable
of the appropriate level of conservatism that the branch will be 

Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

2014-08-13 Thread Mark McLoughlin
On Wed, 2014-08-13 at 12:05 -0700, James E. Blair wrote:
 cor...@inaugust.com (James E. Blair) writes:
 
  Sean Dague s...@dague.net writes:
 
  This has all gone far enough that someone actually wrote a Grease Monkey
  script to purge all the 3rd Party CI content out of Jenkins UI. People
  are writing mail filters to dump all the notifications. Dan Berange
  filters all them out of his gerrit query tools.
 
  I should also mention that there is a pending change to do something
  similar via site-local Javascript in our Gerrit:
 
https://review.openstack.org/#/c/95743/
 
  I don't think it's an ideal long-term solution, but if it works, we may
  have some immediate relief without all having to install greasemonkey
  scripts.
 
 You may have noticed that this has merged, along with a further change
 that shows the latest results in a table format.  (You may need to
 force-reload in your browser to see the change.)

Beautiful! Thank you so much to everyone involved.

Mark.




Re: [openstack-dev] [nova] Retrospective veto revert policy

2014-08-14 Thread Mark McLoughlin
On Tue, 2014-08-12 at 15:56 +0100, Mark McLoughlin wrote:
 Hey
 
 (Terrible name for a policy, I know)
 
 From the version_cap saga here:
 
   https://review.openstack.org/110754
 
 I think we need a better understanding of how to approach situations
 like this.
 
 Here's my attempt at documenting what I think we're expecting the
 procedure to be:
 
   https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy
 
 If it sounds reasonably sane, I can propose its addition to the
 Development policies doc.

(In the spirit of we really need to step back and laugh at ourselves
sometimes ... )

Two years ago, we were worried about patches getting merged in less than
2 hours and had a discussion about imposing a minimum review time. How
times have changed! Is it even possible to land a patch in less than two
hours now? :)

Looking back over the thread, this part stopped me in my tracks:

  https://lists.launchpad.net/openstack/msg08625.html

On Tue, Mar 13, 2012, Mark McLoughlin markmc@xx wrote:

 Sometimes there can be a few folks working through an issue together and
 the patch gets pushed and approved so quickly that no-one else gets a
 chance to review.

Everyone has an opportunity to review even after a patch gets merged.

JE

It's not quite perfect, but if you squint you could conclude that
Johannes and I have both completely reversed our opinions in the
intervening two years :)

The lesson I take from that is to not get too caught up in the current
moment. We're growing and evolving rapidly. If we assume everyone is
acting in good faith, and allow each other to debate earnestly without
feelings getting hurt ... we should be able to work through anything.

Now, back on topic - digging through that thread, it doesn't seem we
settled on the idea of "we can just revert it later if someone has an
objection" in this thread. Does anyone recall when that idea first came
up?

Thanks,
Mark.




Re: [openstack-dev] [nova] Retrospective veto revert policy

2014-08-14 Thread Mark McLoughlin
On Tue, 2014-08-12 at 15:56 +0100, Mark McLoughlin wrote:
 Hey
 
 (Terrible name for a policy, I know)
 
 From the version_cap saga here:
 
   https://review.openstack.org/110754
 
 I think we need a better understanding of how to approach situations
 like this.
 
 Here's my attempt at documenting what I think we're expecting the
 procedure to be:
 
   https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy
 
 If it sounds reasonably sane, I can propose its addition to the
 Development policies doc.

Proposed here: https://review.openstack.org/114188

Thanks,
Mark.




Re: [openstack-dev] [all] The future of the integrated release

2014-08-18 Thread Mark McLoughlin
On Mon, 2014-08-18 at 14:23 +0200, Thierry Carrez wrote:
 Clint Byrum wrote:
  Here's why folk are questioning Ceilometer:
  
  Nova is a set of tools to abstract virtualization implementations.
  Neutron is a set of tools to abstract SDN/NFV implementations.
  Cinder is a set of tools to abstract block-device implementations.
  Trove is a set of tools to simplify consumption of existing databases.
  Sahara is a set of tools to simplify Hadoop consumption.
  Swift is a feature-complete implementation of object storage, none of
  which existed when it was started.
  Keystone supports all of the above, unifying their auth.
  Horizon supports all of the above, unifying their GUI.
  
  Ceilometer is a complete implementation of data collection and alerting.
  There is no shortage of implementations that exist already.
  
  I'm also core on two projects that are getting some push back these
  days:
  
  Heat is a complete implementation of orchestration. There are at least a
  few of these already in existence, though not as many as there are data
  collection and alerting systems.
  
  TripleO is an attempt to deploy OpenStack using tools that OpenStack
  provides. There are already quite a few other tools that _can_ deploy
  OpenStack, so it stands to reason that people will question why we
  don't just use those. It is my hope we'll push more into the unifying
  the implementations space and withdraw a bit from the implementing
  stuff space.
  
  So, you see, people are happy to unify around a single abstraction, but
  not so much around a brand new implementation of things that already
  exist.
 
 Right, most projects focus on providing abstraction above
 implementations, and that abstraction is where the real domain
 expertise of OpenStack should be (because no one else is going to do it
 for us). Every time we reinvent something, we are at larger risk because
 we are out of our common specialty, and we just may not be as good as
 the domain specialists. That doesn't mean we should never reinvent
 something, but we need to be damn sure it's a good idea before we do.
 It's sometimes less fun to piggyback on existing implementations, but if
 they exist that's probably what we should do.

It's certainly a valid angle to evaluate projects on, but it's also easy
to be overly reductive about it - e.g. that rather than re-implement
virtualization management, Nova should just be a thin abstraction over
vSphere, XenServer and oVirt.

To take that example, I don't think we as a project should be afraid of
having such discussions but it wouldn't be productive to frame that
conversation as "the sky is falling, Nova re-implements the wheel, we
should de-integrate it".

 While Ceilometer is far from alone in that space, what sets it apart is
 that even after it was blessed by the TC as the one we should all
 converge on, we keep on seeing competing implementations for some (if
 not all) of its scope. Convergence did not happen, and without
 convergence we struggle in adoption. We need to understand why, and if
 this is fixable.

"Convergence did not happen" is a little unfair. It's certainly a busy
space, and things like Monasca and InfluxDB are new developments. I'm
impressed at how hard the Ceilometer team works to embrace such
developments and patiently talks through possibilities for convergence.
This attitude is something we should be applauding in an integrated
project.

Mark.




Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-22 Thread Mark McLoughlin
On Fri, 2014-08-22 at 11:01 -0400, Zane Bitter wrote:

 I don't see that as something the wider OpenStack community needs to 
 dictate. We have a heavyweight election process for PTLs once every 
 cycle because that used to be the process for electing the TC. Now that 
 it no longer serves this dual purpose, PTL elections have outlived their 
 usefulness.
 
 If projects want to have a designated tech lead, let them. If they want 
 to have the lead elected in a form of representative democracy, let 
 them. But there's no need to impose that process on every project. If 
 they want to rotate the tech lead every week instead of every 6 months, 
 why not let them? We'll soon see from experimentation which models work. 
  Let a thousand flowers bloom, &c.

I like the idea of projects being free to experiment with their
governance rather than the TC mandating detailed governance models from
above.

But I also like the way Thierry is taking a trend we're seeing work out
well across multiple projects, and generalizing it. If individual
projects are to adopt explicit PTL duty delegation, then all the better
if those projects adopt it in similar ways.

i.e. this should turn out to be an optional best practice model that
projects can choose to adopt, in much the way the *-specs repo idea took
hold.

Mark.




Re: [openstack-dev] [oslo] Launchpad tracking of oslo projects

2014-08-26 Thread Mark McLoughlin
On Fri, 2014-08-22 at 11:59 +0200, Thierry Carrez wrote:
 TL;DR:
 Let's create an Oslo projectgroup in Launchpad to track work across all
 Oslo libraries. In library projects, let's use milestones connected to
 published versions rather than the common milestones.

Sounds good to me, Thierry. Thanks for the thoughtful proposal.

The part about using integrated release milestones was more about
highlighting that we follow a similar development model and cadence -
i.e. it's helpful from a planning perspective to predict whether a given
feature is likely to land in juno-1, juno-2 or juno-3. When it comes to
release time, though, I'd much rather have a launchpad milestone that
reflects the release itself rather than the development milestone. 

Sounds like we need to choose between using launchpad milestones for
planning or releases, and choosing the latter makes sense to me.

Mark.




Re: [openstack-dev] [oslo] usage patterns for oslo.config

2014-08-26 Thread Mark McLoughlin
On Mon, 2014-08-11 at 15:06 -0400, Doug Hellmann wrote:
 On Aug 8, 2014, at 7:22 PM, Devananda van der Veen devananda@gmail.com 
 wrote:
 
  On Fri, Aug 8, 2014 at 12:41 PM, Doug Hellmann d...@doughellmann.com 
  wrote:
  
  That’s right. The preferred approach is to put the register_opt() in
  *runtime* code somewhere before the option will be used. That might be in
  the constructor for a class that uses an option, for example, as described
  in
  http://docs.openstack.org/developer/oslo.config/cfg.html#registering-options
  
  Doug
  
  Interesting.
  
  I've been following the prevailing example in Nova, which is to
  register opts at the top of a module, immediately after defining them.
  Is there a situation in which one approach is better than the other?
 
 The approach used in Nova is the “old” way of doing it. It works, but
 assumes that all of the application code is modifying a global
 configuration object. The runtime approach allows you to pass a
 configuration object to a library, which makes it easier to mock the
 configuration for testing and avoids having the configuration options
 bleed into the public API of the library. We’ve started using the
 runtime approach in new Oslo libraries that have configuration
 options, but changing the implementation in existing application code
 isn’t strictly necessary.

I've been meaning to dig up some of the old threads and reviews to
document how we got here.

But briefly:

  * this global CONF variable originates from the gflags FLAGS variable 
in Nova before oslo.config

  * I was initially determined to get rid of any global variable use 
and did a lot of work to allow glance to use oslo.config without a
global variable

  * one example detail of this work - when you use paste.deploy to 
load an app, you have no ability to pass a config object 
through paste.deploy to the app. I wrote a little helper that 
used a thread-local variable to mimic this pass-through.

  * with glance done, I moved on to making keystone use oslo.config and 
initially didn't use the global variable. Then I ran into a veto 
from termie who felt very strongly that a global variable should be 
used.

  * in the end, I bought the argument that the use of a global variable 
was pretty deeply ingrained (especially in Nova) and that we should 
aim for consistent coding patterns across projects (i.e. Oslo 
shouldn't be just about shared code, but also shared patterns). The 
only realistic standard pattern we could hope for was the use of 
the global variable.

  * with that agreed, we reverted glance back to using a global 
variable and all projects followed suit

  * the case of libraries is different IMO - we'd be foolish to design 
APIs which lock us into using the global object

So ... I wouldn't quite agree that this is the "new way" vs the "old
way", but I think it would be reasonable to re-open the discussion about
using the global object in our applications. Perhaps, at least, we could
reduce our dependence on it.
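
To make the two patterns concrete, here's a minimal sketch - the option
and class names are made up - of import-time registration against the
global CONF versus runtime registration against a conf object that is
passed in:

    from oslo.config import cfg

    opts = [cfg.StrOpt('helper_url', default='http://localhost/')]

    # (a) "Nova style": register at import time against the global object.
    CONF = cfg.CONF
    CONF.register_opts(opts, group='example')

    class GlobalStyle(object):
        def url(self):
            return CONF.example.helper_url

    # (b) Library style: register at runtime on whatever conf is passed in.
    # The option stays out of the library's public API and tests can use a
    # private ConfigOpts instead of mutating the global one.
    class InjectedStyle(object):
        def __init__(self, conf):
            self.conf = conf
            self.conf.register_opts(opts, group='example')

        def url(self):
            return self.conf.example.helper_url

    lib = InjectedStyle(cfg.ConfigOpts())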

Oh look, we have a FAQ on this:

https://wiki.openstack.org/wiki/Oslo#Why_does_oslo.config_have_a_CONF_object.3F_Global_object_SUCK.21

Mark.





Re: [openstack-dev] [oslo] usage patterns for oslo.config

2014-08-27 Thread Mark McLoughlin
On Tue, 2014-08-26 at 10:00 -0400, Doug Hellmann wrote:
 On Aug 26, 2014, at 6:30 AM, Mark McLoughlin mar...@redhat.com wrote:
 
  On Mon, 2014-08-11 at 15:06 -0400, Doug Hellmann wrote:
  On Aug 8, 2014, at 7:22 PM, Devananda van der Veen 
  devananda@gmail.com wrote:
  
  On Fri, Aug 8, 2014 at 12:41 PM, Doug Hellmann d...@doughellmann.com 
  wrote:
  
  That’s right. The preferred approach is to put the register_opt() in
  *runtime* code somewhere before the option will be used. That might be in
  the constructor for a class that uses an option, for example, as 
  described
  in
  http://docs.openstack.org/developer/oslo.config/cfg.html#registering-options
  
  Doug
  
  Interesting.
  
  I've been following the prevailing example in Nova, which is to
  register opts at the top of a module, immediately after defining them.
  Is there a situation in which one approach is better than the other?
  
  The approach used in Nova is the “old” way of doing it. It works, but
  assumes that all of the application code is modifying a global
  configuration object. The runtime approach allows you to pass a
  configuration object to a library, which makes it easier to mock the
  configuration for testing and avoids having the configuration options
  bleed into the public API of the library. We’ve started using the
  runtime approach in new Oslo libraries that have configuration
  options, but changing the implementation in existing application code
  isn’t strictly necessary.
  
  I've been meaning to dig up some of the old threads and reviews to
  document how we got here.
  
  But briefly:
  
   * this global CONF variable originates from the gflags FLAGS variable 
 in Nova before oslo.config
  
   * I was initially determined to get rid of any global variable use 
 and did a lot of work to allow glance to use oslo.config without a
 global variable
  
   * one example detail of this work - when you use paste.deploy to 
 load an app, you have no ability to pass a config object 
 through paste.deploy to the app. I wrote a little helper that 
 used a thread-local variable to mimic this pass-through.
  
   * with glance done, I moved on to making keystone use oslo.config and 
 initially didn't use the global variable. Then I ran into a veto 
 from termie who felt very strongly that a global variable should be 
 used.
  
   * in the end, I bought the argument that the use of a global variable 
 was pretty deeply ingrained (especially in Nova) and that we should 
 aim for consistent coding patterns across projects (i.e. Oslo 
 shouldn't be just about shared code, but also shared patterns). The 
 only realistic standard pattern we could hope for was the use of 
 the global variable.
  
   * with that agreed, we reverted glance back to using a global 
 variable and all projects followed suit
  
   * the case of libraries is different IMO - we'd be foolish to design 
 APIs which lock us into using the global object
  
  So ... I wouldn't quite agree that this is the new way vs the old
  way, but I think it would be reasonable to re-open the discussion about
  using the global object in our applications. Perhaps, at least, we could
  reduce our dependence on it.
 
 The aspect I was calling “old” was the “register options at import
 time” pattern, not the use of a global. Whether we use a global or
 not, registering options at runtime in a code path that will be using
 them is better than relying on import ordering to ensure options are
 registered before they are used.

I don't think I've seen code (except for obscure cases) which uses the
CONF global directly (as opposed to being passed CONF as a parameter)
but doesn't register the options at import time.

Mark.




Re: [openstack-dev] [oslo.messaging] Request to include AMQP 1.0 support in Juno-3

2014-08-28 Thread Mark McLoughlin
On Thu, 2014-08-28 at 13:24 +0200, Flavio Percoco wrote:
 On 08/27/2014 03:35 PM, Ken Giusti wrote:
  Hi All,
  
  I believe Juno-3 is our last chance to get this feature [1] included
  into olso.messaging.
  
  I honestly believe this patch is about as low risk as possible for a
  change that introduces a whole new transport into oslo.messaging.  The
  patch shouldn't affect the existing transports at all, and doesn't
  come into play unless the application specifically turns on the new
  'amqp' transport, which won't be the case for existing applications.
  
  The patch includes a set of functional tests which exercise all the
  messaging patterns, timeouts, and even broker failover. These tests do
  not mock out any part of the driver - a simple test broker is included
  which allows the full driver codepath to be executed and verified.
  
   AFAIK, the only remaining technical block to adding this feature,
  aside from core reviews [2], is sufficient infrastructure test coverage.
  We discussed this a bit at the last design summit.  The root of the
  issue is that this feature is dependent on a platform-specific library
  (proton) that isn't in the base repos for most of the CI platforms.
  But it is available via EPEL, and the Apache QPID team is actively
  working towards getting the packages into Debian (a PPA is available
  in the meantime).
  
  In the interim I've proposed a non-voting CI check job that will
  sanity check the new driver on EPEL based systems [3].  I'm also
  working towards adding devstack support [4], which won't be done in
  time for Juno but nevertheless I'm making it happen.
  
  I fear that this feature's inclusion is stuck in a chicken/egg
  deadlock: the driver won't get merged until there is CI support, but
  the CI support won't run correctly (and probably won't get merged)
  until the driver is available.  The driver really has to be merged
  first, before I can continue with CI/devstack development.
  
  [1] 
  https://blueprints.launchpad.net/oslo.messaging/+spec/amqp10-driver-implementation
  [2] https://review.openstack.org/#/c/75815/
  [3] https://review.openstack.org/#/c/115752/
  [4] https://review.openstack.org/#/c/109118/
 
 
 Hi Ken,
 
 Thanks a lot for your hard work here. As I stated in my last comment on
 the driver's review, I think we should let this driver land and let
 future patches improve it where/when needed.
 
 I agreed on letting the driver land as-is based on the fact that there
 are patches already submitted ready to enable the gates for this driver.

I feel bad that the driver has been in a pretty complete state for quite
a while but hasn't received a whole lot of reviews. There's a lot of
promise to this idea, so it would be ideal if we could unblock it.

One thing I've been meaning to do this cycle is add concrete advice for
operators on the state of each driver. I think we'd be a lot more
comfortable merging this in Juno if we could somehow make it clear to
operators that it's experimental right now. My idea was:

  - Write up some notes which discuss the state of each driver, e.g.

  - RabbitMQ - the default, used by the majority of OpenStack 
deployments, perhaps list some of the known bugs, particularly 
around HA.

  - Qpid - suitable for production, but used in a limited number of 
deployments. Again, list known issues. Mention that it will 
probably be removed as the amqp10 driver matures.

  - Proton/AMQP 1.0 - experimental, in active development, will
support  multiple brokers and topologies, perhaps a pointer to a
wiki page with the current TODO list

  - ZeroMQ - unmaintained and deprecated, planned for removal in
Kilo

  - Propose this addition to the API docs and ask the operators list 
for feedback

  - Propose a patch which adds a load-time deprecation warning to the 
ZeroMQ driver

  - Include a load-time experimental warning in the proton driver

Thoughts on that?

(I understand the ZeroMQ situation needs further discussion - I don't
think that's on-topic for the thread, I was just using it as an example of
what kind of advice we'd be giving in these docs)
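
For context, "turning on" the driver from an application's point of view
is just a matter of the transport URL scheme - here's a rough sketch,
where the 'amqp' scheme is the one Ken describes and the broker host and
credentials are just placeholders:

    from oslo.config import cfg
    from oslo import messaging

    # Placeholder URL -- the scheme selects the driver, the rest is
    # deployment-specific. Deployments that never use an amqp:// URL
    # are untouched.
    transport = messaging.get_transport(
        cfg.CONF, url='amqp://guest:guest@broker.example.com:5672//')

    client = messaging.RPCClient(transport,
                                 messaging.Target(topic='demo-topic'))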

Mark.




[openstack-dev] [nova] libvirt version_cap, a postmortem

2014-08-30 Thread Mark McLoughlin

Hey

The libvirt version_cap debacle continues to come up in conversation and
one perception of the whole thing appears to be:

  A controversial patch was ninjaed by three Red Hat nova-cores and 
  then the same individuals piled on with -2s when a revert was proposed
  to allow further discussion.

I hope it's clear to everyone why that's a pretty painful thing to hear.
However, I do see that I didn't behave perfectly here. I apologize for
that.

In order to understand where this perception came from, I've gone back
over the discussions spread across gerrit and the mailing list in order
to piece together a precise timeline. I've appended that below.

Some conclusions I draw from that tedious exercise:

 - Some people came at this from the perspective that we already have 
   a firm, unwritten policy that all code must have functional written 
   tests. Others see "test all the things" as a worthy
    aspiration, but only one of a number of nuanced factors
    that need to be taken into account when considering the addition of
   a new feature.

   i.e. the former camp saw Dan Smith's devref addition as attempting 
   to document an existing policy (perhaps even a more forgiving 
   version of an existing policy), whereas others see it as a dramatic
   shift to a draconian implementation of "test all the things".

 - Dan Berrange, Russell and I didn't feel like we were ninjaing a
   controversial patch - you can see our perspective expressed in 
   multiple places. The patch would have helped the live snapshot 
   issue, and has other useful applications. It does not affect the 
   broader testing debate.

   Johannes was a solitary voice expressing concerns with the patch, 
   and you could see that Dan was particularly engaged in trying to 
   address those concerns and repeating his feeling that the patch was 
   orthogonal to the testing debate.

   That all being said - the patch did merge too quickly.

 - What exacerbates the situation - particularly when people attempt to 
   look back at what happened - is how spread out our conversations 
   are. You look at the version_cap review and don't see any of the 
   related discussions on the devref policy review nor the mailing list 
   threads. Our disjoint methods of communicating contribute to 
   misunderstandings.

 - When it came to the revert, a couple of things resulted in 
   misunderstandings, hurt feelings and frayed tempers - (a) that our 
   retrospective veto revert policy wasn't well understood and (b) 
   a feeling that there was private, in-person grumbling about us at 
   the mid-cycle while we were absent, with no attempt to talk to us 
   directly.


To take an even further step back - successful communities like ours
require a huge amount of trust between the participants. Trust requires
communication and empathy. If communication breaks down and the pressure
we're all under erodes our empathy for each others' positions, then
situations can easily get horribly out of control.

This isn't a pleasant situation and we should all strive for better.
However, I tend to measure our flamewars against this:

  https://mail.gnome.org/archives/gnome-2-0-list/2001-June/msg00132.html

GNOME in June 2001 was my introduction to full-time open-source
development, so this episode sticks out in my mind. The two individuals
in that email were/are immensely capable and reasonable people, yet ...

So far, we're doing pretty okay compared to that and many other
open-source flamewars. Let's make sure we continue that way by avoiding
letting situations fester.


Thanks, and sorry for being a windbag,
Mark.

---

= July 1 =

The starting point is this review:

   https://review.openstack.org/103923

Dan Smith proposes a policy that the libvirt driver may not use libvirt
features until they have been available in Ubuntu or Fedora for at least
30 days.

The commit message mentions:

  broken us in the past when we add a new feature that requires a newer
   libvirt than we test with, and we discover that it's totally broken
   when we upgrade in the gate.

which AIUI is a reference to the libvirt live snapshot issue the
previous week, which is described here:

  https://review.openstack.org/102643

where upgrading to Ubuntu Trusty meant the libvirt version in use in the
gate went from 0.9.8 to 1.2.2, which exercised the live snapshot code
paths in Nova for the first time and appeared to be related to some
serious gate instability (although the exact root cause wasn't
identified).

Some background on the libvirt version upgrade can be seen here:

  
http://lists.openstack.org/pipermail/openstack-dev/2014-March/thread.html#30284

= July 1 - July 8 =

Back and forth debate mostly between Dan Smith and Dan Berrange. Sean
votes +2, Dan Berrange votes -2.

= July 14 =

Russell adds his support to Dan Berrange's position, votes -2. Some
debate between Dan and Dan continues. Joe Gordon votes +2. Matt
Riedemann expresses support-in-principle for 

Re: [openstack-dev] [Zaqar] Zaqar graduation (round 2) [was: Comments on the concerns arose during the TC meeting]

2014-09-12 Thread Mark McLoughlin
On Wed, 2014-09-10 at 14:51 +0200, Thierry Carrez wrote:
 Flavio Percoco wrote:
  [...]
  Based on the feedback from the meeting[3], the current main concern is:
  
  - Do we need a messaging service with a feature-set akin to SQS+SNS?
  [...]
 
 I think we do need, as Samuel puts it, some sort of durable
 message-broker/queue-server thing. It's a basic application building
 block. Some claim it's THE basic application building block, more useful
 than database provisioning. It's definitely a layer above pure IaaS, so
 if we end up splitting OpenStack into layers this clearly won't be in
 the inner one. But I think IaaS+ basic application building blocks
 belong in OpenStack one way or another. That's the reason I supported
 Designate (everyone needs DNS) and Trove (everyone needs DBs).
 
 With that said, I think yesterday there was a concern that Zaqar might
 not fill the some sort of durable message-broker/queue-server thing
 role well. The argument goes something like: if it was a queue-server
 then it should actually be built on top of Rabbit; if it was a
 message-broker it should be built on top of postfix/dovecot; the current
 architecture is only justified because it's something in between, so
 it's broken.
 
 I guess I don't mind that much zaqar being something in between:
 unless I misunderstood, exposing extra primitives doesn't prevent the
 queue-server use case from being filled. Even considering the
 message-broker case, I'm also not convinced building it on top of
 postfix/dovecot would be a net win compared to building it on top of
 Redis, to be honest.

AFAICT, this part of the debate boils down to the following argument:

  If Zaqar implemented messaging-as-a-service with only queuing 
  semantics (and no random access semantics), its design would 
  naturally be dramatically different and simply implement a 
  multi-tenant REST API in front of AMQP queues like this:

https://www.dropbox.com/s/yonloa9ytlf8fdh/ZaqarQueueOnly.png?dl=0

  and that this architecture would allow for dramatically improved 
  throughput for end-users while not making the cost of providing the 
  service prohibitive to operators.

You can't dismiss that argument out-of-hand, but I wonder (a) whether
the claimed performance improvement is going to make a dramatic
difference to the SQS-like use case and (b) whether backing this thing
with an RDBMS and multiple highly available, durable AMQP broker
clusters is going to be too much of a burden on operators for whatever
performance improvements it does gain.

But the troubling part of this debate is where we repeatedly batter the
Zaqar team with hypotheses like these and appear to only barely
entertain their carefully considered justification for their design
decisions like:

  
https://wiki.openstack.org/wiki/Frequently_asked_questions_%28Zaqar%29#Is_Zaqar_a_provisioning_service_or_a_data_API.3F
  
https://wiki.openstack.org/wiki/Frequently_asked_questions_%28Zaqar%29#What_messaging_patterns_does_Zaqar_support.3F

I would like to see an SQS-like API provided by OpenStack, I accept the
reasons for Zaqar's design decisions to date, I respect that those
decisions were made carefully by highly competent members of our
community and I expect Zaqar to evolve (like all projects) in the years
ahead based on more real-world feedback, new hypotheses or ideas, and
lessons learned from trying things out.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-12 Thread Mark McLoughlin
On Wed, 2014-09-10 at 12:46 -0700, Monty Taylor wrote:
 On 09/09/2014 07:04 PM, Samuel Merritt wrote:
  On 9/9/14, 4:47 PM, Devananda van der Veen wrote:

  The questions now before us are:
  - should OpenStack include, in the integrated release, a
  messaging-as-a-service component?
 
  I certainly think so. I've worked on a few reasonable-scale web
  applications, and they all followed the same pattern: HTTP app servers
  serving requests quickly, background workers for long-running tasks, and
  some sort of durable message-broker/queue-server thing for conveying
  work from the first to the second.
 
  A quick straw poll of my nearby coworkers shows that every non-trivial
  web application that they've worked on in the last decade follows the
  same pattern.
 
  While not *every* application needs such a thing, web apps are quite
  common these days, and Zaqar satisfies one of their big requirements.
  Not only that, it does so in a way that requires much less babysitting
  than run-your-own-broker does.
 
 Right. But here's the thing.
 
 What you just described is what we all thought zaqar was aiming to be in 
 the beginning. We did not think it was a GOOD implementation of that, so 
 while we agreed that it would be useful to have one of those, we were 
 not crazy about the implementation.

Those generalizations are uncomfortably sweeping.

What Samuel just described is one of the messaging patterns that Zaqar
implements and some (members of the TC?) believed that this messaging
pattern was the only pattern that Zaqar aimed to implement.

Some (members of the TC?) formed strong, negative opinions about how
this messaging pattern was implemented, but some/all of those same
people agreed a messaging API implementing those semantics would be a
useful thing to have.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-22 Thread Mark McLoughlin
Hey

On Thu, 2014-09-18 at 11:53 -0700, Monty Taylor wrote:
 Hey all,
 
 I've recently been thinking a lot about Sean's Layers stuff. So I wrote
 a blog post which Jim Blair and Devananda were kind enough to help me edit.
 
 http://inaugust.com/post/108

Lots of great stuff here, but too much to respond to everything in
detail.

I love the way you've framed this in terms of the needs of developers,
distributors, deployers and end users. I'd like to make sure we're
focused on tackling those places where we're failing these groups, so:


 - Developers

   I think we're catering pretty well to developers with the big tent
   concept of Programs. There's been some good discussion about how
   Programs could be better at embracing projects in their related area,
   and that would be great to pursue. But the general concept - of 
   endorsing and empowering teams of people collaborating in the
   OpenStack way - has a lot of legs, I think.

   I also think our release cadence does a pretty good job of serving 
   developers. We've talked many times about the benefit of it, and I'd 
   like to see it applied to more than just the server projects.

   OTOH, the integrated gate is straining, and a source of frustration 
   for everyone. You raise the question of whether everything currently 
   in the integrated release needs to be co-gated, and I totally agree 
   that needs re-visiting.


 - Distributors

   We may be doing a better job of catering to distributors than any 
   other group. For example, things like the release cadence, stable 
   branch and common set of dependencies works pretty well.

   The concept of an integrated release (with an incubation process) is
   great, because it nicely defines a set of stuff that distributors
   should ship. Certainly, life would be more difficult for distributors
   if there was a smaller set of projects in the release and a whole 
   bunch of other projects which are interesting to distro users, but 
   with an ambiguous level of commitment from our community. Right now, 
   our integration process has a huge amount of influence over what 
   gets shipped by distros, and that in turn serves distro users by 
   ensuring a greater level of commonality between distros.


 - Operators

   I think the feedback we've been getting over the past few cycles 
   suggests we are failing this group the most.

   Operators want to offer a compelling set of services to their users, 
   but they want those services to be stable, performant, and perhaps 
   most importantly, cost-effective. No operator wants to have to
   invest a huge amount of time in getting a new service running.

   You suggest a Production Ready tag. Absolutely - our graduation of 
   projects has been interpreted as meaning production ready, when 
   it's actually more useful as a signal to distros rather than 
   operators. Graduation does not necessarily imply that a service is
   ready for production, no matter how you define production.

   I'd like to think we could give more nuanced advice to operators than
   a simple tag, but perhaps the information gathering process that
   projects would need to go through to be awarded that tag would 
   uncover the more detailed information for operators.

   You could question whether the TC is the right body for this 
   process. How might it work if the User Committee owned this?

   There are many other ways we can and should help operators, 
   obviously, but this setting expectations is the aspect most 
   relevant to this discussion.


 - End users

   You're right that we don't pay sufficient attention to this group.
   For me, the highest priority challenge here is interoperability. 
   Particularly interoperability between public clouds.

   The only real interop effort to date you can point to is the 
   board-owned DefCore and RefStack work. The idea being that a
   trademark program with API testing requirements will focus minds on
   interoperability. I'd love us (as a community) to be making more
   rapid progress on interoperability, but at least there are now
   encouraging signs that we should make some definite progress soon.

   Your end-user focused concrete suggestions (#7-#10) are interesting,
   and I find myself thinking about how much of a positive effect on 
   interop each of them would have. For example, making our tools 
   multi-cloud aware would help encourage people to demand interop from 
   their providers. I also agree that end-user tools should support 
   older versions of our APIs, but don't think that necessarily implies 
   rolling releases.



So, if I was to pick the areas which I think would address our most
pressing challenges:

  1) Shrinking the integrated gate, and allowing per-project testing 
 strategies other than shoving every integrated project into the 
 gate.

  2) Giving more direction to operators about the readiness of our 
 projects for different use cases. A process around awarding 

Re: [openstack-dev] TransportURL and virtualhost/exchnage (was Re: [Oslo] Layering olso.messaging usage of config)

2013-12-09 Thread Mark McLoughlin
Hi Gordon,

On Fri, 2013-12-06 at 18:36 +, Gordon Sim wrote:
 On 11/18/2013 04:44 PM, Mark McLoughlin wrote:
  On Mon, 2013-11-18 at 11:29 -0500, Doug Hellmann wrote:
  IIRC, one of the concerns when oslo.messaging was split out was
  maintaining support for existing deployments with configuration files that
  worked with oslo.rpc. We had said that we would use URL query parameters
  for optional configuration values (with the required values going into
  other places in the URL)
 [...]
  I hadn't ever considered exposing all configuration options via the URL.
  We have a lot of fairly random options, that I don't think you need to
  configure per-connection if you have multiple connections in the one
  application.
 
 I certainly agree that not all configuration options may make sense in a 
 URL. However if you will forgive me for hijacking this thread 
 momentarily on a related though tangential question/suggestion...
 
 Would it make sense to (and/or even be possible to) take the 'exchange' 
 option out of the API, and let transports deduce their implied 
 scope/namespace purely from the transport URL in perhaps transport 
 specific ways?
 
 E.g. you could have rabbit://my-host/my-virt-host/my-exchange or 
 rabbit://my-host/my-virt-host or rabbit://my-host//my-exchange, and the 
 rabbit driver would ensure that the given virtualhost and or exchange 
 was used.
 
 Alternatively you could have zmq://my-host:9876 or zmq://my-host:6789 
 to 'scope' 0MQ communication channels. and hypothetically 
 something-new://my-host/xyz, where xyz would be interpreted by the 
 driver in question in a relevant way to scope the interactions over that 
 transport.
 
 Applications using RPC would then assume they were using a namespace 
 free from the danger of collisions with other applications, but this 
 would all be driven through transport specific configuration.
 
 Just a suggestion based on my initial confusion through ignorance on the 
 different scoping mechanisms described in the API docs. It may not be 
 feasible or may have negative consequences I have not in my naivety 
 foreseen.

It's not a completely unreasonable approach to take, but my thinking was
that a transport URL connects you to a conduit which multiple
applications could be sharing so you need the application to specify its
own application namespace.

e.g. you can have 'scheduler' topics for both Nova and Cinder, and you
need each application to specify its exchange whereas the administrator
is in full control of the transport URL and doesn't need to worry about
application namespacing on the transport.

There are three ways the exchange appears in the API:

  1) A way for an application to set up the default exchange it
 operates in:

 messaging.set_transport_defaults(control_exchange='nova')

 
http://docs.openstack.org/developer/oslo.messaging/transport.html#oslo.messaging.set_transport_defaults

  2) The server can explicitly say what exchange it's listening on:

 target = messaging.Target(exchange='nova',
   topic='scheduler',
   server='myhost')
 server = messaging.get_rpc_server(transport, target, endpoints)

 
http://docs.openstack.org/developer/oslo.messaging/server.html#oslo.messaging.get_rpc_server

  3) The client can explicitly say what exchange to connect to:

 target = messaging.Target(exchange='nova',
   topic='scheduler')
 client = messaging.RPCClient(transport, target)

 
http://docs.openstack.org/developer/oslo.messaging/rpcclient.html#oslo.messaging.RPCClient

But also the admin can override the default exchange so that e.g. you
could put two instances of the application on the same transport, but
with different exchanges.

Now, in saying all that, we know that fanout topics of the same name
will conflict even if the exchange name is different:

  https://bugs.launchpad.net/oslo.messaging/+bug/1173552

So that means the API doesn't work quite as intended yet, but I think
the idea makes sense.

I'm guessing you have a concern about how transports might implement
this application namespacing? Could you elaborate if so?

Thanks,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [olso] [cinder] upgrade issues in lock_path in cinder after oslo utils sync

2013-12-09 Thread Mark McLoughlin
On Mon, 2013-12-09 at 11:11 -0600, Ben Nemec wrote:
 On 2013-12-09 10:55, Sean Dague wrote:
  On 12/09/2013 11:38 AM, Clint Byrum wrote:
  Excerpts from Sean Dague's message of 2013-12-09 08:17:45 -0800:
  On 12/06/2013 05:40 PM, Ben Nemec wrote:
  On 2013-12-06 16:30, Clint Byrum wrote:
  Excerpts from Ben Nemec's message of 2013-12-06 13:38:16 -0800:
  
  
  On 2013-12-06 15:14, Yuriy Taraday wrote:
  
  Hello, Sean.
  
  I get the issue with upgrade path. User doesn't want to update
  config unless one is forced to do so.
  But introducing code that weakens security and let it stay is an
  unconditionally bad idea.
  It looks like we have to weigh two evils: having troubles 
  upgrading
  and lessening security. That's obvious.
  
  Here are my thoughts on what we can do with it:
  1. I think we should definitely force user to do appropriate
  configuration to let us use secure ways to do locking.
  2. We can wait one release to do so, e.g. issue a deprecation
  warning now and force user to do it the right way later.
  3. If we are going to do 2. we should do it in the service that 
  is
  affected not in the library because library shouldn't track 
  releases
  of an application that uses it. It should do its thing and do it
  right (secure).
  
  So I would suggest to deal with it in Cinder by importing
  'lock_path' option after parsing configs and issuing a deprecation
  warning and setting it to tempfile.gettempdir() if it is still 
  None.
  
  This is what Sean's change is doing, but setting lock_path to
  tempfile.gettempdir() is the security concern.
  
  Yuriy's suggestion is that we should let Cinder override the config
  variable's default with something insecure. Basically only 
  deprecate
  it in Cinder's world, not oslo's. That makes more sense from a 
  library
  standpoint as it keeps the library's expected interface stable.
  
  Ah, I see the distinction now.  If we get this split off into
  oslo.lockutils (which I believe is the plan), that's probably what 
  we'd
  have to do.
  
  
  Since there seems to be plenty of resistance to using /tmp by 
  default,
  here is my proposal:
  
  1) We make Sean's change to open files in append mode. I think we 
  can
  all agree this is a good thing regardless of any config changes.
  
  2) Leave lockutils broken in Icehouse if lock_path is not set, as 
  I
  believe Mark suggested earlier. Log an error if we find that
  configuration. Users will be no worse off than they are today, and 
  if
  they're paying attention they can get the fixed lockutils behavior
  immediately.
  
  Broken how? Broken in that it raises an exception, or broken in 
  that it
  carries a security risk?
  
  Broken as in external locks don't actually lock.  If we fall back to
  using a local semaphore it might actually be a little better because
  then at least the locks work within a single process, whereas before
  there was no locking whatsoever.
  
  Right, so broken as in doesn't actually locks, potentially 
  completely
  scrambles the user's data, breaking them forever.
  
  
  Things I'd like to see OpenStack do in the short term, ranked in 
  ascending
  order of importance:
  
  4) Upgrade smoothly.
  3) Scale.
  2) Help users manage external risks.
  1) Not do what Sean described above.
  
  I mean, how can we even suggest silently destroying integrity?
  
  I suggest merging Sean's patch and putting a warning in the release
  notes that running without setting this will be deprecated in the next
  release. If that is what this is preventing this debate should not 
  have
  happened, and I sincerely apologize for having delayed it. I believe 
  my
  mistake was assuming this was something far more trivial than without
  this patch we destroy users' data.
  
  I thought we were just talking about making upgrades work. :-P
  
  Honestly, I haven't looked exactly how bad the corruption would be. But
  we are locking to handle something around simultaneous db access in
  cinder, so I'm going to assume the worst here.
 
 Yeah, my understanding is that this doesn't come up much in actual use 
 because lock_path is set in most production environments.  Still, 
 obviously not cool when your locks don't lock, which is why we made the 
 unpleasant change to require lock_path.  It wasn't something we did 
 lightly (I even sent something to the list before it merged, although I 
 got no responses at the time).

What would happen if we required each service to set a sane default
here?

e.g. for Nova, would a dir under $state_path work? It just needs to be a
directory that isn't world-writeable but is writeable by whatever user
Nova is running as.

Practically speaking, this just means that Cinder needs to do:

  lockutils.set_defaults(lock_path=os.path.join(CONF.state_path, 'tmp'))

and the current behaviour of lockutils.py is fine.
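
For illustration, a rough sketch of how that service-level default and
an external lock would fit together (the module path and exact lockutils
signatures are assumptions based on the oslo-incubator copy of the time,
and do_something() is made up):

  import os

  from oslo.config import cfg
  from cinder.openstack.common import lockutils

  CONF = cfg.CONF

  # state_path is already registered by Cinder; repeated here only so
  # the sketch is self-contained.
  CONF.register_opt(cfg.StrOpt('state_path', default='/var/lib/cinder'))

  # Give external file locks a sane, service-owned home rather than
  # relying on the operator to set lock_path explicitly.
  lockutils.set_defaults(lock_path=os.path.join(CONF.state_path, 'tmp'))

  @lockutils.synchronized('volume-update', 'cinder-', external=True)
  def do_something():
      # With external=True this is serialized across processes via a
      # lock file created under lock_path.
      pass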

Hmm, that feels like I'm missing something?

Mark.


___
OpenStack-dev mailing list

Re: [openstack-dev] Retiring reverify no bug

2013-12-09 Thread Mark McLoughlin
On Mon, 2013-12-09 at 10:49 -0800, James E. Blair wrote:
 Hi,
 
 On Wednesday December 11, 2013 we will remove the ability to use
 reverify no bug to re-trigger gate runs for changes that have failed
 tests.
 
 This was previously discussed[1] on this list.  There are a few key
 things to keep in mind:
 
 * This only applies to reverify, not recheck.  That is, it only
   affects the gate pipeline, not the check pipeline.  You can still use
   recheck no bug to make sure that your patch still works.
 
 * Core reviewers can still resubmit a change to the queue by leaving
   another Approved vote.  Please don't abuse this to bypass the intent
   of this change: to help identify and close gate-blocking bugs.
 
 * You may still use reverify bug # to re-enqueue if there is a bug
   report for a failure, and of course you are encouraged to file a bug
   report if there is not.  Elastic-recheck is doing a great job of
   indicating which bugs might have caused a failure.
 
 As discussed in the previous thread, the goal is to prevent new
 transient bugs from landing in code by ensuring that if a change fails a
 gate test that it is because of a known bug, and not because it's
 actually introducing a bug, so please do your part to help in this
 effort.

I wonder could we make it standard practice for an infra bug to get
filed whenever there's a known issue causing gate jobs to fail so that
everyone can use that bug number when re-triggering?

(Apologies if that's already happening)

I guess we'd want to broadcast that bug number with statusbot?

Basically, the times I've used 'reverify no bug' is where I see some job
failures that look like an infra issue that was already resolved.

Thanks,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][TripleO] Nested resources

2013-12-09 Thread Mark McLoughlin
On Tue, 2013-12-10 at 09:40 +1300, Robert Collins wrote:
 On 6 December 2013 14:11, Fox, Kevin M kevin@pnnl.gov wrote:
  I think the security issue can be handled by not actually giving the 
  underlying resource to the user in the first place.
 
  So, for example, if I wanted a bare metal node's worth of resource for my 
  own containering, I'd ask for a bare metal node and use a blessed image 
  that contains docker+nova bits that would hook back to the cloud. I 
  wouldn't be able to login to it, but containers started on it would be able 
  to access my tenant's networks. All access to it would have to be through 
  nova suballocations. The bare resource would count against my quotas, but 
  nothing run under it.
 
  Come to think of it, this sounds somewhat similar to what is planned for 
  Neutron service vm's. They count against the user's quota on one level but 
  not all access is directly given to the user. Maybe some of the same 
  implementation bits could be used.
 
 This is a super interesting discussion - thanks for kicking it off.
 
 I think it would be fantastic to be able to use containers for
 deploying the cloud rather than full images while still running
 entirely OpenStack control up and down the stack.

Where I think it gets really interesting is to be able to auto-scale
controller services (think nova-api based on request latency) in small
increments, just as you'd expect to be able to manage a scale-out app on
a cloud.

i.e. our overcloud Heat stack would allocate some baremetal machines,
but then just schedule the controller services to run in small
containers (or VMs) on any of those machines, and then have them
auto-scale.

 Briefly, what we need to be able to do that is:
 
  - the ability to bring up an all in one node with everything on it to
 'seed' the environment.
 - we currently do that by building a disk image, and manually
 running virsh to start it

I'm not sure that would need to change.

  - the ability to reboot a machine *with no other machines running* -
 we need to be able to power off and on a datacentre - and have the
 containers on it come up correctly configured, networking working,
 running etc.

That's tricky because your undercloud Nova DB/conductor needs to be
available for the machine to know what services it's supposed to be
running. It sounds like a reasonable thing to want even for standard KVM
compute nodes too, though.

  - we explicitly want to be just using OpenStack APIs for all the
 deployment operations after the seed is up; so no direct use of lxc or
 docker or whathaveyou.

Yes.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] First steps towards amqp 1.0

2013-12-09 Thread Mark McLoughlin
On Mon, 2013-12-09 at 16:05 +0100, Flavio Percoco wrote:
 Greetings,
 
 As $subject mentions, I'd like to start discussing the support for
 AMQP 1.0[0] in oslo.messaging. We already have rabbit and qpid drivers
 for earlier (and different!) versions of AMQP, the proposal would be
 to add an additional driver for a _protocol_ not a particular broker.
 (Both RabbitMQ and Qpid support AMQP 1.0 now).
 
 By targeting a clear mapping on to a protocol, rather than a specific
 implementation, we would simplify the task in the future for anyone
 wishing to move to any other system that spoke AMQP 1.0. That would no
 longer require a new driver, merely different configuration and
 deployment. That would then allow openstack to more easily take
 advantage of any emerging innovations in this space.

Sounds sane to me.

To put it another way, assuming all AMQP 1.0 client libraries are equal,
all the operator cares about is that we have a driver that connects into
whatever AMQP 1.0 messaging topology they want to use.
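
To make that concrete, a small sketch of the operator-visible part (the
'amqp' URL scheme here is an assumption about how such a driver might
register itself, not an existing driver):

  from oslo.config import cfg
  from oslo import messaging

  # Today: RabbitMQ via the existing kombu-based driver.
  rabbit = messaging.get_transport(
      cfg.CONF, url='rabbit://guest:guest@broker-host:5672/')

  # Hypothetically: the same application code pointed at an AMQP 1.0
  # intermediary - only the URL (i.e. configuration) changes.
  amqp10 = messaging.get_transport(
      cfg.CONF, url='amqp://broker-host:5672/')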

Of course, not all client libraries will be equal, so if we don't offer
the choice of library/driver to the operator, then the onus is on us to
pick the best client library for this driver.

(Enjoying the rest of this thread too, thanks to Gordon for his
insights)

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-09 Thread Mark McLoughlin
On Tue, 2013-12-10 at 13:31 +1300, Robert Collins wrote:

 We have a bit of a bug in OpenStack today, IMO, in that there is more
 focus on being -core than on being a good effective reviewer. IMO
 that's backwards: the magic switch that lets you set +2 and -2 is a
 responsibility, and that has some impact on the weight your comments
 in reviews have on other people - both other core and non-core, but
 the contribution we make by reviewing doesn't suddenly get
 significantly better by virtue of being -core. There is an element of
 trust and faith in personality etc - you don't want destructive
 behaviour in code review, but you don't want that from anyone - it's
 not a new requirement place on -core.

FWIW, I see this focus on being -core as an often healthy desire
to be recognized as a good effective reviewer.

I guess that's related to where you said something similar in the Heat
thread:

  http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg11121.html

  there is a meme going around (I don't know if it's true or not) that
  some people are assessed - performance review stuff within vendor
  organisations - on becoming core reviewers.

For example, if managers in these organizations said to people "I want
you to spend a significant proportion of your time contributing good and
effective upstream reviews", that would be a good thing, right?

One way that such well intentioned managers could know whether the
reviewing is good and effective is whether the reviewers are getting
added to the -core teams. That also seems mostly positive. Certainly
better than looking at reviewer stats?

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-12 Thread Mark McLoughlin
On Wed, 2013-12-11 at 13:33 +0100, Jiří Stránský wrote:
 Hi all,
 
 TL;DR: I believe that As an infrastructure administrator, Anna wants a 
 CLI for managing the deployment providing the same fundamental features 
 as UI. With the planned architecture changes (making tuskar-api thinner 
 and getting rid of proxying to other services), there's not an obvious 
 way to achieve that. We need to figure this out. I present a few options 
 and look forward for feedback.
..

 1) Make a thicker python-tuskarclient and put the business logic there. 
 Make it consume other python-*clients. (This is an unusual approach 
 though, i'm not aware of any python-*client that would consume and 
 integrate other python-*clients.)
 
 2) Make a thicker tuskar-api and put the business logic there. (This is 
 the original approach with consuming other services from tuskar-api. The 
 feedback on this approach was mostly negative though.)

FWIW, I think these are the two most plausible options right now.

My instinct is that tuskar could be a stateless service which merely
contains the business logic between the UI/CLI and the various OpenStack
services.

That would be a first (i.e. an OpenStack service which doesn't have a
DB) and it is somewhat hard to justify. I'd be up for us pushing tuskar
as a purely client-side library used by the UI/CLI (i.e. 1) as far as it
can go until we hit actual cases where we need (2).

One example worth thinking through though - clicking "deploy my
overcloud" will generate a Heat template and send it to the Heat API.

The Heat template will be fairly closely tied to the overcloud images
(i.e. the actual image contents) we're deploying - e.g. the template
will have metadata which is specific to what's in the images.

With the UI, you can see that working fine - the user is just using a UI
that was deployed with the undercloud.

With the CLI, it is probably not running on undercloud machines. Perhaps
your undercloud was deployed a while ago and you've just installed the
latest TripleO client-side CLI from PyPI. With other OpenStack clients
we say that newer versions of the CLI should support all/most older
versions of the REST APIs.

Having the template generation behind a (stateless) REST API could allow
us to define an API which expresses deploy my overcloud and not have
the client so tied to a specific undercloud version.
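
Purely as a sketch of the kind of API I mean (the endpoint, port and
payload below are all made up for illustration):

  import json

  import requests

  payload = {
      'overcloud': {
          # The service, not the client, knows which Heat template and
          # image-specific metadata match this undercloud's images.
          'roles': {'controller': 1, 'compute': 4},
      }
  }

  resp = requests.post('http://undercloud.example.com:8585/v1/overclouds',
                       headers={'Content-Type': 'application/json',
                                'X-Auth-Token': 'TOKEN'},
                       data=json.dumps(payload))
  resp.raise_for_status()
  print(resp.json())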

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][glance] Oslo.cfg resets not really resetting the CONF

2013-12-16 Thread Mark McLoughlin
On Tue, 2013-12-17 at 11:17 +0530, Amala Basha Alungal wrote:
 Hi Mark, Ben
 
 
 The reset() method in turn calls the clear() method which does an
 unregister_opt(). However the unregister_opt only unregisters the
 config_opts. The entire set of options inside _opts remain as is.
 We've filed a bug on the oslo end. 

Yes, that's working as designed.

Those two options are registered by __call__() so reset() unregisters
only them.

The idea is that you can register lots and then do __call__() and
reset() without affecting the registered options.
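
A small sketch of that behaviour, using a standalone ConfigOpts instance
rather than the global cfg.CONF:

  from oslo.config import cfg

  conf = cfg.ConfigOpts()
  conf.register_opt(cfg.StrOpt('foo', default='bar'))

  conf(args=[])                  # __call__() parses an (empty) command line
  conf.set_override('foo', 'baz')
  assert conf.foo == 'baz'

  conf.reset()                   # drops parsed state and the override...
  conf(args=[])                  # ...so after re-parsing the default is back
  assert conf.foo == 'bar'       # and 'foo' never had to be re-registered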

Mark.

 On Tue, Dec 17, 2013 at 5:27 AM, Mark McLoughlin mar...@redhat.com
 wrote:
 Hi
 
 On Fri, 2013-12-13 at 14:14 +0530, Amala Basha Alungal wrote:
  Hi,
 
 
 
  I stumbled into a situation today where in I had to write few tests
  that modifies the oslo.config.cfg and in turn resets the values back
  in a tear down. Acc to the docs, oslo.cfg reset() *Clears the object
  state and unsets overrides and defaults.* but, it doesn't seem to be
  happening, as the subsequent tests that are run retains these
  modified values and tests behave abnormally. The patch has been
  submitted for review here: https://review.openstack.org/#/c/60188/1.
  Am I missing something obvious?
 
 From https://bugs.launchpad.net/oslo/+bug/1261376 :
 
   reset() will clear any values read from the command line or config
   files and it will also remove any values set with set_default() or
   set_override()
 
   However, it will not undo register_opt() - there is unregister_opt()
   for that purpose
 
 Maybe if you pushed a version of https://review.openstack.org/60188
 which uses reset() and explain how it's not working as you expected?
 
 Thanks,
 Mark.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Thanks And Regards
 Amala Basha
 +91-7760972008



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [governance] Becoming a Program, before applying for incubation

2013-12-17 Thread Mark McLoughlin
On Tue, 2013-12-17 at 11:25 +0100, Thierry Carrez wrote:
 Mark McLoughlin wrote:
  I'm not totally convinced we need such formality around the TC
  expressing its support for an early-stage program/project/effort/team.
 
 This is a difficult balance.
 
 You want to help a number of projects attract more contributors and
 reach critical mass. For that, they want visibility, and one way to give
 them that is a formal blessing. On the other hand, we want to encourage
 the spontaneous creation of teams and projects and reduce formality in
 the early stages as much as possible...
 
  How about if we had an RFC process (hmm, but not in the IETF sense)
  whereby an individual or team can submit a document expressing a
  position and ask the TC to give its feedback? We would record that
  feedback in the governance repo, and it would be a short piece of prose
  (perhaps even recording a diversity of views amongst the TC members)
  rather than a yes/no status vote.
  
  In the case of a fledgling project, they'd write up something like a
  first draft of an incubation application and we'd give our feedback,
  encouragement, whatever.
 
 I think that level of formality would not trigger enough visibility, so
 we'd be back to square 1 with projects applying for incubation because
 it's the only way to get the visibility they need to attract enough
 contributors.

How about if we had an emerging projects page where the TC feedback on
each project would be listed?

That would give visibility to our feedback, without making it a yes/no
blessing. Ok, whether to list any feedback about the project on the page
is a yes/no decision, but at least it allows us to fully express why we
find the project promising, what people need to help with in order for
it to be incubated, etc.

With a formal yes/no status, I think we'd struggle with projects which
we're not quite ready to even bless with an emerging status but we
still want to encourage them - this allows us to bless a project as
emerging but be explicit about our level of support for it.

 So I think we need some official label. It can be attached to the
 project (emerging technology), or it can be attached to the team and
 its mission (incoming/wannabe/emerging program).
 
 One benefit of applying it to the team is that the effort can then more
 naturally graduate to become a full-fledged program when the project
 gets incubated or integrated, without applying to become a program.

I'm not loving the idea of blessing a team more or less independently of
the project they're producing.

I'd tend to be more wary of blessing a program rather than a fledgling
project - if a couple of developers show up and want to be blessed as
the lampshade program, then I'd feel we're placing an awful lot of
trust in them because we're making them responsible for everything
lampshade related in OpenStack. If instead we were just saying we like
this lampshade project you're working on, then we leave room for that
project to grow or wither or be obsoleted by another group working on a
competing lampshade project.

  Setting a very low bar for the officialness of becoming a Program seems
  wrong to me - I wouldn't like to see Programs being added and then later
  removed with any sort of regularity. Part of what people are looking for
  is an indication of what's coming down the track and the endorsement
  implicit in becoming a Program - before a long-term viable team has been
  established - seems too strong for me.
 
 That's a fair remark. Programs come with some expectation of
 durability and permanence, and using the same term might set
 expectations wrong. If we settle for a team-attached label, we should
 probably avoid using program in its name (and call it
 incoming/wannabe/emerging effort or something).
 
  Even though this doesn't grant ATC status to the people working on those
  projects, I'm struggling to see that as a burning issue for anyone -
  honestly, if you're working on an early-stage, keen-to-be-incubated
  project then I'd be surprised if you didn't find some small way to
  contribute to one of our many ATC-granting projects.
 
 Ideally I'd like to address the case of the programs created from an
 incubated project in the same governance change. With the expectation
 of durable endorsement we'd like to see attached with Programs, it may
 make sense to *not* automatically create a program when a project gets
 incubated, but rather when a project is finally integrated.
 
 So two options here:
 
 - Consider incubated as emerging, not create a program and not grant ATC
 status
 
 - Consider incubated as integrated, create a program and grant ATC status

Honestly, I struggle to care much about ATC status or those programs
which are associated with individual projects - the principles I care
about here are:

  (1) that we should be fairly permissive about granting ATC status and

  (2) programs allow us to bless as official those efforts or teams who
  aren't primarily

Re: [openstack-dev] [governance] Becoming a Program, before applying for incubation

2013-12-17 Thread Mark McLoughlin
On Tue, 2013-12-17 at 13:44 +0100, Thierry Carrez wrote:
 Mark McLoughlin wrote:
  How about if we had an emerging projects page where the TC feedback on
  each project would be listed?
  
  That would give visibility to our feedback, without making it a yes/no
  blessing. Ok, whether to list any feedback about the project on the page
  is a yes/no decision, but at least it allows us to fully express why we
  find the project promising, what people need to help with in order for
  it to be incubated, etc.
  
  With a formal yes/no status, I think we'd struggle with projects which
  we're not quite ready to even bless with an emerging status but we
  still want to encourage them - this allows us to bless a project as
  emerging but be explicit about our level of support for it.
 
 I agree that being able to express our opinion on a project in shades of
 grey is valuable... The main drawback of using a non-boolean status for
 that is that you can't grant any benefit to it. So we'd not be able to
 say emerging projects get design summit space.
 
 They can still collaborate in unconference space or around empty tables,
 but then we are back to the problem we are trying to solve: increase
 visibility of promising projects pre-incubation.

Have an emerging projects track and leave it up to the track coordinator
to prioritize the most interesting sessions and the most advanced
projects (according to the TC's feedback)?

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Less option (was: [oslo.config] Centralized config management)

2014-01-10 Thread Mark McLoughlin
On Thu, 2014-01-09 at 16:34 -0800, Joe Gordon wrote:
 On Thu, Jan 9, 2014 at 3:01 PM, Jay Pipes jaypi...@gmail.com wrote:
 
  On Thu, 2014-01-09 at 23:56 +0100, Julien Danjou wrote:
   On Thu, Jan 09 2014, Jay Pipes wrote:
  
Hope you don't mind, I'll jump in here :)
   
On Thu, 2014-01-09 at 11:08 -0800, Nachi Ueno wrote:
Hi Jeremy
   
Don't you think it is burden for operators if we should choose correct
combination of config for multiple nodes even if we have chef and
puppet?
   
It's more of a burden for operators to have to configure OpenStack in
multiple ways.
  
   I also think projects should try to minimize configuration options so
   operators are not completely lost. Opening the sample
   nova.conf and seeing 696 options is not what I would call user friendly.
  
 
 
 
 There was talk a while back about marking different config options as basic
 and advanced (or something along those lines) to help make it easier for
 operators.

You might be thinking of this summit session I led:

  https://etherpad.openstack.org/p/grizzly-nova-config-options

My thinking was we first move config options into groups to make it
easier for operators to make sense of the available options and then we
would classify them (as e.g. tuning, experimental, debug) and
exclude some classifications from the sample config file.
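
To make the grouping part concrete, a small sketch of how options end up
under a named section with oslo.config (the group and option names here
are just illustrative):

  from oslo.config import cfg

  CONF = cfg.CONF

  libvirt_group = cfg.OptGroup(name='libvirt',
                               title='libvirt driver options')

  libvirt_opts = [
      cfg.StrOpt('virt_type', default='kvm',
                 help='Virtualization type (kvm, qemu, lxc, ...)'),
      cfg.IntOpt('wait_soft_reboot_seconds', default=120,
                 help='Tuning: seconds to wait for a soft reboot'),
  ]

  CONF.register_group(libvirt_group)
  CONF.register_opts(libvirt_opts, group=libvirt_group)

  # These now appear under a [libvirt] section in the sample config
  # rather than in the flat [DEFAULT] section.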

Sadly, I never even made good progress on Tedious Task 2 :: Group.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] pep8 gating fails due to tools/config/check_uptodate.sh

2014-02-04 Thread Mark McLoughlin
On Mon, 2014-01-13 at 16:49 +, Sahid Ferdjaoui wrote:
 Hello all,
 
 It looks 100% of the pep8 gate for nova is failing because of a bug reported, 
 we probably need to mark this as Critical.
 
https://bugs.launchpad.net/nova/+bug/1268614
 
 Ivan Melnikov has pushed a patchset waiting for review:
https://review.openstack.org/#/c/66346/
 
 http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRVJST1I6IEludm9jYXRpb25FcnJvcjogXFwnL2hvbWUvamVua2lucy93b3Jrc3BhY2UvZ2F0ZS1ub3ZhLXBlcDgvdG9vbHMvY29uZmlnL2NoZWNrX3VwdG9kYXRlLnNoXFwnXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjQzMjAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4OTYzMTQzMzQ4OSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

This just came up on #openstack-infra ...

It's a general problem that is going to occur more frequently.

Nova now includes config options from keystoneclient and oslo.messaging
in its sample config file.

That means that as soon as a new option is added to either library, then
check_uptodate.sh will start failing.

One option discussed is to remove the sample config files from source
control and have the sample be generated at build/packaging time.

So long as we minimize the dependencies required to generate the sample
file, this should be manageable.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] pep8 gating fails due to tools/config/check_uptodate.sh

2014-02-04 Thread Mark McLoughlin
On Wed, 2014-02-05 at 01:19 +0900, Sean Dague wrote:
 On 02/05/2014 12:37 AM, Mark McLoughlin wrote:
  On Mon, 2014-01-13 at 16:49 +, Sahid Ferdjaoui wrote:
  Hello all,
 
  It looks 100% of the pep8 gate for nova is failing because of a bug 
  reported, 
  we probably need to mark this as Critical.
 
 https://bugs.launchpad.net/nova/+bug/1268614
 
  Ivan Melnikov has pushed a patchset waiting for review:
 https://review.openstack.org/#/c/66346/
 
  http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRVJST1I6IEludm9jYXRpb25FcnJvcjogXFwnL2hvbWUvamVua2lucy93b3Jrc3BhY2UvZ2F0ZS1ub3ZhLXBlcDgvdG9vbHMvY29uZmlnL2NoZWNrX3VwdG9kYXRlLnNoXFwnXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjQzMjAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4OTYzMTQzMzQ4OSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==
  
  This just came up on #openstack-infra ...
  
  It's a general problem that is going to occur more frequently.
  
  Nova now includes config options from keystoneclient and oslo.messaging
  in its sample config file.
  
  That means that as soon as a new option is added to either library, then
  check_uptodate.sh will start failing.
  
  One option discussed is to remove the sample config files from source
  control and have the sample be generated at build/packaging time.
  
  So long as we minimize the dependencies required to generate the sample
  file, this should be manageable.
 
 The one big drawback here is that today you can point people to a git
 url, and they will then have a sample config file for Nova (or Tempest
 or whatever you are pointing them at). If this is removed, then we'll
 need / want some other way to make those samples easily available on the
 web, not only at release time.

I think that's best addressed by automatically publishing to somewhere
other than git.openstack.org. AFAIR there's already been a bunch of work
done around automatically pulling config options into the docs.

 On a related point, it's slightly bothered me that we allow libraries
 to define stanzas in our config files. It seems like a leaky abstraction

It is a little unusual, yes.

 that's only going to get worse over time as we graduate more of oslo,
 and the coupling gets even worse.

Worse in what respect?

 Has anyone considered if it's possible to stop doing that, and have the
 libraries only provide an object model that takes args and instead leave
 config declaration to the instantiation points for those objects?

I think that would result in useless duplication (of those declarations)
and leave us open to projects having different config file options for
the same things.

 Because having a nova.conf file that's 30% options coming from
 underlying libraries that are not really controlable in nova seems like
 a recipe for a headache.

Why?

 We already have a bunch of that issue today
 with changing 3rd party logging libraries in oslo, that mostly means to
 do that in nova we first just go and change the incubator, then sync the
 changes back.

That's a different issue - if we had a properly abstracted logging API,
we could commit to future API compat and publish an oslo.logging
library.

The syncing pain you describe would go away, and the proper abstraction
would mean the things that Nova needs to control would be under Nova's
control.

 I do realize this would be a rather substantial shift from current
 approach, but current approach seems to be hitting a new complexity
 point that we're only just starting to feel the pain on.

The issue at hand is that we've discovered that checking an
autogenerated file into git causes issues ... hardly the first time
we've learned that lesson.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-docs] Conventions on naming

2014-02-05 Thread Mark McLoughlin
On Wed, 2014-02-05 at 11:52 +0100, Thierry Carrez wrote:
 Steve Gordon wrote:
  From: Anne Gentle anne.gen...@rackspace.com
  Based on today's Technical Committee meeting and conversations with the
  OpenStack board members, I need to change our Conventions for service names
  at
  https://wiki.openstack.org/wiki/Documentation/Conventions#Service_and_project_names
  .
 
  Previously we have indicated that Ceilometer could be named OpenStack
  Telemetry and Heat could be named OpenStack Orchestration. That's not the
  case, and we need to change those names.
 
  To quote the TC meeting, ceilometer and heat are other modules (second
  sentence from 4.1 in
  http://www.openstack.org/legal/bylaws-of-the-openstack-foundation/)
  distributed with the Core OpenStack Project.
 
  Here's what I intend to change the wiki page to:
   Here's the list of project and module names and their official names and
  capitalization:
 
  Ceilometer module
  Cinder: OpenStack Block Storage
  Glance: OpenStack Image Service
  Heat module
  Horizon: OpenStack dashboard
  Keystone: OpenStack Identity Service
  Neutron: OpenStack Networking
  Nova: OpenStack Compute
  Swift: OpenStack Object Storage
 
 Small correction. The TC had not indicated that Ceilometer could be
 named OpenStack Telemetry and Heat could be named OpenStack
 Orchestration. We formally asked[1] the board to allow (or disallow)
 that naming (or more precisely, that use of the trademark).
 
 [1]
 https://github.com/openstack/governance/blob/master/resolutions/20131106-ceilometer-and-heat-official-names
 
 We haven't got a formal and clear answer from the board on that request
 yet. I suspect they are waiting for progress on DefCore before deciding.
 
 If you need an answer *now* (and I suspect you do), it might make sense
 to ask foundation staff/lawyers about using those OpenStack names with
 the current state of the bylaws and trademark usage rules, rather than
 the hypothetical future state under discussion.

Basically, yes - I think having the Foundation confirm that it's
appropriate to use OpenStack Telemetry in the docs is the right thing.

There's an awful lot of confusion about the subject and, ultimately,
it's the Foundation staff who are responsible for enforcing (and giving
advice to people on) the trademark usage rules. I've cc-ed Jonathan so
he knows about this issue.

But FWIW, the TC's request is asking for Ceilometer and Heat to be
allowed use their Telemetry and Orchestration names in *all* of the
circumstances where e.g. Nova is allowed use its Compute name.

Reading again this clause in the bylaws:

  The other modules which are part of the OpenStack Project, but
   not the Core OpenStack Project may not be identified using the
   OpenStack trademark except when distributed with the Core OpenStack
   Project.

it could well be said that this case of naming conventions in the docs
for the entire OpenStack Project falls under the distributed with case
and it is perfectly fine to refer to OpenStack Telemetry in the docs.
I'd really like to see the Foundation staff give their opinion on this,
though.

Thanks,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Designating required use upstream code

2014-02-05 Thread Mark McLoughlin
On Wed, 2014-02-05 at 17:22 +0100, Thierry Carrez wrote:
 (This email is mostly directed to PTLs for programs that include one
 integrated project)
 
 The DefCore subcommittee from the OpenStack board of directors asked the
 Technical Committee yesterday about which code sections in each
 integrated project should be designated sections in the sense of [1]
 (code you're actually needed to run or include to be allowed to use the
 trademark). That determines where you can run alternate code (think:
 substitute your own private hypervisor driver) and still be able to call
 the result openstack.
 
 [1] https://wiki.openstack.org/wiki/Governance/CoreDefinition
 
 PTLs and their teams are obviously the best placed to define this, so it
 seems like the process should be: PTLs propose designated sections to
 the TC, which blesses them, combines them and forwards the result to the
 DefCore committee. We could certainly leverage part of the governance
 repo to make sure the lists are kept up to date.
 
 Comments, thoughts ?

I think what would be useful to the board is if we could describe at a
high level which parts of each project have a pluggable interface and
whether we encourage out-of-tree implementations of those pluggable
interfaces.

That's actually a pretty tedious thing to document properly - think
about e.g. whether we encourage out-of-tree WSGI middlewares.

There's a flip-side to this designated sections thing that bothers me
after talking it through with Michael Still - I think it's perfectly
reasonable for vendors to e.g. backport fixes to their products without
that backport ever seeing the light of day upstream (say it was too
invasive for the stable branch).

This can't be a case of e.g. enforcing the sha1 sums of files. If we
want to go that route, let's just use the AGPL :)

I don't have a big issue with the way the Foundation currently enforces
you must use the code - anyone who signs a trademark agreement with
the Foundation agrees to include the entirety of Nova's code. That's
very vague, but I assume the Foundation can terminate the agreement if
it thinks the other party is acting in bad faith.

Basically, I'm concerned about us swinging from a rather lax you must
include our code rule to an overly strict you must make no downstream
modifications to our code.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] why do we put a license in every file?

2014-02-05 Thread Mark McLoughlin
On Wed, 2014-02-05 at 16:29 +, Greg Hill wrote:
 I'm new, so I'm sure there's some history I'm missing, but I find it
 bizarre that we have to put the same license into every single file of
 source code in our projects.  In my past experience, a single LICENSE
 file at the root-level of the project has been sufficient to declare
 the license chosen for a project.  Github even has the capacity to
 choose a license and generate that file for you, it's neat. 

Take a look at this thread on legal-discuss last month:

  http://lists.openstack.org/pipermail/legal-discuss/2014-January/thread.html

But yeah, as others say - per-file license headers help make the license
explicit when it is copied to other projects.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] supported dependency versioning and testing

2014-02-21 Thread Mark McLoughlin
On Thu, 2014-02-20 at 10:31 -0800, Joe Gordon wrote:
 Hi All,
 
 A discussion recently came up inside of nova about what "supported
 version" for a dependency means.  For libvirt we gate on the
 minimal version that we support but for all python dependencies we
 gate on the highest version that passes our requirements. While we all
 agree that having two different ways of choosing which version to test
 (min and max) is bad, there are good arguments for doing both.
 
 testing most recent version:
 * We want to make sure we support the latest and greatest
 * Bug fixes
 * Quickly discover backwards incompatible changes so we can deal
 with them as they arise instead of in batch
 
 Testing lowest version supported:
 * Make sure we don't land any code that breaks compatibility with
 the lowest version we say we support

Good summary.

 A few questions and ideas on how to move forward.
 * How do other projects deal with this? This problem isn't unique
 in OpenStack.
 * What are the issues with making one gate job use the latest
 versions and one use the lowest supported versions?

I think this would be very useful.

Obviously it would take some effort on someone's part to set this up
initially, but I don't immediately see it being much of an ongoing
burden on developers.
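
Purely as an illustration (this isn't an existing infra script, and the
regex is deliberately naive), the pins for a "lowest supported versions"
job could even be derived mechanically from the requirements file by
turning each lower bound into an exact pin:

import re

def pin_to_minimums(requirements_text):
    """Rewrite each '>=X' lower bound as an '==X' pin."""
    pinned = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        # e.g. "SQLAlchemy>=0.7.8,<=0.7.99" becomes "SQLAlchemy==0.7.8"
        match = re.match(r'^([A-Za-z0-9._-]+)\s*>=\s*([^,\s]+)', line)
        if match:
            pinned.append('%s==%s' % (match.group(1), match.group(2)))
        else:
            pinned.append(line)  # unversioned or already pinned
    return '\n'.join(pinned)

print(pin_to_minimums("SQLAlchemy>=0.7.8,<=0.7.99\noslo.config>=1.2.0"))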

 * Only test some things on every commit or every day (periodic
 jobs)? But no one ever fixes those things when they break? who wants
 to own them? distros? deployers?

The gate job above would most likely lead to us trying hard to maintain
support for older versions.

A periodic job would either go stale or we'd keep it going simply by
dropping support for older libraries. (Maybe that's ok)

 * Other solutions?
 * Does it make sense to gate on the lowest version of libvirt but
 the highest version of python libs?

We might be unconsciously drawing a platform vs app line here - that
libvirt is part of the platform and the python library stack is part of
our app - while still giving a nod to supporting the notion that the
python library stack is part of the platform.

Put it another way - we'd be wary of dropping support for an older
libvirt (because it rules out support for a platform) but not so much
with dropping support for an older python library (because meh, it's not
*really* part of the platform).

Or another way, we give explicit guidance about what exact versions of
libvirt we support (i.e. test with specific distros and whatever
versions of libvirt they have) and leave those versions newer than the
oldest version we explicitly support as a grey area. Similarly, we give
explicit guidance about the exact python library stack we support (i.e.
what we test now in the gate) and leave the older versions as a grey
area.

Perhaps rather than focusing on making this absolutely black and white,
we should focus on better communicating what we actually focus our
testing on? (i.e. rather than making the grey areas black, improve the
white areas)

Concretely, for every commit merged, we could publish:

  - the set of commits tested
  - details of the jobs passed:
      - the distro
      - installed packages and versions
      - output of pip freeze
      - configuration used
      - tests passed
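
Something like the following record per job would capture all of that
(the field names and values here are invented for illustration, not an
existing schema):

test_record = {
    'commits_tested': ['4f2a9c0', '7b1d3e8'],
    'jobs': [{
        'name': 'gate-tempest-dsvm-full',
        'distro': 'Ubuntu 12.04',
        'packages': {'libvirt-bin': '0.9.8-2ubuntu17'},
        'pip_freeze': {'SQLAlchemy': '0.7.9', 'oslo.config': '1.2.1'},
        'configuration': 'devstack defaults',
        'tests_passed': 1742,
    }],
}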

Meh, feeling like I'm going off-topic a bit.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-19 Thread Mark McLoughlin
On Wed, 2014-03-19 at 10:17 +1300, Robert Collins wrote:
 So this came up briefly at the tripleo sprint, and since I can't seem
 to find a /why/ document
 (https://wiki.openstack.org/wiki/Marconi/Incubation#Raised_Questions_.2B_Answers
 and https://wiki.openstack.org/wiki/Marconi#Design don't supply this)

I think we need a slight reset on this discussion. The way this email
was phrased gives a strong sense of "Marconi is a dumb idea, it's going
to take a lot to persuade me otherwise".

That's not a great way to start a conversation, but it's easy to
understand - a TC member sees a project on the cusp of graduating and,
when they finally get a chance to look closely at it, a number of things
don't make much sense. "Wait! Stop! WTF!" is a natural reaction if you
think a bad decision is about to be made.

We've all got to understand how pressurized a situation these graduation
and incubation discussions are. Projects put an immense amount of work
into proving themselves worthy of being an integrated project, they get
fairly short bursts of interaction with the TC, TC members aren't
necessarily able to do a huge amount of due diligence in advance and yet
TC members are really, really keen to avoid either undermining a healthy
project around some cool new technology or undermining OpenStack by
including an unhealthy project or sub-par technology.

And then there's the time pressure where a decision has to be made by a
certain date and if that decision is "not this time", the six months
delay until the next chance for a positive decision can be really
draining on motivation and momentum when everybody had been so focused
on getting a positive decision this time around.

We really need cool heads here and, above all, to try our best to assume
good faith, intentions and ability on both sides.


Some of the questions Robert asked are common questions and I know they
were discussed during the incubation review. However, the questions
persist and it's really important that TC members (and the community at
large) feel they can stand behind the answers to those questions. If I'm
chatting to someone and they ask me "why does OpenStack need to
implement its own messaging broker?", I need to have a good answer.

How about we do our best to put the implications for the graduation
decision aside for a bit and focus on collaboratively pulling together a
FAQ that everyone can buy into? The raised questions and answers
section of the incubation review linked above is a good start, but I
think we can take this email as feedback that those questions and
answers need much improvement.

This could be a good pattern for all new projects - if the TC and the
new project can't work together to draft a solid FAQ like this, then
it's not a good sign for the project.

See below for my attempt to summarize the questions and how we might go
about answering them. Is this a reasonable start?

Mark.


Why isn't Marconi simply an API for provisioning and managing AMQP, Kestrel,
ZeroMQ, etc. brokers and queues? Why is a new broker implementation needed?

 = I'm not sure I can summarize the answer here - the need for a HTTP data
plane API, the need for multi-tenancy, etc.? Maybe a table listing the
required features and whether they're provided by these existing solutions.

Maybe there's also an element of "we think we can do a better job". If so,
the point probably worth addressing is "OpenStack shouldn't attempt to write
a new database, or a new hypervisor, or a new SDN controller, or a new block
storage implementation ... so why should we implement a new message
broker?" If this is just a bad analogy, explain why?

Implementing a message queue using an SQL DB seems like a bad idea, why is
Marconi doing that?

 = Perhaps explain why MongoDB is a good storage technology for this use case
and the SQLAlchemy driver is just a toy.

Marconi's default driver depends on MongoDB which is licensed under the AGPL.
This license is currently a no-go for some organizations, so what plans does
Marconi have to implement another production-ready storage driver that supports
all API features?

 = Discuss the Redis driver plans?

Is Marconi designed to be suitable for use by OpenStack itself?

 = Discuss that it's not currently in scope and why not. In what way does the
OpenStack use case differ from the applications Marconi's current API
is focused on?

How should a client subscribe to a queue?

 = Discuss that it's not by GET /messages but instead POST /claims?limit=N
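
(To make that last point concrete, a claim over HTTP looks roughly like
the sketch below. Illustrative only - the exact path, headers and payload
here are assumptions on my part, not a definitive description of the API:)

# Illustrative sketch; endpoint path, headers and payload are assumptions.
import json
import requests

MARCONI = 'http://marconi.example.com:8888/v1'
HEADERS = {'Client-ID': 'worker-1', 'Content-Type': 'application/json'}

# A consumer POSTs a claim rather than GETting messages: the server
# atomically marks up to 'limit' messages as owned by this client for
# 'ttl' seconds.
resp = requests.post(MARCONI + '/queues/myqueue/claims?limit=5',
                     headers=HEADERS,
                     data=json.dumps({'ttl': 300, 'grace': 60}))

for msg in resp.json():
    print('%s: %s' % (msg['href'], msg['body']))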




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Mark McLoughlin
On Thu, 2014-03-20 at 01:28 +, Joshua Harlow wrote:
 Proxying from yahoo's open source director (since he wasn't initially
 subscribed to this list, afaik he now is) on his behalf.
 
 From Gil Yehuda (Yahoo’s Open Source director).
 
 I would urge you to avoid creating a dependency between Openstack code
 and any AGPL project, including MongoDB. MongoDB is licensed in a very
 strange manner that is prone to creating unintended licensing mistakes
 (a lawyer’s dream). Indeed, MongoDB itself presents Apache licensed
 drivers – and thus technically, users of those drivers are not
 impacted by the AGPL terms. MongoDB Inc. is in the unique position to
 license their drivers this way (although they appear to violate the
 AGPL license) since MongoDB is not going to sue themselves for their
 own violation. However, others in the community create MongoDB drivers
 are licensing those drivers under the Apache and MIT licenses – which
 does pose a problem.
 
 Why? The AGPL considers 'Corresponding Source' to be defined as “the
 source code for shared libraries and dynamically linked subprograms
 that the work is specifically designed to require, such as by intimate
 data communication or control flow between those subprograms and other
 parts of the work. Database drivers *are* work that is designed to
 require by intimate data communication or control flow between those
 subprograms and other parts of the work. So anyone using MongoDB with
 any other driver now invites an unknown --  that one court case, one
 judge, can read the license under its plain meaning and decide that
 AGPL terms apply as stated. We have no way to know how far they apply
 since this license has not been tested in court yet.
 Despite all the FAQs MongoDB puts on their site indicating they don't
 really mean to assert the license terms, normally when you provide a
 license, you mean those terms. If they did not mean those terms, they
 would not use this license. I hope they intended to do something good
 (to get contributions back without impacting applications using their
 database) but, even good intentions have unintended consequences.
 Companies with deep enough pockets to be lawsuit targets, and
 companies who want to be good open source citizens face the problem
 that using MongoDB anywhere invites the future risk of legal
 catastrophe. A simple development change in an open source project can
 change the economics drastically. This is simply unsafe and unwise.
 
 OpenStack's ecosystem is fueled by the interests of many commercial
 ventures who wish to cooperate in the open source manner, but then
 leverage commercial opportunities they hope to create. I suggest that
 using MongoDB anywhere in this project will result in a loss of
 opportunity -- real or perceived, that would outweigh the benefits
 MongoDB itself provides.
 
 tl;dr version: If you want to use MongoDB in your company, that's your
 call. Please don't turn anyone who uses OpenStack components into
 unsuspecting MongoDB users. Instead, decouple the database from the
 project. It's not worth the legal risk, nor the impact on the
 Apache-ness of this project.

Thanks for that, Josh and Gil.

Rather than cross-posting, I think this MongoDB/AGPLv3 discussion should
continue on the legal-discuss mailing list:

  http://lists.openstack.org/pipermail/legal-discuss/2014-March/thread.html#174

Bear in mind that we (OpenStack, as a project and community) need to
judge whether this is a credible concern or not. If some users said they
were only willing to deploy Apache licensed code in their organization,
we would dismiss that notion pretty quickly. Is this AGPLv3 concern
sufficiently credible that OpenStack needs to take it into account when
making important decisions? That's what I'm hoping to get to in the
legal-discuss thread.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Mark McLoughlin
On Wed, 2014-03-19 at 12:37 -0700, Devananda van der Veen wrote:
 Let me start by saying that I want there to be a constructive discussion
 around all this. I've done my best to keep my tone as non-snarky as I could
 while still clearly stating my concerns. I've also spent a few hours
 reviewing the current code and docs. Hopefully this contribution will be
 beneficial in helping the discussion along.

Thanks, I think it does.

 For what it's worth, I don't have a clear understanding of why the Marconi
 developer community chose to create a new queue rather than an abstraction
 layer on top of existing queues. While my lack of understanding there isn't
 a technical objection to the project, I hope they can address this in the
 aforementioned FAQ.
 
 The reference storage implementation is MongoDB. AFAIK, no integrated
 projects require an AGPL package to be installed, and from the discussions
 I've been part of, that would be a show-stopper if Marconi required
 MongoDB. As I understand it, this is why sqlalchemy support was required
 when Marconi was incubated. Saying Marconi also supports SQLA is
 disingenuous because it is a second-class citizen, with incomplete API
 support, is clearly not the recommended storage driver, and is going to be
 unusuable at scale (I'll come back to this point in a bit).
 
 Let me ask this. Which back-end is tested in Marconi's CI? That is the
 back-end that matters right now. If that's Mongo, I think there's a
 problem. If it's SQLA, then I think Marconi should declare any features
 which SQLA doesn't support to be optional extensions, make SQLA the
 default, and clearly document how to deploy Marconi at scale with a SQLA
 back-end.
 
 
 Then there's the db-as-a-queue antipattern, and the problems that I have
 seen result from this in the past... I'm not the only one in the OpenStack
 community with some experience scaling MySQL databases. Surely others have
 their own experiences and opinions on whether a database (whether MySQL or
 Mongo or Postgres or ...) can be used in such a way _at_scale_ and not fall
 over from resource contention. I would hope that those members of the
 community would chime into this discussion at some point. Perhaps they'll
 even disagree with me!
 
 A quick look at the code around claim (which, it seems, will be the most
 commonly requested action) shows why this is an antipattern.
 
 The MongoDB storage driver for claims requires _four_ queries just to get a
 message, with a serious race condition (but at least it's documented in the
 code) if multiple clients are claiming messages in the same queue at the
 same time. For reference:
 
 https://github.com/openstack/marconi/blob/master/marconi/queues/storage/mongodb/claims.py#L119
 
 The SQLAlchemy storage driver is no better. It's issuing _five_ queries
 just to claim a message (including a query to purge all expired claims
 every time a new claim is created). The performance of this transaction
 under high load is probably going to be bad...
 
 https://github.com/openstack/marconi/blob/master/marconi/queues/storage/sqlalchemy/claims.py#L83
 
 Lastly, it looks like the Marconi storage drivers assume the storage
 back-end to be infinitely scalable. AFAICT, the mongo storage driver
 supports mongo's native sharding -- which I'm happy to see -- but the SQLA
 driver does not appear to support anything equivalent for other back-ends,
 eg. MySQL. This relegates any deployment using the SQLA backend to the
 scale of only what one database instance can handle. It's unsuitable for
 any large-scale deployment. Folks who don't want to use Mongo are likely to
 use MySQL and will be promptly bitten by Marconi's lack of scalability with
 this back end.
 
 While there is a lot of room to improve the messaging around what/how/why,
 and I think a FAQ will be very helpful, I don't think that Marconi should
 graduate this cycle because:
 (1) support for a non-AGPL-backend is a legal requirement [*] for Marconi's
 graduation;
 (2) deploying Marconi with sqla+mysql will result in an incomplete and
 unscalable service.
 
 It's possible that I'm wrong about the scalability of Marconi with sqla +
 mysql. If anyone feels that this is going to perform blazingly fast on a
 single mysql db backend, please publish a benchmark and I'll be very happy
 to be proved wrong. To be meaningful, it must have a high concurrency of
 clients creating and claiming messages with (num queues) < (num clients)
 < (num messages), and all clients polling on a reasonably short interval,
 based on what ever the recommended client-rate-limit is. I'd like the test
 to be repeated with both Mongo and SQLA back-ends on the same hardware for
 comparison.

My guess (and it's just a guess) is that the Marconi developers almost
wish their SQLA driver didn't exist after reading your email because of
the confusion it's causing. My understanding is that the SQLA driver is
not intended for production usage.

If Marconi just had a MongoDB driver, I think 

Re: [openstack-dev] [legal-discuss] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Mark McLoughlin
On Thu, 2014-03-20 at 12:07 +0100, Thierry Carrez wrote:
 Monty Taylor wrote:
  On 03/20/2014 01:30 AM, Radcliffe, Mark wrote:
  The problem with AGPL is that the scope is very uncertain and the
  determination of the  consequences are very fact intensive. I was the
  chair of the User Committee in developing the GPLv3 and I am therefor
  quite familiar with the legal issues.  The incorporation of AGPLv3
  code Into OpenStack Project  is a significant decision and should not
  be made without input from the Foundation. At a minimum, the
  inclusion of APLv3 code means that the OpenStack Project is no longer
  solely an Apache v2 licensed project because AGPLv3 code cannot be
  licensed under Apache v. 2 License.  Moreover, the inclusion of such
  code is inconsistent with the current CLA provisions.
 
  This code can be included but it is an important decision that should
  be made carefully.
  
  I agree - but in this case, I think that we're not talking about
  including AGPL code in OpenStack as much as we are talking about using
  an Apache2 driver that would talk to a server that is AGPL ... if the
  deployer chooses to install the AGPL software. I don't think we're
  suggesting that downloading or installing openstack itself would involve
  downloading or installing AGPL code.
 
 Yes, the issue here is more... a large number of people want to stay
 away from AGPL. Should the TC consider adding to the OpenStack
 integrated release a component that requires AGPL software to be
 installed alongside it ? It's not really a legal issue (hence me
 stopping the legal-issues cross-posting).

We need to understand the reasons people want to stay away from the
AGPL. Those reasons appear to be legal reasons, and not necessarily
well founded. I think legal-discuss can help tease those out.

I don't (yet) accept that there's a reasonable enough concern for the
OpenStack project to pander to.

I'm no fan of the AGPL, but we need to be able to articulate any policy
decision we make here beyond "some people don't like the AGPL".

For example, if we felt the AGPL fears weren't particularly well founded
then we could make a policy decision that projects should have an
abstraction that would allow those with AGPL fears add support for
another technology ... but that the project wouldn't be required to do
so itself before graduating.

 This was identified early on as a concern with Marconi and the SQLA
 support was added to counter that concern. The question then becomes,
 how usable this SQLA option actually is ? If it's sluggish, unusable in
 production or if it doesn't fully support the proposed Marconi API, then
 I think we still have that concern.

I understood that a future Redis driver was what the Marconi team had in
mind to address this concern and the SQLA driver wasn't intended for
production use.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Multiple patches in one review

2014-03-25 Thread Mark McLoughlin
On Mon, 2014-03-24 at 10:49 -0400, Russell Bryant wrote:
 Gerrit support for a patch series could certainly be better.

There has long been talk about gerrit getting topic review
functionality, whereby you could e.g. approve a whole series of patches
from a topic view.

See:

  https://code.google.com/p/gerrit/issues/detail?id=51
  https://groups.google.com/d/msg/repo-discuss/5oRra_tLKMA/rxwU7pPAQE8J

My understanding is there's a fork of gerrit out there with this
functionality that some projects are using successfully.

Mark.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack vs. SQLA 0.9

2014-03-25 Thread Mark McLoughlin
FYI, allowing 0.9 recently merged into openstack/requirements:

  https://review.openstack.org/79817

This is a good example of how we should be linking gerrit and mailing
list discussions together more. I don't think the gerrit review was
linked in this thread nor was the mailing list discussion linked in the
gerrit review.

Mark.

On Thu, 2014-03-13 at 22:45 -0700, Roman Podoliaka wrote:
 Hi all,
 
 I think it's actually not that hard to fix the errors we have when
 using SQLAlchemy 0.9.x releases.
 
 I uploaded two changes to Nova to fix unit tests:
 - https://review.openstack.org/#/c/80431/ (this one should also fix
 the Tempest test run error)
 - https://review.openstack.org/#/c/80432/
 
 Thanks,
 Roman
 
 On Thu, Mar 13, 2014 at 7:41 PM, Thomas Goirand z...@debian.org wrote:
  On 03/14/2014 02:06 AM, Sean Dague wrote:
  On 03/13/2014 12:31 PM, Thomas Goirand wrote:
  On 03/12/2014 07:07 PM, Sean Dague wrote:
  Because of where we are in the freeze, I think this should wait until
  Juno opens to fix. Icehouse will only be compatible with SQLA 0.8, which
  I think is fine. I expect the rest of the issues can be addressed during
  Juno 1.
 
  -Sean
 
  Sean,
 
  No, it's not fine for me. I'd like things to be fixed so we can move
  forward. Debian Sid has SQLA 0.9, and Jessie (the next Debian stable)
  will be released SQLA 0.9 and with Icehouse, not Juno.
 
  We're past freeze, and this requires deep changes in Nova DB to work. So
  it's not going to happen. Nova provably does not work with SQLA 0.9, as
  seen in Tempest tests.
 
-Sean
 
  It'd be nice if we considered more the fact that OpenStack, at some
  point, gets deployed on top of distributions... :/
 
  Anyway, if we can't do it because of the freeze, then I will have to
  carry the patch in the Debian package. Never the less, someone will have
  to work and fix it. If you know how to help, it'd be very nice if you
  proposed a patch, even if we don't accept it before Juno opens.
 
  Thomas Goirand (zigo)
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze] [horizon] Exception request: python-keystoneclient=0.7.0

2014-03-27 Thread Mark McLoughlin
On Thu, 2014-03-27 at 13:53 +, Julie Pichon wrote:
 Hi,
 
 I would like to request a depfreeze exception to bump up the keystone
 client requirement [1], in order to reenable the ability for users to
 update their own password with Keystone v3 in Horizon in time for
 Icehouse [2]. This capability is requested by end-users quite often but
 had to be deactivated at the end of Havana due to some issues that are
 now resolved, thanks to the latest keystone client release. Since this
 is a library we control, hopefully this shouldn't cause too much trouble
 for packagers.
 
 Thank you for your consideration.
 
 Julie
 
 
 [1] https://review.openstack.org/#/c/83287/
 [2] https://review.openstack.org/#/c/59918/

IMHO, it's hard to imagine that Icehouse requiring a more recent version
of keystoneclient would be a problem or risk for anyone.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.messaging 1.3.0 released

2014-04-01 Thread Mark McLoughlin
Hi

oslo.messaging 1.3.0 is now available on pypi and should be available in
our mirror shortly.

Full release notes are available here:

  http://docs.openstack.org/developer/oslo.messaging/

The master branch will soon be open for Juno targeted development and
we'll publish 1.4.0aN beta releases from master before releasing 1.4.0
for the Juno release.

A stable/icehouse branch will be created for important bugfixes that
will be released as 1.3.N.

Thanks,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] use of the oslo namespace package

2014-04-08 Thread Mark McLoughlin
On Mon, 2014-04-07 at 15:24 -0400, Doug Hellmann wrote:
 We can avoid adding to the problem by putting each new library in its
 own package. We still want the Oslo name attached for libraries that
 are really only meant to be used by OpenStack projects, and so we need
 a naming convention. I'm not entirely happy with the crammed
 together approach for oslotest and oslosphinx. At one point Dims and
 I talked about using a prefix oslo_ instead of just oslo, so we
 would have oslo_db, oslo_i18n, etc. That's also a bit ugly,
 though. Opinions?

Uggh :)

 Given the number of problems we have now (I help about 1 dev per week
 unbreak their system),

I've seen you do this - kudos on your patience.

  I think we should also consider renaming the
 existing libraries to not use the namespace package. That isn't a
 trivial change, since it will mean updating every consumer as well as
 the packaging done by distros. If we do decide to move them, I will
 need someone to help put together a migration plan. Does anyone want
 to volunteer to work on that?

One thing to note for any migration plan on this - we should use a new
pip package name for the new version so people with e.g.

   oslo.config>=1.2.0

don't automatically get updated to a version which has the code in a
different place. You would need to change to e.g.

  osloconfig>=1.4.0

 Before we make any changes, it would be good to know how bad this
 problem still is. Do developers still see issues on clean systems, or
 are all of the problems related to updating devstack boxes? Are people
 figuring out how to fix or work around the situation on their own? Can
 we make devstack more aggressive about deleting oslo libraries before
 re-installing them? Are there other changes we can make that would be
 less invasive?

I don't have any great insight, but hope we can figure something out.
It's crazy to think that even though namespace packages appear to work
pretty well initially, it might end up being so unworkable we would need
to switch.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Change ListOpt and DictOpt default values

2013-10-10 Thread Mark McLoughlin
Hi Flavio,

On Thu, 2013-10-10 at 14:40 +0200, Flavio Percoco wrote:
 Greetings,
 
 I'd like to propose to change both ListOpt and DictOpt default values
 to [] and {} respectively. These values are, IMHO, saner defaults than
 None for these 2 options and behavior won't be altered - unless `is not
 None` is being used.
 
 Since I may be missing some history, I'd like to ask if there's a
 reason why None was kept as the default `default` value for these 2 options?
 
 As mentioned above, this change may be backward incompatible in cases
 like:
 
 if conf.my_list_opt is None:
 
 
 Does anyone know if there are cases like this?

I'd need a lot of persuasion that this won't break some use of
oslo.config somewhere. Not "why would anyone do that?" hand-waving.
People do all sorts of weird stuff with APIs.

If people really think this is a big issue, I'd make it opt-in. Another
boolean flag like the recently added validate_default_values.

As regards bumping the major number and making incompatible changes - I
think we should only consider that when there's a tonne of legacy
compatibility stuff that we want to get rid of. For example, if we had a
bunch of opt-in flags like these, then there'd come a point where we'd
say let's do 2.0 and clean those up. However, doing such a thing is
disruptive and I'd only be in favour of it if the backwards compat
support was really getting in our way.

Thanks,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Mark McLoughlin
On Mon, 2013-10-21 at 10:28 -0700, Clint Byrum wrote:
 Excerpts from Robert Collins's message of 2013-10-20 02:25:43 -0700:
  On 20 October 2013 02:35, Monty Taylor mord...@inaugust.com wrote:
  
   However, even as a strong supporter of accurate license headers, I would
   like to know more about the FTP masters issue. I dialog with them, as
   folks who deal with this issue and its repercutions WAY more than any of
   us might be really nice.
  
  Debian takes its responsibilities under copyright law very seriously.
  The integrity of the debian/copyright metadata is checked on the first
  upload for a package (and basically not thereafter, which is either
  convenient or pragmatic or a massive hole in rigour depending on your
  point of view). The goal is to ensure that a) the package is in the
  right repository in Debian (main vs nonfree) and b) that Debian can
  redistribute it and c) that downstreams of Debian who decide to use
  the package can confidently do so. Files with differing redistribution
  licenses that aren't captured in debian/copyright are an issue for c);
  files with different authors and the same redistribution licence
  aren't a problem for a/b/c *but* the rules the FTP masters enforce
  don't make that discrimination: the debian/copyright file needs to be
  a concordance of both copyright holders and copyright license.
  
  Personally, I think it should really only be a concordance of
  copyright licenses, and the holders shouldn't be mentioned, but thats
  not the current project view.
  
 
 The benefit to this is that by at least hunting down project leadership
 and getting an assertion and information about the copyright holder
 situation, a maintainer tends to improve clarity upstream.

By "improve clarity", you mean compile an accurate list of all
copyright holders? Why is this useful information?

Sure, we could also improve clarity by compiling a list of all the
cities in the world where some OpenStack code has been authored ... but
*why*?

  Often things
 that are going into NEW are, themselves, new to the world, and often
 those projects have not done the due diligence to state their license
 and take stock of their copyright owners.

I think OpenStack has done plenty of due diligence around the licensing
of its code and that all copyright holders agree to license their code
under those terms.

 I think that is one reason
 the process survives despite perhaps going further than is necessary to
 maintain Debian's social contract integrity.

This is related to some social contract? Please explain.

 I think OpenStack has taken enough care to ensure works are attributable
 to their submitters that Debian should have a means to accept that
 this project is indeed licensed as such. Perhaps a statement detailing
 the process OpenStack uses to ensure this can be drafted and included
 in each repository. It is not all that dissimilar to what MySQL did by
 stating the OpenSource linking exception for libmysqlclient's
 GPL license explicitly in a file that is now included with the tarballs.

You objected to someone else on this thread conflating copyright
ownership and licensing. Now you do the same. There is absolutely no
ambiguity about OpenStack's license.

Our CLA process for new contributors is documented here:

  
https://wiki.openstack.org/wiki/How_To_Contribute#Contributors_License_Agreement

The key thing for Debian to understand is that all OpenStack
contributors agree to license their code under the terms of the Apache
License. I don't see why a list of copyright holders would clarify the
licensing situation any further.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Mark McLoughlin
On Tue, 2013-10-22 at 01:55 +0800, Thomas Goirand wrote:
 On 10/21/2013 09:28 PM, Mark McLoughlin wrote:
  In other words, what exactly is a list of copyright holders good for?
 
 At least avoid pain and reject when uploading to the Debian NEW queue...

I'm sorry, that is downstream Debian pain. It shouldn't be inflicted on
upstream unless it is generally a useful thing.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-22 Thread Mark McLoughlin
On Tue, 2013-10-22 at 14:09 +0800, Thomas Goirand wrote:
 On 10/22/2013 04:55 AM, Mark McLoughlin wrote:
  Talk to the Trove developers and politely ask them whether the copyright
  notices in their code reflects what they see as the reality.
  
  I'm sure it would help them if you pointed out to them some significant
  chunks of code from the commit history which don't appear to have been
  written by a HP employee.
 
 I did this already. Though if I raised the topic in this list (as
 opposed to contact the Trove maintainers privately), this was for a
 broader scope, to make sure it doesn't happen again and again.
 
  Simply adding a Rackspace copyright notice to a file or two which has
  had a significant contribution by someone from Rackspace would be enough
  to resolve your concerns completely.
 
 But how to make sure that there's no *other* copyright holders, and that
 my debian/copyright is right? Currently, there's no way...

I've never seen a project where copyright headers weren't occasionally
missing some copyright holders. I suspect Debian has managed just fine
with those projects and can manage just fine with OpenStack's copyright
headers too.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-22 Thread Mark McLoughlin
On Tue, 2013-10-22 at 14:19 +0800, Thomas Goirand wrote:
 On 10/22/2013 04:48 AM, Mark McLoughlin wrote:
  On Tue, 2013-10-22 at 01:55 +0800, Thomas Goirand wrote:
  On 10/21/2013 09:28 PM, Mark McLoughlin wrote:
  In other words, what exactly is a list of copyright holders good for?
 
  At least avoid pain and reject when uploading to the Debian NEW queue...
  
  I'm sorry, that is downstream Debian pain.
 
 I agree, it is painful, though it is a necessary pain. Debian has always
 been strict with copyright stuff. This should be seen as a freeness Q/A,
 so that we make sure everything is 100% free, rather than an annoyance.

A list of copyright holders does nothing to improve the freeness of
OpenStack.

  It shouldn't be inflicted on
  upstream unless it is generally a useful thing.
 
 There's no other ways to do things, unfortunately. How would I make sure
 a software is free, and released in the correct license, if upstream
 doesn't declare it properly? There's been some cases on packages I
 wanted to upload, where there was just:
 
 Classifier: License :: OSI Approved :: MIT License
 
 in *.egg-info/PKG-INFO, and that's it. If the upstream authors don't fix
 this by adding a clear LICENSE file (with the correct definition of the
 MIT License, which is confusing because there's been many of them), then
 the package couldn't get in. Lucky, upstream authors of that python
 module fixed that, and the package was re-uploaded and validated by the
 FTP masters.

I fully understand the importance of making it completely clear what the
license of a project is and have had to package projects that don't make
this clear. Fedora's guidelines on the subject are e.g.

https://fedoraproject.org/wiki/Packaging:LicensingGuidelines#License_Text

 I'm not saying that this was the case for Trove (the exactitude of the
 copyright holder list in debian/copyright is less of an issue), though
 I'm just trying to make you understand that you can't just ignore the
 issue and say I don't care, that's Debian's problem. This simply
 doesn't work (unless you would prefer OpenStack package to *not* be in
 Debian, which I'm sure isn't the case here).

I can say that Debian policies no-one can provide any justification for
are Debian's problem. And that's the case with this supposed "Debian
requires a complete list of copyright holders" policy.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC: Filtering boring commit subjects from ChangeLog

2013-10-28 Thread Mark McLoughlin
On Sun, 2013-10-27 at 21:50 -0400, Monty Taylor wrote:
 Hey all!
 
 We're adding a little bit of code to pbr to make the auto-generated
 ChangeLog files a bit more useful. Currently, they are just the git
 changelog, which is kinda useless. So we wrote this:
 
 https://review.openstack.org/#/c/52367/
 
 which produces output like this:
 
 http://paste.openstack.org/show/4  # on a tag
 and
 http://paste.openstack.org/show/50001  # not on a tag
 
 It underscores the need for commit subjects to communicate something,
 which is a good thing.
 
 With that there, it makes changes like:
 * Updated from global requirements
 and
 * Update my mailmap
 
 Kinda lame in the changelog. So we were thinking - what if we recognized
 one or more subject tags to skip things going into the ChangeLog file.
 My first thought was:
 
 NIT:  # use this for tiny commits that are not really worth changelogging
 
 and
 
 AUTO: # use for commits generated by our machines, such as the global
 requirements sync or the Translations sync.
 
 What do people think? Should we bother? Are those good? Would something
 else be better? It's sort of an opt-in feature, so adding it SHOULDN'T
 bork too many people.

So long as it isn't so SHOUTY it could work out nicely :)

Getting these ChangeLogs published is goodness - the more people see
their one-line message around the place, the more effort they'll make
to write a decent one.
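
The mechanics are trivial, of course - something along these lines (a toy
sketch, not the actual pbr patch under review) is all it takes to honour
the tags when generating the file:

# Toy sketch only; the real implementation lives in the pbr review above.
SKIP_PREFIXES = ('NIT:', 'AUTO:')

def changelog_entries(commit_subjects):
    """Yield only the commit subjects worth putting in the ChangeLog."""
    for subject in commit_subjects:
        if subject.startswith(SKIP_PREFIXES):
            continue
        yield '* %s' % subject

print('\n'.join(changelog_entries([
    'Add graceful shutdown to the RPC server',
    'AUTO: Updated from global requirements',
    'NIT: Update my mailmap',
])))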

Cheers,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-10-31 Thread Mark McLoughlin
On Thu, 2013-10-31 at 15:37 +1300, Robert Collins wrote:
 This is a bit of a social norms thread
 
 I've been consistently asking for tests in reviews for a while now,
 and I get the occasional push-back. I think this falls into a few
 broad camps:
 
 A - there is no test suite at all, adding one in unreasonable
 B - this thing cannot be tested in this context (e.g. functional tests
 are defined in a different tree)
 C - this particular thing is very hard to test
 D - testing this won't offer benefit
 E - other things like this in the project don't have tests
 F - submitter doesn't know how to write tests
 G - submitter doesn't have time to write tests

Nice breakdown.

 Now, of these, I think it's fine not add tests in cases A, B, C in
 combination with D, and D.
 
 I don't think E, F or G are sufficient reasons to merge something
 without tests, when reviewers are asking for them. G in the special
 case that the project really wants the patch landed - but then I'd
 expect reviewers to not ask for tests or to volunteer that they might
 be optional.

I totally agree with the sentiment but, especially when it's a newcomer
to the project, I try to put myself in the shoes of the patch submitter
and double-check whether what we're asking is reasonable.

For example, if someone shows up to Nova with their first OpenStack
contribution, it fixes something which is unquestionably a bug - think
typo like raise NotFund('foo') - and testing this code patch requires
more than adding a simple new scenario to existing tests ...

That, for me, is an example where "-1, we need a test! untested code is
broken!" is really shooting the messenger, not valuing the newcomer's
contribution and risks turning that person off the project forever.
Reviewers being overly aggressive about this where the project doesn't
have full test coverage to begin with really makes us seem unwelcoming.

In cases like that, I'd be of a mind to go +2 with "Awesome! Thanks for
catching this! It would be great to have a unit test for this, but it's
clear the current code is broken so I'm fine with merging the fix
without a test." You could say it's now the reviewer's responsibility to
merge a test, but if that requirement then puts reviewers off even
reviewing such a patch, then that doesn't help either.

So, with all of this, let's make sure we don't forget to first
appreciate the effort that went into submitting the patch that lacks
tests.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] welcoming new committers (was Re: When is it okay for submitters to say 'I don't want to add tests' ?)

2013-10-31 Thread Mark McLoughlin
On Thu, 2013-10-31 at 11:49 -0700, Stefano Maffulli wrote:

 Another idea that Tom suggested is to use gerrit automation to send back
 to first time committers something in addition to the normal 'your patch
 is waiting for review' message. The message could be something like:
 
  thank you for your first contribution to OpenStack. Your patch will
  now be tested automatically by OpenStack testing frameworks and once
  the automatic tests pass, it will be reviewed by other friendly
  developers. They will give you comments and may require you to refine
  it.
  
  Nobody gets his patch approved at first try so don't be concerned
  when someone will require you to do more iterations.
  
  Patches usually take 3 to 7 days to be approved so be patient and be
  available on IRC to ask and answer questions about your work. The
  more you participate in the community the more rewarding it is for
  you. You may also notice that the more you get to know people and get
  to be known, the faster your patches will be reviewed and eventually
  approved. Get to know others and be known by doing code reviews:
  anybody can and should do it.
...

Very nicely done. I really like it.

Thanks,
Mark.

 PS I put the text on
 https://etherpad.openstack.org/p/welcome-new-committers for refinements.
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Proposal to recognize indirect contributions to our code base

2013-11-11 Thread Mark McLoughlin
Hi Nick,

On Mon, 2013-11-11 at 15:20 +0100, Nicolas Barcet wrote:
 Dear TC members,
 
 Our companies are actively encouraging our respective customers to have the
 patches they mission us to make be contributed back upstream.  In order to
 encourage this behavior from them and others, it would be nice that if
 could gain some visibility as sponsors of the patches in the same way we
 get visibility as authors of the patches today.
 
 The goal here is not to provide yet another way to count affiliations of
 direct contributors, nor is it a way to introduce sales pitches in contrib.
  The only acceptable and appropriate use of the proposal we are making is
 to signal when a patch is made by a contributor for another company than the
 one he is currently employed by.
 
 For example if I work for a company A and write a patch as part of an
 engagement with company B, I would signal that Company B is the sponsor of
 my patch this way, not Company A.  Company B would under current
 circumstances not get any credit for their indirect contribution to our
 code base, while I think it is our intent to encourage them to contribute,
 even indirectly.
 
 To enable this, we are proposing that the commit text of a patch may
 include a
sponsored-by: sponsorname
 line which could be used by various tools to report on these commits.
  Sponsored-by should not be used to report on the name of the company the
 contributor is already affiliated to.

Honestly, I've an immediately negative reaction to the prospect of e.g.

  Sponsored-By: Red Hat
  Sponsored-By: IBM

appearing in our commit messages.

I feel strongly that the project is first and foremost a community of
individuals, and we instinctively push as much of the corporate backing
side of things outside of the project as we can. We try to spend as
little time as possible talking about our affiliations.

And, IMHO, the git commit log is particularly sacred ground - almost
above anything else, it is a place for purely technical details.

However, I do think we'll be able to figure out some way of making it
easier for tools to track more complex affiliations.

Our affiliation databases are all keyed off email addresses right now,
so how about if we allowed for encoding affiliation/sponsorship in
addresses? e.g.

  Author: Mark McLoughlin markmc+...@redhat.com

and we could register that address as work done by Mark on behalf of
IBM ?

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Proposal to recognize indirect contributions to our code base

2013-11-11 Thread Mark McLoughlin
On Mon, 2013-11-11 at 11:41 -0500, Russell Bryant wrote:
 On 11/11/2013 10:57 AM, Mark McLoughlin wrote:
  Hi Nick,
  
  On Mon, 2013-11-11 at 15:20 +0100, Nicolas Barcet wrote:
  Dear TC members,
 
  Our companies are actively encouraging our respective customers to have the
  patches they mission us to make be contributed back upstream.  In order to
  encourage this behavior from them and others, it would be nice that if
  could gain some visibility as sponsors of the patches in the same way we
  get visibility as authors of the patches today.
 
  The goal here is not to provide yet another way to count affiliations of
  direct contributors, nor is it a way to introduce sales pitches in contrib.
   The only acceptable and appropriate use of the proposal we are making is
  to signal when a patch is made by a contributor for another company than the
  one he is currently employed by.
 
  For example if I work for a company A and write a patch as part of an
  engagement with company B, I would signal that Company B is the sponsor of
  my patch this way, not Company A.  Company B would under current
  circumstances not get any credit for their indirect contribution to our
  code base, while I think it is our intent to encourage them to contribute,
  even indirectly.
 
  To enable this, we are proposing that the commit text of a patch may
  include a
 sponsored-by: sponsorname
  line which could be used by various tools to report on these commits.
   Sponsored-by should not be used to report on the name of the company the
  contributor is already affiliated to.
  
  Honestly, I've an immediately negative reaction to the prospect of e.g.
  
Sponsored-By: Red Hat
Sponsored-By: IBM
  
  appearing in our commit messages.
  
  I feel strongly that the project is first and foremost a community of
  individuals, and we instinctively push as much of the corporate backing
  side of things outside of the project as we can. We try to spend as
  little time as possible talking about our affiliations.
  
  And, IMHO, the git commit log is particularly sacred ground - almost
  above anything else, it is a place for purely technical details.
 
 This was exactly my reaction, as well.  I just hadn't been able to come
 up with a good alternate proposal, yet.
 
  However, I do think we'll be able to figure out some way of making it
  easier for tools to track more complex affiliations.
  
  Our affiliation databases are all keyed off email addresses right now,
  so how about if we allowed for encoding affiliation/sponsorship in
  addresses? e.g.
  
Author: Mark McLoughlin markmc+...@redhat.com
  
  and we could register that address as work done by Mark on behalf of
  IBM ?
 
 That doesn't seem any better to me.  It actually seems more likely to
 break, since someone could be using an email address with '+' in it for
 some other reason, right?

Oh, I'm not saying we parse the ibm bit and key off that. Just that we
can associate affiliation with email address while still allowing people
to use the same email address for everything.

A better example - say Robert De Niro works for Nebula but uses his
gmail address for everything:

  Author: Robert De Niro taxidri...@gmail.com

but if he did some work sponsored by the NSA, he could do:

  Author: Robert De Niro taxidriver+soldmys...@gmail.com

and we'd have the tracking tools associate the first email address with
Nebula and the second with the NSA.
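
Concretely - and this is just a sketch with made-up addresses, not how
the existing tooling works - the tracking side stays a dumb lookup keyed
on the full, possibly plus-addressed, author address:

# Sketch only: fictional addresses, purely to show that a stats tool can
# key affiliation off the full author address, plus-tag and all.
affiliations = {
    'taxidriver@gmail.com': 'Nebula',
    'taxidriver+nsa@gmail.com': 'NSA',
}

def affiliation_for(author_email):
    return affiliations.get(author_email.lower(), 'unknown')

print(affiliation_for('taxidriver+nsa@gmail.com'))  # NSA
print(affiliation_for('taxidriver@gmail.com'))      # Nebula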

 I think it may be worth looking at this from a different angle.  Perhaps
 we should tone down the focus on company metrics, and perhaps remove
 them completely from anything we control or have influence over.

I'm down with that. We've definitely jumped the shark on this.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Bad review patterns

2013-11-11 Thread Mark McLoughlin
On Fri, 2013-11-08 at 09:32 +1300, Robert Collins wrote:
 On 7 November 2013 13:15, Day, Phil philip@hp.com wrote:
 
 
 
 
  Core reviewers look for the /comments/ from people, not just the votes. A
  +1 from someone that isn't core is meaningless unless they are known to be
  a thoughtful code reviewer. A -1 with no comment is also bad, because it
  doesn't help the reviewee get whatever the issue is fixed.
 
  It's very much not OK to spend an hour reviewing something and then +1
  with no comment: if I, and I think any +2er across the project see a patch 
  that
  needs an hour of review, with a commentless +1, we'll likely discount the 
  +1
  as being meaningful.
 
 
  I don't really get what you're saying here Rob. It seems to me an
  almost inevitable part of the review process that useful comments are
  going to be mostly negative. If someone has invested that amount of
  effort because the patch is complex, or it took them a while to work
  their way back into that part of the system, etc., but having given
  the code careful consideration they decide it's good - do you want
  comments in there saying "I really like your code", "Well done on
  fixing such a complex problem" or some such?
 
 Negative comments are fine: I was saying that statistically, having an
 hour-long patch (which BTW puts it waaay past the diminishing returns
 on patch size tradeoff) to review and then having nothing at all to
 say about it is super suspect. Sure, having nothing to say on a 10
 line patch - hit a +1, move on. But something that takes an hour to
 review? I'd expect at minimum the thing to prompt some suggestions
 *even while it gets a +1*.
 
  I just don't see how you can use a lack or presence of positive
  feedback in a +1 as any sort of indication of the quality of that +1.
  Unless you start asking reviewers to précis the change in their own
  words to show that they understood it, I don't really see how
  additional positive comments help in most cases. Perhaps if you have
  some specific examples of this it would help to clarify.
 
 It might even be negative feedback. Like - this patch is correct but
 the fact it had to be done as one patch shows our code structure here
 is wonky. If you could follow up with something to let us avoid future
 mammoth patches, that would be awesome.
 
 Again, I'm talking about hour long reviews, which is the example Radomir gave.
 
 Another way of presenting what I'm saying is this: Code reviews are a
 dialogue. How often in a discussion with a live human would you have
 no feedback at all, if they were reading a speech to you. You might go
 'that was a great speech' (+2) and *still* have something to add. As
 an observer given a very long speech, and an audience, I'd suspect
 folk went to sleep if they didn't have commentary :)

Totally agree with you here. I almost completely discount +1s with no
comments on a large patch.

I make a habit of leaving comments in reviews - positive, negative,
neutral, whatever. If I have something to say which might be useful to
the author, other reviewers, my future self, whatever ... then I'll say
it.

e.g. if I spend 10 minutes looking at one part of a patch, ultimately
convincing myself that there really is no better approach and the author
has made the right tradeoffs ... then I'll say it. I'll briefly describe
the tradeoffs, the other options that I guess the author considered and
discounted.

I sometimes feel guilty about this because I know patch authors just
want their +2 and often don't want to read through my verbiage ... but,
as you say, this is a dialogue and the dialogue can yield some
interesting thoughts and ideas.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Bad review patterns

2013-11-11 Thread Mark McLoughlin
On Thu, 2013-11-07 at 20:40 -0500, David Ripton wrote:
 On 11/07/2013 07:54 PM, Sean Dague wrote:
  On 11/08/2013 01:37 AM, Pedro Roque Marques wrote:
  Radomir,
  An extra issue that i don't believe you've covered so far is about comment 
  ownership. I've just read an email on the list that follows a pattern that 
  i've heard many complaints about:
 -1 with a reasonable comment, submitter addresses the comment, reviewer 
  never comes back.
 
  Reviewers do need to allocate time to come back and follow up on the 
  answers to their comments.
 
  Perhaps there is an issue with the incentive system. You can earn karma by 
  doing a code review... certainly you want to incentivise developers that 
  help the project by improving the code quality. But if the incentive 
  system allows for drive by shooting code reviews that can be a problem.
 
  It's not really an incentive system problem, this is some place where
  there are some gerrit limitations (especially when your list of reviewed
  code is long). Hopefully once we get a gerrit upgrade we can dashboard
  out some new items like that via the new rest API.
 
  I agree that reviewers could be doing better. But definitely also
  realize that part of this is just that there is *so* much code to review.
 
  Realize that most core reviewers aren't ignoring or failing to come back
  on patches intentionally. There is just *so* much of it. I feel guilty
  all the time by how big a review queue I have, but I also need a few
  hours a day not doing OpenStack (incredible to believe). This is where
  non core reviewers can really help in addressing the first couple of
  rounds of review to prune and improve the easy stuff.
 
  We're all in this together,
 
 Is there a way for Gerrit to only send email when action is required, 
 rather than on any change to any review you've touched?  If Gerrit sent 
 less mail, it would be easier to treat its mails as a critical call to 
 action to re-review.  (There's probably a way to use fancy message 
 filtering to accomplish this, but that would only work for people 
 willing/able to set up such filtering.)

I know you're discounting filtering here, but FWIW I filter on the email
body containing:

  Gerrit-Reviewer: Mark McLoughlin

so that I have all email related to reviews I'm subscribed to in a
single folder. I try hard to stay on top of this folder to avoid being a
drive-by-reviewer.

I group the mails by thread, so I don't mind all the email gerrit sends
out but if e.g. I wanted to only see notifications of new patch sets I'd
filter on:

  Gerrit-MessageType: newpatchset

Mark.




Re: [openstack-dev] [Nova] Configuration validation

2013-11-11 Thread Mark McLoughlin
Hi Nikola,

On Mon, 2013-11-11 at 12:44 +0100, Nikola Đipanov wrote:
 Hey all,
 
 During the summit session on the VMware driver roadmap, a topic of
 validating the passed configuration prior to starting services came up
 (see [1] for more detail on how it's connected to that specific topic).
 
 Several ideas were thrown around during the session mostly documented in
 [1].
 
 There are a few more cases when something like this could be useful (see
 bug [2] and related patch [3]), and I was wondering if a slightly
 different approach might be useful. For example, use an already existing
 validation hook in the service class [4] to call into a validation
 framework that will potentially stop the service with proper
 logging/notifications. The obvious benefit would be that there is no
 pre-run required from the user, and the danger of running a
 misconfigured stack is smaller.

One thing worth trying would be to encode the validation rules in the
config option declaration.

Some rules could be straightforward, like:

opts = [
    StrOpt('foo_url',
           validate_rule=cfg.MatchesRegexp('(git|http)://')),
]

but the rule you describe is more complex e.g.

def validate_proxy_url(conf, group, key, value):
    if not conf.vnc_enabled:
        return
    if conf.ssl_only and value.startswith('http://'):
        raise ValueError('ssl_only option detected, but ...')

opts = [
    StrOpt('novncproxy_base_url',
           validate_rule=validate_proxy_url),
    ...
]

I'm not sure I love this yet, but it's worth experimenting with.
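
To make it concrete, the service-side hook could stay pretty small. This is
a rough sketch only - 'validate_rule' isn't an existing oslo.config
attribute, and I'm assuming the caller passes in the registered
(group, option) pairs:

def validate_config(conf, registered_opts):
    # registered_opts: iterable of (group-name-or-None, opt) pairs
    errors = []
    for group, opt in registered_opts:
        rule = getattr(opt, 'validate_rule', None)
        if rule is None:
            continue
        value = conf[group][opt.name] if group else conf[opt.name]
        try:
            rule(conf, group, opt.name, value)
        except ValueError as exc:
            errors.append('%s/%s: %s' % (group or 'DEFAULT', opt.name, exc))
    if errors:
        raise SystemExit('Invalid configuration:\n' + '\n'.join(errors))

The existing pre-run validation hook in the service class would just call
something like this and refuse to start if anything is reported.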

Mark.




Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2013-11-11 Thread Mark McLoughlin
On Mon, 2013-11-11 at 12:07 +, John Garbutt wrote:
 On 11 November 2013 10:27, Rosa, Andrea (HP Cloud Services)
 andrea.r...@hp.com wrote:
  Hi
 
 Generally mock is supposed to be used over mox now for python 3 support.
 
  That is my understanding too
 
 +1
 
 But I don't think we should waste all our time re-writing all our mox
 and stub tests. Let's just leave this to happen organically for now as
 we add and refactor tests. We probably need to take the hit at some
 point, but that doesn't feel like we should do that right now.

Hmm, I don't follow this stance.

Adding Python3 support is a goal we all share.

If we're saying that the use of mox stands in the way of that goal, but
that we'd really prefer if people didn't work on porting tests from mox
to mock yet ... then are we saying we don't value people working on
porting to Python3?

And if we plan to use a Python3 compatible version of mox[1], then isn't
the Python3 argument irrelevant and saying use mock for new tests just
means we'll end up with a mixture of mox and mock?

Thanks,
Mark.

[1] - https://github.com/emonty/pymox




Re: [openstack-dev] [Nova] New API requirements, review of GCE

2013-11-15 Thread Mark McLoughlin
On Fri, 2013-11-15 at 11:28 -0500, Russell Bryant wrote:
 Greetings,
 
 We've talked a lot about requirements for new compute drivers [1].  I
 think the same sort of standards should be applied for a new third-party
 API, such as the GCE API [2].
 
 Before we can consider taking on a new API, it should have full test
 suite coverage.  Ideally this would be extensions to tempest.  It should
 also be set up to run on every Nova patch via CI.
 
 Beyond that point, now is a good time to re-consider how we want to
 support new third-party APIs.  Just because EC2 is in the tree doesn't
 mean that has to be how we support them going forward.  Should new APIs
 go into their own repositories?
 
 I used to be against this idea.  However, as Nova has grown, the
 importance of finding the right spots to split is even higher.  My
 objection was primarily based on assuming we'd have to make the Python
 APIs stable.  I still do not think we should make them stable, but I
 don't think that's a huge issue, since it should be mitigated by running
 CI so the API maintainers quickly get notified when updates are necessary.
 
 Taking on a whole new API seems like an even bigger deal than accepting
 a new compute driver, so it's an important question.
 
 If we went this route, I would encourage new third-party APIs to build
 themselves up in a stackforge repo.  Once it's far enough along, we
 could then evaluate officially bringing it in as an official sub-project
 of the OpenStack Compute program.

I do think there should be a high bar for new APIs. More than just CI,
but that there is a viable group of contributors around the API who are
involved in OpenStack more generally than just maintaining the API in
question.

I don't at all like the idea of drivers or APIs living in separate repos
and building on unstable Nova APIs. Anything which we accept is a part
of OpenStack should not get randomly made unusable by one contributor
while other contributors constantly have to scramble to catch up. Either
stuff winds up being broken too often or we stifle progress in Nova
because we're afraid to make breaking changes.

I happened to meet Thijs Metsch, the maintainer of OCCI for Nova
yesterday:

  https://github.com/tmetsch/occi-os
  https://wiki.openstack.org/wiki/Occi

and he described how often he had to fix the API implementation to adapt
to changes in Nova's API. My advice was to work towards having the API
be included in Nova (understand that it's a long road and there would be
a bunch of difficult requirements) or take the less technically
attractive option of proxying the OCCI API to the stable OpenStack REST
API.

For someone like Thijs, choosing a middle road where the API
implementation is going to be constantly broken is going to suck for him
and his users.

Mark.




Re: [openstack-dev] [Nova] New API requirements, review of GCE

2013-11-15 Thread Mark McLoughlin
On Fri, 2013-11-15 at 12:19 -0500, Russell Bryant wrote:
 On 11/15/2013 12:01 PM, Mark McLoughlin wrote:
  On Fri, 2013-11-15 at 11:28 -0500, Russell Bryant wrote:
  Greetings,
 
  We've talked a lot about requirements for new compute drivers [1].  I
  think the same sort of standards should be applied for a new third-party
  API, such as the GCE API [2].
 
  Before we can consider taking on a new API, it should have full test
  suite coverage.  Ideally this would be extensions to tempest.  It should
  also be set up to run on every Nova patch via CI.
 
  Beyond that point, now is a good time to re-consider how we want to
  support new third-party APIs.  Just because EC2 is in the tree doesn't
  mean that has to be how we support them going forward.  Should new APIs
  go into their own repositories?
 
  I used to be against this idea.  However, as Nova has grown, the
  importance of finding the right spots to split is even higher.  My
  objection was primarily based on assuming we'd have to make the Python
  APIs stable.  I still do not think we should make them stable, but I
  don't think that's a huge issue, since it should be mitigated by running
  CI so the API maintainers quickly get notified when updates are necessary.
 
  Taking on a whole new API seems like an even bigger deal than accepting
  a new compute driver, so it's an important question.
 
  If we went this route, I would encourage new third-party APIs to build
  themselves up in a stackforge repo.  Once it's far enough along, we
  could then evaluate officially bringing it in as an official sub-project
  of the OpenStack Compute program.
  
  I do think there should be a high bar for new APIs. More than just CI,
  but that there is a viable group of contributors around the API who are
  involved in OpenStack more generally than just maintaining the API in
  question.
  
  I don't at all like the idea of drivers or APIs living in separate repos
  and building on unstable Nova APIs. Anything which we accept is a part
  of OpenStack should not get randomly made unusable by one contributor
  while other contributors constantly have to scramble to catch up. Either
  stuff winds up being broken too often or we stifle progress in Nova
  because we're afraid to make breaking changes.
  
  I happened to meet Thijs Metsch, the maintainer of OCCI for Nova
  yesterday:
  
https://github.com/tmetsch/occi-os
https://wiki.openstack.org/wiki/Occi
  
  and he described how often he had to fix the API implementation to adapt
  to changes in Nova's API. My advice was to work towards having the API
  be included in Nova (understand that it's a long road and there would be
  a bunch of difficult requirements) or take the less technically
  attractive option of proxying the OCCI API to the stable OpenStack REST
  API.
  
  For someone like Thijs, choosing a middle road where the API
  implementation is going to be constantly broken is going to suck for him
  and his users.
 
 Thanks for the comments.  Very interesting ...
 
 A significantly high bar such that having it in the tree isn't a drain
 on nova development (significant commitment by the maintainers, as well
 as solid test coverage) makes sense.
 
 Would we consider ripping it out if contribution goes down, or testing
 doesn't keep up?

I'd think so, but it should be a pretty serious drop - stuff coming and
going regularly would be more of an issue for users than reduced
quality.

 If so, do we apply the same standards to the EC2 API?
 How much time do we give EC2 to get up to this standard before we rip
 it out?

I hadn't understood the EC2 API to be in such a woeful state. Are we
saying the implementation is so bad it's not at all useful for users? Or
that a lack of testing means we see a far higher rate of regressions
than in e.g. the OpenStack API? Or just that we don't see much progress
on it?

Mark.




Re: [openstack-dev] [RFC] Straw man to start the incubation / graduation requirements discussion

2013-11-15 Thread Mark McLoughlin
On Wed, 2013-11-13 at 06:57 -0500, Sean Dague wrote:
 (Apologies, this started on the TC list, and really should have started
 on -dev, correctly posting here now for open discussion)
 
 There were a few chats at summit about this, mostly on the infra /
 devstack / qa side of the house. Consider the following a straw man to
 explain the current state of the world, and what I'd like to see change
 here. I call out projects by name here, not to make fun of them, but
 because I think concrete examples bring the point
 home much more than abstract arguments (at least for me).
 
 This is looking at raising the bar quite a bit along the way. However,
 as someone that spends a lot of time trying to keep the whole ball of
 wax holding together, and is spending a lot of time retroactively trying
 to get projects into our integrated gate (a huge pain to everyone, as
 their gate commits get bounced by racey projects), I think we really
 need to up this bar if we want a 20 integrated project version of
 OpenStack to hold together.

Thanks for doing this. The requirements look good to me.

I think it's about time we gathered all requirements together and
properly documented them so people realize there's a much bigger
picture. I've started pulling together some stuff here:

  https://etherpad.openstack.org/p/incubation-and-integration-requirements

but clearly there's a lot of work to do.

One thing I really, really want is for the rules to be accompanied with
a good explanation of what the rules are there to achieve. We cannot let
ourselves turn into a community that over-zealously applies rules to the
extent that the rules do more damage than good.

There's always got to be a judgement call involved. I'm happy that
Ceilometer graduated, even though it doesn't have gating tests. I think
it has been a positive addition and I'd rather have it without the
gating tests than not at all.

Guidelines like this will greatly encourage projects to up their
game and hopefully we'll rarely be faced with a generally awesome
project wanting to graduate but it not having integration tests.
However, if that did happen, we need it to at least be possible for us
to have a rational, big-picture conversation about whether some
rule-bending is the best thing overall for the project.

Mark.




Re: [openstack-dev] [Neutron] Neutron Tempest code sprint - 2nd week of January, Montreal, QC, Canada

2013-11-15 Thread Mark McLoughlin
On Fri, 2013-11-15 at 12:47 -0500, Anita Kuno wrote:
 On 11/15/2013 12:34 PM, Russell Bryant wrote:
  On 11/15/2013 12:16 PM, Kyle Mestery (kmestery) wrote:
  On Nov 15, 2013, at 11:04 AM, Dan Smith d...@danplanet.com wrote:
  Thanks for weighing in, I do hope to keep the conversation going.
  Add my name to the list of people that won't be considering the trip as
  a result.
 
  Are people really saying that if they show up for this Tempest sprint
  with a logo shirt they would actually take it off and wear a white shirt
  provided by someone, or that this will prevent them from going at all?
  I can't even believe we're having this conversation on the list, frankly.
  I'm saying that I find it so ridiculous, that I wouldn't consider going
  at all.
 
  I find the suggestion that I would be given a different shirt to wear so
  offensive, that I have to waste time having this conversation to condemn
  it publicly.  I have a lot of pride in OpenStack, and don't want anyone
  to think that everyone finds this sort of requirement acceptable in our
  community.
 I respect this about you, Russell and it is one of the many reasons I am 
 so glad to work with you.
 
 Had this level of pride in OpenStack been prevalent during the Neutron 
 design summit sessions, it wouldn't even have occurred to me to mention it.
 
 I hope to attract people with this level of pride in OpenStack to the 
 code sprint and my thought was that eliminating logos would support that 
 goal.
 
 What would you suggest to attract and foster this level of pride in 
 OpenStack in the code sprint attendees?
 
 I will also note that, while you clearly stated that Neutron is being
 considered for deprecation, t-shirts prevail as an issue on this
 thread. I consider that rather interesting to observe.

I think you're expressing a similar sentiment to e.g. there should be
more Neutron developers who work on core Neutron rather than just their
drivers. I'm cool with that, and totally agree.

Choosing to pick on people's choice of clothing is just a bizarre way of
expressing that concern, though.

Bear in mind how often I talk about this being a community of
individuals, we should all wear our project hats, that our affiliation
should be secondary to our commitment to our project, ...

Dictating what people can wear to an OpenStack event is not my idea of
what OpenStack is all about. It's not my idea of the mutual respect
you talk about.

Mark.




Re: [openstack-dev] [Oslo] Layering olso.messaging usage of config

2013-11-18 Thread Mark McLoughlin
Hi Julien,

On Mon, 2013-11-18 at 11:05 +0100, Julien Danjou wrote:
 Hi Oslo developers,
 
 It seems my latest patch¹ on oslo.messaging scared Mark, so I'll try to
 discuss it a bit on this mailing list as it is more convenient.

Scared, heh :)

 I've created a blueprint² as requested by Mark.

Ok, so it's a ceilometer blueprint and says:

  The goal of this blueprint is to be able to use oslo.messaging
   without using a configuration file/object, while keeping its usage
   possible and not breaking compatibility with OpenStack applications.

Why is that important to ceilometer? Ceilometer heavily uses the RPC
code already and uses the config object.

Is this about allowing Ceilometer to consume from multiple brokers?
Where will the configuration be stored for each broker connection? In
the ceilometer database? Why won't transport URLs be sufficient for that
use case given that we know it works fine for Nova cells?

I'm struggling to care about this until I have some insight into why
it's important. And it's a bit frustrating to have to guess the
rationale for this. Like commit messages, blueprints should be as much
about the why as the what.

 That seems necessary since it will be spread over several patches.

It's not about multiple patches, it's about something which needs to be
designed and thought through in advance.

 Now to the core. The oslo.messaging API mixes usage of parameters and a
 configuration file object. For example, you have to provide a
 configuration object for basic API usage, even if your application
 doesn't otherwise have one.
 
 It seems to me that having this separation of concerns in oslo.messaging
 would be good idea. My plan is to move out the configuration object out
 of the basic object, like I did in the first patch.
 
 I don't plan to break the configuration handling or so, I just think it
 should be handled in a separate, individually testable part of the code.

As I said in the review, I'm totally fine with the idea of allowing
oslo.messaging to be used without a configuration object ... but I think
the common use case is to use it with a configuration object and I don't
want to undermine the usability of the library in the common use case.

One way of approach this would be to describe how the new API would look
like from the perspective of Nova[1] - i.e. if you are using a config
object, what does the API look like?

The other thing I want to figure out is how we expose the configuration
options in the API in a way that allows us to maintain API compatibility
as we move and rename the config options. That's doable, but needs some
thought.

Thanks,
Mark.

[1] - https://review.openstack.org/39929




Re: [openstack-dev] [Oslo] Layering olso.messaging usage of config

2013-11-18 Thread Mark McLoughlin
On Mon, 2013-11-18 at 17:37 +0100, Julien Danjou wrote:
 On Mon, Nov 18 2013, Mark McLoughlin wrote:

  I'm struggling to care about this until I have some insight into why
  it's important. And it's a bit frustrating to have to guess the
  rationale for this. Like commit messages, blueprints should be as much
  about the why as the what.
 
 Sure. I ought to think that having an application that wants to leverage
 oslo.messaging but is not using oslo.config and is retrieving its
 parameters some other way is a good enough argument.

It's a theoretical benefit versus the very practical task of designing an
API for the use cases that are actually important to OpenStack projects.

  As I said in the review, I'm totally fine with the idea of allowing
  oslo.messaging to be used without a configuration object ... but I think
  the common use case is to use it with a configuration object and I don't
  want to undermine the usability of the library in the common use case.
 
 Understood. I know it's already a pain to transition from RPC to
 messaging, and I don't want to add more burden on that transition.

Cool, thanks.

Mark.




Re: [openstack-dev] [Oslo] Layering olso.messaging usage of config

2013-11-18 Thread Mark McLoughlin
Hey Doug,

On Mon, 2013-11-18 at 11:29 -0500, Doug Hellmann wrote:
 On Mon, Nov 18, 2013 at 5:05 AM, Julien Danjou jul...@danjou.info wrote:
 
  Hi Oslo developers,
 
  It seems my latest patch¹ on oslo.messaging scared Mark, so I'll try to
  discuss it a bit on this mailing list as it is more convenient.
 
  I've created a blueprint² as requested by Mark. That seems necessary
  since it will be spread over several patches.
 
  Now to the core. The oslo.messaging API mixes usage of parameters and a
  configuration file object. For example, you have to provide a
  configuration object for basic API usage, even if your application
  doesn't otherwise have one.
 
 
 IIRC, one of the concerns when oslo.messaging was split out was
 maintaining support for existing deployments with configuration files that
 worked with oslo.rpc. We had said that we would use URL query parameters
 for optional configuration values (with the required values going into
 other places in the URL), but that format wouldn't be backwards compatible
 if a deployment already had the other configuration settings listed
 individually.

I hadn't ever considered exposing all configuration options via the URL.
We have a lot of fairly random options that I don't think you need to
configure per-connection if you have multiple connections in the one
application.

 One way to address that difference is to make the use of oslo.config an
 explicit step, before instantiating a Transport using a URL. Given a URL
 for a transport and an oslo.config object, an application could ask
 oslo.messaging to build a new URL containing all of the settings from the
 config that apply. Something like:
 
 from oslo.config import cfg
 from oslo.messaging import config as om_config
 from oslo.messaging import transport
 
 base_url = cfg.CONF.rpc_url
 full_url = om_config.update_url_from_config(base_url, cfg.CONF)
 
 trans = transport.get_transport(full_url)

I don't think anyone would thank us for the horrific URLs that would
result from this.

But if we did this, I'd just make it:

  from oslo.config import cfg
  from oslo.messaging import config as om_config

  transport = om_config.get_transport_from_config(cfg.CONF)

 update_url_from_config() would use the base_url to determine the driver,
 load the option definitions for that driver, then pull the values out of
 the config object and construct a new URL starting with the base and
 including the extra settings (thus preserving backwards-compatibility for
 the existing config files).
 
 Applications that do not use oslo.config would just call get_transport().
 
 As an alternative, and to preserve API compatibility with the existing
 release of oslo.messaging, we could just change get_transport() to allow
 the first argument to be None (instead of removing it entirely). When it is
 a valid config object, we would go through the steps we do now to get
 configuration options. When it is None, we would skip those steps.

API compatibility isn't yet important. We haven't done a first release
yet and no projects are using the library.

Mark.




Re: [openstack-dev] [Nova][db] Changing migrations

2013-11-18 Thread Mark McLoughlin
On Mon, 2013-11-18 at 18:46 +0100, Nikola Đipanov wrote:
 Dear OpenStack devs,
 
 A recent review [1] dragged into the spotlight how damaging improper use of
 external code inside migrations can be.
 
 Basically in my mind the incident raises 2 issues that I think we should
 look into:
 
 1) How can we make reviewing changes with db migrations more robust
 
 Since we use sqlalchemy-migrate to version our database, the package's
 documentation [2] states how care needs to be taken when importing code
 inside a db migration script. It seems like this can be taken care of
 with a hacking rule that will fail the patch if anything outside a
 subset of modules is imported and used. I might be missing an angle
 where such an approach could cause issues, so feel free to comment in
 replies. IIUC - this is something we might want to enforce even when/if
 we move to using Alembic for migrations.
 
 2) What are acceptable changes
 
 The patch also raised the question of what is acceptable level of
 changes. The only guidelines I could find are [3] and they seem fuzzy
 enough that we might want to be more specific, or introduce stricter
 testing guidelines.
 
 All comments are more than welcome,
 
 Thanks,
 
 Nikola
 
 [1] https://review.openstack.org/#/c/39929/
 [2]
 https://sqlalchemy-migrate.readthedocs.org/en/v0.7.2/versioning.html#edit-the-change-script
 [3] https://wiki.openstack.org/wiki/DbMigrationChangeGuidelines

Thanks for the link Nikola.

It's pretty much as I guessed in the review. The intent is:

  
https://sqlalchemy-migrate.readthedocs.org/en/v0.7.2/versioning.html#writing-scripts-with-consistent-behavior

  You don’t want your change scripts’ behavior changing

That totally makes sense. You don't want one version of migration 173 to
do something different from another version of migration 173.

One approach to achieving consistent behaviour is to only import library
APIs whose behaviour never changes and never change the migration's
code.

Another approach is to carefully review any changes which have the
potential to cause migration behaviour changes.

Another approach is to add tests for the behaviour so that you can't
accidentally change the behaviour.
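
e.g. something as simple as this pins the behaviour down - just a sketch,
with made-up table/column names, assuming a test fixture that has already
run the migrations up to that version:

import sqlalchemy as sa

def test_migration_173_behaviour(engine_at_173):
    # engine_at_173: an engine the fixture has already upgraded to 173
    inspector = sa.inspect(engine_at_173)
    columns = set(col['name'] for col in inspector.get_columns('instances'))
    assert 'shiny_new_column' in columns

If someone later edits the migration, or changes a utility module it
imports, in a way that alters what it actually does, a test like this will
catch it.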

The point is we need to be careful to avoid changes in behaviour. By all
means, let's have guidelines which capture our experience around how
best to do that ... but let's not pick one of those approaches and
enforce it in such a way that leaves no room to - if needed - use a
different approach to achieve the same thing.

This isn't a question of style, where multiple approaches are valid and
we just need to pick one to avoid nitpicking and arbitrary
inconsistency.

Mark.




Re: [openstack-dev] [Cinder][Glance] OSLO update

2013-11-18 Thread Mark McLoughlin
On Mon, 2013-11-18 at 17:24 +, Duncan Thomas wrote:
 Random OSLO updates with no list of what changed, what got fixed etc
 are unlikely to get review attention - doing such a review is
 extremely difficult. I was -2ing them and asking for more info, but
 they keep popping up. I'm really not sure what the best way of
 updating from OSLO is, but this isn't it.

Best practice is to include a list of changes being synced, for example:

  https://review.openstack.org/54660

Every so often, we throw around ideas for automating the generation of
this changes list - e.g. cinder would have the oslo-incubator commit ID
for each module stored in a file in git to tell us when it was last
synced.

Mark.




Re: [openstack-dev] Incubation request for Manila

2013-11-18 Thread Mark McLoughlin
Hi,

On Thu, 2013-10-10 at 15:00 +, Swartzlander, Ben wrote:
 Please consider our formal request for incubation status of the Manila 
 project:
 https://wiki.openstack.org/wiki/Manila_Overview

Note that the Manila application was discussed at last week's TC
meeting.

I tried to take some notes here:

  https://etherpad.openstack.org/p/ManilaIncubationApplication

but the full logs are at:

  http://eavesdrop.openstack.org/meetings/tc/2013/tc.2013-11-12-20.01.html

Final discussion and voting on the application is due to happen at
tomorrow's meeting.

Mark.




Re: [openstack-dev] Propose project story wiki idea

2013-11-21 Thread Mark McLoughlin
On Thu, 2013-11-21 at 10:43 +0100, Thierry Carrez wrote:
 Stefano Maffulli wrote:
  On 11/19/2013 09:33 PM, Boris Pavlovic wrote:
  The idea of this proposal is that every OpenStack project should have
  a story wiki page. It means publishing every week one short message that
  contains the most interesting updates for the last week, and a high level
  road map for the coming week. So reading this for 10-15 minutes you can
  see what changed in the project, and get a better understanding of the
  high level road map of the project.
  
  I like the idea.
  
  I have received requests to include high level summaries from all
  projects in the weekly newsletter but it's quite impossible for me to do
  that as I don't have enough understanding of each project to extrapolate
  the significant news from the noise. [...]
 
 This is an interesting point. From various discussions I had with people
 over the last year, the thing the development community is really really
 after is weekly technical news that would cover updates from major
 projects as well as deep dives into new features, tech conference CFPs,
 etc. The reference in the area (and only example I have) is LWN
 (lwn.net) and their awesome weekly coverage of what happens in Linux
 kernel development and beyond.
 
 The trick is, such coverage requires editors with a deep technical
 knowledge, both to be able to determine significant news from marketing
 noise *and* to be able to deep dive into a new feature and make an
 article out of it that makes a good read for developers or OpenStack
 deployers. It's also a full-time job, even if some of those deep-dive
 articles could just be contributed by their developers.
 
 LWN is an exception rather than the rule in the tech press. It would be
 absolutely awesome if we managed to build something like it to cover
 OpenStack, but finding the right people (the right skill set + the will
 and the time to do it) will be, I fear, extremely difficult.
 
 Thoughts ? Volunteers ?

Yeah, I think there's a huge opportunity for something like this. Look
at the volume of interesting stuff that's going on on this list.
Highlighting and summarising some of the more important and interesting
of these discussions in high quality articles would be incredibly
useful.

It will be hard to pull off though. You need good quality writing but,
more importantly, really strong editorial control from someone who
understands what people want to read.

Mark.




Re: [openstack-dev] [Keystone][Oslo] Future of Key Distribution Server, Trusted Messaging

2013-11-22 Thread Mark McLoughlin
On Fri, 2013-11-22 at 11:04 +0100, Thierry Carrez wrote:
 Russell Bryant wrote:
  [...]
  I'm not thrilled about the prospect of this going into a new project for
  multiple reasons.
  
   - Given the priority and how long this has been dragging out, having to
  wait for a new project to make its way into OpenStack is not very appealing.
  
   - A new project needs to be able to stand on its own legs.  It needs to
  have a reasonably sized development team to make it sustainable.  Is
  this big enough for that?
 
 Having it in Barbican (and maybe have Barbican join under the identity
 program) would mitigate the second issue. But the first issue stands,
 and I share your concerns.

Yes, I agree. It's disappointing that this change of plans looks like
it's going to push out the ability of an OpenStack deployment to be
secured.

If this becomes a Barbican API, then we might be able to get the code
working quickly ... but it will still be some time before Barbican is an
integrated project, and so securing OpenStack will only be possible if
you use this non-integrated project.

Mark.




Re: [openstack-dev] [Nova] Proposal to add Matt Riedemann to nova-core

2013-11-22 Thread Mark McLoughlin
On Fri, 2013-11-22 at 15:53 -0500, Russell Bryant wrote:
 Greetings,
 
 I would like to propose adding Matt Riedemann to the nova-core review team.
 
 Matt has been involved with nova for a long time, taking on a wide range
 of tasks.  He writes good code.  He's very engaged with the development
 community.  Most importantly, he provides good code reviews and has
 earned the trust of other members of the review team.
 
 https://review.openstack.org/#/dashboard/6873
 https://review.openstack.org/#/q/owner:6873,n,z
 https://review.openstack.org/#/q/reviewer:6873,n,z
 
 Please respond with +1/-1, or any further comments.

+1, definitely a great addition to the team

Mark.




Re: [openstack-dev] [Nova] Proposal to re-add Dan Prince to nova-core

2013-11-26 Thread Mark McLoughlin
On Tue, 2013-11-26 at 14:32 -0500, Russell Bryant wrote:
 Greetings,
 
 I would like to propose that we re-add Dan Prince to the nova-core
 review team.
 
 Dan Prince has been involved with Nova since early in OpenStack's
 history (Bexar timeframe).  He was a member of the nova-core review team
 from May 2011 to June 2013.  He has since picked back up with nova
 reviews [1].  We always say that when people leave nova-core, we would
 love to have them back if they are able to commit the time in the
 future.  I think this is a good example of that.
 
 Please respond with +1s or any concerns.

Nice, definitely agree.

Mark.




Re: [openstack-dev] [nova] [QA] Triaging Bugs during Review

2013-11-26 Thread Mark McLoughlin
On Tue, 2013-11-26 at 13:06 -0800, Vishvananda Ishaya wrote:
 Hi Everyone,
 
 I tend to follow merges and look for valuable havana backports. A few bug
 fixes have merged recently where the associated bug is untriaged (i.e. the
 severity is listed as 'Unknown'). I assume that reviewers of a bugfix
 patch are viewing the associated bugs. It makes sense for core-reviewers
 to do some related bug triaging work in the process. It also would be
 very helpful if they could be tagged for potential backport at the same time.
 
 So I'm suggesting two things during review:
 
 1. If you are reviewing a patch and the severity of the associated bug is
set as unknown, then you set an appropriate severity[1].
 2. If the bug is important and seems relatively self-contained that you
mark it with the havana-backport-potential tag during review.
 
 These two things should only take a matter of seconds, and will greatly help
 the stable-maintainers team.
 
 I will add these responsibilities to the review checklist[2] unless I hear
 some disagreement here.

FWIW, I totally agree and do this out of habit. Thanks for raising it.

But ... oh wow! We have 103 Nova bugs tagged with
havana-backport-potential?

  https://bugs.launchpad.net/nova/+bugs?field.tag=havana-backport-potential

Are some of these getting backported but the tag isn't being removed?
The thinking originally was that the tag would be removed as soon as a
havana task for the bug was opened.

Mark.




Re: [openstack-dev] [all project] Treating recently seen recheck bugs as critical across the board

2013-11-26 Thread Mark McLoughlin
On Tue, 2013-11-26 at 12:29 -0800, Joe Gordon wrote:
 On Nov 26, 2013 8:48 AM, Dolph Mathews dolph.math...@gmail.com wrote:
 
 
  On Tue, Nov 26, 2013 at 5:23 AM, Thierry Carrez thie...@openstack.org
 wrote:
 
  Dolph Mathews wrote:
   On Mon, Nov 25, 2013 at 8:12 PM, Robert Collins
   robe...@robertcollins.net mailto:robe...@robertcollins.net wrote:
  
   So my proposal is that we make it part of the base hygiene for a
   project that any recheck bugs being seen (either by
 elastic-recheck or
   manual inspection) be considered critical and prioritised above
   feature work.
  
   I agree with the notion here (that fixing transient failures is
   critically high priority work for the community) -- but marking the bug
   as critical priority is just a subjective abuse of the priority
 field.
   A non-critical bug is not necessarily non-critical work. The critical
   status should be reserved for issues that are actually non-shippable,
   catastrophically breaking issues.
 
  It's a classic bugtracking dilemma where the Importance field is both
  used to describe bug impact and priority... while they don't always
 match.
 
 
  ++
 
 
  That said, the impact of those bugs, considering potential development
  activity breakage, *is* quite critical (they all are timebombs which
  will create future gate fails if not handled at top priority).
 
 
  I generally agree, but I don't think it's fair to say that the impact of
 a transient is universally a single priority, either. Some transient issues
 occur more frequently and therefore have higher impact.
 
 
  So I think marking them Critical + tagging them is not that much of an
  abuse, if we start including the gate impact in our bug Impact
  assessments. That said, I'm also fine with High+Tag, as long as it
  triggers the appropriate fast response everywhere.
 
 
  I'm fine with starting them at High, and elevating to Critical as
 appropriate.
 
  Is the idea here to automatically apply a tag + priority as a result of
 recheck/reverify bug X ? (as long as existing priority isn't overwritten!)
 
 I certainly hope we don't automatically set priority based on raw recheck
 data. We have a second list of bugs that we feed to elastic-recheck; this
 list is reviewed for duplicates and includes fingerprints so we can better
 assess the bug frequency.  I think the idea is to mark bugs from that list
 as critical.  I also think it should be a manual process. As a bug should
 be reviewed (does it have enough detail etc) before setting it to critical.

[Just to circle back and clarify my €0.02c during the TC and project
meetings tonight]

Any recheck bug which appears regularly in the graphs here:

  http://status.openstack.org/elastic-recheck/

means that a human has looked at it, determined a fingerprint for it, we
have a bunch of details about it and we have data as to its regularity.
Any such bug is fair game to be marked Critical.

If it is still there a month later, but no-one is making any progress on
it and it's happening pretty irregularly ... then I think we'll see a
desire to move it back from Critical to High again so that the Critical
list isn't cluttered with stuff people are no longer paying close
attention to.

So, yeah - the intent sounds good to me.

Thanks,
Mark.




Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2013-11-26 Thread Mark McLoughlin
On Fri, 2013-11-22 at 12:39 -0500, Doug Hellmann wrote:
 On Fri, Nov 22, 2013 at 4:11 AM, Flavio Percoco fla...@redhat.com wrote:
 
  Greetings,
 
  Based on the recent discussion that came out about not having enough
  information in the commit message when syncing oslo-incubator modules,
  I was thinking that besides encouraging people to write better commit
  messages, we could also improve the script we use to sync those
  modules.
 
  Some of the changes that I've been thinking of:
 
 1) Store the commit sha from which the module was copied from.
 Every project using oslo currently keeps the list of modules it
 is using in `openstack-common.conf` in a `module` parameter. We
 could store, along with the module name, the sha of the commit it
 was last synced from:
 
 module=log,commit
 
 or
 
 module=log
 log=commit
 
 
 The second form will be easier to manage. Humans edit the module field and
 the script will edit the others.

How about adding it as a comment at the end of the python files
themselves and leaving openstack-common.conf for human editing?
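
e.g. each synced module could end with a marker comment like:

  # Synced from oslo-incubator commit <sha>

(the format is just a suggestion) and update.py could pull it back out with
something along these lines - a sketch only:

import re

SYNC_RE = re.compile(r'^# Synced from oslo-incubator commit ([0-9a-f]+)\s*$')

def last_synced_commit(path):
    with open(path) as f:
        for line in f:
            match = SYNC_RE.match(line)
            if match:
                return match.group(1)
    return None

That would keep the bookkeeping out of the way of anyone editing
openstack-common.conf by hand.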

Mark.




Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2013-11-26 Thread Mark McLoughlin
On Fri, 2013-11-22 at 16:24 +, Duncan Thomas wrote:
 On 22 November 2013 14:59, Ben Nemec openst...@nemebean.com wrote:
 
  One other thought I had was to add the ability to split one Oslo sync up
  into multiple commits, either one per module, or even one per Oslo commit
  for some really large module changes (I'm thinking of the 1000 line db sync
  someone mentioned recently).  It would be more review churn, but at least it
  would keep the changes per review down to a more reasonable level.   I'm not
  positive it would be beneficial, but I thought I'd mention it.
 
 Cinder (often but not always me) tends to reject merges that do more
 that one module at a time, because it makes it far harder to review
 and spot problems, so some from of automation of this would be great.

Yes, it's good practice to sync related changes in individual commits,
rather than one big oslo sync.

An example from a previous sync I did in Cinder:

  https://review.openstack.org/#/c/12409/ (cfg)
  https://review.openstack.org/#/c/12411/ (zmq)
  https://review.openstack.org/#/c/12410/ (notifier)
  https://review.openstack.org/#/c/12412/ (misc)

These were all proposed at the same time, but I split related notable
changes into their own commits and then had a misc sync for more
trivial stuff.

Mark.




Re: [openstack-dev] How to best make User Experience a priority in every project

2013-11-26 Thread Mark McLoughlin
On Wed, 2013-11-20 at 11:06 -0600, Dean Troyer wrote:
 On Wed, Nov 20, 2013 at 9:09 AM, Thierry Carrez thie...@openstack.orgwrote:
 
  However, as was apparent in the Technical Committee meeting discussion
 
 about it yesterday, most of us are not convinced that establishing and
  blessing a separate team is the most efficient way to give UX the
  attention it deserves. Ideally, UX-minded folks would get active
  *within* existing project teams rather than form some sort of
  counter-power as a separate team. In the same way we want scalability
  and security mindset to be present in every project, we want UX to be
  present in every project. It's more of an advocacy group than a
  program imho.
 
 
 Having been working on a cross-project project for a while now I can
 confirm that there is a startling lack of coordination between projects on
 the same/similar tasks.  Oslo was a HUGE step toward reducing that and
 providing a good reference for code and libraries.  There is nothing today
 for the intangible parts that are both user and developer facing such as
 common message (log) formats, common terms (tenant vs project) and so on.
 
 I think the model of the OSSG is a good one.  After reading the log of
 yesterday's meeting I think I would have thrown in that the need from my
 perspective is for a coordination role as much as anything.
 
 The deliverables in the UX Program proposal seem a bit fuzzy to me as far
 as what might go into the repo.  If it is interface specs, those should be
 in either the project code repos docs/ tree or in the docs project
 directly.  Same for a Human Interface Guide (HIG) that both Horizon and OSC
 have (well, I did steal a lot of OSC's guide from Horizon's).

One straightforward example I could imagine from a UX Program is a REST
API design guide which captures the current common patterns between our
current APIs and points the way for common patterns new APIs should aim
to adopt. Same for our CLIs.

Or you could imagine a set of terminology/concept definitions - project
vs tenant anyone? :)

But, I think the basic point is that a group with common interests
should work on producing some concrete deliverables before being asked
to be recognized as an official program.

Mark.




Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2013-11-27 Thread Mark McLoughlin
On Wed, 2013-11-27 at 11:50 +0100, Flavio Percoco wrote:
 On 26/11/13 22:54 +, Mark McLoughlin wrote:
 On Fri, 2013-11-22 at 12:39 -0500, Doug Hellmann wrote:
  On Fri, Nov 22, 2013 at 4:11 AM, Flavio Percoco fla...@redhat.com wrote:
  1) Store the commit sha from which the module was copied from.
  Every project using oslo, currently keeps the list of modules it
  is using in `openstack-modules.conf` in a `module` parameter. We
  could store, along with the module name, the sha of the commit it
  was last synced from:
  
  module=log,commit
  
  or
  
  module=log
  log=commit
  
 
  The second form will be easier to manage. Humans edit the module field and
  the script will edit the others.
 
 How about adding it as a comment at the end of the python files
 themselves and leaving openstack-common.conf for human editing?
 
 I think having the commit sha will give us a starting point from which
 we could start updating that module.

Sure, my only point was about where the commit sha comes from - i.e.
whether it's from a comment at the end of the python module itself or in
openstack-common.conf

 It will mostly help with
 getting a diff for that module and the short commit messages where it
 was modified.
 
 Here's a pseudo-buggy-algorithm for the update process:
 
 (1) Get current sha for $module
 (2) Get list of new commits for $module
 (3) for each commit of $module:
 (3.1) for each modified_module in $commit
 (3.1.1) Update those modules up to $commit (1)(modified_module)
 (3.2) Copy the new file
 (3.3) Update openstack-common with the latest sha
 
 This trusts the granularity and isolation of the patches proposed in
 oslo-incubator. However, in cases like 'remove vim mode lines' it'll
 fail assuming that updating every module is necessary - which is true
 from a git standpoint.

This is another variant of the kind of inter-module dependency smarts
that update.py already has ... I'd be inclined to omit those smarts
and just require the caller to explicitly list the modules they want to
include.

Maybe update.py could include some reporting to help with that choice,
like "module foo depends on modules bar and blaa, maybe you want to
include them too" and "commit XXX modified module foo, but also modules
bar and blaa, maybe you want to include them too".
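
Something as dumb as scanning the imports would probably get us most of the
way there - again, just a sketch:

import re

# matches both "from x.openstack.common import foo" and
# "from x.openstack.common.foo import bar" style imports
DEP_RE = re.compile(r'openstack\.common(?:\.|\s+import\s+)(\w+)')

def incubator_deps(path):
    with open(path) as f:
        return set(DEP_RE.findall(f.read()))

Run that over each module the caller asked for and there's enough
information to print the "module foo depends on bar and blaa" hint.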

Mark.




Re: [openstack-dev] [oslo] rpc concurrency control rfc

2013-11-27 Thread Mark McLoughlin
Hi,

On Wed, 2013-11-27 at 14:45 +, Edward Hope-Morley wrote:
 Moving this to the ml as requested, would appreciate
 comments/thoughts/feedback.

Thanks, I too would appreciate input from others.

 So, I recently proposed a small patch to the oslo rpc code (initially in
 oslo-incubator then moved to oslo.messaging) which extends the existing
 support for limiting the rpc thread pool so that concurrent requests can
 be limited based on type/method. The blueprint and patch are here:
 
 https://blueprints.launchpad.net/oslo.messaging/+spec/rpc-concurrency-control
 
 The basic idea is that if you have a server with limited resources you may
 want to restrict operations that would impact those resources, e.g. live
 migrations on a specific hypervisor or volume formatting on a particular
 volume node. This patch allows you, admittedly in a very crude way, to
 apply a fixed limit to a set of rpc methods. I would like to know
 whether or not people think this sort of thing would be useful or
 whether it alludes to a more fundamental issue that should be dealt with
 in a different manner.

Just to be clear for everyone what we're talking about. Your patch means
that if an operator sees that requests to the 'foo' and 'bar' RPC
methods for a given service are overwhelming the capacity of the
machine, you can throttle them by adding e.g.

  concurrency_control_enabled = true
  concurrency_control_actions = foo,bar
  concurrency_control_limit = 2

to the service's configuration file.
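
For anyone who hasn't looked at the patch, the effect is roughly this kind
of thing. It's a minimal sketch of the idea only - the real patch builds on
the existing rpc thread pool limiting rather than wrapping calls like this:

import threading

class MethodThrottle(object):
    """Limit how many of the named RPC methods may run concurrently."""

    def __init__(self, methods, limit):
        self._methods = set(methods)
        self._sem = threading.Semaphore(limit)

    def dispatch(self, method_name, func, *args, **kwargs):
        if method_name not in self._methods:
            return func(*args, **kwargs)
        with self._sem:  # blocks once 'limit' calls are already in flight
            return func(*args, **kwargs)

# i.e. concurrency_control_actions = foo,bar, concurrency_control_limit = 2
throttle = MethodThrottle(methods=['foo', 'bar'], limit=2)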

If you accept the premise of what's required here, I think you really
want to have e.g. a json policy file which can control the concurrency
limit on each method individually:

{
    compute: {
        baseapi: {
            ping: 10
        },
        : {
            foo: 1,
            bar: 2
        }
    }
}

but that starts feeling pretty ridiculous.

My concern is that we're avoiding addressing a more fundamental issue
here. From IRC:

 markmc avoid specific concurrent operations from consuming too many
  system resources and starving other less resource intensive
  actions
 markmc I'd like us to think about whether we can come up with a
  solution that fixes the problem for people, without them
  having to mess with this type of configuration
 markmc but yeah ... if we can't figure out a way of doing that, there
  is an argument for giving operators and interim workaround
 markmc I wouldn't be in favour of an interim fix without first
  exploring the options for a more fundamental fix
 markmc this isn't easily removable later, because once people start
  to rely on it we would need to put it through a deprecation
  period to remove it
 markmc also, an interim solution like this takes away the pressure on
  us to find a more fundamental solution ... and we may wind up
  never doing that


So, I guess my first question is ... what specific RPC methods have you
seen issues with and feel you need to throttle?

Thanks,
Mark.




Re: [openstack-dev] [TripleO] Summit session wrapup

2013-11-27 Thread Mark McLoughlin
Hi Jarda,

On Wed, 2013-11-27 at 14:39 +0100, Jaromir Coufal wrote:

 I think here is the main point where I disagree and which leads to
 different approaches. I don't think that the user of TripleO cares *only*
 about deploying infrastructure without any knowledge of where things
 go. This is the overcloud user's approach - 'I want a VM and I don't care
 where it runs'. Those are self-service users / cloud users. I know we
 are OpenStack on OpenStack, but we shouldn't go so far as to expect the
 same behavior from undercloud users.

Nice, I think you're getting really close to identifying the conflicting
assumptions/viewpoints here.

What OpenStack - and cloud, in general - does is provide a nice
self-service abstraction between the owners of the underlying resources
and the end-user.

We take an awful lot of placement control away from the
self-service user in order to allow the operator to provide a usable,
large-scale, multi-tenant service.

The difference with TripleO is that we assume the undercloud operator
and the undercloud user are one and the same. At least, that's what I
assume we're designing for. I don't think we're designing for a
situation where there is an undercloud operator serving the needs of
multiple overcloud operators and it's important for the undercloud
operator to have ultimate control over placement.

That's hardly the end of the story here, but it is one useful
distinction that could justify why this case might be different from the
usual application-deployment-on-IaaS case.

 I can tell you various examples of 
 why the operator will care about where the image goes and what runs on 
 a specific node.
 
 /One quick example:/
 I have three racks of homogenous hardware and I want to design it the 
 way so that I have one control node in each, 3 storage nodes and the 
 rest compute. With that smart deployment, I'll never know what my rack 
 contains in the end. But if I have control over stuff, I can say that 
 this node is controller, those three are storage and those are compute - 
 I am happy from the very beginning.

It is valid to ask why this knowledge is important to the user in this
case and why it makes them happy. Challenging such assumptions can lead
to design breakthroughs, I'm sure you agree.

e.g. before AWS came along, you could imagine someone trying to shoot
down the entire premise of IaaS with similar arguments.

Or the whole they'd have asked for a faster horse thing.

 Our target audience is sysadmins and operators. They hate 'magic'. They
 want to have control over the things they are doing. If we put a workflow
 in front of them where they click one button and get a cloud installed,
 they will be horrified.

 That's why I am very sure and convinced that we need to have the ability
 for the user to have control over stuff - what node has what role. We can
 be smart, suggest and advise, but not hide this functionality from the
 user. Otherwise, I am afraid that we can fail.
 
 Furthermore, if we put lots of restrictions (like homogenous hardware) 
 in front of users from the very beginning, we are discouraging people 
 from using TripleO-UI. We are a young project trying to hit as broad an
 audience as possible. If we take a flexible enough approach to get a large
 audience interested, solve their problems, we will get more feedback, we
 will get early adopters, we will get more contributors, etc.
 
 First, let's help the cloud operator who has some nodes and wants to
 deploy OpenStack on them. He wants control over which node is a
 controller and which node is compute or storage. Then we can get smarter
 and guide.

Yes, I buy this. And I think it's the point worth dwelling on.

It would be quite a bit of work to substantiate the point with hard data
- e.g. doing user testing of mockups with and without placement control
- so we have to at least try to build some consensus without that.

We could do some work on a more detailed description of the persona and
their basic goals. This would clear up whether we're designing for the
case where one persona owns the undercloud and there's another overcloud
operator persona.

We could also look at other tools targeted to similar use cases and see
what they do.

But yeah - my instinct is that all of that would show that we'd be
fighting an uphill battle to persuade our users that this type of magic
is what they want.

...
 === Implementation ===
 
 The above mentioned approach shouldn't lead to reimplementing the
 scheduler. We can still use nova-scheduler, but we can take advantage of
 extra params (like a unique identifier), so that we specify more
 concretely what goes where.

It's hard to see how what you describe doesn't ultimately mean we
completely bypass the Nova scheduler. Yes, if you request placement on
a specific node, it does still go through the scheduler ... but it
doesn't do any actual scheduling.

Maybe we should separate the discussion/design around control nodes and
resource (i.e. compute/storage) 

Re: [openstack-dev] [Keystone][Oslo] Future of Key Distribution Server, Trusted Messaging

2013-11-29 Thread Mark McLoughlin
Hey

Anyone got an update on this?

The keystone blueprint for KDS was marked approved on Tuesday:

  https://blueprints.launchpad.net/keystone/+spec/key-distribution-server

and a new keystone review was added on Sunday, but it must be a draft
since I can't access it:

   https://review.openstack.org/58124

Thanks,
Mark.




Re: [openstack-dev] When should things get added to Oslo.Incubator

2013-12-03 Thread Mark McLoughlin
On Tue, 2013-12-03 at 22:07 +0000, Joshua Harlow wrote:

 Process for process sake imho has been a problem for oslo.

It's been reiterated many times, but again - the only purpose of
oslo-incubator is as a place to evolve an API until we're ready to make
a commitment to API stability.

It's often easier to start a new API completely standalone, push it to
PyPI and plan for API backwards compatibility from the start. No-one has
ever said that such APIs need to go through oslo-incubator for process
sake.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When should things get added to Oslo.Incubator

2013-12-03 Thread Mark McLoughlin
On Tue, 2013-12-03 at 22:31 +0000, Joshua Harlow wrote:
 Sure, no one has said it. But it seems to be implied, otherwise these
 types of discussions wouldn't occur. Right?

You're assuming the Nova objects API is at a point where the maintainers
of it feel ready to commit to API stability.

Mark.

 On 12/3/13 2:25 PM, Mark McLoughlin mar...@redhat.com wrote:
 
 On Tue, 2013-12-03 at 22:07 +0000, Joshua Harlow wrote:
 
  Process for process sake imho has been a problem for oslo.
 
 It's been reiterated many times, but again - the only purpose of
 oslo-incubator is as a place to evolve an API until we're ready to make
 a commitment to API stability.
 
 It's often easier to start a new API completely standalone, push it to
 PyPI and plan for API backwards compatibility from the start. No-one has
 ever said that such APIs need to go through oslo-incubator for process
 sake.
 
 Mark.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When should things get added to Oslo.Incubator

2013-12-03 Thread Mark McLoughlin
On Tue, 2013-12-03 at 22:44 +0000, Joshua Harlow wrote:
 Sure sure, let me not make that assumption (can't speak for them), but
 even libraries on pypi have to deal with API instability.

Yes, they do ... either by maintaining stability, bumping their major
version number to reflect an incompatible change ... or annoying the
hell out of their users!

 Just more of a suggestion: might as well bite the bullet (if the objects
 folks feel OK with this) and just learn to deal with the PyPI method for
 dealing with API instability (versions, deprecation...), since copying
 code around just creates a miniature version of the same 'learning
 experience', except you lose the other parts (versioning, deprecation,
 ...) which come along with PyPI and libraries.

Yes, if the maintainers of the API are prepared to deal with the demands
of API stability, publishing the API as a standalone library would be
far preferable.

Failing that, oslo-incubator offers a halfway house which sucks, but not
as much as the alternative - projects copying and pasting each
other's code and evolving their copies independently.

Mark.

 Anyways, just a thought.
 
 -Josh
 
 On 12/3/13 2:39 PM, Mark McLoughlin mar...@redhat.com wrote:
 
 On Tue, 2013-12-03 at 22:31 +0000, Joshua Harlow wrote:
  Sure, no one has said it. But it seems to be implied, otherwise these
  types of discussions wouldn't occur. Right?
 
 You're assuming the Nova objects API is at a point where the maintainers
 of it feel ready to commit to API stability.
 
 Mark.
 
  On 12/3/13 2:25 PM, Mark McLoughlin mar...@redhat.com wrote:
  
  On Tue, 2013-12-03 at 22:07 +0000, Joshua Harlow wrote:
  
   Process for process sake imho has been a problem for oslo.
  
  It's been reiterated many times, but again - the only purpose of
  oslo-incubator is as a place to evolve an API until we're ready to make
  a commitment to API stability.
  
  It's often easier to start a new API completely standalone, push it to
  PyPI and plan for API backwards compatibility from the start. No-one
 has
  ever said that such APIs need to go through oslo-incubator for process
  sake.
  
  Mark.
  
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Olso][DB] Remove eventlet from oslo.db

2013-12-03 Thread Mark McLoughlin
On Mon, 2013-12-02 at 16:02 +0200, Victor Sergeyev wrote:
 Hi folks!
 
 At the moment I and Roman Podoliaka are working on splitting of
 openstack.common.db code into a separate library. And it would be nice to
 drop dependency on eventlet before oslo.db is released.
 
 Currently, there is only one place in oslo.db where we use eventlet -
 wrapping DB API method calls so that they are executed by tpool threads.
 This is only needed when eventlet is used together with a DB-API driver
 implemented as a Python C extension (eventlet can't monkey patch C code,
 so we end up with DB API calls blocking all green threads when using
 Python-MySQLdb). eventlet has a workaround known as 'tpool', which is
 basically a pool of real OS threads that can play nicely with the
 eventlet event loop. The tpool feature is experimental and known to have
 stability problems, and there is some doubt that anyone is using it in
 production at all. Nova API (and probably other API services) has an
 option to prefork the process on start, so that it doesn't need to use
 tpool when using eventlet together with Python-MySQLdb.
 
 We'd really like to drop tpool support from oslo.db, because as a library
 we should not be bound to any particular concurrency model. If a target
 project is using eventlet, we believe it is that project's problem to
 make it play nicely with the Python-MySQLdb lib, not oslo.db's problem.
 We could, though, put the tpool wrapper into another helper module within
 oslo-incubator.
 
 But we would really-really like not to have any eventlet related code in
 oslo.db.
 
 Are you using CONF.database.use_tpool in production? Does the approach with
 a separate tpool wrapper class seem reasonable? Or can we just drop tpool
 support altogether, if no one is using it?

Another approach is to put the tpool wrapper class in a separate module
which would be completely optional for users of the library.

For example, you could imagine people who don't want this doing:

  from oslo import db

  dbapi = db.DBAPI()

but if you want the tpool thing, you might do:

  from oslo import db
  from oslo.db import eventlet as db_eventlet

  dbapi = db_eventlet.TpoolWrapper(db.DBAPI())

(I'm just making stuff up, but you get the idea)

The key thing is that eventlet isn't a hard dependency of the library,
but the useful eventlet integration is still available in the library if
you want it.
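
To make that concrete, here's a rough sketch of what such an optional
module could look like - the module path is made up and this isn't meant
as the actual implementation:

  # e.g. oslo/db/eventlet.py - only imported by applications that want
  # the tpool behaviour, so eventlet stays an optional dependency.
  from eventlet import tpool

  class TpoolWrapper(object):
      """Proxy DB API calls through eventlet's pool of real OS threads."""

      def __init__(self, db_api):
          self._db_api = db_api

      def __getattr__(self, name):
          attr = getattr(self._db_api, name)
          if not callable(attr):
              return attr

          def in_tpool(*args, **kwargs):
              # tpool.execute() runs the blocking call in a native
              # thread so it doesn't stall the eventlet event loop.
              return tpool.execute(attr, *args, **kwargs)

          return in_tpool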

We did something similar in oslo.messaging, and the issues there were
probably more difficult to deal with.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] [barbican] Curious about oslo.messaging (from Incubation Request for Barbican)

2013-12-04 Thread Mark McLoughlin
On Wed, 2013-12-04 at 05:01 +0000, John Wood wrote:
 Hello folks,
 
 I was curious if there is an OpenStack project that would be a good
 example to follow as we convert Barbican over to oslo messaging. 
 
 I've been examining existing OpenStack projects such as Ceilometer and
 Keystone to see how they are utilizing oslo messaging. These projects
 appear to be utilizing packages such as 'rpc' and 'notifier' from the
 oslo-incubator project. It seems that the oslo.messaging project's
 structure is different than the messaging structure of oslo-incubator
 (there are newer classes such as Transport now for example). Is there
 an example OpenStack project utilizing the oslo.messaging structure
 that might be better for us to follow?
 
 The RPC approach of Ceilometer in particular seems well suited to
 Barbican's use case, so seems to be a good option for us to follow,
 unless there are better options folks can suggest.

The patch to port Nova might be helpful to you:

  https://review.openstack.org/39929

(Note - the patch is complete and ready to merge, it's just temporarily
blocked on a separate Nova issue being resolved)
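
If it helps to see the shape of the new API, here's a bare-bones sketch
(not Barbican-specific; the topic, server, endpoint and method names are
made up):

  from oslo.config import cfg
  from oslo import messaging

  transport = messaging.get_transport(cfg.CONF)
  target = messaging.Target(topic='barbican.workers', server='worker-1')

  # Server side: endpoint methods become remotely callable.
  class WorkerEndpoint(object):
      def process_order(self, ctxt, order_id):
          return 'processed %s' % order_id

  server = messaging.get_rpc_server(transport, target, [WorkerEndpoint()])
  server.start()  # begin consuming RPC requests for this target

  # Client side (e.g. in the API process): cast/call against the topic.
  client = messaging.RPCClient(transport,
                               messaging.Target(topic='barbican.workers'))
  client.cast({}, 'process_order', order_id='12345')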

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-05 Thread Mark McLoughlin
On Mon, 2013-12-02 at 11:00 -0500, Doug Hellmann wrote:

 I have updated the Oslo wiki page with these details and would appreciate
 feedback on the wording used there.
 
 https://wiki.openstack.org/wiki/Oslo#Graduation

Thanks Doug, that sounds perfect to me.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][TripleO] Nested resources

2013-12-05 Thread Mark McLoughlin
Hi Kevin,

On Mon, 2013-12-02 at 12:39 -0800, Fox, Kevin M wrote:
 Hi all,
 
 I just want to run a crazy idea up the flag pole. TripleO has the
 concept of an under and over cloud. In starting to experiment with
 Docker, I see a pattern start to emerge.
 
  * As a User, I may want to allocate a BareMetal node so that it is
 entirely mine. I may want to run multiple VMs on it to reduce my own
 cost. Now I have to manage the BareMetal nodes myself or nest
 OpenStack into them.
  * As a User, I may want to allocate a VM. I then want to run multiple
 Docker containers on it to use it more efficiently. Now I have to
 manage the VM's myself or nest OpenStack into them.
  * As a User, I may want to allocate a BareMetal node so that it is
 entirely mine. I then want to run multiple Docker containers on it to
 use it more efficiently. Now I have to manage the BareMetal nodes
 myself or nest OpenStack into them.
 
 I think this can then be generalized to:
 As a User, I would like to ask for resources of one type (One AZ?),
 and be able to delegate resources back to Nova so that I can use Nova
 to subdivide and give me access to my resources as a different type.
 (As a different AZ?)
 
 I think this could potentially cover some of the TripleO stuff without
 needing an over/under cloud. For that use case, all the BareMetal
 nodes could be added to Nova as such, allocated by the services
 tenant as running a nested VM image type resource stack, and then made
 available to all tenants. Sys admins then could dynamically shift
 resources from VM providing nodes to BareMetal Nodes and back as
 needed.
 
 This allows a user to allocate some raw resources as a group, then
 schedule higher level services to run only in that group, all with the
 existing api.
 
 Just how crazy an idea is this?

FWIW, I don't think it's a crazy idea at all - indeed I mumbled
something similar a few times in conversation with random people over
the past few months :)

With the increasing interest in containers, it makes a tonne of sense -
you provision a number of VMs and now you want to carve them up by
allocating containers on them. Right now, you'd need to run your own
instance of Nova for that ... which is far too heavyweight.

It is a little crazy in the sense that it's a tonne of work, though.
There's not a whole lot of point in discussing it too much until someone
shows signs of wanting to implement it :)

One problem is how the API would model this nesting; another is making
the scheduler aware that some nodes are only available to the tenant
which owns them; but maybe a bigger problem is the security model around
allowing a node managed by an untrusted tenant to become a compute node.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-05 Thread Mark McLoughlin
On Thu, 2013-12-05 at 23:37 +0000, Douglas Mendizabal wrote:
 
 I agree that this is concerning. And that what's concerning isn't so
 much that the project did something different, but rather that choice
 was apparently made because the project thought it was perfectly fine
 for them to ignore what other OpenStack projects do and go off and do
 its own thing.
 
 We can't make this growth in the number of OpenStack projects work if
 each project goes off randomly and does its own thing without any
 concern for the difficulties that creates.
 
 Mark.
 
 Hi Mark,
 
 You may have missed it, but barbican has added a blueprint to change our
 queue to use oslo.messaging [1]
 
 I just wanted to clarify that we didn’t choose Celery because we thought
 that “it was perfectly fine to ignore what other OpenStack projects do”.
 Incubation has been one of our goals since the project began.  If you’ve
 taken the time to look at our code, you’ve seen that we have been using
 oslo.config this whole time.  We chose Celery because it was
 
 a) Properly packaged like any other python library, so we could just
 pip-install it.
 b) Well documented
 c) Well tested in production environments
 
 At the time none of those were true for oslo.messaging.  In fact,
 oslo.messaging still cannot be pip-installed as of today.  Obviously, had
 we known that using oslo.messaging is a hard requirement in advance, we
 would have chosen it despite its poor distribution story.

I do sympathise, but it's also true that all other projects were
using the oslo-incubator RPC code at the time you chose Celery.

I think all the verbiage in this thread about celery is just to
reinforce that we need to be very sure that new projects feel a
responsibility to fit closely in with the rest of OpenStack. It's not
about technical requirements so much as social responsibility.

But look - I think you've reacted well to the concern and, if it feels
like there was an overreaction, hopefully you can understand the broader
thing we're trying to get at here.

Thanks,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Layering olso.messaging usage of config

2013-12-06 Thread Mark McLoughlin
Hi Julien,

On Mon, 2013-12-02 at 16:45 +0100, Julien Danjou wrote:
 On Mon, Nov 18 2013, Julien Danjou wrote:
 
https://blueprints.launchpad.net/oslo/+spec/messaging-decouple-cfg
 
 So I've gone through the code and started to write a plan on how I'd do
 things:
 
   https://wiki.openstack.org/wiki/Oslo/blueprints/messaging-decouple-cfg
 
 I don't think I missed too much, though I didn't run into all tiny
 details.
 
 Please feel free to tell me if I miss anything obvious, otherwise I'll
 try to start submitting patches, one at a time, to get this into shape
 step by step.

Thanks for writing this up, I really appreciate it.

I would like to spend more time getting to the bottom of what we're
trying to solve here.

If the goal is allow applications to use oslo.messaging without using
oslo.config, then what's driving this? I'm guessing some possible
answers:

  1) I don't want to use a global CONF object

 This is a strawman - I think we all realize the conf object you 
 pass to oslo.messaging doesn't have to be cfg.CONF. Just putting 
 it here to make sure no-one's confused about that.

  2) I don't want to have configuration files or command line options in
 order to use oslo.messaging

 But, even now, you don't actually have to parse the command line or
 any config files. See e.g. https://gist.github.com/markmc/7823230

  3) Ok, but I want to be able to specify values for things like 
 rpc_conn_pool_size without using any config files.

 We've talked about allowing the use of query parameters for stuff 
 like this, but I don't love that. I think I'd restrict query 
 parameters to those options which are fundamental to how you 
 connect to a given transport.

 We could quite easily provide an API which would allow 
 constructing a ConfigOpts object with a bunch of options set and 
 without anyone having to use config files. Here's a PoC of how
 that might look:

   https://gist.github.com/markmc/7823420

 (Now, if your reaction is 'OMG, you're using temporary config
 files on disk, that's awful' then just bear with me and ignore the 
 implementation details of get_config_from_dict(). We could very 
 easily make oslo.config support a mode like this without needing
 to ever write anything to disk)

 The point of this example is that we could add an oslo.messaging
 API which takes a dict of config values and you never even know
 that oslo.config is involved.

  4) But I want the API to be explicit about what config options are 
 supported by the API

 This could be something useful to discuss, because right now the 
 API hides configuration options rather than encoding them into the 
 API. This is to give us a bit more flexibility about changing 
 these over time (e.g. keeping backwards compat for old options for 
 a shorter time than other aspects of the API).

 But actually, I'm assuming this isn't what you're thinking since 
 your patch adds a free-form executor_kwargs dict.

  5) But I want to avoid any dependency on oslo.config

 This could be fundamentally what we're talking about here, but I 
 struggle to understand it - oslo.config is pretty tiny and it only 
 requires argparse, so if it's just an implementation detail that 
 you don't even notice if you're not using config files then what 
 exactly is the problem?


Basically, my thinking is that something like this example:

  https://gist.github.com/markmc/7823420

where you can use oslo.messaging with just a dict of config values
(rather than having to parse config files) should handle any reasonable
concern that I've understood so far ... without having to change much at
all.
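
To make the shape of that concrete, get_config_from_dict() can be as dumb
as something like this sketch (not necessarily exactly what the gist
does):

  import tempfile

  from oslo.config import cfg

  def get_config_from_dict(groups):
      """Build a ConfigOpts from e.g. {'DEFAULT': {'rpc_conn_pool_size': 10}}.

      A quick hack - values are written to a throwaway config file so
      that oslo.config can parse them as usual.
      """
      tmp = tempfile.NamedTemporaryFile(mode='w', suffix='.conf',
                                        delete=False)
      for group, options in groups.items():
          tmp.write('[%s]\n' % group)
          for name, value in options.items():
              tmp.write('%s = %s\n' % (name, value))
      tmp.close()

      conf = cfg.ConfigOpts()
      conf(args=[], default_config_files=[tmp.name])
      return conf

  conf = get_config_from_dict({'DEFAULT': {'rpc_conn_pool_size': 10}})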

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Layering olso.messaging usage of config

2013-12-06 Thread Mark McLoughlin
On Fri, 2013-12-06 at 15:41 +0100, Julien Danjou wrote:
 On Fri, Dec 06 2013, Mark McLoughlin wrote:
 
 Hi Mark,
 
  If the goal is allow applications to use oslo.messaging without using
  oslo.config, then what's driving this? I'm guessing some possible
  answers:
 
5) But I want to avoid any dependency on oslo.config
 
 I think that's the more important one to me.
 
   This could be fundamentally what we're talking about here, but I 
   struggle to understand it - oslo.config is pretty tiny and it only 
   requires argparse, so if it's just an implementation detail that 
   you don't even notice if you're not using config files then what 
   exactly is the problem?
 
  Basically, my thinking is that something like this example:
 
https://gist.github.com/markmc/7823420
 
  where you can use oslo.messaging with just a dict of config values
  (rather than having to parse config files) should handle any reasonable
  concern that I've understood so far ... without having to change much at
  all.
 
 I definitely agree with your arguments. There's a large number of
 technical solutions that can be used to bypass the usage of oslo.config
 and make it work with whatever you're using.
 
 I just can't stop thinking that a library shouldn't impose any use of a
 configuration library. I can pick any library on PyPI, and, fortunately,
 most of them don't come with a dependency on the favorite configuration
 library of their author or related project, with its usage spread all
 over the code base.
 
 While I do respect the fact that this is a library to be consumed mainly
 in OpenStack (and I don't want to break that), I think we're also trying
 to not be the new Zope and contribute in a sane way to the Python
 ecosystem. And I think oslo.messaging doesn't do that right.
 
 Now if the consensus is to leave it that way, I honestly won't fight it
 over and over. As Mark proved, there are a lot of ways to circumvent the
 oslo.config usage anyway.

Ok, let's say oslo.messaging didn't use oslo.config at all and just took
a free-form dict of configuration values. Then you'd have this
separation whereby you can write code to retrieve those values from any
number of possible configuration sources and pass them down to
oslo.messaging. I think that's what you're getting at?

However, what you lose with that is a consistent way of defining a
schema for those configuration options in oslo.messaging. Should a given
option be an int, bool or a list? What should its default be? etc. etc.
That stuff would live in the integration layer that maps from
oslo.config to a dict, even though it's totally useful when you just
supply a dict.

I guess there are two sides to oslo.config - the option schemas and the
code to retrieve values from various sources (command line, config files
or overrides/defaults). I think the option schemas are a useful
implementation detail in oslo.messaging, even if the values don't come
from the usual oslo.config sources.
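
i.e. whichever way the values arrive, it's still useful for the library
to declare something like the following internally (the names and
defaults here are illustrative only, not oslo.messaging's real option
list):

  from oslo.config import cfg

  # Illustrates the three option types mentioned above; not a claim
  # about which options oslo.messaging actually defines.
  _opts = [
      cfg.IntOpt('rpc_conn_pool_size', default=30,
                 help='Size of the RPC connection pool.'),
      cfg.BoolOpt('fake_rabbit', default=False,
                  help='Use a fake in-memory transport (testing only).'),
      cfg.ListOpt('allowed_exmods', default=[],
                  help='Exception modules allowed across the wire.'),
  ]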

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-06-11 Thread Mark McLoughlin
Hi,

On Tue, 2014-06-10 at 15:47 +0400, Dina Belova wrote:
 Dims,
 
 
 No problem with creating the specs, we just want to understand if the
 community is OK with our suggestions in general :)
 If so, I'll create the appropriate specs and we'll discuss them :)

Personally, I find it difficult to understand the proposals as currently
described and how they address the performance problems you say you see.

The specs process should help flesh out your ideas so they are more
understandable. On the other hand, it's pretty difficult to have an
abstract conversation about code re-factoring. So, some combination of
proof-of-concept patches and specs will probably work best.

Mark.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][ceilometer][glance][all] Loading clients from a CONF object

2014-06-11 Thread Mark McLoughlin
On Wed, 2014-06-11 at 16:57 +1200, Steve Baker wrote:
 On 11/06/14 15:07, Jamie Lennox wrote:
  Among the problems caused by the inconsistencies in the clients is that
  all the options that are required to create a client need to go into the
  config file of the service. This is a pain to configure from the server
  side and can result in missing options as servers fail to keep up.
 
  With the session object standardizing many of these options there is the
  intention to make the session loadable directly from a CONF object. A
  spec has been proposed to this for nova-specs[1] to outline the problem
  and the approach in more detail. 
 
  The TL;DR version is that I intend to collapse all the options to load a
  client down such that each client will have one ini section that looks
  vaguely like: 
 
  [cinder]
  cafile = '/path/to/cas'
  certfile = 'path/to/cert'
  timeout = 5
  auth_name = v2password
  username = 'user'
  password = 'pass'
  
  This list of options is then managed from keystoneclient, thus servers
  will automatically have access to new transport options, authentication
  mechanisms and security fixes as they become available.
 
  The point of this email is to make people aware of this effort and that
  if accepted into nova-specs the same pattern will eventually make it to
  your service (as clients get updated and manpower allows). 
 
  The review containing the config option names is still open[2] so if you
  wish to comment on particulars, please take a look.
 
  Please leave a comment on the reviews or reply to this email with
  concerns or questions. 
 
  Thanks 
 
  Jamie
 
  [1] https://review.openstack.org/#/c/98955/
  [2] https://review.openstack.org/#/c/95015/
 Heat already needs to have configuration options for every client, and
 we've gone with the following pattern:
 http://git.openstack.org/cgit/openstack/heat/tree/etc/heat/heat.conf.sample#n612
 
 Do you have any objection to aligning with what we already have?
 Specifically:
 [clients_clientname]
 ca_file=...
 cert_file=...
 key_file=...

Sounds like there's a good case for an Oslo API for creating client
objects from configuration.
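
Something along these lines, say - very much a straw man, with the helper
and keyword names invented for the sake of the example:

  from oslo.config import cfg

  _client_opts = [
      cfg.StrOpt('cafile', help='CA certificates file to verify the server.'),
      cfg.StrOpt('certfile', help='Client certificate file.'),
      cfg.StrOpt('keyfile', help='Client key file.'),
      cfg.IntOpt('timeout', help='HTTP request timeout in seconds.'),
  ]

  def register_client_opts(conf, service):
      """Register the standard client options under e.g. [cinder]."""
      conf.register_opts(_client_opts, group=service)

  def client_kwargs_from_conf(conf, service):
      """Collect the registered values into kwargs for a client."""
      group = conf[service]
      return dict(cacert=group.cafile,
                  cert=group.certfile,
                  key=group.keyfile,
                  timeout=group.timeout)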

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

