Re: [openstack-dev] [all][tc] TC Candidates: what does an OpenStack user look like?

2017-10-15 Thread James Bottomley
On Thu, 2017-10-12 at 12:51 -0400, Zane Bitter wrote:
> So my question to the TC candidates (and incumbent TC members, or
> anyone else, if they want to answer) is: what does the hypothetical
> OpenStack user that is top-of-mind in your head look like? Who are
> _you_ building OpenStack for?

There's a fundamental misconception in the way you just asked the
question: in any open source project, "who am I building this for?" and
"who is my target end user?" aren't necessarily the same question.

The honest answer to "who am I building this open source project for?"
should always be "me".  There's nothing magical about this: all open
source/free software projects are driven by developer enthusiasm.  If
the reason you're in the project isn't something inside yourself (like
fascination with some aspect of the code or need to use it personally)
then you're unlikely to be a good contributor.  The principle is
actually universal: having been an engineering manager in industry I
know that if someone is only in the project for the paycheque then I
need to replace them ASAP with someone who's actually fascinated by
some aspect of the project because the productivity of the latter will
be way higher.  It's the most annoying aspect of Engineering Project
Management: engineers aren't fungible resources, they have enthusiasms
that have to be engaged.

There's a corollary to this that allows you to test the health of your
project: "If I weren't being paid to do this, would I still do it?".
 The majority answer for a healthy project should be "yes".  There's no
industrial counterpart here because if they don't pay you, you don't
get access to the code base.

The question of who is the end user is usually either "me" because you
have a use for the project or more likely "I don't know" because you
care mostly about the engineering aspects.  That's not to say that some
contributors can't get fascinated by user problems, because it does
happen; however, they're not usually the majority.

User base tends to come about because of goal alignment: once a project
has a reasonable number of committers, feature addition becomes more a
matter of negotiation, but these negotiations tend to produce better
code and a set of common goals (the aligned goals of the contributors).
Industrial contributors are attracted if some of the project goals
align reasonably with business goals and it looks like contribution
from the industry partner could achieve further alignment.  One of the
key goals of industry is to get paid by consumers, so industrial
contributors tend to bring along the users (again, industry does this
by canvassing end-user requirements, seeing what the alignment with the
project is and whether it could be improved; they don't do this for
"community", they do it because they make more money if the alignment
is better).  By the way, this pragmatic goal alignment without necessarily
sharing any philosophical belief in "code freedom" is the main
difference between open source and free software.

That's not to say every successful open source/free software project
has to have a large user base.  The Linux Desktop would be a classic
example here: excluding mobile, it's a complete failure in terms of
world domination of user base.  However, in terms of "built by geeks
for geeks" it's very much alive and healthy ... just look at the OS
running on laptops at any Linux conference, for instance.

James


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread James Bottomley
On Tue, 2016-08-30 at 03:08 +0000, joehuang wrote:
> Hello, Jay,
> 
> Sorry, I don't know why my mail-agent(Microsoft Outlook Web App) did 
> not carry the thread message-id information in the reply.  I'll check 
> and avoid to create a new thread for reply in existing thread.

It's a common problem with Outlook.  Microsoft created their own
threading standards for email, which are adopted by no-one.  Whenever
you get these headers in your email:

Thread-Topic:
Thread-Index:

And not these:

In-Reply-To:
References:

it usually means Exchange has decided the other end is a Microsoft
entity and it doesn't need to use the internet-standard reply headers.

Unfortunately, this isn't fixable in Outlook because Exchange (the
MTA), not Outlook (the MUA), does the threading.  There are some
thoughts floating around the internet on how to fix Exchange; if you're
lucky and you have Exchange 2003, this might fix it:

https://support.microsoft.com/en-us/kb/908027
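
For contrast, here's a minimal sketch of what a standards-compliant
client does when replying (Python stdlib, purely illustrative; it
obviously won't change what Exchange emits, and it assumes the original
message carries a Message-ID):

# Sketch of internet-standard (RFC 5322 style) reply threading.
from email.message import EmailMessage

def make_reply(original, body):
    reply = EmailMessage()
    reply["Subject"] = "Re: " + (original["Subject"] or "")
    # In-Reply-To carries the parent's Message-ID; References carries
    # the whole ancestor chain, which threaded readers rebuild from.
    parent_id = original["Message-ID"]
    reply["In-Reply-To"] = parent_id
    reply["References"] = ((original["References"] or "")
                           + " " + parent_id).strip()
    reply.set_content(body)
    return reply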

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Project mascots update

2016-08-05 Thread James Bottomley
On Thu, 2016-08-04 at 17:09 +1000, Mike Carden wrote:
> On Thu, Aug 4, 2016 at 4:26 PM, Antoni Segura Puimedon <
> toni+openstac...@midokura.com> wrote:
> 
> > 
> > It would be really awesome if, in true OSt and OSS spirit this work
> > happened in an OpenStack repository with an open, text based format 
> > like SVG. This way people could contribute and review.
> > 
> > 
> I am strongly in favour of images being stored in open formats. Right 
> now the most widely supported open formats are PNG and SVG. Let's 
> make sure that as often as possible, we all store non-photographic 
> images in formats like these.

As someone who acts as web monkey for various conference websites,
could I just say: please use SVG.  Scalable formats are so much easier
for website designers to work with, and PNGs have a habit of looking
ugly when you're forced to scale them (which inevitably happens when
you have a bunch and you're trying to get them to look uniform).

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread James Bottomley
On Thu, 2016-08-04 at 10:10 +0200, Thierry Carrez wrote:
> Devdatta Kulkarni wrote:
> > As current PTL of one of the projects that has the team:single
> > -vendor tag, I have following thoughts/questions on this issue.
> 
> In preamble I'd like to reiterate that the proposal is not on the 
> table at this stage -- this is just a discussion to see whether it 
> would be a good thing or a bad thing.

I think this is fair enough, plus the idea that the tagging only
triggers a review, not an automatic eviction, is reasonable.  However,
I do have a concern about what you said below.

> > - Is the need for periodically deciding membership in the big tent
> > primarily stemming from the question of managing resources (for the
> > future design summits and cross-project work)?
> 
> No, it's not the primary reason. As I explained elsewhere in the 
> thread, it's more that (from an upstream open source project 
> perspective) OpenStack is not a useful vehicle for open source 
> projects that are and will always be driven by a single vendor. The 
> value we provide (through our governance, principles and infra 
> systems) is in enabling open collaboration between organizations. A 
> project that will clearly always stay single-vendor (for one reason 
> or another) does not get or bring extra technical value by being 
> developed within "OpenStack" (i.e. under the Technical Committee
> oversight).

I don't believe this to be consistent with the OpenStack mission
statement:

to produce the ubiquitous Open Source Cloud Computing platform that
will meet the needs of public and private clouds regardless of size,
by being simple to implement and massively scalable.

OpenStack chooses to implement this mission statement by insisting on
openness via the four opens and by facilitating a collaborative
environment for every project.  I interpret the above to mean any
OpenStack project must be open and must bring technical benefit to
public and private clouds of any size; so I don't think the position
that a project, even one satisfying your openness requirements, must
also derive technical benefit from the infrastructure you've put in
place can be supported by any construction of the above mission
statement.

The other thing that really bothers me is that it changes the
assessment of value to OpenStack from being extrinsic (brings technical
benefit to public and private clouds) to being intrinsic (must derive
technical benefit from our infrastructure), and I find non-extrinsic
measures suspect because they can lead to self-perpetuation.

James


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread James Bottomley
On Mon, 2016-08-01 at 13:43 -0400, Sean Dague wrote:
> On 08/01/2016 12:24 PM, James Bottomley wrote:
> > Making no judgments about the particular exemplars here, I would 
> > just like to point out that one reason why projects exist with very
> > little diversity is that they "just work".  Usually people get 
> > involved when something doesn't work or they need something changed 
> > to work for them.  However, people do have a high tolerance for 
> > "works well enough" meaning that a project can be functional, 
> > widely used and not attracting diverse contributors.  A case in 
> > point for this type of project in the non-OpenStack world would be 
> > OpenSSL, but there are many others.
> 
> I think openssl is a good example of what we are actually trying to
> avoid. Over time that project boiled down to just a couple of people.
> Which seemed ok, because everything seemed to be working fine, but 
> only because no one was pushing on it too hard. Then folks did, and 
> we realized that there was kind of a house of cards here that
> required special intervention to address some of the issues found.

The original problem was a lack of security audits, which led to
mistakes like Heartbleed.  Now that that's been remedied by investment
from the CII, the project is still very monoclonal and run by a small
group ... and still just as essential.

> Keeping a diverse community up front helps mitigate some of this. 
> It's not a silver bullet by any means, but it does help ensure that 
> the goals of the project aren't only the goals of a single product 
> team inside a single entity.

The point I'm making is that company-led projects tend to be much
better connected with the end-user base (because companies want
customers), which, ipso facto, means they tend to fall into the "good
enough" bucket and fail to attract many more outside contributions.

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread James Bottomley
On Mon, 2016-08-01 at 11:38 -0400, Doug Hellmann wrote:
> Excerpts from Adrian Otto's message of 2016-08-01 15:14:48 +0000:
> > I am struggling to understand why we would want to remove projects
> > from our big tent at all, as long as they are being actively
> > developed under the principles of "four opens". It seems to me that
> > working to disqualify such projects sends an alarming signal to our
> > ecosystem. The reason we made the big tent to begin with was to set
> > a tone of inclusion. This whole discussion seems like a step
> > backward. What problem are we trying to solve, exactly?
> > 
> > If we want to have tags to signal team diversity, that's fine. We
> > do that now. But setting arbitrary requirements for big tent
> > inclusion based on who participates definitely sounds like a
> > mistake.
> 
> Membership in the big tent comes with benefits that have a real
> cost born by the rest of the community. Space at PTG and summit
> forum events is probably the one that's easiest to quantify and to
> point to as something limited that we want to use as productively
> as possible. If 90% of the work of a project is being done by a
> single company or organization (our current definition for
> single-vendor), and that doesn't change after 18 months, then I
> would take that as a signal that the community isn't interested
> enough in the project to bear the associated costs.
> 
> I'm interested in hearing other reasons that we should keep these
> sorts of projects, though. I'm not yet ready to propose the change
> to the policy myself.

Making no judgments about the particular exemplars here, I would just
like to point out that one reason why projects exist with very little
diversity is that they "just work".  Usually people get involved when
something doesn't work or they need something changed to work for them.
However, people do have a high tolerance for "works well enough",
meaning that a project can be functional, widely used and not
attracting diverse contributors.  A case in point for this type of
project in the non-OpenStack world would be OpenSSL, but there are many
others.

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins for all)

2016-07-20 Thread James Bottomley
On Wed, 2016-07-20 at 20:01 +0000, Fox, Kevin M wrote:
> I would argue Linus, yes. As he's constantly stepping up when a
> subsystem tries and breaks something for a user or creates a user
> facing mess and says, no, subsystem, no. Breaking userspace is
> unacceptable, or, we're not adding support for an API we have to
> support forever that's very poorly designed.

Those are all vetos.  He doesn't compel one subsystem to accept the
patches of another, for instance.

>  Yes, he defers a lot to subsystem maintainers, as they have
> generally gotten the message of paying close attention to that kind
> of thing over time, and he hasn't had to speak up so much anymore.
> The rest really is best left up to the subsystems. But someone has to
> keep an eye on the big picture. The users of the whole thing. Users
> care about the linux kernel as a whole, and less so about any given
> subsystem.

He says "don't build this" (veto) he doesn't say "do build that"
(compulsion).  The problem I've heard a lot of people describing on
this thread is the latter: difficulty of getting one group to pay
attention to the needs of another.  Your "overarching Architectural
group with some power to define what a user is" is something like this.

The only power in Linux is the power to say "no".  The only way an
individual or a group builds acceptance for their own patches is on
their own.  Architectural decisions in this model are driven locally
not globally.

James


> Thanks,
> Kevin
> 
> From: James Bottomley [james.bottom...@hansenpartnership.com]
> Sent: Wednesday, July 20, 2016 12:42 PM
> To: OpenStack Development Mailing List (not for usage questions);
> Clint Byrum
> Subject: Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins
> for all)
> 
> On Wed, 2016-07-20 at 18:18 +0000, Fox, Kevin M wrote:
> > I wish it was so simple. It's not.
> > 
> > There is a good coding practice:
> > "The code is done, not when there is nothing more to add, but
> > nothing
> > more to remove"
> > 
> > Some of the more mature projects are in this mode of thinking now.
> > (which is mostly good, really). They don't want to add features
> > unless they see it as a benefit to their users. This is the
> > problem,
> > there is a disconnect between the view of Project X's users, and
> > greater view of OpenStack Users.
> > 
> > Even accepting the smallest of patches to work towards the goal is
> > unacceptable to projects if they believe it is not a helpful
> > feature
> > to their perceived user base, or helpful enough to them to justify
> > adding more code to their project. Or the feeling that "just push
> > it
> > to a new project or a different one is better". This fractured view
> > of OpenStack users at the project level is preventing progress on
> > OpenStack as a whole.
> > 
> > Only an overarching Architectural group with some power to define
> > what a user is, or the TC can address those sorts of issues.
> 
> I'll concede this requirement if you can point out to me who this
> group
> is for the Linux Kernel.  If you're tempted to say "Linus", it's most
> certainly not: while he does care about some architectural decisions,
> he emphatically avoids most of them, which leaves the subsystem
> maintainers (some equivalence to openstack projects/PTLs) doing this
> on
> a case by case basis.
> 
> James
> 
> > Thanks,
> > Kevin
> > 
> > From: James Bottomley [james.bottom...@hansenpartnership.com]
> > Sent: Wednesday, July 20, 2016 9:57 AM
> > To: OpenStack Development Mailing List (not for usage questions);
> > Clint Byrum
> > Subject: Re: [openstack-dev] [tc][all] Big tent? (Related to
> > Plugins
> > for all)
> > 
> > On Wed, 2016-07-20 at 16:08 +0000, Fox, Kevin M wrote:
> > > +1 to the finding of a middle ground.
> > 
> > Thanks ... I have actually been an enterprise architect (I just
> > keep
> > very quiet about it when talking Open Source).
> > 
> > > The problem I've seen with your suggested OpenSource solution is
> > > the
> > > current social monetary system of OpenStack makes it extremely
> > > difficult.
> > > 
> > > Each project currently prints its own currency. Reviews. It takes
> > > quite a few Reviews (time/effort) on a project to gain enough
> > currency that you get paid attention to. And, one project doesn't
> > honor another project's 

Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins for all)

2016-07-20 Thread James Bottomley
On Wed, 2016-07-20 at 18:18 +0000, Fox, Kevin M wrote:
> I wish it was so simple. It's not.
> 
> There is a good coding practice:
> "The code is done, not when there is nothing more to add, but nothing
> more to remove"
> 
> Some of the more mature projects are in this mode of thinking now.
> (which is mostly good, really). They don't want to add features
> unless they see it as a benefit to their users. This is the problem,
> there is a disconnect between the view of Project X's users, and
> greater view of OpenStack Users.
> 
> Even accepting the smallest of patches to work towards the goal is
> unacceptable to projects if they believe it is not a helpful feature
> to their perceived user base, or helpful enough to them to justify
> adding more code to their project. Or the feeling that "just push it
> to a new project or a different one is better". This fractured view
> of OpenStack users at the project level is preventing progress on
> OpenStack as a whole.
> 
> Only an overarching Architectural group with some power to define
> what a user is, or the TC can address those sorts of issues.

I'll concede this requirement if you can point out to me who this group
is for the Linux Kernel.  If you're tempted to say "Linus", it's most
certainly not: while he does care about some architectural decisions,
he emphatically avoids most of them, which leaves the subsystem
maintainers (some equivalence to openstack projects/PTLs) doing this on
a case by case basis.

James

> Thanks,
> Kevin
> 
> From: James Bottomley [james.bottom...@hansenpartnership.com]
> Sent: Wednesday, July 20, 2016 9:57 AM
> To: OpenStack Development Mailing List (not for usage questions);
> Clint Byrum
> Subject: Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins
> for all)
> 
> On Wed, 2016-07-20 at 16:08 +0000, Fox, Kevin M wrote:
> > +1 to the finding of a middle ground.
> 
> Thanks ... I have actually been an enterprise architect (I just keep
> very quiet about it when talking Open Source).
> 
> > The problem I've seen with your suggested OpenSource solution is
> > the
> > current social monetary system of OpenStack makes it extremely
> > difficult.
> > 
> > Each project currently prints its own currency. Reviews. It takes
> > quite a few Reviews (time/effort) on a project to gain enough
> > currency that you get paid attention to. And, one project doesn't
> > honor another project's currency.
> 
> OK, I accept your analogy, even though I would view currency as the
> will to create and push patches.
> 
> The problem you describe: getting the recipients to listen and accept
> your patches, is also a common one.  The first essential is simple
> minimal patches because they're hard to reject.
> 
> Once you've overcome the reject barrier, there's the indifference one
> (no-one says no, but no-one says yes).
> 
> Overcoming indifference involves partly knowing who to bother and
> when
> (In openstack, it's quite easy since you know who the core reviewers
> are) and also building a consensus for the patch; usually this
> involves
> finding other people who need the feature and getting them to pipe up
> (if you can't find other projects, then you can get potential users
> to
> do this) even better if they help you write the patches.  Ideally,
> you
> build your consensus before you actually push the patch set. 
>  Sometimes
> building consensus involves looking beyond your particular need to a
> bigger one that would satisfy you but also pulls other people in.
> 
> > When these sorts of major cross project things need to be done
> > though, you need to have folks with capital in many different
> > projects and its very difficult to amass that much.
> > 
> > There is no OpenStack level currency (other then being elected as a
> > TC member) that gets one project to stop and take the time to
> > carefully consider what someone is saying when bringing up cross
> > project issues.
> > 
> > Without some shared currency, then the problem becomes, the person
> > invested in getting a solution, can propose and prototype and
> > implement the feature all they want (often several times), but it
> > never actually is accepted into the projects because a project:
> >  a) doesn't take the time to really understand the problem, feeling
> > its trivial and just not accepting any reviews
> >  b) doesn't take the time to really understand the problem, so
> > minimize it in their mind to a 'you should just use existing thing
> > X...' or a heavy hand

Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins for all)

2016-07-20 Thread James Bottomley
On Wed, 2016-07-20 at 21:24 +0300, Duncan Thomas wrote:
> On 20 July 2016 at 19:57, James Bottomley <
> james.bottom...@hansenpartnership.com> wrote:
> 
> > 
> > OK, I accept your analogy, even though I would view currency as the
> > will to create and push patches.
> > 
> > The problem you describe: getting the recipients to listen and
> > accept
> > your patches, is also a common one.  The first essential is simple
> > minimal patches because they're hard to reject.
> > 
> > Once you've overcome the reject barrier, there's the indifference
> > one
> > (no-one says no, but no-one says yes).
> > 
> > [snip]
> 
> The trouble with drive-by architecture patches (or large feature 
> patches of any kind)

OK, there's an assumption here: that the patch is large.

>  is that it is often better *not* to merge them if you don't think
> the contributor is going to stick around for a while. These changes
> are usually intrusive, and have repercussions that take time to
> discover. It's often difficult to keep a change clean when the
> original author isn't around to review the follow-on work.

I understand (and do agree with) the maintenance problem.  However, if
you're trying to change architecture and you do it by small patches, it
looks easily reversible and is a lot easier to gain acceptance for.
The key is to think small, not big (the latter being the temptation for
architecture), even if the end result will eventually be big.

Let me give you an example from ancient history.  The biggest
architectural change I pushed into Linux was switching from bus
specific to generic device APIs.  I did it at the time because I had a
SCSI device in the PA-RISC architecture that could sit on a variety of
internal busses and I didn't want a huge driver with five sets of
partially duplicated code for five different busses.

Here's the first patch (both dated 21 Dec 2002)

generic device DMA API

27 files changed, 750 insertions(+), 155 deletions(-)

And the second

allow pci primary busses to have parents in the device model

2 files changed, 14 insertions(+), 5 deletions(-)

The first is large because I'm actually introducing yet another
parallel DMA API (50% of it is documentation).  It looks OK because all
our other DMA APIs were bus specific.

The second is the key change because it allows cascades of bus
hierarchies of different types.

These two patches are easy to gain acceptance for because they're just
introducing new APIs and not altering existing stuff.  They're small
and easily revertible if I happened to disappear (or they proved not to
work out in practice).  They were, however, enough to allow me to
convert the PA-RISC architecture to the new model and write the driver
I wanted for the 53c700 chip.

The end point we're at today, where practically every device API in
Linux uses generic devices, is no longer revertible.  If it were done
as one patch, it would touch about 80% of all the files in Linux and be
larger than the kernel itself.  However, I didn't do that: I pushed the
minimum viable patch set that would support what I wanted to do (plus
the use cases for a couple of other interested parties I picked up
along the way).

James


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins for all)

2016-07-20 Thread James Bottomley
On Wed, 2016-07-20 at 16:08 +0000, Fox, Kevin M wrote:
> +1 to the finding of a middle ground.

Thanks ... I have actually been an enterprise architect (I just keep
very quiet about it when talking Open Source).

> The problem I've seen with your suggested OpenSource solution is the
> current social monetary system of OpenStack makes it extremely
> difficult.
> 
> Each project currently prints its own currency. Reviews. It takes
> quite a few Reviews (time/effort) on a project to gain enough
> currency that you get paid attention to. And, one project doesn't
> honor another project's currency.

OK, I accept your analogy, even though I would view currency as the
will to create and push patches.

The problem you describe: getting the recipients to listen and accept
your patches, is also a common one.  The first essential is simple
minimal patches because they're hard to reject.

Once you've overcome the reject barrier, there's the indifference one
(no-one says no, but no-one says yes).

Overcoming indifference involves partly knowing who to bother and when
(In openstack, it's quite easy since you know who the core reviewers
are) and also building a consensus for the patch; usually this involves
finding other people who need the feature and getting them to pipe up
(if you can't find other projects, then you can get potential users to
do this) even better if they help you write the patches.  Ideally, you
build your consensus before you actually push the patch set.  Sometimes
building consensus involves looking beyond your particular need to a
bigger one that would satisfy you but also pulls other people in.

> When these sorts of major cross project things need to be done
> though, you need to have folks with capital in many different
> projects and its very difficult to amass that much.
> 
> There is no OpenStack level currency (other then being elected as a
> TC member) that gets one project to stop and take the time to
> carefully consider what someone is saying when bringing up cross
> project issues.
> 
> Without some shared currency, then the problem becomes, the person
> invested in getting a solution, can propose and prototype and
> implement the feature all they want (often several times), but it
> never actually is accepted into the projects because a project:
>  a) doesn't take the time to really understand the problem, feeling
> its trivial and just not accepting any reviews
>  b) doesn't take the time to really understand the problem, so
> minimizes it in their mind to a 'you should just use existing thing
> X...', or heavy-handedly proposes alternate solutions that really
> aren't solutions.
>  c) they decide its better handled by another project and stall/block
> reviews, trying to push the implementation to go elsewhere. When
> multiple projects decide this, the ball just keeps getting bounced
> around without any solution for years.
> 
> The status quo is not working here. The current governance structure
> doesn't support progress.

What you'll find you've described above is a process that doesn't
support drive-by coders at all and, by extension, doesn't welcome
newcomers.  That's a big problem, but one I thought OpenStack was
tackling?

The bounce problem is annoying but not insuperable.  It mostly occurs
where there's overlap.  Often the best method for coping is to field
the bounce: produce the patch for the other project.  If it's smaller
and neater, perhaps the bounce was correct.  If it's bigger and uglier,
get the other project to say so and you now have a solid reason to go
back and say "we tried what you suggested and here's why it didn't
work".  Morally, you're now on higher ground because incorrect
technical advice is a personal failure for whoever bounced you (make
sure to capitalise on it if they try another bounce).

James


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins for all)

2016-07-20 Thread James Bottomley
On Wed, 2016-07-20 at 11:58 +0200, Julien Danjou wrote:
> On Tue, Jul 19 2016, Clint Byrum wrote:
> 
> > Perhaps if we form and start working together as a group, we can 
> > dissect why nothing happened, build consensus on the most important 
> > thing to do next, and actually fix some architectural problems.

Architecture is often blamed for lack of interlock but most people
forget that the need for interlock often isn't appreciated until after
the components are built.  This is why a lot of people embrace agile
methodology (an excuse for not seeing the problem a priori).

Conversely, architects who try to foresee all interlocks often end up
with a combinatorial explosion that makes it a huge burden simply to
begin the project (and then often get sidelined as 'powerpoint
engineers').

The trick is to do something in the middle: try to foresee and build
the most common interlocks, but sidestep the combinatorial explosion by
building something simple enough to adapt to any interlock requirement
that arises after completion.

> >  The social structure that teams have is a huge part of the
> > deadlock we find ourselves in with certain controversial changes.
> > The idea is to unroll the dependency loop and start _somewhere_
> > rather than where a lot of these efforts die: starting
> > _everywhere_.
> 
> I agree with your analysis, but I fail to see how e.g. a group of 
> people X stating that Y should work this way in Cinder is going to 
> achieve any change if nobody from Cinder is in X from the beginning.
> 
> This is basically what seems to happen in many [working] groups as 
> far as I can see.

So this is where the Open Source method takes over.  Change is produced
by those people who most care about it because they're invested.  To
take your Cinder example, you're unlikely to find them within Cinder
because any project has inertia that resists change.  It takes the
energy of the outside group X to force the change to Y, but this means
that X often gets to propose, develop and even code Y.  Essentially
they become drive-by coders for Cinder.  This is where open source
differs from the enterprise: because you have the code and the access,
you can do this.  However, you have to remember the inertia problem and
build what you're trying to do as incrementally as possible: the larger
the change, the bigger the resistance to it.  It's also a good test of
the value of the change: if group X can't really be bothered to code it
(and Cinder doesn't want it) then perhaps there's not really enough
value in it anyway and it shouldn't happen.

This latter observation is also an improvement over the enterprise
methods because enterprise architects do often mandate interlocks that
later turn out to be unnecessary (or at least of a lot less value than
was initially thought).

I suppose the executive summary of the above is that I don't think
you're describing a bug, I think you're describing a feature.

James


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][nova-docker] Retiring nova-docker project

2016-07-07 Thread James Bottomley
On Thu, 2016-07-07 at 21:28 -0500, Edward Leafe wrote:
> On Jul 7, 2016, at 8:33 PM, Joshua Harlow wrote:
> > 
> > That's sad, how can we fix the fact that users/deployments have
> > gone off into their own silos and may be running their own forks;
> > what went wrong (besides some of the obvious stuff that I think I
> > know about, that others probably don't) that resulted in this
> > happening?
> > 
> > Seems like something we can learn from by reflecting on this and
> > trying to find a path forward (or maybe we just accept there isn't
> > one, idk).
> 
> I think admitting that it was a conceptual mis-match from the start
> would be the first step. Containers are not VMs.

That's not a correct statement: they certainly can be orchestrated like
VMs if constructed like them; that's actually what operating system
containers are.  However, container systems are also amenable to being
constructed very differently from VMs because of the granular nature of
the virtualisations.

>  Attempts to treat them as such are always going to be poor
> compromises.

I think this reflects the persistent failure OpenStack has had in
trying to encompass the entire container space.  The problem is that
containers aren't monolithic, unlike VMs, where, whether it's ESX, KVM,
Xen or HyperV, one VM looks very like another.  The homogeneous nature
of VMs makes it natural to think containers can be pigeonholed in the
same way ... the problem is that, really, they can't.  Some container
systems absolutely can be orchestrated by nova and some can't.

Currently there is no one OpenStack system that can orchestrate the
full power of containers as presented by the raw Linux API.  I don't
think this is a particular problem because there's no industrial
consumer begging for that full power.  Right at the moment, containers
are consumed as a collection of disparate use cases and, reflecting
this, OpenStack has a variety of systems corresponding somewhat to the
consumption models.  However, being use-case specific, none of them
can orchestrate an arbitrary "container" system and, of course, any new
use case that arises likely doesn't fit with any of the current models.
Reality is being messy again, I'm afraid.

> Containers have an important place in OpenStack; it's just not in
> Nova.

Following this theory, the next move would be trying to remove the lxc
and vz container drivers from nova?  I really don't think that's a good
idea because the fit is quite useful for a significant set of use
cases.

James


> 
> -- Ed Leafe
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-29 Thread James Bottomley
On Mon, 2016-02-29 at 17:48 -0500, Anita Kuno wrote:
> On 02/29/2016 05:34 PM, James Bottomley wrote:
[...]
> > While I accept there is potentially a gaming problem in all forms 
> > of Open Source (we see this in the kernel with the attempt to boost
> > patch counts with trivial changes), I'd be hesitant to characterise
> > people who only submit a single patch as gamers because there's a 
> > lot of drive by patching that goes on in the long tail of any 
> > project.  The usual reason for this is everything works great apart 
> > from one thing, which the person gets annoyed enough over to 
> > investigate and patch.  I've done it myself in a lot of Open Source 
> > projects.  Once your patch is in, you've no need to submit another 
> > because everything is now working as you wished and your goal was 
> > to fix the problem not become further involved in the development 
> > side of things.  I suspect if you look in the long tail of 
> > OpenStack you'll find a lot of user and operator patches for
> > precisely this reason.
> 
> I think you are missing the point of my explanation to the question I
> was asked.
> 
> I am interested in mutually beneficial interactions.
> 
> I am not interested in unbalanced or one sided interactions.
> 
> Sorry I was unclear earlier.

Well, now I'm confused.  I was responding specifically to this
statement:

> So some folks are doing the minimum to get a summit pass rather than 
> being part of the cohort that has their first patch to OpenStack as a 
> means of offering their second patch to OpenStack.

Is that not who you meant by "gamers"?  Because to me it sounds like
an expectation that people who aren't gamers would submit more than one
patch and, indeed, become part of the developer base.  I wanted to
explain why there's a significant set of people who legitimately only
submit a single patch and who won't really ever become developers.

James


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-29 Thread James Bottomley
On Mon, 2016-02-29 at 15:57 -0500, Anita Kuno wrote:
> On 02/29/2016 03:10 PM, Eoghan Glynn wrote:
> > 
> > > > Current thinking would be to give preferential rates to access 
> > > > the main summit to people who are present to other events (like 
> > > > this new separated contributors-oriented event, or Ops 
> > > > midcycle(s)). That would allow for a wider definition of 
> > > > "active community member" and reduce gaming.
> > > > 
> > > 
> > > I think reducing gaming is important. It is valuable to include 
> > > those folks who wish to make a contribution to OpenStack, I have
> > > confidence the next iteration of entry structure will try to more 
> > > accurately identify those folks who bring value to OpenStack.
> > 
> > There have been a couple references to "gaming" on this thread, 
> > which seem to imply a certain degree of dishonesty, in the sense of
> > bending the rules.
> > 
> > Can anyone who has used the phrase clarify:
> > 
> >  (a) what exactly they mean by gaming in this context
> > 
> > and:
> > 
> >  (b) why they think this is a clear & present problem demanding a
> >  solution?
> > 
> > For the record, landing a small number of patches per cycle and 
> > thus earning an ATC summit pass as a result is not, IMO at least,
> > gaming.
> > 
> > Instead, it's called *contributing*.
> > 
> > (on a small scale, but contributing none-the-less).
> > 
> > Cheers,
> > Eoghan
> > 
> > 
> 
> Sure I can tell you what I mean.
> 
> In Vancouver I happened to be sitting behind someone who stated "I'm
> just here for the buzz." Which is lovely for that person. The problem 
> is that the buzz that person is there for is partially created by me 
> and I create it and mean to offer it to people who will return it in 
> kind, not just soak it up and keep it to themselves.

Sorry about that; it does sound like a thing a sales or marketing
person would say.

> Now I have no way of knowing who this person is and how they arrived
> at the event. But the numbers for people offering one patch to
> OpenStack (the bar for a summit pass) is significantly higher than
> the curve of people offering two, three or four patches to OpenStack
> (patches that are accepted and merged). So some folks are doing the
> minimum to get a summit pass rather than being part of the cohort
> that has their first patch to OpenStack as a means of offering their
> second patch to OpenStack.

Which does sound like the ATC inducement is working.  If you don't want
it to encourage people to submit patches, then it shouldn't be offered.

> I consider it an honour and a privilege that I get to work with so 
> many wonderful people everyday who are dedicated to making open 
> source clouds available for whoever would wish to have clouds. I'm 
> more than a little tired of having my energy drained by folks who 
> enjoy feeding off of it while making no effort to return beneficial
> energy in kind.
> 
> So when I use the phrase gaming, this is the dynamic to which I 
> refer.

While I accept there is potentially a gaming problem in all forms of
Open Source (we see this in the kernel with the attempt to boost patch
counts with trivial changes), I'd be hesitant to characterise people
who only submit a single patch as gamers because there's a lot of
drive-by patching that goes on in the long tail of any project.  The usual
reason for this is everything works great apart from one thing, which
the person gets annoyed enough over to investigate and patch.  I've
done it myself in a lot of Open Source projects.  Once your patch is
in, you've no need to submit another because everything is now working
as you wished and your goal was to fix the problem not become further
involved in the development side of things.  I suspect if you look in
the long tail of OpenStack you'll find a lot of user and operator
patches for precisely this reason.

James


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-26 Thread James Bottomley
On Fri, 2016-02-26 at 17:24 +0000, Daniel P. Berrange wrote:
> On Fri, Feb 26, 2016 at 08:55:52AM -0800, James Bottomley wrote:
> > On Fri, 2016-02-26 at 16:03 +, Daniel P. Berrange wrote:
> > > On Fri, Feb 26, 2016 at 10:39:08AM -0500, Rich Bowen wrote:
> > > > 
> > > > 
> > > > On 02/22/2016 10:14 AM, Thierry Carrez wrote:
> > > > > Hi everyone,
> > > > > 
> > > > > TL;DR: Let's split the events, starting after Barcelona.
> > > > > 
> > > > > 
> > > > > 
> > > > > Comments, thoughts ?
> > > > 
> > > > Thierry (and Jay, who wrote a similar note much earlier in 
> > > > February, and Lauren, who added more clarity over on the
> > > > marketing 
> > > > list, and the many, many of you who have spoken up in this
> > > > thread
> > > > ...),
> > > > 
> > > > as a community guy, I have grave concerns about what the long
> > > > -term
> > > > effect of this move would be. I agree with your reasons, and
> > > > the
> > > > problems, but I worry that this is not the way to solve it.
> > > > 
> > > > Summit is one time when we have an opportunity to hold
> > > > community up 
> > > > to the folks that think only product - to show them how
> > > > critical it 
> > > > is that the people that are on this mailing list are doing the 
> > > > awesome things that they're doing, in the upstream, in
> > > > cooperation 
> > > > and collaboration with their competitors.
> > > > 
> > > > I worry that splitting the two events would remove the
> > > > community 
> > > > aspect from the conference. The conference would become more 
> > > > corporate, more product, and less project.
> > > > 
> > > > My initial response was "crap, now I have to go to four events 
> > > > instead of two", but as I thought about it, it became clear
> > > > that 
> > > > that wouldn't happen. I, and everyone else, would end up
> > > > picking 
> > > > one event or the other, and the division between product and
> > > > project would deepen.
> > > > 
> > > > Summit, for me specifically, has frequently been at least as
> > > > much 
> > > > about showing the community to the sales/marketing folks in my
> > > > own
> > > > company, as showing our wares to the customer.
> > > 
> > > I think what you describe is a prime reason for why separating
> > > the
> > > events would be *beneficial* for the community contributors. The
> > > conference has long ago become so corporate focused that its
> > > session
> > > offers little to no value to me as a project contributor. What
> > > you
> > > describe as a benefit of being able to put community people
> > > infront
> > > of business people is in fact a significant negative for the
> > > design
> > > summit productivity. It causes key community contributors to be 
> > > pulled out of important design sessions to go talk to business 
> > > people, making the design sessions significantly less productive.
> > 
> > It's naïve to think that something is so sacrosanct that it will be
> > protected come what may.  Everything eventually has to justify 
> > itself to the funders.  Providing quid pro quo to sales and 
> > marketing helps enormously with that justification and it can be 
> > managed so it's not a huge drain on productive time.  OpenStack may 
> > be the new shiny now, but one day it won't be and then you'll need 
> > the support of the people you're currently disdaining.
> > 
> > I've said this before in the abstract, but let me try to make it
> > specific and personal: once the kernel was the new shiny and money 
> > was poured all over us; we were pure and banned management types 
> > from the kernel summit and other events, but that all changed when 
> > the dot com bust came.  You're from Red Hat, if you ask the old 
> > timers about the Ottawa Linux Symposium and allied Kernel Summit I 
> > believe they'll recall that in 2005(or 6) the Red Hat answer to a 
> > plea to fund travel was here's $25 a head, go and find a floor to 
> > crash on.  As the wrangler for the new Linux Plumbers Conference I 
> > had to come up with all sorts

Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-26 Thread James Bottomley
On Fri, 2016-02-26 at 16:03 +0000, Daniel P. Berrange wrote:
> On Fri, Feb 26, 2016 at 10:39:08AM -0500, Rich Bowen wrote:
> > 
> > 
> > On 02/22/2016 10:14 AM, Thierry Carrez wrote:
> > > Hi everyone,
> > > 
> > > TL;DR: Let's split the events, starting after Barcelona.
> > > 
> > > 
> > > 
> > > Comments, thoughts ?
> > 
> > Thierry (and Jay, who wrote a similar note much earlier in 
> > February, and Lauren, who added more clarity over on the marketing 
> > list, and the many, many of you who have spoken up in this thread
> > ...),
> > 
> > as a community guy, I have grave concerns about what the long-term
> > effect of this move would be. I agree with your reasons, and the
> > problems, but I worry that this is not the way to solve it.
> > 
> > Summit is one time when we have an opportunity to hold community up 
> > to the folks that think only product - to show them how critical it 
> > is that the people that are on this mailing list are doing the 
> > awesome things that they're doing, in the upstream, in cooperation 
> > and collaboration with their competitors.
> > 
> > I worry that splitting the two events would remove the community 
> > aspect from the conference. The conference would become more 
> > corporate, more product, and less project.
> > 
> > My initial response was "crap, now I have to go to four events 
> > instead of two", but as I thought about it, it became clear that 
> > that wouldn't happen. I, and everyone else, would end up picking 
> > one event or the other, and the division between product and
> > project would deepen.
> > 
> > Summit, for me specifically, has frequently been at least as much 
> > about showing the community to the sales/marketing folks in my own
> > company, as showing our wares to the customer.
> 
> I think what you describe is a prime reason for why separating the
> events would be *beneficial* for the community contributors. The
> conference has long ago become so corporate focused that its session
> offers little to no value to me as a project contributor. What you
> describe as a benefit of being able to put community people infront
> of business people is in fact a significant negative for the design
> summit productivity. It causes key community contributors to be 
> pulled out of important design sessions to go talk to business 
> people, making the design sessions significantly less productive.

It's naïve to think that something is so sacrosanct that it will be
protected come what may.  Everything eventually has to justify itself
to the funders.  Providing quid pro quo to sales and marketing helps
enormously with that justification and it can be managed so it's not a
huge drain on productive time.  OpenStack may be the new shiny now, but
one day it won't be and then you'll need the support of the people
you're currently disdaining.

I've said this before in the abstract, but let me try to make it
specific and personal: once the kernel was the new shiny and money was
poured all over us; we were pure and banned management types from the
kernel summit and other events, but that all changed when the dot com
bust came.  You're from Red Hat, if you ask the old timers about the
Ottawa Linux Symposium and allied Kernel Summit I believe they'll
recall that in 2005 (or 6) the Red Hat answer to a plea to fund travel
was "here's $25 a head, go and find a floor to crash on".  As the
wrangler for the new Linux Plumbers Conference I had to come up with
all sorts of convoluted schemes for getting Red Hat to fund developer
travel most of which involved embarrassing Brian Stevens into approving
it over the objections of his managers.  I don't want to go into detail
about how Red Hat reached this situation; I just want to remind you
that it happened before and it could happen again.

I really suggest you listen to what your community manager says because
he's the one who is actually looking out for your interests.  I
guarantee (and history shows) there will come a time when you'll regret
ignoring him.

James


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-25 Thread James Bottomley
On Thu, 2016-02-25 at 12:40 +0100, Thierry Carrez wrote:
> Qiming Teng wrote:
> > [...]
> > Week 1:
> >Wednesday-Friday: 3 days Summit.
> >  * Primarily an event for marketing, sales, CTOs, architects,
> >operators, journalists, ...
> >  * Contributors can decide whether they want to attend this.
> >Saturday-Sunday:
> >  * Social activities: contributors meet-up, hang outs ...
> > 
> > Week 2:
> >Monday-Wednesday: 3 days Design Summit
> >  * Primarily an event for developers.
> >  * Operators can hold meetups during these days, or join
> > project
> >design summits.
> > 
> > If you need to attend both events, you don't need two trips.
> > Scheduling
> > both events by the end of a release cycle can help gather more
> > meaningful feedbacks, experiences or lessons from previous releases
> > and
> > ensure a better plan for the coming release.
> > 
> > If you want to attend just the main Summit or only the Design
> > Summit,
> > you can plan your trip accordingly.
> 
> This was an option we considered. The main objection was that we are 
> pretty burnt out and ready to go home when comes Friday on a single
> -week event, so the prospect of doing two consecutive weeks looked a 
> bit like madness (especially considering ancillary events like 
> upstream training, the board meeting etc. which tend to happen on the 
> weekend before summit already). It felt like a good way to reduce our 
> productivity and not make the most of the limited common time 
> together. Furthermore it doesn't solve the issue of suboptimal timing 
> as described in my original email.
> 
> The benefit is that for people attending both events, you indeed save 
> on pure flight costs. But since you have to cover for conference 
> hotel rooms and food over the weekend and otherwise compensate 
> employees for being stuck there over the weekend, the gain is not 
> that significant...

We did actually try to do this for Kernel Summit, Plumbers and LinuxCon
in 2012.  Once we tried to schedule it, we got a huge backlash from the
sponsors, the attendees and the speakers, mostly about having to be
away over the weekend.  Eventually we caved to the pressure and
overlaid all three events in the same week, which led to another set of
complaints ...

The lesson I took away from this is never schedule a set of events
longer than a week and I still have the scars to remind me.

James


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] solving API case sensitivity issues

2016-02-24 Thread James Bottomley
On Wed, 2016-02-24 at 11:40 -0500, Sean Dague wrote:
> On 02/24/2016 11:28 AM, James Bottomley wrote:
> > On Wed, 2016-02-24 at 07:48 -0500, Sean Dague wrote:
> > > We have a specific bug around aggregate metadata setting in Nova
> > > which exposes a larger issue with our mysql schema.
> > > https://bugs.launchpad.net/nova/+bug/1538011
> > > 
> > > On mysql the following will explode with a 500:
> > > 
> > > > nova aggregate-create agg1
> > > > nova aggregate-set-metadata agg1 abc=1
> > > > nova aggregate-set-metadata agg1 ABC=2
> > > 
> > > mysql (by default) treats abc == ABC. However the python code
> > > does
> > > not.
> > > 
> > > We have a couple of options:
> > > 
> > > 1) make the API explicitly case fold
> > > 
> > > 2) update the mysql DB to use latin_bin collation for these
> > > columns
> > > 
> > > 3) make this a 400 error because duplicates were found
> > > 
> > > 
> > > Options 1 & 2 make all OpenStack environments consistent
> > > regardless
> > > of
> > > backend.
> > > 
> > > Option 2 is potentially expensive TABLE alter.
> > > 
> > > Option 3 gets rid of the 500 error, however at the risk that the
> > > behavior for this API is different depending on DB backend. Which
> > > is
> > > less than ideal.
> > > 
> > > 
> > > My preference is slightly towards #1. It's taken a long time for 
> > > someone to report this issue, so I think it's an edge case, and 
> > > people weren't thinking about this being case sensitive. It has the
> > > risk 
> > > of impacting someone on an odd db platform that has been using
> > > that
> > > feature.
> > > 
> > > There are going to be a few other APIs to clean up in a similar
> > > way. 
> > > I don't think this comes in under a microversion because of how
> > > deep 
> > > in the db api layer this is, and it's just not viable to keep
> > > both
> > > paths.
> > 
> > This is actually one of the curses wished on us by REST.  Since the
> > intent is to use web requests for the API, the API name must follow
> > the
> > case sensitivity rules for URL matching (case insensitive). 
> 
> Um... since when are URLs generically case insensitive? The host
> portion is - https://tools.ietf.org/html/rfc3986#section-3.2.2
> and the scheme portion is -
> https://tools.ietf.org/html/rfc3986#section-3.1 however nothing about
> the PATH specifies that it should or must be (in section 3.3)

Heh, OK, I'm out of date.  When we first argued over this, Microsoft
required case insensitive matching for the path component because IIS
was doing lookups on vfat filesystems which are naturally case
insensitive.  If that's been excised from the standard, I'm happy to
keep it in the dustbin of history.

> While it's a little off topic, this is the 2nd time in a month it 
> came up, so I'd like to know if there is a reference for the case
> insensitive pov.

I checked; it looks to be implementation specific.  PHP, for instance,
matches case sensitively

/index.php != /Index.php

but Drupal matches case insensitively

/node/6 == /Node/6 == /NoDe/6

So all in all, a bit of a mess.
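
For illustration, here's a minimal WSGI sketch (hypothetical, not taken
from any of the projects above) showing that the application decides
the policy: folding PATH_INFO gives you Drupal-style matching, and
dropping the fold gives you PHP-style matching.

    # Toy WSGI app; the route table and names are made up for the example.
    ROUTES = {"/node/6": b"node six"}

    def app(environ, start_response):
        # Fold the path and /node/6 == /NoDe/6; remove .lower() and
        # the same URLs become distinct resources.
        path = environ["PATH_INFO"].lower()
        body = ROUTES.get(path, b"not found")
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [body]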

James




Re: [openstack-dev] [nova] solving API case sensitivity issues

2016-02-24 Thread James Bottomley
On Wed, 2016-02-24 at 07:48 -0500, Sean Dague wrote:
> We have a specific bug around aggregate metadata setting in Nova
> which
> exposes a larger issue with our mysql schema.
> https://bugs.launchpad.net/nova/+bug/1538011
> 
> On mysql the following will explode with a 500:
> 
> > nova aggregate-create agg1
> > nova aggregate-set-metadata agg1 abc=1
> > nova aggregate-set-metadata agg1 ABC=2
> 
> mysql (by default) treats abc == ABC.  However, the Python code does
> not.
> 
> We have a couple of options:
> 
> 1) make the API explicitly case fold
> 
> 2) update the mysql DB to use latin1_bin collation for these columns
> 
> 3) make this a 400 error because duplicates were found
> 
> 
> Options 1 & 2 make all OpenStack environments consistent regardless
> of
> backend.
> 
> Option 2 is a potentially expensive ALTER TABLE.
> 
> Option 3 gets rid of the 500 error, however at the risk that the
> behavior for this API is different depending on DB backend. Which is
> less than ideal.
> 
> 
> My preference is slightly towards #1. It's taken a long time for
> someone to report this issue, so I think it's an edge case, and
> people weren't thinking about this being case sensitive. It has the
> risk of impacting someone on an odd db platform that has been using
> that feature.
> 
> There are going to be a few other APIs to clean up in a similar way. 
> I don't think this comes in under a microversion because of how deep 
> in the db api layer this is, and it's just not viable to keep both
> paths.

This is actually one of the curses wished on us by REST.  Since the
intent is to use web requests for the API, the API name must follow the
case sensitivity rules for URL matching (case insensitive).  However,
the rest of the parameters can be case sensitive.  That means that if
your column name maps to an API, it must be case insensitive, but if it
maps to a data input it may be case sensitive.

I think option 1 is the best course, but someone will have to take a
look and make sure there are no APIs that suddenly have case
insensitivity rules for data inputs that aren't expressed currently.
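
By way of illustration, a minimal sketch of option 1 (explicit case
folding at the API layer), with a plain dict standing in for the
aggregate metadata store; the function name is made up:

    def set_aggregate_metadata(metadata, key, value):
        # Fold the key so the Python layer agrees with MySQL's default
        # case-insensitive collation: 'abc' and 'ABC' become one entry.
        metadata[key.lower()] = value

    meta = {}
    set_aggregate_metadata(meta, "abc", "1")
    set_aggregate_metadata(meta, "ABC", "2")
    assert meta == {"abc": "2"}   # the second call updates, no 500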

James





Re: [openstack-dev] [all] [tc] unconstrained growth, why?

2016-02-17 Thread James Bottomley
On Wed, 2016-02-17 at 13:25 -0500, Jay Pipes wrote:
> On 02/17/2016 09:28 AM, Doug Hellmann wrote:
> > Are people confused about what OpenStack is because they're looking
> > for a single turn-key system from a vendor? Because they don't know
> > what features they want/need? Or are we just doing a bad job of
> > communicating the product vs. kit nature of the project?
> 
> I think we are doing a bad job of communicating the product vs. kit 
> nature of OpenStack.

I think that might be because OpenStack is both an upstream and
effectively a distribution (which you're attempting to control via
trademarks and defcore).  What you're seeing is natural schizophrenia
because it's really hard to think of upstream and distribution at the
same time (upstream tends to think of pure implementations and distros
tend to think about what their specific customers need and problems).

I'm not saying don't do this ... effectively Mozilla does this as well,
but it is the source of the confusion, I think.

James




Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-15 Thread James Bottomley
On Mon, 2016-02-15 at 04:36 -0500, Eoghan Glynn wrote:
> 
> > > Honestly I don't know of any communication between two cores at a 
> > > +2 party that couldn't have just as easily happened surrounded by
> > > other contributors. Nor, I hope, does anyone put in the 
> > > substantial reviewing effort required to become a core in order 
> > > to score a few free beers and see some local entertainment. 
> > > Similarly for the TC, one would hope that dinner doesn't figure 
> > > in the system incentives that drives folks to throw their hat
> > > into the ring.
> > 
> > Heh, you'd be surprised.
> > 
> > I don't object to the proposal, just the implication that there's
> > something wrong with parties for specific groups: we did abandon 
> > the speaker party at Plumbers because the separation didn't seem to 
> > be useful and concentrated instead on doing a great party for
> > everyone.
> > 
> > > In any case, I've derailed the main thrust of the discussion 
> > > here, which I believe could be summed up by:
> > > 
> > >   "let's dial down the glitz a notch, and get back to basics"
> > > 
> > > That sentiment I support in general, but I'd just be more 
> > > selective as to which social events should be first in line to be 
> > > culled in order to create a better atmosphere at summit.
> > > 
> > > And I'd be far more concerned about getting the choice of 
> > > location, cadence, attendees, and format right, than in questions 
> > > of who drinks with whom.
> > 
> > OK, so here's a proposal, why not reinvent the Cores party as a 
> > Meet the Cores Party instead (open to all design summit attendees)?
> >  Just make sure it's advertised in a way that could only possibly 
> > appeal to design summit attendees (so the suits don't want to go), 
> > use the same budget (which will necessitate a dial down) and it 
> > becomes an inclusive event that serves a useful purpose.
> 
> Sure, I'd be totally agnostic on the branding as long as the widest
> audience is invited ... e.g. all ATCs, or even all summit attendees.

If you make it ATC only, you've just shut out the newcomers, which is
exclusive again.  I think it wants to be open to all design summit
attendees regardless of ATC status.  Since there's no separate badge, I
think you keep the OpenStack summit attendees out by judicious
advertising.  Perhaps, say, advertise only on openstack-dev@ and via
posters at the actual design summit?

James




Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-12 Thread James Bottomley
On Fri, 2016-02-12 at 13:26 -0500, Eoghan Glynn wrote:
> 
> > > > > [...]
> > > > >   * much of the problem with the lavish parties is IMO
> > > > > related to
> > > > > the
> > > > > *exclusivity* of certain shindigs, as opposed to devs
> > > > > socializing at
> > > > > summit being inappropriate per se. In that vein, I think
> > > > > the
> > > > > cores
> > > > > party sends the wrong message and has run its course,
> > > > > while
> > > > > the TC
> > > > > dinner ... well, maybe Austin is the time to show some
> > > > > leadership
> > > > > on that? ;)
> > > > 
> > > > Well, Tokyo was the time to show some leadership on that --
> > > > there
> > > > was no "TC dinner" there :)
> > > 
> > > Excellent, that is/was indeed a positive step :)
> > > 
> > > For the cores party, much as I enjoyed the First Nation cuisine
> > > in
> > > Vancouver or the performance art in Tokyo, IMO it's probably time
> > > to
> > > draw a line under that excess also, as it too projects a notion
> > > of
> > > exclusivity that runs counter to building a community.
> > 
> > Are you sure you're concentrating on the right problem? 
> >  Communities
> > are naturally striated in terms of leadership.  In principle,
> > there's
> > nothing wrong with "exclusive" events that appear to be rewarding
> > the
> > higher striations, especially if it acts as an incentive to people
> > to
> > move up.  It's only actually "elitist" if you reward the top and
> > there's no real way to move up there from the bottom.  You also
> > want to
> > be careful about being pejorative; after all the same principle
> > would
> > apply to the Board Dinner as well.
> > 
> > I think the correct question to ask would be "does the cash spent
> > on
> > the TC party provide a return on investment either as an incentive
> > to
> > become a TC member or to facilitate communications among TC members?". 
> >  If
> > you answer no to that, then eliminate it.
> 
> Well the cash spent on those two events is not my concern at all, as
> both are privately sponsored by an OpenStack vendor as opposed to 
> being paid for by the Foundation (IIUC). So in that sense, it's not 
> like the events are consuming "community funds" for which I'm 
> demanding an RoI. Vendor's marketing dollars, so the return is their
> own concern.
> 
> Neither am I against partying devs in general, seems like a useful
> ice-breaker at summit, just like at most other tech conferences.
> 
> My objection, FWIW, is simply around the "Upstairs, Downstairs" feel
> to such events (or if you're not old enough to have watched the BBC
> in the 1970s, maybe Downton Abbey would be more familiar).

Well, I'm old enough to remember it, yes.  One of the ironies the
series was pointing out was that the social striations usually got
mirrored more strongly downstairs than upstairs (Hudson was a more
jealous guardian of Mr Bellamy's social status than the latter was).

> Honestly I don't know of any communication between two cores at a +2
> party that couldn't have just as easily happened surrounded by other
> contributors. Nor, I hope, does anyone put in the substantial 
> reviewing effort required to become a core in order to score a few 
> free beers and see some local entertainment. Similarly for the TC, 
> one would hope that dinner doesn't figure in the system incentives 
> that drives folks to throw their hat into the ring.

Heh, you'd be surprised.

I don't object to the proposal, just the implication that there's
something wrong with parties for specific groups: we did abandon the
speaker party at Plumbers because the separation didn't seem to be
useful and concentrated instead on doing a great party for everyone.

> In any case, I've derailed the main thrust of the discussion here,
> which I believe could be summed up by:
>
>   "let's dial down the glitz a notch, and get back to basics"
> 
> That sentiment I support in general, but I'd just be more selective
> as to which social events should be first in line to be culled in
> order to create a better atmosphere at summit.
> 
> And I'd be far more concerned about getting the choice of location,
> cadence, attendees, and format right, than in questions of who drinks
> with whom.

OK, so here's a proposal: why not reinvent the Cores party as a Meet
the Cores Party instead (open to all design summit attendees)?  Just
make sure it's advertised in a way that could only possibly appeal to
design summit attendees (so the suits don't want to go), use the same
budget (which will necessitate a dial down), and it becomes an
inclusive event that serves a useful purpose.

James




Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-12 Thread James Bottomley
On Fri, 2016-02-12 at 07:45 -0500, Eoghan Glynn wrote:
> 
> > > [...]
> > >   * much of the problem with the lavish parties is IMO related to
> > > the
> > > *exclusivity* of certain shindigs, as opposed to devs
> > > socializing at
> > > summit being inappropriate per se. In that vein, I think the
> > > cores
> > > party sends the wrong message and has run its course, while
> > > the TC
> > > dinner ... well, maybe Austin is the time to show some
> > > leadership
> > > on that? ;)
> > 
> > Well, Tokyo was the time to show some leadership on that -- there 
> > was no "TC dinner" there :)
> 
> Excellent, that is/was indeed a positive step :)
> 
> For the cores party, much as I enjoyed the First Nation cuisine in 
> Vancouver or the performance art in Tokyo, IMO it's probably time to 
> draw a line under that excess also, as it too projects a notion of 
> exclusivity that runs counter to building a community.

Are you sure you're concentrating on the right problem?  Communities
are naturally striated in terms of leadership.  In principle, there's
nothing wrong with "exclusive" events that appear to reward the higher
striations, especially if they act as an incentive for people to move
up.  It's only actually "elitist" if you reward the top and there's no
real way to move up there from the bottom.  You also want to be careful
about being pejorative; after all, the same principle would apply to
the Board Dinner as well.

I think the correct question to ask would be "does the cash spent on
the TC party provide a return on investment, either as an incentive to
become a TC member or to facilitate communications among TC members?".
If you answer no to that, then eliminate it.

James




Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread James Bottomley
On Sun, 2016-02-07 at 15:07 -0500, Jay Pipes wrote:
> Hello all,
> 
> tl;dr
> =
> 
> I have long thought that the OpenStack Summits have become too 
> commercial and provide little value to the software engineers 
> contributing to OpenStack.
> 
> I propose the following:
> 
> 1) Separate the design summits from the conferences
> 2) Hold only a single OpenStack conference per year
> 3) Return the design summit to being a low-key, low-cost working 
> event
[...]

So having seen the other side from the Linux Foundation, here's the
problem with this (note I hate 2000+ people parties and huge queues to
get badges and I really appreciate smaller events but ...).

First, I don't actually think there's a problem: the design summits are
usually well separated from the big conference (I mean, in Vancouver it
was even in a separate building).  If you don't want to see the
PowerPoint sessions, you really don't have to go to them.  I like this
setup because I often speak at the OpenStack summit, so the design
summit provides a nice quiet space to meet more like-minded people and
to escape from the huge press of the OpenStack summit itself.

Secondly, someone has to pay for a separated design summit.  OpenStack
is awash with cash at the moment, so perhaps this is less of an issue
now, but if we have a sudden IT economic crisis, it will become much
more of one.  Planning ahead is always useful.

Thirdly, one of the draws to the OpenStack summit is the ability to
interact with the people who build it.  Not only for the sales and
marketing meetings you hate, but also for outreach: people unsure
whether OpenStack is the thing for them to get involved in also come
and talk to developers (their employers often pay because the OpenStack
summit is a big thing, and the potential developers can often quietly
sneak off to the design summit).  If you take that away, you'll damage
both the OpenStack summit and some of the outreach communications
efforts.

Just by way of comparison, our two big Linux Summits, the Kernel Summit
and LSF/MM, generate enough cash to stand on their own, but we usually
co-locate them with other Linux Foundation events for precisely the
reasons above.

James




Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread James Bottomley
On Mon, 2016-02-08 at 12:50 -0500, Jay Pipes wrote:
> On 02/08/2016 11:56 AM, James Bottomley wrote:
> > On Mon, 2016-02-08 at 09:43 -0500, Jay Pipes wrote:
> > > On 02/08/2016 09:03 AM, Fausto Marzi wrote:
> > > > The OpenStack Summit is a great thing as it is now. It creates
> > > > big
> > > > momentum, it's a strong motivator for the engineers (as enjoy
> > > > our
> > > > time
> > > > there)
> > > 
> > > I disagree with you on this. The design summits are intended to
> > > be
> > > working events, not conference parties.
> > 
> > Having chaired and helped organise the Linux Plumbers conference 
> > for the last 8 years, I don't agree with this.  Agreeable social 
> > events are actually part of the conference process.  Someone who 
> > didn't dare contradict the expert in a high pressure lecture room 
> > environment may feel more confident to have a quiet discussion of 
> > their ideas over a beer/wine at a social event.
> 
> James, I very much respect you, your opinion, and your experience in 
> the Linux community. However, you are mixing things up here, possibly
> because you never attended the early design summits.

Well, OK, you got me there: my earliest design summit was San Diego, I
think.

> The OpenStack design summits started out life as small, social, 
> working events. There were no "high pressure lecture room" 
> environments. There was no presentations or PowerPoint lectures at 
> all. The seating was arranged in fishbowl setups and not everyone
> -facing-front.
> 
> The original design summits were the very definition of "discussion 
> of ... ideas at a social event". They have become quite the opposite.

OK, that very much describes the early Linux events as well.  IDC went
off and did LinuxWorld and we had tiny kernel summits.  Unfortunately,
the world has moved on for both communities ... they're no longer as
tiny as they once were and, however much we try, we can't put the genie
of small, intimate gatherings back in the bottle.  The question for
each community is how to scale in a way that appeals to the core while
not excluding newcomers.

> > Part of the function of a conference in a remote community is to
> > let people meet each other and get to know the person on the other
> > end of
> > the fairly impersonal web id.  It also helps defuse community 
> > squabbles
> > and hostility: it's easier to be nasty to someone you've never met 
> > and
> > who your only interaction with is via a ritual communication 
> > mechanism.
> 
> What you describe above is *not* what is happening at the OpenStack 
> Summits. It's become a show, a marketing event where it's more 
> difficult to have personal meetups with community members because 
> there's way too many people and way too much emphasis on parties and
> schwag.

OK, I think we will always disagree on this.  As I've said, I find the
co-located design summit space to be a lot less hectic than the
OpenStack Summit space, so I do think the design summit doesn't suffer
too much contamination.

> It is the original design summits that allowed people to actually 
> meet each other and get to know the person on the other end of the 
> web id. It is now the mid-cycle events that truly encourage this 
> behaviour, because the design summits have become too tied to the 
> OpenStack marketing event.

Don't shoot all the marketers.  You may not need them now, as they
scramble to jump on the bandwagon, but there may come a time when you
do ...

> >> This isn't the intent of design summits. It's not intended to be a
> >> company team building event.
> >
> > Hey, if that's how you have to sell it to your boss ...
> 
> I don't need to sell anything to my boss. I need to make 
> recommendations  to them on what will be the most cost-effective and 
> productive spend of a limited budget for engineering. And my 
> recommendation leans more and more towards smaller, focused, working 
> events instead of the OpenStack summits for all the reasons I have
> written in this thread.

My point wasn't that *you* have to do this.  It was that a lot of
others might have to.  Not every company is as community-savvy as
Mirantis.  I've also been an open source evangelist at various companies
for a while now.  The hardest thing is carving out a travel budget for
the engineers to go off and meet each other because various elements in
the management chain regard this as a boondoggle and a potential drain
on working time.  I've ended up in the ridiculous situation where my
CTO office owned the entirety o

Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread James Bottomley
On Mon, 2016-02-08 at 09:43 -0500, Jay Pipes wrote:
> On 02/08/2016 09:03 AM, Fausto Marzi wrote:
> > The OpenStack Summit is a great thing as it is now. It creates big
> > momentum, it's a strong motivator for the engineers (as enjoy our
> > time
> > there)
> 
> I disagree with you on this. The design summits are intended to be 
> working events, not conference parties.

Having chaired and helped organise the Linux Plumbers conference for
the last 8 years, I don't agree with this.  Agreeable social events are
actually part of the conference process.  Someone who didn't dare
contradict the expert in a high pressure lecture room environment may
feel more confident to have a quiet discussion of their ideas over a
beer/wine at a social event.

Part of the function of a conference in a remote community is to let
people meet each other and get to know the person on the other end of
the fairly impersonal web id.  It also helps defuse community squabbles
and hostility: it's easier to be nasty to someone you've never met and
who your only interaction with is via a ritual communication mechanism.

>  > and the Companies are happy too with the business related side. I
> > see it also as the most successful Team building activity,
> > Community and
> > Company wide.
> 
> This isn't the intent of design summits. It's not intended to be a 
> company team building event.

Hey, if that's how you have to sell it to your boss ...

>  > For Companies, the costs to send engineers to the Summit
> > or to a dedicated Design event are exactly the same.
> 
> This is absolutely not the case. Sending engineers to expensive 
> conference hotels for a full week or more is more expensive than 
> sending engineers to small hotels in smaller cities for shorter 
> amounts of focused time.

How real is this?  Vancouver was a really expensive place, but a lot of
people who were deeply concerned about cost managed to find cheaper
hotels even there.  You can always (or usually) find an option for the
cost conscious if you look.  One of the advantages of large hub cities
is cheaper airfare, which is usually a slightly more significant cost
component than accommodation.  Once you start looking at "smaller"
cities with only a couple of airlines serving them, you'll find the
travel costs skyrocket.

>  > Besides, many Companies send US based employees only to the US
> Summit, and EU
> > based only to the other side. The OpenStack Summit is probably the 
> > most advanced and successful OpenSource event, if you take out of 
> > it the engineering side, it won't be the same.
> 
> I don't see the OpenStack Summit as being an advanced event. It has 
> become a vendor-driven suit-fest, IMHO.

Well, if we disdain its content and pull all the engineers away, that's
certainly a self-fulfilling prophecy.  Why not make it our mission to
try and give a more technical talk at the OpenStack summit itself?  I
have ... I think most of the audience actually enjoyed it even if there
were a few suit types who found themselves in the wrong session.  The
design summits are very strictly focussed.  It's actually harder to
give more general technical talks there than it is at the summit
because of the severity of focus.

> > I think, the issue here is that we need to have a better and more
> > productive way to work together. Probably the motivation behind a
> > separate design summit and also this discussion is focused to 
> > improve that, as we see that face to face is effective. Maybe this 
> > is the limitation we need to resolve, rather than changing an 
> > amazing event.
> 
> All I want is to be more productive. In my estimation, the Summits 
> have become vastly less productive than they used to be. Mid-cycles 
> are generally much more productive and much more cost-effective 
> because they don't have the distraction of the Summit party
> atmosphere.

"... because thou art virtuous, there should be no more cakes and ale?"
... you're implying that we all party and forget work because of a
"party atmosphere".  This doesn't accord with my experiences at all.  I
may be less usual than most, but Vancouver was a foodie town ... I
spent all the evenings out to dinner with people I don't normally meet
... I skipped every party including the super special VIP ones (which,
admittedly, I'd intended to go to).  Tokyo was about the same because I
had a lot of people to say "hello" to and it's fun going out for a
Japanese experience.  People who go to the summit to party probably
aren't going to make much of a contribution in a separated design
summit anyway and people who don't can do just as well in either
atmosphere.

> As someone who is responsible for recommending which Mirantis
> engineers go to which events, I strongly favor sending more engineers
> to more focused events at the expense of sending fewer engineers to
> the expensive and unfocused OpenStack Summits.

As long as they mostly go to the associated design summit they're going
to a focussed event.

James



Re: [openstack-dev] [Nova] sponsor some LVM development

2016-01-22 Thread James Bottomley
On Fri, 2016-01-22 at 21:49 +0100, Premysl Kouril wrote:
> On 22 Jan 2016 17:43, "James Bottomley" <
> james.bottom...@hansenpartnership.com> wrote:
> > The 3x difference in the benchmarks would seem to indicate a local
> > tuning or configuration problem, because it's not what most people 
> > see.  What the current benchmarks seem to show is about a 1-5%
> > difference between the directio and the direct to block paths 
> > depending on fstype, how its tuned, ioscheduler and underlying
> > device.
> 
> I will try to find problem in our configuration and re-run our 
> benchmarks. By chance do you have some more information (and possibly
> configuration) about the bechmarks you are mentioning?

Well, what I'd actually do (and don't say I put you up to this) is send
an email to linux-fsde...@vger.kernel.org alleging a 60% performance
drop on O_DIRECT under whatever filesystem it is, along with the
figures you publish.  I bet this will get a swift response from the
lead developer for that filesystem and a demand to help you tune your
system better ...

James




Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-22 Thread James Bottomley
On Fri, 2016-01-22 at 13:54 +0100, Thierry Carrez wrote:
> Zane Bitter wrote:
> > [...] Honestly, it
> > sounds like the kind of thing you come up with when you've given
> > up.
> 
> I tend to agree with that... I think healthy projects should
> naturally 
> come up with bursts of feature addition and bursts of repaying
> technical 
> debt. That is why I prefer not to be too prescriptive here.
> 
> If some people are under the impression that pausing feature addition
> in 
> order to focus on stability is discouraged, we should fix that (and I
> think this thread is a great way of making it clearer). But I would
> be 
> reluctant to start standardizing what a "stabilization period" is, or
> starting to impose them. As a lot of people said, ideally you would
> add 
> features and repay technical debt continuously. Imposing specific 
> periods of stabilization prevents reaching that ideal state.
> 
> So in summary: yes to expressing that it is fine to have them, no to 
> being more prescriptive about them.

Experience with Linux shows that deliberate stabilisation cycles, where
you refuse to accept features, simply don't work: everyone gets
frustrated and development suffers.  You get lots of features disguised
as bug fixes and a lot of fighting when you try to police the bug fixes
to extract the hidden features, which saps effort.

I still think what OpenStack actually needs is simply a longer
stabilisation time.  Right now, in the six-month cycle, there are about
five months of development beginning with the design summit and one
month of -rc stabilisation.  In today's model, to extend stabilisation
you have to steal time from feature development, which again causes a
lot of argument.  To fix this, I'd turn that around and make it one
month of feature merging followed by five months of -rc stabilisation,
essentially following the merge window pattern that makes the Linux
kernel work so well.

The behaviour change required is a next-openstack tree in which
features land and from which they are pulled during the merge window,
provided they pass the usual continuous integration tests.  The design
summit would probably have to fall just after the end of the merge
window (and the beginning of the -rc cycle) to be effective at
discussing all the new features, and any feature that failed to make
the merge window would get held off to the next release.  This gives an
effective five-month window to land features for the next release plus
a five-month stabilisation cycle, while keeping a six-month release
cycle.  The cost of doing this is largely borne by feature developers,
who have to plan which merge window to aim for.
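
A rough sketch of the proposed calendar, assuming a release every six
months (the month numbers are illustrative only):

    # Illustrative only: one month of merge window followed by five
    # months of -rc stabilisation within a six-month release cycle.
    CYCLE = [
        ("month 1", "merge window: pull features from next-openstack"),
        ("month 2", "-rc1; design summit discusses what just merged"),
        ("month 3", "-rc2"),
        ("month 4", "-rc3"),
        ("month 5", "-rc4"),
        ("month 6", "final release; next merge window opens"),
    ]
    for month, phase in CYCLE:
        print("%s: %s" % (month, phase))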

James




Re: [openstack-dev] [Nova] sponsor some LVM development

2016-01-22 Thread James Bottomley
On Fri, 2016-01-22 at 14:58 +0100, Premysl Kouril wrote:
> Hi Matt, James,
> 
> any thoughts on the below notes?

To be honest, not really.  You've repeated stage two of the Oracle
argument: wheel out benchmarks and attack alleged "complexity".  I
don't really have a great interest in repeating a historical argument. 
 Oracle didn't get it either until they released the feature and ran
into the huge management complexity of raw devices in the field, so if
you have the resources to repeat the experiment and see if you get
different results, be my guest.

The lesson I took from the Oracle affair all those years ago is that
it's far harder to replace well understood and functional file
interfaces with new ones (mainly because of the tooling and historical
understanding that comes with the old ones) than it is to gain
performance in existing interfaces.

The 3x difference in the benchmarks would seem to indicate a local
tuning or configuration problem, because it's not what most people see.
What the current benchmarks seem to show is about a 1-5% difference
between the directio and direct-to-block paths, depending on fstype,
how it's tuned, the I/O scheduler and the underlying device.
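
For reference, the directio path being compared here is just an
O_DIRECT open plus aligned I/O.  A minimal sketch (assumptions: Linux,
a scratch file on a filesystem that supports O_DIRECT, and 4096-byte
alignment):

    import mmap
    import os

    ALIGN = 4096  # typical logical block size; O_DIRECT needs aligned
                  # buffers, offsets and transfer lengths

    fd = os.open("scratchfile", os.O_RDWR | os.O_CREAT | os.O_DIRECT,
                 0o600)
    buf = mmap.mmap(-1, ALIGN)   # anonymous mmap is page-aligned
    buf[:] = b"\0" * ALIGN       # fill the aligned buffer
    os.write(fd, buf)            # bypasses the page cache entirely
    os.close(fd)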

James

> Best Regards,
> Prema
> On 19 Jan 2016 20:47, "Premysl Kouril" wrote:
> 
> > Hi James,
> > 
> > 
> > > 
> > > You still haven't answered Anita's question: when you say
> > > "sponsor" do
> > > you mean provide resources to existing developers to work on your
> > > feature or provide new developers.
> > > 
> > 
> > I did, I am copy-pasting my response to Anita here again:
> > 
> > Both. We are first trying this "Are you asking for current Nova
> > developers to work on this feature?" and if we won't find anybody
> > we
> > will start with "your company interested in having your developers
> > interact with Nova developers"
> > 
> > 
> > > 
> > > Heh, this is history repeating itself from over a decade ago when
> > > Oracle would have confidently told you that Linux had to have raw
> > > devices because that's the only way a database will perform. 
> > >  Fast
> > > forward to today and all Oracle databases use file backends.
> > > 
> > > Simplicity is also in the eye of the beholder.  LVM has a very
> > > simple
> > > naming structure whereas filesystems have complex hierarchical
> > > ones.
> > >  Once you start trying to scale to millions of instances, you'll
> > > find
> > > there's quite a management penalty for the LVM simplicity.
> > 
> > We won't definitely have millions instances on hypervisors but we
> > can
> > certainly have applications demanding million IOPS (in sum) from
> > hypervisor in near future.
> > 
> > > 
> > > >  It seems from our benchmarks that LVM behavior when
> > > > processing many IOPs (10s of thousands) is more stable than if
> > > > filesystem is used as backend.
> > > 
> > > It sounds like you haven't enabled directio here ... that was the
> > > solution to the Oracle issue.
> > 
> > 
> > If you mean O_DIRECT mode then we had than one during our
> > benchmarks.
> > Here is our benchmark setup and results:
> > 
> > testing box configuration:
> > 
> >   CPU: 4x E7-8867 v3 (total of 64 physical cores)
> >   RAM: 1TB
> >   Storage: 12x enterprise class SSD disks (each disk 140 000/120 000
> > IOPS read/write)
> > disks connected via 12Gb/s SAS3 lanes
> > 
> >   So we are using big boxes which can run quite a lot of VMs.
> > 
> >   Out of the disks we create linux md raid (we did raid5 and
> > raid10)
> > and do some fine tuning:
> > 
> > 1) echo 8 > /sys/block/md127/md/group_thread_cnt - this increases
> > parallelism for raid5
> > 2) we boot kernel with scsi_mod.use_blk_mq=Y to active block io
> > multi
> > queueing
> > 3) we increase size of caching (for raid5)
> > 
> >  On that raid we either create LVM group or filesystem depending if
> > we
> > are testing LVM nova backend or file-based nova backend.
> > 
> > 
> > On this hypervisor we run nova/kvm and we provision 10-20 VMs and
> > we
> > run benchmark tests from these VMs and we are trying to saturate IO
> > on
> > hypervisor.
> > 
> > We use following command running inside the VMs:
> > 
> > fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1
> > --name=test1 --bs=4k --iodepth=256 --size=20G --numjobs=1
> > --readwrite=randwrite
> > 
> > So you can see that in the guest OS we use --direct=1 which causes
> > the
> > test file to be opened with O_DIRECT. Actually I am now not sure
> > but
> > if using file-based backend then I hope that the virtual disk is
> > automatically opened with O_DIRECT and that it is done by
> > libvirt/qemu
> > by default without any explicit configuration.
> > 
> > Anyway, with this we have following results:
> > 
> > If we use file-based backend in Nova, ext4 filesystem and RAID5
> > then
> > in 8 parallel VMs we were able to achieve ~3000 IOPS per machine
> > which
> > means in total about 32000 IOPS.
> > 
> > If we use LVM-based backend, RAID5, 8 parallel VMs, we achieve
> > ~11000
> >

Re: [openstack-dev] [Nova] sponsor some LVM development

2016-01-19 Thread James Bottomley
On Tue, 2016-01-19 at 13:40 +0100, Premysl Kouril wrote:
> Hi Matt,
> 
> thanks for letting me know, we will definitely do reach you out if we
> start some activity in this area.

You still haven't answered Anita's question: when you say "sponsor" do
you mean provide resources to existing developers to work on your
feature or provide new developers.

> To answer your question: main reason for LVM is simplicity and
> performance.

Heh, this is history repeating itself from over a decade ago when
Oracle would have confidently told you that Linux had to have raw
devices because that's the only way a database will perform.  Fast
forward to today and all Oracle databases use file backends.

Simplicity is also in the eye of the beholder.  LVM has a very simple
naming structure whereas filesystems have complex hierarchical ones. 
 Once you start trying to scale to millions of instances, you'll find
there's quite a management penalty for the LVM simplicity.

>  It seems from our benchmarks that LVM behavior when
> processing many IOPs (10s of thousands) is more stable than if
> filesystem is used as backend.

It sounds like you haven't enabled directio here ... that was the
solution to the Oracle issue.

>  Also a filesystem generally is heavier
> and more complex technology than LVM and we wanted to stay really as
> simple as possible on the IO datapath - to make everything
> (maintaining, tuning, configuring) easier.

And this was precisely the Oracle argument.  The reason it foundered is
that most FS complexity goes to manage the data structures ... the I/O
path can still be made short and fast, as DirectIO demonstrates.  Then
the management penalty you pay (having to manage all the data
structures that the filesystem would have managed for you) starts to
outweigh any minor performance advantages.

James

> Do you see this as reasonable argumentation? Do you see some major
> benefits of file-based backend over the LVM one?
> 
> Cheers,
> Prema
> 
> > On Tue, Jan 19, 2016 at 12:18 PM, Matthew Booth wrote:
> > Hello, Premysl,
> > 
> > I'm not working on these features, however I am working in this
> > area of code
> > implementing the libvirt storage pools spec. If anybody does start
> > working
> > on this, please reach out to coordinate as I have a bunch of
> > related
> > patches. My work should also make your features significantly
> > easier to
> > implement.
> > 
> > Out of curiosity, can you explain why you want to use LVM
> > specifically over
> > the file-based backends?
> > 
> > Matt
> 




Re: [openstack-dev] [all] Proposal: copyright-holders file in each project, or copyright holding forced to the OpenStack Foundation

2016-01-17 Thread James Bottomley
On Fri, 2016-01-15 at 15:38 +, Daniel P. Berrange wrote:
> On Fri, Jan 15, 2016 at 08:48:21PM +0800, Thomas Goirand wrote:
> > This isn't the first time I'm calling for it. Let's hope this time,
> > I'll
> > be heard.
> > 
> > Randomly, contributors put their company names into source code.
> > When
> > they do, then effectively, this tells that a given source file
> > copyright
> > holder is whatever is claimed, even though someone from another
> > company
> > may have patched it.
> > 
> > As a result, we have a huge mess. It's impossible for me, as a
> > package
> > maintainer, to accurately set the copyright holder names in the
> > debian/copyright file, which is required by the Debian FTP
> > masters.
> 
> I don't think OpenStack is in a different situation to the vast
> majority of open source projects I've worked with or seen. Except
> for those projects requiring copyright assignment to a single
> entity, it is normal for source files to contain an unreliable
> random splattering of Copyright notices. This hasn't seemed to
> create a blocking problem for their maintenance in Debian. Looking
> at the debian/copyright files I see most of them have just done a
> grep for the 'Copyright' statements & included as is - IOW just
> ignored the fact that this is essentially worthless info and included
> it regardless.
> 
> > I see 2 ways forward:
> > 1/ Require everyone to give-up copyright holding, and give it to
> > the
> > OpenStack Foundation.
> > 2/ Maintain a copyright-holder file in each project.
> 
> 3/ Do nothing, just populate debian/copyright with the random
>set of 'Copyright' lines that happen to be the source files,
>as appears to be common practice across many debian packages
> 
>eg the kernel package
> 
> http://metadata.ftp-master.debian.org/changelogs/main/l/linux/linux_3.16.7-ckt11-1+deb8u3_copyright
> 
> "Copyright: 1991-2012 Linus Torvalds and many others"
> 
>if its good enough for the Debian kernel package, it should be
>good enough for openstack packages too IMHO.

This is what I'd vote for.  It seems to be enough to satisfy the Debian
policy on copyrights, and it means nothing has to change in OpenStack.

James




Re: [openstack-dev] [all] Proposal: copyright-holders file in each project, or copyright holding forced to the OpenStack Foundation

2016-01-17 Thread James Bottomley
On Fri, 2016-01-15 at 20:48 +0800, Thomas Goirand wrote:
> This isn't the first time I'm calling for it. Let's hope this time, 
> I'll be heard.
> 
> Randomly, contributors put their company names into source code. When
> they do, then effectively, this tells that a given source file 
> copyright holder is whatever is claimed, even though someone from 
> another company may have patched it.
> 
> As a result, we have a huge mess. It's impossible for me, as a 
> package maintainer, to accurately set the copyright holder names in 
> the debian/copyright file, which is required by the Debian FTP
> masters.

The Debian copyright policy merely requires the debian/copyright file
to aggregate the stated copyright of the project ... it doesn't require
the project to keep complete and accurate records or Debian to
manufacture them:

https://www.debian.org/doc/debian-policy/ch-docs.html#s-copyrightfile

Traditionally the git repository is the complete record of who changed
the file.  However, legally, not every change might be considered
copyrightable, so most open source projects leave it up to authors
whether they want to add a copyright annotation or not.

Simply aggregating what's stated in the files is enough to satisfy the
Debian policy.
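
For the mechanically minded, that aggregation step can be a one-off
script.  A minimal sketch (assumptions: a Python source tree in ./nova;
the regex and the filename filter are illustrative, not Debian
tooling):

    import os
    import re

    NOTICE = re.compile(r"Copyright.*", re.IGNORECASE)

    def collect_notices(tree):
        # Walk the tree, collect every stated copyright line and
        # de-duplicate; the result is what debian/copyright aggregates.
        seen = set()
        for root, _dirs, files in os.walk(tree):
            for name in files:
                if not name.endswith(".py"):
                    continue
                with open(os.path.join(root, name), errors="ignore") as f:
                    for line in f:
                        m = NOTICE.search(line)
                        if m:
                            seen.add(m.group(0).strip())
        return sorted(seen)

    for notice in collect_notices("nova"):
        print(notice)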

> I see 2 ways forward:
> 1/ Require every one to give-up copyright holding, and give it to the
> OpenStack Foundation.

Good grief, I thought we'd got beyond the days of copyright assignment
... it might simplify your administration, but it would greatly
increase the burden on whoever has to maintain the files of the
necessary assignments.

> 2/ Maintain a copyright-holder file in each project.

How is that different from letting people decide if they want to add
their copyrights to the header of the file ... in other words, how will
it make the situation different from today?

> The latter is needed if we want to do things correctly. Leaving the
> possibility for everyone to just write (c) MyCompany LLC randomly in 
> the source code doesn't cut it. Expecting that a package maintainer
> should double-guess copyright holding just by reading the email 
> addresses of "git log" output doesn't work either.
> 
> Please remember that a copyright holder has nothing to do with the
> license, neither with the author of some code. So please do *not*
> take
> over this thread, and discuss authorship or licensing.
> 
> Whatever we choose, I think we should ban having copyright holding
> text
> within our source code. While licensing is a good idea, as it is
> accurate, the copyright holding information isn't and it's just
> misleading.
> 
> If I was the only person to choose, I'd say let's go for 1/, but
> probably managers of every company wont agree.
> 
> Some thoughts anyone?

I don't think there's anything broken here, so I'd vote for not trying
to fix it ...

James





Re: [openstack-dev] "Open Containers" group on LinkedIn

2016-01-09 Thread James Bottomley
On Sat, 2016-01-09 at 18:11 +0530, Nitin Agarwal wrote:
> Hello Everyone,
> 
> A very Happy New Year 2016 !!
> 
> I have started a new group "Open Containers" on LinkedIn to provide a
> common platform to all the Containers and Docker enthusiasts and
> passionate
> people. In this group, we will be discussing about the Linux and
> Docker
> containers runtimes as well as Docker development, orchestration,
> deployment, monitoring, networking and security tools.
> 
> Join the group here: https://www.linkedin.com/groups/8451196

For discussions, could you just set up a regular mailing list?  The
advantages over LinkedIn for open source are: open platform, indexed
by the search engines and easy subscriptions.  If you want an existing
one with a reasonable email interface and social attachments, Google
Groups is OK.

James





Re: [openstack-dev] [nova] Testing concerns around boot from UEFI spec

2016-01-07 Thread James Bottomley
On Thu, 2016-01-07 at 16:55 -0800, Yuhong Bao wrote:
> > > I read the patent and it looks like UEFI or for that matter any
> > > non
> > > -Windows implementation of FAT would probably not infringe on the
> > > patent.
> > 
> > Well, I'm not going to give you a legal opinion. However, most
> > people
> > think this patent covers the long- vs short-filename conversions
> > used
> > by vfat. The UEFI implementation definitely implements the long vs
> > short name conversions for FAT/VFAT compatibility.
> > 
> > James
> 
> I actually read the claims in the patent and my point is that it
> mostly only covers the INT 21h interface in Win9x,
> which UEFI or for that matter Linux don't use.

Um, that's the first two independent claims.  The long filename stuff
begins at claim 4.

James





Re: [openstack-dev] [nova] Testing concerns around boot from UEFI spec

2016-01-07 Thread James Bottomley
On Thu, 2016-01-07 at 18:03 +, Yuhong Bao wrote:
James Bottomley writes:
> > As you can see, they're mostly expired (in the US) but the last one
> > will expire in 2020 (if I calculate the date correctly).
> If you are referring to US6286013, 

That's the latest expiring one, yes.

> I read the patent and it looks like UEFI or for that matter any non
> -Windows implementation of FAT would probably not infringe on the
> patent.

Well, I'm not going to give you a legal opinion.  However, most people
think this patent covers the long- vs short-filename conversions used
by vfat.  The UEFI implementation definitely implements the long- vs
short-name conversions for FAT/VFAT compatibility.

James




Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-16 Thread James Bottomley
On Wed, 2015-12-16 at 22:48 +, Adrian Otto wrote:
> On Dec 16, 2015, at 2:25 PM, James Bottomley
> <james.bottom...@hansenpartnership.com> wrote:
> 
> On Wed, 2015-12-16 at 20:35 +, Adrian Otto wrote:
> Clint,
> 
> On Dec 16, 2015, at 11:56 AM, Tim Bell <tim.b...@cern.ch> wrote:
> 
> -Original Message-
> From: Clint Byrum [mailto:cl...@fewbar.com]
> Sent: 15 December 2015 22:40
> To: openstack-dev <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
> Resources
> 
> Hi! Can I offer a counter point?
> 
> Quotas are for _real_ resources.
> 
> No. Beyond billable resources, quotas are a mechanism for limiting
> abusive use patterns from hostile users.
> 
> Actually, I believe this is the wrong way to look at it.  You're
> confusing policy and mechanism.  Quotas are policy on resources.  The
> mechanisms by which you implement quotas can also be used to limit
> abuse by hostile users, but that doesn't mean that this limitation
> should be part of the quota policy.
> 
> I’m not convinced. Cloud operators already use quotas as a mechanism
> for limiting abuse (intentional or accidental). They can be
> configured with a system wide default, and can be set to a different
> value on a per-tenant basis. It would be silly to have a second
> mechanism for doing the same thing we already use quotas for.
> Quotas/limits can also be queried by a user so they can determine why
> they are getting a 4XX Rate Limit responses when they try to act on
> resources too rapidly.

I think we might be talking a bit past each other.  My definition of
"real" is end-user visible.  So in the fork bomb example below, the
end-user visible (and billable) panel just gives a choice for "memory".
The provider policy divides this into user memory and kernel memory,
usually in a fixed ratio, and then imposes that on the cgroup.

> The idea of hard coding system wide limits into the system is making
> my stomach turn. If you wanted to change the limit you’d need to edit
> the production system’s configuration, and restart the API services.
> Yuck! That’s why we put quotas/limits into OpenStack to begin with,
> so that we had a sensible, visible, account-level configurable place
> to configure limits.

I don't believe anyone advocated for hard coding.  I was just saying
that the view that Quota == Real End User Visible resource limits is a
valid way of looking at things because it forces you to think about
what the end user sees.  The fact that the service provider uses the
mechanism for abuse prevention is also valid, but you wouldn't usually
want the end user to see it.  Even in a private cloud, you'll have this
distinction between end user and cloud administrator.  Conversely,
taking the mechanistic view that anything you can do with the mechanism
constitutes a quota and should be exposed pushes the issue up to the
UI/UX layer to sort out.

Perhaps this whole thing is just a semantic question of whether quota
means mechanism or policy.  I think the latter, but I suppose it's possible
to take the view it's the former ... in which case we just need more
precision.

James

> Adrian
> 
> 
> For instance, in Linux, the memory limit policy is implemented by the
> memcg.  The user usually sees a single figure for "memory" but inside
> the cgroup, that memory is split into user and kernel.  Kernel memory
> limiting prevents things like fork bombs because you run out of your
> kernel memory limit creating task structures before you can bring
> down
> the host system.  However, we don't usually expose the kernel/user
> split or the fact that the kmem limit mechanism can prevent fork and
> inode bombs.
> 
> James
> 
> The rate at which Bays are created, and how many of them you can
> have in total are important limits to put in the hands of cloud
> operators. Each Bay contains a keypair, which takes resources to
> generate and securely distribute. Updates to and Deletion of bays
> causes a storm of activity in Heat, and even more activity in Nova.
> Cloud operators should have the ability to control the rate of
> activity by enforcing rate controls on Magnum resources before they
> become problematic further down in the control plane. Admission
> controls are best managed at the entrance to a system, not at the
> core.
> 




Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-16 Thread James Bottomley
On Wed, 2015-12-16 at 20:35 +, Adrian Otto wrote:
> Clint,
> 
> > On Dec 16, 2015, at 11:56 AM, Tim Bell wrote:
> > 
> > > -Original Message-
> > > From: Clint Byrum [mailto:cl...@fewbar.com]
> > > Sent: 15 December 2015 22:40
> > > To: openstack-dev 
> > > Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
> > > Resources
> > > 
> > > Hi! Can I offer a counter point?
> > > 
> > > Quotas are for _real_ resources.
> 
> No. Beyond billable resources, quotas are a mechanism for limiting 
> abusive use patterns from hostile users.

Actually, I believe this is the wrong way to look at it.  You're
confusing policy and mechanism.  Quotas are policy on resources.  The
mechanisms by which you implement quotas can also be used to limit
abuse by hostile users, but that doesn't mean that this limitation
should be part of the quota policy.

For instance, in Linux, the memory limit policy is implemented by the
memcg.  The user usually sees a single figure for "memory", but inside
the cgroup that memory is split into user and kernel.  Kernel memory
limiting prevents things like fork bombs because you exhaust your
kernel memory limit creating task structures before you can bring down
the host system.  However, we don't usually expose the kernel/user
split, or the fact that the kmem limit mechanism can prevent fork and
inode bombs.
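
A minimal sketch of that split (assumptions: a cgroup-v1 memory
controller mounted at /sys/fs/cgroup/memory, root privileges, and a
made-up 25% kernel-memory ratio):

    import os

    CG = "/sys/fs/cgroup/memory/instance0"  # hypothetical cgroup
    total = 1024 * 1024 * 1024              # the one figure the user sees
    kmem_ratio = 0.25                       # provider policy, not exposed

    os.makedirs(CG, exist_ok=True)
    # The user-visible "memory" figure becomes the overall limit ...
    with open(os.path.join(CG, "memory.limit_in_bytes"), "w") as f:
        f.write(str(total))
    # ... while the kernel-memory slice, which is what stops fork and
    # inode bombs, is set behind the scenes.
    with open(os.path.join(CG, "memory.kmem.limit_in_bytes"), "w") as f:
        f.write(str(int(total * kmem_ratio)))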

James

>  The rate at which Bays are created, and how many of them you can
> have in total are important limits to put in the hands of cloud
> operators. Each Bay contains a keypair, which takes resources to
> generate and securely distribute. Updates to and Deletion of bays
> causes a storm of activity in Heat, and even more activity in Nova.
> Cloud operators should have the ability to control the rate of
> activity by enforcing rate controls on Magnum resources before they
> become problematic further down in the control plane. Admission
> controls are best managed at the entrance to a system, not at the
> core.




Re: [openstack-dev] [nova] Testing concerns around boot from UEFI spec

2015-12-16 Thread James Bottomley
On Fri, 2015-12-04 at 08:46 -0500, Sean Dague wrote:
> On 12/04/2015 08:34 AM, Daniel P. Berrange wrote:
> > On Fri, Dec 04, 2015 at 07:43:41AM -0500, Sean Dague wrote:
> > > That seems weird enough that I'd rather push back on our Platinum
> > > Board
> > > member to fix the licensing before we let this in. Especially as
> > > this
> > > feature is being drive by Intel.
> > 
> > As copyright holder, Intel could choose to change the license of
> > their
> > code to make it free software avoiding all the problems. None the
> > less,
> > as above, I don't think this is a blocker for inclusion of the
> > feature
> > in Nova, nor our testing of it.

Actually, it's a bit oversimplified to claim this.  The origins of
this clause are in the covenants not to sue in the FAT spec:

http://download.microsoft.com/download/1/6/1/161ba512-40e2-4cc9-843a-923143f3456c/fatgen103.doc

It's clause 1(e).  The reason for the clause is a complex negotiation
over the UEFI spec (Microsoft committed to a royalty-free
implementation and UEFI needed to use FAT for backward compatibility
with older BIOS).  The problem is that the litigation history no longer
supports claiming the patents are invalid:

http://en.swpat.org/wiki/Microsoft_FAT_patents

As you can see, they're mostly expired (in the US) but the last one
will expire in 2020 (if I calculate the date correctly).  No
corporation (including Intel) can safely release a driver under a
licence that doesn't respect the FAT covenant not to sue without being
subject to potential accusations of contributory infringement.  So,
you're right, Intel could release the FAT32 driver under a
non-restricted licence as you say, but only if they effectively take on
liability for potential infringement for every downstream user ...
amazingly enough, they don't want to do that.  Red Hat could do the
same, of course: just strip the additional restrictions clause; Intel
won't enforce it; then Red Hat would take on all the liability ...

The FAT driver is fully separated from the EDKII source:

https://github.com/tianocore/tianocore.github.io/wiki/Edk2-fat-driver

So it can easily be replaced.  The problem is how, when every UEFI
driver or update ships on a FAT32-formatted filesystem.

> That's fair. However we could also force having this conversation 
> again, and pay it forward to the larger open source community by 
> getting this ridiculous licensing fixed. We did the same thing with 
> some other libraries in the past.

The only way to "fix" the licence is to either get Microsoft to extend
the covenant not to sue to all open source projects (I suppose not
impossible given they're making friendlier open source noises) or wait
for the patents to expire.

James


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Keystone] The unbearable lightness of specs

2015-06-24 Thread James Bottomley
On Wed, 2015-06-24 at 17:25 +0100, Daniel P. Berrange wrote:
> On Wed, Jun 24, 2015 at 11:52:37AM -0400, Adam Young wrote:
> > On 06/24/2015 06:28 AM, Nikola Đipanov wrote:
> > >Gerrit and our spec template are a horrible tool for
> > >discussing design.
> > This is the heart of the problem.
> > 
> > 
> > I think that a proper RFE description in the bug tracker is the best place
> > to start.  Not a design of the solution, but a statement of the problem.
> > 
> > Then, the rest of the discussion should take place in the code. Keystone has
> > the Docs right in the code, as does, I think, every other project.  Don't sign
> > off on a patch for a major feature unless the docs have been updated to
> > explain that feature.  It will keep us from bike shedding about Database
> > schemas.
> 
> What you are describing sounds like the situation that existed before
> the specs concept was introduced. We had a statement of problem in the
> blueprint, and then people argued over the design in the code reviews.
> It really didn't work at all - code reviews are too late in the workflow
> to start discussions around the design, as people are already invested
> in dev work at that point and get very upset when you then tell them
> to throw away their work. Which happened repeatedly. You could say that
> the first patch submitted to the code repository should simply be a doc
> file addition, that describes the feature proposal and we should discuss
> that before then submitting code patches, but then that's essentially
> just the specs again, but with the spec doc in the main nova.git instead
> of nova-specs.git.

I don't think you meant this as a "don't do it", just as an experience
point: you're saying *we* couldn't make the process work, but that
doesn't mean that the process itself (specless code reviews) can't be
made to work.  It works fine for a lot of open source projects, so the
process must be workable.  However, what the experience has shown is
that there's a bottleneck which isn't removed simply by removing the
spec process.

On the spec question, the reason projects like the Linux Kernel are so
code driven is that with code it's harder to block a submission on
esoteric grounds, whereas with no code it is easier to argue endlessly
about minutiae.  I think this might be the reason for the "lightness"
Nikola complains about: the less you put in a spec, the less reason
people have to weigh in on it and delay its approval.  Perhaps an
appropriate question is: "is that fear well founded or just anecdotal?"

If I look at what the above says about the main issue that doesn't get
solved by removing the spec process, it's review pressure: how do you
increase the throughput of approval/rejection.  Note I didn't advocate
more reviews: the metric for success should be time to resolution
(accept/reject), not the number of reviews.  If we ask everyone to
double their time spent reviewing and the net result is that the
average number of reviews to resolution doubles, nothing is gained.  So
perhaps it's time to actually measure the number of reviews to
resolution and look at ways to reduce this metric.  That will make the
available review bandwidth go further and reduce the time to actual
resolution without anyone having to do more work.  The low-hanging
fruit in this might be the obvious patches: obvious accept (simple bug
fix) or obvious reject (bogus code) should go through after one review:
do they?  Perhaps one other question is about wasting core time: after
a couple of failed reviews by people who could +2 the thing, is it time
to reject?  And perhaps, finally, there should be a maximum number of
reviews before an automatic reject?
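
To make "measure the number of reviews to resolution" concrete, here's
a minimal sketch, assuming change records already exported from Gerrit
into dicts with hypothetical "status" and "reviews" fields (the real
Gerrit REST schema differs):

from statistics import mean

def reviews_to_resolution(changes):
    # Only count changes that actually reached a resolution.
    resolved = [c for c in changes
                if c["status"] in ("MERGED", "ABANDONED")]
    return mean(len(c["reviews"]) for c in resolved)

changes = [
    {"status": "MERGED",    "reviews": ["-1", "+1", "+2"]},
    {"status": "ABANDONED", "reviews": ["-1"]},
    {"status": "NEW",       "reviews": ["+1"]},  # unresolved: excluded
]
print(reviews_to_resolution(changes))  # 2.0 reviews per resolution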

Sorry for derailing the argument, but I do still see review bandwidth as
a key issue behind all the problems.

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-12 Thread James Bottomley
On Fri, 2015-06-12 at 13:05 -0700, Dmitry Borodaenko wrote:
> On Fri, Jun 12, 2015 at 08:33:56AM -0700, James Bottomley wrote:
> > On Fri, 2015-06-12 at 02:43 -0700, Dmitry Borodaenko wrote:
> > > On Thu, Jun 11, 2015 at 11:43:09PM -0400, Emilien Macchi wrote:
> > > > What about code history and respect of commit ownership?
> > > > I'm personally wondering if it's fair to copy/paste several thousands of
> > > > lines of code from another Open-Source project without asking to the
> > > > community or notifying the authors before. I know it's Open-Source and
> > > > Apache 2.0 but well... :-)
> > > 
> > > Being able to copy code without having to ask for permission is exactly
> > > what Free Software (and more recently, Open Source) is for.
> > 
> > Copy and Paste forking of code into compatibly licenced code (without
> > asking permission) is legally fine as long as you observe the licence,
> > but it comes with a huge technical penalty:
> > 
> >  1. Copy and Paste forks can't share patches: a patch for one has to
> > be manually applied to the other.  The amount of manual
> > intervention required grows as the forks move out of sync.
> >  2. Usually one fork gets more attention than the other, so the
> > patch problem of point 1 eventually causes the less attended
> > fork to become unmaintainable (or if not patched, eventually
> > unusable).
> >  3. In the odd chance both forks receive equal attention, you're
> > still expending way over 2x the effort you would have on a
> > single code base.
> > 
> > There's no rule of thumb for this: we all paste snippets (pieces of code
> > of up to around 10 lines or so).  Sometimes these snippets contain
> > errors and suddenly hundreds of places need fixing.  The way around
> > this problem is to share code, either by inclusion, modularity or
> > linking.  The reason we paste snippets is that sharing them is an
> > enormous effort.  However, as the size of the paste grows, so does the
> > fork penalty, and it becomes advantageous to avoid it; the effort of
> > sharing the code then looks a lot less problematic.
> > 
> > Even in the case where the fork is simply "patch the original for bug
> > fixes and some new functionality", the fork penalty rules apply.
> > 
> > The data that supports all of this came from Red Hat and SUSE.  The end
> > of the 2.4 kernel release cycle for them was a disaster with patch sets
> > larger than the actual kernel itself.  Sorting through the resulting
> > rubble is where the "upstream first" policy actually came from.
> 
> Thanks for the excellent summary of the technical penalties incurred by
> straying too far from upstream.

You're welcome.

> It's funny how after years of trying to convince Fuel developers to put
> more effort into collaboration with upstream, in this thread I managed
> to come across as if I were arguing the opposite. To reiterate, I
> understand and support the practical reasons to reduce the gap between
> Fuel and Puppet OpenStack, and I believe that practical reasons are a
> much better way to motivate Fuel developers to collaborate than arguing
> whether what Fuel team has done in the past was fair or wrong.

I agree; recriminations never solve anything.  But just to close out on
the topic of authorship and commit history, since I think there have
been some misunderstandings there as well:

The licence is the ultimate arbiter of what you absolutely *have* to do
to remain in compliance.  The licence governs only the code, not the
commit history, so under the licence, you're free to flush all the
commit history with no legal consequence from the terms of the licence.

However, the commit history is vital to obtaining the provenance of the
code.  If there's ever a question about who authored what part of the
code (or worse, who copied it wrongly from a different project, as in
the SCO suit against Linux) you need the commit history to establish the
chain of conveyance into the code.  If we lose this, the protection of
the OpenStack CLA and ICLA will be lost as well (along with any patent
grants that may have been captured) because they rely on knowing where
the code came from.  So in legal hygiene and governance terms, you're
not free to flush the commit history without setting up the project for
provenance problems on down the road.

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-12 Thread James Bottomley
On Fri, 2015-06-12 at 02:43 -0700, Dmitry Borodaenko wrote:
> On Thu, Jun 11, 2015 at 11:43:09PM -0400, Emilien Macchi wrote:
> > What about code history and respect of commit ownership?
> > I'm personally wondering if it's fair to copy/paste several thousands of
> > lines of code from another Open-Source project without asking to the
> > community or notifying the authors before. I know it's Open-Source and
> > Apache 2.0 but well... :-)
> 
> Being able to copy code without having to ask for permission is exactly
> what Free Software (and more recently, Open Source) is for.

Copy and Paste forking of code into compatibly licenced code (without
asking permission) is legally fine as long as you observe the licence,
but it comes with a huge technical penalty:

 1. Copy and Paste forks can't share patches: a patch for one has to
be manually applied to the other.  The amount of manual
intervention required grows as the forks move out of sync.
 2. Usually one fork gets more attention than the other, so the
patch problem of point 1 eventually causes the less attended
fork to become unmaintainable (or if not patched, eventually
unusable).
 3. In the odd chance both forks receive equal attention, you're
still expending way over 2x the effort you would have on a
single code base.

There's no rule of thumb for this: we all paste snippets (pieces of code
of up to around 10 lines or so).  Sometimes these snippets contain
errors and suddenly hundreds of places need fixing.  The way around
this problem is to share code, either by inclusion, modularity or
linking.  The reason we paste snippets is that sharing them is an
enormous effort.  However, as the size of the paste grows, so does the
fork penalty, and it becomes advantageous to avoid it; the effort of
sharing the code then looks a lot less problematic.

Even in the case where the fork is simply "patch the original for bug
fixes and some new functionality", the fork penalty rules apply.

The data that supports all of this came from Red Hat and SUSE.  The end
of the 2.4 kernel release cycle for them was a disaster with patch sets
larger than the actual kernel itself.  Sorting through the resulting
rubble is where the "upstream first" policy actually came from.

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread James Bottomley
On Wed, 2015-06-03 at 17:45 +0300, Boris Pavlovic wrote:
> James B.
> 
> One more time.
> Everybody makes mistakes and it's perfectly OK.
> I don't want to punish anybody and my goal is to make system
> that catch most of them (human mistakes) no matter how it is complicated.

I'm not saying never build systems to catch human mistakes; I'm saying
it's a tradeoff: you have to weigh the consequence of the caught
mistake against how much effort it takes to implement and maintain the
system that would have caught it (and how much annoyance it causes).
Complexity kills, whether in code or in systems, so I don't think it's
right to say we do the system "no matter how complicated".

In this case, the benefit looks to be small, because the system we have
today already copes with mistakes by cores and the implementation and
maintenance cost in both gerrit code and maintaining the maps looks to
be high.  So, in my book, it's a bad tradeoff.

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread James Bottomley
On Wed, 2015-06-03 at 09:29 +0300, Boris Pavlovic wrote:
> *- Why not just trust people*
> 
> People get tired and make mistakes (very often).
> That's why we have blocking CI system that checks patches,
> That's why we have rule 2 cores / review (sometimes even 3,4,5...)...
> 
> In ideal work Lieutenants model will work out of the box. In real life all
> checks like:
> person X today has permission to do Y operation should be checked
> automatically.
> 
> This is exactly what I am proposing.

This is completely antithetical to the open source model.  You have to
trust people; that's why the project has hierarchies filled with more
trusted people.  Do we trust people never to make mistakes?  Of course
not; everyone's human, which is why there are cross checks.  It's
simply not possible to design a system where all the possible human
mistakes are eliminated by rules (well, it's possible to imagine: Brave
New World and 1984 try looking at something like this, but it's
impossible to build in practice currently).

So, before we build complex checking systems, the correct question to
ask is: what's the worst that could happen if we didn't?  In this case,
two or more of your lieutenants accidentally approve a patch not in
their area and no-one spots it before it gets into the build.
Presumably, even though it's not supposed to be their area, they
reviewed the patch and found it OK.  Assuming the build isn't broken,
everything proceeds as normal.  Even if there was some subtle bug in the
code that perhaps some more experienced person would spot, eventually it
gets found and fixed.

You see the point?  This is roughly equivalent to what would happen
today if a core made a mistake in a review ... it's a normal consequence
we expect to handle.  If it happened deliberately then the bad
Lieutenant eventually gets found and ejected (in the same way a bad core
would).  The bottom line is there's no point building a complex
permission system when it wouldn't really improve anything and it would
get in the way of flexibility.

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova-docker] Status update

2015-05-11 Thread James Bottomley
On Sat, 2015-05-09 at 16:55 +, Adrian Otto wrote:
> I will also mention that it’s natural to be allergic to the idea of
> nested virtualization. We all know that creating multiple levels of
> hardware virtualization leads to bad performance outcomes. However,
> "nested containers" do not carry that same drawback, because the
> actual overhead of a Linux cgroup and kernel namespaces is much
> lighter than hardware virtualization. There are cases where having a
> container-in-container setup gives compelling benefits. That’s why
> I’ll argue vigorously for both Nova and Magnum to be able to produce
> container instances both at the machine level, and to allow Magnum to
> produce "nested containers” for better workload consolidation
> density, in a setup with no hypervisors at all.

Actually, it's really important to emphasise that the reason nesting is
nasty for hypervisors is the HW emulation: every layer requires HW
emulation and a kernel to run it, leading to rapid performance
degradation.  Containers, on the other hand, are a naturally nesting
technology, since the hierarchy is simply a management construct within
the OS itself (hence no performance degradation once you're already
using containers).  The traditional way to think about it is, as you
say, the container-in-OS use case emulating the container-on-hypervisor
use case.
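
To make "the hierarchy is simply a management construct" concrete,
here's a minimal sketch, assuming a cgroup v1 memory controller
mounted at /sys/fs/cgroup/memory (the group names are made up):

import os

ROOT = "/sys/fs/cgroup/memory"
outer = os.path.join(ROOT, "outer")   # the "machine" level container
inner = os.path.join(outer, "inner")  # a container nested inside it

os.makedirs(inner, exist_ok=True)     # nesting is literally mkdir

# Each level carries its own limit; the kernel walks a single tree and
# enforces the tightest limit on the path from task to root, so no
# extra emulation layer appears per level of nesting.
with open(os.path.join(outer, "memory.limit_in_bytes"), "w") as f:
    f.write(str(4 * 1024**3))
with open(os.path.join(inner, "memory.limit_in_bytes"), "w") as f:
    f.write(str(1 * 1024**3))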

In practical terms, it's becoming impossible to deploy a modern
operating system in a container without nesting.  The reason is the
world's first containerised application (no, not docker): systemd.  Any
OS in a container running systemd already has a partially nested
hierarchy.  This will only magnify as the number of containerised
applications grows.

The point I'm trying to make is that container nesting isn't a nice side
effect, it's an essential requirement ... and one people who deploy
virtualization need to become used to thinking about.

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] cross project communication: periodic developer newsletter?

2015-05-06 Thread James Bottomley
On Wed, 2015-05-06 at 11:54 +0200, Thierry Carrez wrote:
> Hugh Blemings wrote:
> > +2
> > 
> > I think asking LWN if they have the bandwidth and interest to do this
> > would be ideal - they've credibility in the Free/Open Source space and a
> > proven track record.  Nice people too.
> 
> On the bandwidth side, as a regular reader I was under the impression
> that they struggled with their load already, but I guess if it comes
> with funding that could be an option.
> 
> On the interest side, my past tries to invite them to the OpenStack
> Summit so that they could cover it (the way they cover other
> conferences) were rejected, so I have doubts in that area as well.
> 
> Anyone having a personal connection that we could leverage to pursue
> that option further ?

Sure, be glad to.

I've added Jon to the cc list (if his openstack mail sorting scripts
operate like mine, that will get his attention).

I already had a preliminary discussion with him: lwn.net is interested
but would need to hire an extra person to cover the added load.  That
makes it quite a big business investment for them.

James





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] cross project communication: periodic developer newsletter?

2015-05-05 Thread James Bottomley
On Tue, 2015-05-05 at 10:39 -0700, Joshua Harlow wrote:
> > James Bottomley <mailto:james.bottom...@hansenpartnership.com>
> > May 5, 2015 at 9:53 AM
> > On Tue, 2015-05-05 at 10:45 +0200, Thierry Carrez wrote:
> >> Joe Gordon wrote:
> >>> [...]
> >>> To tackle this I would like to propose the idea of a periodic developer
> >>> oriented newsletter, and if we agree to go forward with this, hopefully
> >>> the foundation can help us find someone to write newsletter.
> >> I've been discussing the idea of a "LWN" for OpenStack for some time,
> >> originally with Mark McLoughlin. For those who don't know it, LWN
> >> (lwn.net) is a source of quality tech reporting on Linux in general (and
> >> the kernel in particular). It's written by developers and tech reporters
> >> and funded by subscribers.
> >>
> >> An LWN-like OpenStack development newsletter would provide general
> >> status, dive into specific features, report on specific
> >> talks/conferences, summarize threads etc. It would be tremendously
> >> useful to the development community.
> >>
> >> The issue is, who can write such content ? It is a full-time job to
> >> produce authored content, you can't just copy (or link to) content
> >> produced elsewhere. It takes a very special kind of individual to write
> >> such content: the person has to be highly technical, able to tackle any
> >> topic, and totally connected with the OpenStack development community.
> >> That person has to be cross-project and ideally have already-built
> >> legitimacy.
> >
> > Here, you're being overly restrictive.  Lwn.net isn't staffed by top
> > level kernel maintainers (although it does solicit the occasional
> > article from them).  It's staffed by people who gained credibility via
> > their insightful reporting rather than by their contributions.  I see no
> > reason why the same model wouldn't work for OpenStack.
> >
> > There is one technical difference: in the kernel, you can get all the
> > information from the linux-kernel (and other mailing list) firehose if
> > you're skilled enough to extract it.  With OpenStack, openstack-dev
> > isn't enough so you have to do other stuff as well, but that's more or
> > less equivalent to additional research.
> >
> >>   It's basically the kind of profile every OpenStack company
> >> is struggling and fighting to hire. And that rare person should not
> >> really want to spend that much time developing (or become CTO of a
> >> startup) but prefer to write technical articles about what happens in
> >> OpenStack development. I'm not sure such a person exists. And a
> >> newsletter actually takes more than one such person, because it's a lot
> >> of work (even if not weekly).
> >
> > That's a bit pessimistic: followed to its logical conclusion it would
> > say that lwn.net can't exist either ... which is a bit of a
> > contradiction.
> >
> >> So as much as I'd like this to happen, I'm not convinced it's worth
> >> getting excited unless we have clear indication that we would have
> >> people willing and able to pull it off. The matter of who pays the bill
> >> is secondary -- I just don't think the profile exists.
> >>
> >> For the matter, I tried to push such an idea in the past and couldn't
> >> find anyone to fit the rare profile I think is needed to succeed. All
> >> the people I could think of had other more interesting things to do. I
> >> don't think things changed -- but I'd love to be proven wrong.
> >
> > Um, I assume you've thought of this already, but have you tried asking
> > lwn.net?  As you say above, they already fit the profile.  Whether they
> > have the bandwidth is another matter, but I believe their Chief Editor
> > (Jon Corbet) may welcome a broadening of the funding base, particularly
> > if the OpenStack foundation were offering seed funding for the
> > endeavour.
> 
> +1 to that, although lwn.net is partially subscriber only (yes I'm a 
> subscriber); so if say we had a 'openstack section' there (just like 
> there is a kernel section, or a security section, or a distributions 
> section...) how would that work? It'd be neat to have what we do on 
> lwn.net vs having a openstack clone/similar thing to lwn.net (because 
> IMHO we already make our [...]

Re: [openstack-dev] [all] cross project communication: periodic developer newsletter?

2015-05-05 Thread James Bottomley
On Tue, 2015-05-05 at 10:45 +0200, Thierry Carrez wrote:
> Joe Gordon wrote:
> > [...]
> > To tackle this I would like to propose the idea of a periodic developer
> > oriented newsletter, and if we agree to go forward with this, hopefully
> > the foundation can help us find someone to write newsletter.
> 
> I've been discussing the idea of a "LWN" for OpenStack for some time,
> originally with Mark McLoughlin. For those who don't know it, LWN
> (lwn.net) is a source of quality tech reporting on Linux in general (and
> the kernel in particular). It's written by developers and tech reporters
> and funded by subscribers.
> 
> An LWN-like OpenStack development newsletter would provide general
> status, dive into specific features, report on specific
> talks/conferences, summarize threads etc. It would be tremendously
> useful to the development community.
> 
> The issue is, who can write such content ? It is a full-time job to
> produce authored content, you can't just copy (or link to) content
> produced elsewhere. It takes a very special kind of individual to write
> such content: the person has to be highly technical, able to tackle any
> topic, and totally connected with the OpenStack development community.
> That person has to be cross-project and ideally have already-built
> legitimacy.

Here, you're being overly restrictive.  Lwn.net isn't staffed by top
level kernel maintainers (although it does solicit the occasional
article from them).  It's staffed by people who gained credibility via
their insightful reporting rather than by their contributions.  I see no
reason why the same model wouldn't work for OpenStack.

There is one technical difference: in the kernel, you can get all the
information from the linux-kernel (and other mailing list) firehose if
you're skilled enough to extract it.  With OpenStack, openstack-dev
isn't enough so you have to do other stuff as well, but that's more or
less equivalent to additional research.

>  It's basically the kind of profile every OpenStack company
> is struggling and fighting to hire. And that rare person should not
> really want to spend that much time developing (or become CTO of a
> startup) but prefer to write technical articles about what happens in
> OpenStack development. I'm not sure such a person exists. And a
> newsletter actually takes more than one such person, because it's a lot
> of work (even if not weekly).

That's a bit pessimistic: followed to its logical conclusion it would
say that lwn.net can't exist either ... which is a bit of a
contradiction.

> So as much as I'd like this to happen, I'm not convinced it's worth
> getting excited unless we have clear indication that we would have
> people willing and able to pull it off. The matter of who pays the bill
> is secondary -- I just don't think the profile exists.
> 
> For the matter, I tried to push such an idea in the past and couldn't
> find anyone to fit the rare profile I think is needed to succeed. All
> the people I could think of had other more interesting things to do. I
> don't think things changed -- but I'd love to be proven wrong.

Um, I assume you've thought of this already, but have you tried asking
lwn.net?  As you say above, they already fit the profile.  Whether they
have the bandwidth is another matter, but I believe their Chief Editor
(Jon Corbet) may welcome a broadening of the funding base, particularly
if the OpenStack foundation were offering seed funding for the
endeavour.

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fixing the Nova Core Reviewer Frustration [was Re: [Nova] PTL Candidacy]

2015-04-07 Thread James Bottomley
On Tue, 2015-04-07 at 14:24 -0600, Chris Friesen wrote:
> On 04/07/2015 01:35 PM, James Bottomley wrote:
> 
> > If I look at the history, I also see some reviewers dropping out once
> > their concerns and review comments have been addressed (after giving a
> > +1), so the other thing I'd suggest is that instead of erasing the
> > review history on each patch submission, it carries over (at least the
> > -1 and +1) so you don't have to wait a while for a consensus to form
> > (reviewers would, of course, be able to alter their votes at any time).
> > The pressure is thus on the submitter to make the changes to switch
> > every original -1 to a +1.
> 
> How would we deal with the scenario where someone leaves a -1 with a comment 
> and 
> then never goes back to check if their concerns were dealt with?

Same way we deal with it now: eventually the comment gets ignored.  The
submitter should be very motivated to try to get the original commenter
to reply, but there should be an escalation process that eventually
stops the -1 from mattering.

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fixing the Nova Core Reviewer Frustration [was Re: [Nova] PTL Candidacy]

2015-04-07 Thread James Bottomley
On Tue, 2015-04-07 at 18:12 +, Tim Bell wrote:
> > -Original Message-
> > From: James Bottomley [mailto:james.bottom...@hansenpartnership.com]
> > Sent: 07 April 2015 19:03
> > To: Michael Still
> > Cc: OpenStack Development Mailing List (not for usage questions)
> > Subject: [openstack-dev] Fixing the Nova Core Reviewer Frustration [was Re:
> > [Nova] PTL Candidacy]
> > 
> > On Tue, 2015-04-07 at 11:27 +1000, Michael Still wrote:
> > > Additionally, we have consistently asked for non-cores to help cover
> > > the review load. It doesn't have to be a core that notices a problem
> > > with a patch -- anyone can do that. There are many people who do help
> > > out with non-core reviews, and I am thankful for all of them. However,
> > > I keep meeting people who complain about review delays, but who don't
> > > have a history of reviewing themselves. That's confusing and
> > > frustrating to me.
> > 
> > I can understand why you're frustrated, but not why you're surprised:
> > the process needs to be different.  Right now the statement is that
> > for a patch series to be accepted it has to have a positive review
> > from a core plus one other; however, the "one other" can be a
> > colleague, so it's easy.  The problem, as far as submitters see it,
> > is getting that Core Reviewer.  That's why so much frenzy (which
> > contributes to your frustration) goes into it.  And why all the
> > complaining which annoys you.
> > 
> > To fix the frustration, you need to fix the process: make the cores
> > more of a second-level approver rather than a front-line reviewer,
> > and I predict the frenzy to get a core will go down and so will core
> > frustration.  Why not require a +1 from one (or even more than one)
> > independent (for some useful value of independent) reviewer before
> > the cores will even look at it?  That way the cores know someone
> > already thought the patch was good, so they're no longer being
> > pestered to review any old thing, and the first job of a submitter
> > becomes to find an independent reviewer rather than go bother a core.
> > 
> 
> If I take a case that we were very interested in
> (https://review.openstack.org/#/c/129420/) for nested project quota,
> we seemed to need two +2s from core reviewers on the spec. 
> 
> There were many +1s but these did not seem to result in an increase in
> attention to get the +2s. Initial submission of the spec was in
> October but we did not get approval till the end of January.
> 
> Unfortunately, we were unable to get the code into the right shape
> after the spec approval to make it into Kilo.
>
> One of the issues for the academic/research sector is that there is a
> significant resource available from project resources but these are
> time limited. Thus, if a blueprint and code commit cannot be completed
> within the window for the project, the project ends and resources to
> complete are no longer available. Naturally, rejections on quality
> grounds such as code issues or lack of test cases is completely
> reasonable but the latency time can extend the time to delivery
> significantly.
> 
> Luckily, in this case, the people concerned are happy to continue to
> completion (and the foundation is sponsoring the travel for the summit
> too) but this would not always be the case.

So I think you're telling me that my proposed process would elongate the
submission window and this would cause more stuff to drop because of the
fixed timeframes people have to get stuff in? I think I have to admit to
that, but would like to propose some refinements based on the analysis
below.

There were 35 patch sets in this review spanning a time period from 18
October to 21 Jan, when it was included.  Up to patch set 17, the
non-core reviews were mostly -1.  From 17-27 there was some back and
forth satisfying various reviewer comments and updating the blueprint.
And from 28-34 the comments were getting more positive.  35 was the true
OK point (except that gerrit kicked it out for a merge problem) and 36
was the accepted code.

Under the process I proposed, the cores wouldn't have got involved until
somewhere around patch set 18-20; I don't think this would cause
anything to lengthen (patch set 20 was Dec 13).  However, looking at the
history, the refinements I would suggest are no -1s to pass to the
cores, so they wouldn't actually get involved until patch set 26.

If I look at the history, I also see some reviewers dropping out once
their concerns and review comments have been addressed (after giving a
+1), so the other thing I'd suggest is that instead of erasing the
review history on each patch submission, it carries over (at least the
-1 and +1) so you don't have to wait a while for a consensus to form
(reviewers would, of course, be able to alter their votes at any time).
The pressure is thus on the submitter to make the changes to switch
every original -1 to a +1.

James

Re: [openstack-dev] Fixing the Nova Core Reviewer Frustration

2015-04-07 Thread James Bottomley
On Tue, 2015-04-07 at 13:35 -0400, Anita Kuno wrote:
> On 04/07/2015 01:02 PM, James Bottomley wrote:
> > On Tue, 2015-04-07 at 11:27 +1000, Michael Still wrote:
> >> Additionally, we have consistently asked for non-cores to help cover
> >> the review load. It doesn't have to be a core that notices a problem
> >> with a patch -- anyone can do that. There are many people who do help
> >> out with non-core reviews, and I am thankful for all of them. However,
> >> I keep meeting people who complain about review delays, but who don't
> >> have a history of reviewing themselves. That's confusing and
> >> frustrating to me.
> > 
> > I can understand why you're frustrated, but not why you're surprised:
> > the process needs to be different.  Right now the statement is that for
> > a patch series to be accepted it has to have a positive review from a
> > core plus one other; however, the "one other" can be a colleague, so
> > it's easy.  The problem, as far as submitters see it, is getting that
> > Core Reviewer.  That's why so much frenzy (which contributes to your
> > frustration) goes into it.  And why all the complaining which annoys
> > you.
> > 
> > To fix the frustration, you need to fix the process: make the cores
> > more of a second-level approver rather than a front-line reviewer, and
> > I predict the frenzy to get a core will go down and so will core
> > frustration.  Why not require a +1 from one (or even more than one)
> > independent (for some useful value of independent) reviewer before the
> > cores will even look at it?  That way the cores know someone already
> > thought the patch was good, so they're no longer being pestered to
> > review any old thing, and the first job of a submitter becomes to find
> > an independent reviewer rather than go bother a core.
> > 
> > James
> > 
> > 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> Hi James:
> 
> Since this is now an open thread and no longer has anything to do with
> Nova PTL candidacy or anyone else's PTL candidacy, I'm going to jump in
> here with a recommendation.
> 
> Since you are familiar with Gerrit yourself and have a merged patch,
> perhaps you can spend some time between now and summit and review as
> many Nova (or the project of your choice) patches as you can to learn
> what life is like from the reviewers point of view.

Thanks for the suggestion.  However, I didn't make my initial
recommendation without some insight into what's going on: I'm the SCSI
subsystem maintainer of Linux, meaning I act like an OpenStack core for
all the patches that go into that subsystem.

We hit a crisis point in Linux last year where I simply couldn't review
all the patches and someone else took over to help out.  He instituted
a process whereby no patch got consideration until it had at least one
other review, and even though he's stepped back again, I find that
adhering to this process brings my workload back to being manageable,
because I can just tell anyone bothering me about a patch to go away
and find a reviewer.  Once it has a reviewer (provided I trust them), I
merely need to glance over it to verify no problems before including it
in the git tree.

I'm basing my recommendation directly on how this process has helped me
continue to do my job in Linux.

Now, I think it's fair game to argue about whether this would, or would
not be applicable to OpenStack and whether the benefits we saw in Linux
would be fully realized in a different environment.  I do, though, think
it's slightly unwise to dismiss out of hand experience gained in other
projects, unless you truly believe OpenStack has nothing to learn from
anyone else?

James

> If you find it supportive to do so please help yourself to this blog
> post I wrote about reviewing an OpenStack patch:
> http://anteaya.info/blog/2013/03/21/reviewing-an-openstack-patch/
> 
> Thanks James,
> Anita.
> 
> https://review.openstack.org/#/q/reviewer:%22James+Bottomley+%253Cjejbcan1%2540hansenpartnership.com%253E%22,n,z
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fixing the Nova Core Reviewer Frustration [was Re: [Nova] PTL Candidacy]

2015-04-07 Thread James Bottomley
On Tue, 2015-04-07 at 11:27 +1000, Michael Still wrote:
> Additionally, we have consistently asked for non-cores to help cover
> the review load. It doesn't have to be a core that notices a problem
> with a patch -- anyone can do that. There are many people who do help
> out with non-core reviews, and I am thankful for all of them. However,
> I keep meeting people who complain about review delays, but who don't
> have a history of reviewing themselves. That's confusing and
> frustrating to me.

I can understand why you're frustrated, but not why you're surprised:
the process needs to be different.  Right now the statement is that for
a patch series to be accepted it has to have a positive review from a
core plus one other; however, the "one other" can be a colleague, so
it's easy.  The problem, as far as submitters see it, is getting that
Core Reviewer.  That's why so much frenzy (which contributes to your
frustration) goes into it.  And why all the complaining which annoys
you.

To fix the frustration, you need to fix the process: make the cores
more of a second-level approver rather than a front-line reviewer, and
I predict the frenzy to get a core will go down and so will core
frustration.  Why not require a +1 from one (or even more than one)
independent (for some useful value of independent) reviewer before the
cores will even look at it?  That way the cores know someone already
thought the patch was good, so they're no longer being pestered to
review any old thing, and the first job of a submitter becomes to find
an independent reviewer rather than go bother a core.
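
As a minimal sketch of that two-tier queue (assuming change records
already fetched from Gerrit into dicts with hypothetical "owner",
"company" and "votes" fields; the real Gerrit REST schema differs),
the cores would only ever see changes that already carry an
independent +1:

def independent_plus_one(change):
    # "Independent" here means: not the submitter and not a colleague.
    return any(vote["value"] >= 1
               and vote["reviewer"] != change["owner"]
               and vote["company"] != change["company"]
               for vote in change["votes"])

def core_review_queue(changes):
    # The first tier filters; the cores act as second-level approvers.
    return [c for c in changes if independent_plus_one(c)]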

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug expiration

2015-04-02 Thread James Bottomley
On Thu, 2015-04-02 at 07:03 -0400, Sean Dague wrote:
> On 04/02/2015 06:54 AM, James Bottomley wrote:
> > On Thu, 2015-04-02 at 06:45 -0400, Sean Dague wrote:
> >> On 04/02/2015 06:33 AM, James Bottomley wrote:
> >>> On Thu, 2015-04-02 at 11:32 +0200, Thierry Carrez wrote:
> >>>> Sean Dague wrote:
> >>>>> I just spent a chunk of the morning purging out some really old
> >>>>> Incomplete bugs because about 9 months ago we disabled the auto
> >>>>> expiration bit in launchpad -
> >>>>> https://bugs.launchpad.net/nova/+configure-bugtracker
> >>>>>
> >>>>> This is a manually grueling task, which by looking at these bugs, no one
> >>>>> else is doing. I'd like to turn that bit back on so we can actually get
> >>>>> attention focused on actionable bugs.
> >>>>>
> >>>>> Any objections here?
> >>>>
> >>>> No objection, just a remark:
> >>>>
> >>>> One issue with auto-expiration is that it usually results in the
> >>>> following story:
> >>>>
> >>>> 1. Someone reports bug
> >>>> 2. Triager notices NEW bug, asks reporter for details, sets INCOMPLETE
> >>>> 3. Reporter provides details
> >>>> 4. Triager does not notice reply on bug since they ignore INCOMPLETE
> >>>> 5. Bug expires after n months and disappears forever
> >>>> 6. Reporter is frustrated and won't ever submit issues again
> >>>>
> >>>> The problem is of course at step 4, not at step 5. Auto-expiration is
> >>>> very beneficial if your bug triaging routine includes checking Launchpad
> >>>> for "INCOMPLETE bugs with an answer" regularly. If nobody does this very
> >>>> boring task, then auto-expiration can be detrimental.
> >>>>
> >>>> Is anyone in Nova checking for "INCOMPLETE bugs with an answer" ? That's
> >>>> task 4 in https://wiki.openstack.org/wiki/BugTriage ...
> >>>
> >>> This actually looks to be a problem in the workflow to me.
> >>>
> >>> The OpenStack Incomplete/Confirmed seem to map roughly to the bugzilla
> >>> Need Info/Open states.  The difference is that in bugzilla, a reporter
> >>> can clear the Need Info flag.  This is also what needs to happen in
> >>> OpenStack (so the reporter doesn't need to wait on anyone looking at
> >>> their input to move the bug on).
> >>>
> >>> I propose allowing the reporter to move the bug to Confirmed when they
> >>> supply the information making it incomplete.  If the triager thinks this
> >>> is wrong, they can set it back to incomplete again.  This has the net
> >>> effect that Incomplete needs no real review, it marks bugs the reporter
> >>> doesn't care enough about to reply... and these can be auto expired.
> >>>
> >>> This would make the initial state diagram
> >>>
> >>>
> >>> +---+  Review   +----------+
> >>> |New|---------->|Incomplete|
> >>> +---+           +----------+
> >>>   |                  ^   |
> >>>   | Still Needs Info |   | Reporter replies
> >>>   |                  |   v
> >>>   |   Review       +-----------+
> >>>   +--------------->| Confirmed |
> >>>                    +-----------+
> >>>
> >>>
> >>> James
> >>
> >> Reporters can definitely move it back to New, which is the expected
> >> flow, that means it gets picked up again on the next New bug sweep.
> >> That's Step #1 in triaging (for Nova we've agressively worked to keep
> >> that very near 0). I don't remember if they can also move it into
> >> Confirmed themselves if they aren't in the nova-bugs group, though that
> >> is an open group.
> >>
> >> Mostly the concern is people that don't understand the tools or bug
> >> flow. So they respond and leave in Incomplete. Or it's moved to
> >> Incomplete and they never respond because they don't understand that
> >> more info is needed. These things sit there for a year, and then there
> >> is some whiff of a real problem in them, but no path forward with that
> >> information.
> > 
> > But we have automation: the system can move it to Confirmed when they
> > reply.  [...]

Re: [openstack-dev] [nova] bug expiration

2015-04-02 Thread James Bottomley
On Thu, 2015-04-02 at 06:45 -0400, Sean Dague wrote:
> On 04/02/2015 06:33 AM, James Bottomley wrote:
> > On Thu, 2015-04-02 at 11:32 +0200, Thierry Carrez wrote:
> >> Sean Dague wrote:
> >>> I just spent a chunk of the morning purging out some really old
> >>> Incomplete bugs because about 9 months ago we disabled the auto
> >>> expiration bit in launchpad -
> >>> https://bugs.launchpad.net/nova/+configure-bugtracker
> >>>
> >>> This is a manually grueling task, which by looking at these bugs, no one
> >>> else is doing. I'd like to turn that bit back on so we can actually get
> >>> attention focused on actionable bugs.
> >>>
> >>> Any objections here?
> >>
> >> No objection, just a remark:
> >>
> >> One issue with auto-expiration is that it usually results in the
> >> following story:
> >>
> >> 1. Someone reports bug
> >> 2. Triager notices NEW bug, asks reporter for details, sets INCOMPLETE
> >> 3. Reporter provides details
> >> 4. Triager does not notice reply on bug since they ignore INCOMPLETE
> >> 5. Bug expires after n months and disappears forever
> >> 6. Reporter is frustrated and won't ever submit issues again
> >>
> >> The problem is of course at step 4, not at step 5. Auto-expiration is
> >> very beneficial if your bug triaging routine includes checking Launchpad
> >> for "INCOMPLETE bugs with an answer" regularly. If nobody does this very
> >> boring task, then auto-expiration can be detrimental.
> >>
> >> Is anyone in Nova checking for "INCOMPLETE bugs with an answer" ? That's
> >> task 4 in https://wiki.openstack.org/wiki/BugTriage ...
> > 
> > This actually looks to be a problem in the workflow to me.
> > 
> > The OpenStack Incomplete/Confirmed seem to map roughly to the bugzilla
> > Need Info/Open states.  The difference is that in bugzilla, a reporter
> > can clear the Need Info flag.  This is also what needs to happen in
> > OpenStack (so the reporter doesn't need to wait on anyone looking at
> > their input to move the bug on).
> > 
> > I propose allowing the reporter to move the bug to Confirmed when they
> > supply the information making it incomplete.  If the triager thinks this
> > is wrong, they can set it back to incomplete again.  This has the net
> > effect that Incomplete needs no real review, it marks bugs the reporter
> > doesn't care enough about to reply... and these can be auto expired.
> > 
> > This would make the initial state diagram
> > 
> > 
> > +---+  Review   +----------+
> > |New|---------->|Incomplete|
> > +---+           +----------+
> >   |                  ^   |
> >   | Still Needs Info |   | Reporter replies
> >   |                  |   v
> >   |   Review       +-----------+
> >   +--------------->| Confirmed |
> >                    +-----------+
> > 
> > 
> > James
> 
> Reporters can definitely move it back to New, which is the expected
> flow, that means it gets picked up again on the next New bug sweep.
> That's Step #1 in triaging (for Nova we've agressively worked to keep
> that very near 0). I don't remember if they can also move it into
> Confirmed themselves if they aren't in the nova-bugs group, though that
> is an open group.
> 
> Mostly the concern is people that don't understand the tools or bug
> flow. So they respond and leave in Incomplete. Or it's moved to
> Incomplete and they never respond because they don't understand that
> more info is needed. These things sit there for a year, and then there
> is some whiff of a real problem in them, but no path forward with that
> information.

But we have automation: the system can move it to Confirmed when they
reply.  The point is to try to make the states and timeouts
self-classifying.  If Incomplete means no-one cared enough about this
bug to supply the requested information, then it's a no-brainer
candidate for expiry.  The question I was asking is "could the states
be set up so this happens", and I believe the answer, based on the
above workflow, is "yes".

Now if it sits in Confirmed because the triager didn't read the supplied
information, it's not a candidate for expiry, it's a candidate for
kicking someone's arse.

The fundamental point is to make the states align with time-triggered
actionable consequences.  That's what I believe the problem with the
current workflow is: someone has to look at each bug to determine what
Incomplete actually means, which I'd view as unbelievably painful for
that person (or group of people).

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug expiration

2015-04-02 Thread James Bottomley
On Thu, 2015-04-02 at 11:32 +0200, Thierry Carrez wrote:
> Sean Dague wrote:
> > I just spent a chunk of the morning purging out some really old
> > Incomplete bugs because about 9 months ago we disabled the auto
> > expiration bit in launchpad -
> > https://bugs.launchpad.net/nova/+configure-bugtracker
> > 
> > This is a manually grueling task, which by looking at these bugs, no one
> > else is doing. I'd like to turn that bit back on so we can actually get
> > attention focused on actionable bugs.
> > 
> > Any objections here?
> 
> No objection, just a remark:
> 
> One issue with auto-expiration is that it usually results in the
> following story:
> 
> 1. Someone reports bug
> 2. Triager notices NEW bug, asks reporter for details, sets INCOMPLETE
> 3. Reporter provides details
> 4. Triager does not notice reply on bug since they ignore INCOMPLETE
> 5. Bug expires after n months and disappears forever
> 6. Reporter is frustrated and won't ever submit issues again
> 
> The problem is of course at step 4, not at step 5. Auto-expiration is
> very beneficial if your bug triaging routine includes checking Launchpad
> for "INCOMPLETE bugs with an answer" regularly. If nobody does this very
> boring task, then auto-expiration can be detrimental.
> 
> Is anyone in Nova checking for "INCOMPLETE bugs with an answer" ? That's
> task 4 in https://wiki.openstack.org/wiki/BugTriage ...

This actually looks to be a problem in the workflow to me.

The OpenStack Incomplete/Confirmed seem to map roughly to the bugzilla
Need Info/Open states.  The difference is that in bugzilla, a reporter
can clear the Need Info flag.  This is also what needs to happen in
OpenStack (so the reporter doesn't need to wait on anyone looking at
their input to move the bug on).

I propose allowing the reporter to move the bug to Confirmed when they
supply the information making it incomplete.  If the triager thinks this
is wrong, they can set it back to incomplete again.  This has the net
effect that Incomplete needs no real review, it marks bugs the reporter
doesn't care enough about to reply... and these can be auto expired.

This would make the initial state diagram


+---+  Review   +----------+
|New|---------->|Incomplete|
+---+           +----------+
  |                  ^   |
  | Still Needs Info |   | Reporter replies
  |                  |   v
  |   Review       +-----------+
  +--------------->| Confirmed |
                   +-----------+
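
As a minimal sketch of these self-classifying transitions (assuming a
hypothetical tracker hook with plain string states; Launchpad's real
API differs):

EXPIRABLE = {"Incomplete"}  # no reply means no-one cares: safe to expire

def on_reporter_reply(bug):
    # The reporter supplying the requested information moves the bug
    # on, without waiting for a triager to notice the reply.
    if bug["status"] == "Incomplete":
        bug["status"] = "Confirmed"

def on_triage_review(bug, still_needs_info):
    # The triager can always push it back if the information is
    # inadequate.
    if still_needs_info:
        bug["status"] = "Incomplete"

def expire(bugs, age_limit_days):
    # Auto-expiry now only ever hits bugs the reporter ignored.
    return [b for b in bugs
            if not (b["status"] in EXPIRABLE
                    and b["age_days"] > age_limit_days)]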


James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-31 Thread James Bottomley
On Fri, 2015-03-27 at 17:01 +, Tim Bell wrote:
> From the stats 
> (http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014),
> 
> 
> -43% of production clouds use OVS (the default for Neutron)
> 
> -30% of production clouds are Nova network based
> 
> -15% of production clouds use linux bridge
> 
> There is therefore a significant share of the OpenStack production
> user community who are interested in a simple provider network linux
> bridge based solution.
>  
> I think it is important to make an attractive cloud solution  where
> deployers can look at the balance of function and their skills and
> choose the appropriate combinations.
> 
> Whether a simple network model should be the default is a different
> question to should there be a simple option. Personally, one of the
> most regular complaints I get is the steep learning curve for a new
> deployment. If we could make it so that people can do it as a series
> of steps (by making an path to add OVS) rather than a large leap, I
> think that would be attractive.

To be honest, there's a technology gap between the LinuxBridge and OVS
that cannot be filled.  We've found, since we sell technology to hosting
companies, that we got immediate backlash when we tried to switch from
a LinuxBridge to OVS in our Cloud Server product.  The specific problem
is that lots of hosting providers have heavily scripted iptables and
traffic control rules on the host side (i.e. on the bridge) for
controlling client networks, which simply cannot be replicated in OVS.
Telling the customers to rewrite everything in OpenFlow causes
incredulity and threats to pull the product.  No currently existing or
planned technology is there to fill this gap (the closest was Google's
plan to migrate iptables rules to OpenFlow, which died).  Our net
takeaway is that we have to provide both options for the foreseeable
future (scripting works to convert some use cases, but by no means
all ... and in our case not even a majority).
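
For concreteness, the kind of host-side scripting at issue looks
roughly like the sketch below.  The interface name and address are made
up, and this assumes bridged traffic is passed through netfilter via
br_netfilter; the point is that such rules key off the Linux bridge,
and since OVS doesn't push frames through netfilter they have no
direct equivalent short of an OpenFlow rewrite:

import subprocess

def block_spoofed_client_traffic(vif="vnet0", client_ip="203.0.113.10"):
    # Drop any bridged frame entering via the client's tap device that
    # doesn't carry the client's assigned source IP (anti-spoofing).
    subprocess.check_call(
        ["iptables", "-A", "FORWARD",
         "-m", "physdev", "--physdev-in", vif,
         "!", "-s", client_ip, "-j", "DROP"])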

So the point of this for OpenStack is that seeing it as a choice
between LinuxBridge and OVS sets up a false dichotomy.  Realistically,
the future network technology has to support both, at least until the
trailing edge becomes more comfortable with SDN.

Moving neutron to ML2 instead of L2 helps isolate neutron from the
bridge technology, but it doesn't do anything to help the customer who
is currently poking at L2 to implement specific policies because they
have to care what the bridge technology is.  Telling the customer not to
poke the bridge isn't an option because they see the L2 plane as their
primary interface to diagnose and fix network issues ... which is why
they care about the bridge technology.

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-03-04 Thread James Bottomley
On Wed, 2015-03-04 at 11:19 +0100, Thierry Carrez wrote:
> James Bottomley wrote:
> > On Tue, 2015-03-03 at 11:59 +0100, Thierry Carrez wrote:
> >> Second it's at a very different evolution/maturity point (20 years old
> >> vs. 0-4 years old for OpenStack projects).
> > 
> > Yes, but I thought I covered this in the email: you can see that at the
> > four-year point in its lifecycle, the kernel was behaving very
> > differently (and in fact much more like OpenStack).  The question I
> > thought was still valid is whether anything was learnable from the way
> > the kernel evolved later.  I think the key issue, which you seem to
> > have in OpenStack, is that the separate develop/stabilise phases caused
> > frustration to build up in our system, which (nine years later) led the
> > kernel to adopt main-branch stabilisation with an overlapping subsystem
> > development cycle.
> 
> I agree with you: the evolution the kernel went through is almost a
> natural law, and I know we won't stay in the current model forever. I'm
> just not sure we have reached the level of general stability that makes
> it possible to change *just now*. I welcome brainstorming and discussion
> on future evolutions, though, and intend to lead a cross-project session
> discussion on that in Vancouver.

OK, I'll be in Vancouver, so happy to come and give input from
participating in the kernel process (for a bit longer than I care to
admit to ...).

One interesting thing might be to try to work out roughly where
OpenStack is on the project trajectory.  It's progressing much more
rapidly than the kernel (by 4 years in, the kernel didn't even have
source control), so the release crisis it took the kernel 12 years to
reach might be a bit closer than people imagine.

James







Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-03-03 Thread James Bottomley
On Tue, 2015-03-03 at 11:59 +0100, Thierry Carrez wrote:
> James Bottomley wrote:
> > Actually, this is possible: look at Linux, it freezes for 10 weeks of a
> > 12 week release cycle (or 6 weeks of an 8 week one).  More on this
> > below.
> 
> I'd be careful with comparisons with the Linux kernel. First it's a
> single bit of software, not a collection of interconnected projects.

Well, we do have interconnection: the kernel on its own doesn't do
anything without a userspace.  The theory was that we didn't have to be
like BSD (coupled userspace and kernel) and that we could rely on others
(principally the GNU project in the early days) to provide the
userspace, decoupling kernel development from userspace releases.
Threading models were, I think, the biggest challenge to this
assumption, but we survived.

> Second it's at a very different evolution/maturity point (20 years old
> vs. 0-4 years old for OpenStack projects).

Yes, but I thought I covered this in the email: you can see that at the
4 year point in its lifecycle, the kernel was behaving very differently
(and in fact more similarly to OpenStack).  The question I thought was
still valid is whether anything was learnable from the way the kernel
evolved later.  I think the key issue, which you seem to have in
OpenStack, is that the separate develop/stabilise phases caused
frustration to build up in our system, which (nine years later) led the
kernel to adopt main-branch stabilisation with overlapping subsystem
development cycles.

>  Finally it sits at a
> different layer, so there is less need for documentation/translations to
> be shipped with the software release.

It's certainly a lot less than you have, but we do ship the entire set
of system call man pages.  They're an official project of the kernel:

https://www.kernel.org/doc/man-pages/

And we maintain translations for them:

https://www.kernel.org/doc/man-pages/translations.html

The main difference is that our translation projects are not integrated
into our releases.  They're done mostly by the downstream (Debian, if
you look; the rest of the distros do invest in translations, but they
don't make the upstream effort with them), so we don't specify what
translations we have; we allow interested parties to choose to help.

I would argue that the downstream is the best place to manage
translations, because they decide which markets are important or
interesting; but if you still want to control this at the upstream end,
Daniel's proposal would mean you only really need translations for
stable releases.

James

> The only comparable project in terms of evolution/maturity in the
> OpenStack world would be Swift, and it happily produces releases every
> ~2 months with a 1-week stabilisation period.






Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-03-02 Thread James Bottomley
On Tue, 2015-02-24 at 12:05 +0100, Thierry Carrez wrote:
> Daniel P. Berrange wrote:
> > [...]
> > The key observations
> > 
> > 
> > The first key observation from the schedule is that although we have
> > a 6 month release cycle, we in fact make 4 releases in that six
> > months because there are 3 milestone releases approx 6-7 weeks apart
> > from each other, in addition to the final release. So one of the key
> > burdens of a more frequent release cycle is already being felt, to
> > some degree.
> > 
> > The second observation is that thanks to the need to support a
> > continuous deployment model, the GIT master branches are generally
> > considered to be production ready at all times. The tree does not
> > typically go through periods of major instability that can be seen
> > in other projects, particular those which lack such comprehensive
> > testing infrastructure.
> > 
> > The third observation is that due to the relatively long cycle, and
> > increasing amounts of process, the work accomplished during the
> > cycles is becoming increasingly bursty. This is in turn causing
> > unacceptably long delays for contributors when their work is unlucky
> > enough to not get accepted during certain critical short windows of
> > opportunity in the cycle.
> > 
> > The first two observations strongly suggest that the choice of 6
> > months as a cycle length is a fairly arbitrary decision that can be
> > changed without unreasonable pain. The third observation suggests a
> > much shorter cycle length would smooth out the bumps and lead to a
> > more efficient & satisfying development process for all involved.
> 
> I think you're judging the cycle from the perspective of developers
> only. 6 months was not an arbitrary decision. Translations and
> documentation teams basically need a month of feature/string freeze in
> order to complete their work. Since we can't reasonably freeze one month
> every 2 months, we picked 6 months.

Actually, this is possible: look at Linux, it freezes for 10 weeks of a
12 week release cycle (or 6 weeks of an 8 week one).  More on this
below.

> It's also worth noting that we were on a 3-month cycle at the start of
> OpenStack. That was dropped after a cataclysmic release that managed the
> feat of (a) not having anything significant done, and (b) having out of
> date documentation and translations.
> 
> While I agree that the packagers and stable teams can opt to skip a
> release, the docs, translations or security teams don't really have that
> luxury... Please go beyond the developers' needs and consider the needs
> of the other teams.
> 
> Random other comments below:
> 
> > [...]
> > Release schedule
> > 
> > 
> > First the releases would probably be best attached to a set of
> > pre-determined fixed dates that don't ever vary from year to year.
> > e.g. releases happen Feb 1st, Apr 1st, Jun 1st, Aug 1st, Oct 1st, and
> > Dec 1st. If a particular release slips, don't alter following release
> > dates, just shorten the length of the dev cycle, so it becomes fully
> > self-correcting. The even numbered months are suggested to avoid a
> > release landing in xmas/new year :-)
> 
> The Feb 1 release would probably be pretty empty :)
> 
> > [...]
> > Stable branches
> > ---
> > 
> > The consequences of a 2 month release cycle appear fairly severe for
> > the stable branch maint teams at first sight. This is not, however,
> > an insurmountable problem. The linux kernel shows an easy way forward
> > with their approach of only maintaining stable branches for a subset
> > of major releases, based around user / vendor demand. So it is still
> > entirely conceivable that the stable team only provide stable branch
> > releases for 2 out of the 6 yearly releases. ie no additional burden
> > over what they face today. Of course they might decide they want to
> > do more stable branches, but maintain each for a shorter time. So I
> > could equally see them choosing to do 3 or 4 stable branches a year.
> > Whatever is most effective for those involved and those consuming
> > them is fine.
> 
> Stable branches may have the luxury of skipping releases and designating a
> "stable" one from time to time (I reject the Linux comparison because
> the kernel is at a very different moment in software lifecycle). The
> trick being, making one release "special" is sure to recreate the peak
> issues you're trying to solve.

I don't disagree with the observation about different points in the
lifecycle, but perhaps it might be instructive to ask whether the Linux
kernel ever had a period in its development history that looked somewhat
like OpenStack does now.  I would claim it did: before 2.6, we had the
odd/even develop/stabilise cycle.  The theory driving it was that we
needed a time for everyone to develop, then a time for everyone to help
stabilise.  You yourself said this in the other thread:

> Joe Gordon wrote:
> > [...]
> > I think a lot of the frustration with

Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-06 Thread James Bottomley
On Sat, 2015-02-07 at 00:44 +, Adrian Otto wrote:
> Magnum Team,
> 
> In our initial spec, we addressed the subject of resource scheduling. Our 
> plan was to begin with a naive scheduler that places resources on a specified 
> Node and can sequentially fill Nodes if one is not specified.
> 
> Magnum supports multiple conductor backends[1], one of which is our 
> Kubernetes backend. We also have a native Docker backend that we would like 
> to enhance so that when pods or containers are created, the target nodes can 
> be selected according to user-supplied filters. Some examples of this are:
> 
> Constraint, Affinity, Anti-Affinity, Health
> 
> We have multiple options for solving this challenge. Here are a few:
> 
> 1) Cherry pick scheduler code from Nova, which already has a working
> filter scheduler design. 
> 2) Integrate swarmd to leverage its scheduler[2]. 
> 3) Wait for Gantt, when the Nova Scheduler is moved out of Nova.
> This is expected to happen about a year from now, possibly sooner.
> 4) Write our own filter scheduler, inspired by Nova.
> 
> I suggest that we deserve to have a scheduling implementation for our
> native docker backend before Gantt is ready. It’s unrealistic that the
> Magnum team will be able to accelerate Gantt’s progress, as
> significant changes must be made in Nova for this to happen. The Nova
> team is best equipped to do this. It requires active participation
> from Nova’s core review team, which has limited bandwidth, and other
> priorities to focus on. I think we unanimously agree that we would
> prefer to use Gantt, if it were available sooner.
> 
> I suggest we also rule out option 4, because it amounts to
> re-inventing the wheel.
> 
> That leaves us with options 1 and 2 in the short term. The
> disadvantage of either of these approaches is that we will likely need
> to remove them and replace them with Gantt (or a derivative work) once
> it matures. The advantage of option 1 is that python code already
> exists for this, and we know it works well in Nova. We could cherry
> pick that over, and drop it directly into Magnum. The advantage of
> option 2 is that we leverage the talents of the developers working on
> Swarm, which is better than option 4. New features are likely to
> surface in parallel with our efforts, so we would enjoy the benefits
> of those without expending work in our own project.
> 
> So, how do you feel about options 1 and 2? Which do you feel is more
> suitable for Magnum? What other options should we consider that might
> be better than either of these choices?
> 
> I have a slight preference for option 2 - integrating with Swarm, but
> I could be persuaded to pick option 1, or something even more
> brilliant. Please discuss.

Got to say that Option 1 looks far preferable.  As you say, we have to
switch to Gantt eventually, so Option 2 might end up being an expensive
and difficult retrofit.  With Option 1, we look mostly like the Nova
scheduler, so we can let them take the initial hit of doing the shift
to Gantt and slipstream in their wake once the major pain points are
ironed out.
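
For reference, the filter scheduler pattern in question is small enough
to sketch; this is an illustration of the pattern only, not Nova's
actual code, and the node attributes and filters are invented:

    # Toy filter/weigh scheduler: filters prune the candidate list,
    # then a weigher ranks whatever survives.  Not Nova's implementation.
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        free_ram_mb: int
        healthy: bool = True

    def ram_filter(node, request):
        return node.free_ram_mb >= request["ram_mb"]

    def health_filter(node, request):
        return node.healthy

    def schedule(nodes, request, filters, weigher):
        candidates = [n for n in nodes
                      if all(f(n, request) for f in filters)]
        if not candidates:
            raise RuntimeError("no valid host found")
        return max(candidates, key=weigher)

    nodes = [Node("n1", 512), Node("n2", 4096),
             Node("n3", 8192, healthy=False)]
    best = schedule(nodes, {"ram_mb": 1024},
                    [ram_filter, health_filter],
                    weigher=lambda n: n.free_ram_mb)  # spread by free RAM
    print(best.name)  # -> n2 (n1 too small, n3 unhealthy)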

James





Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-15 Thread James Bottomley
On Wed, 2014-10-15 at 11:24 -0400, David Vossel wrote:
> 
> - Original Message -
> > On Tue, 2014-10-14 at 19:52 -0400, David Vossel wrote:
> > > 
> > > - Original Message -
> > > > Ok, why are you so down on running systemd in a container?
> > > 
> > > It goes against the grain.
> > > 
> > > From a distributed systems view, we gain quite a bit of control by
> > > maintaining
> > > "one service per container". Containers can be re-organised and 
> > > re-purposed
> > > dynamically.
> > > If we have systemd trying to manage an entire stack of resources within a
> > > container,
> > > we lose this control.
> > > 
> > > From my perspective a containerized application stack needs to be managed
> > > externally
> > > by whatever is orchestrating the containers to begin with. When we take a
> > > step back
> > > and look at how we actually want to deploy containers, systemd doesn't 
> > > make
> > > much sense.
> > > It actually limits us in the long run.
> > > 
> > > Also... recovery. Using systemd to manage a stack of resources within a
> > > single container
> > > makes it difficult for whatever is externally enforcing the availability 
> > > of
> > > that container
> > > to detect the health of the container.  As it is now, the actual service 
> > > is
> > > pid 1 of a
> > > container. If that service dies, the container dies. If systemd is pid 1,
> > > there can
> > > be all kinds of chaos occurring within the container, but the external
> > > distributed
> > > orchestration system won't have a clue (unless it invokes some custom
> > > health monitoring
> > > tools within the container itself, which will likely be the case someday.)
> > 
> > I don't really think this is a good argument.  If you're using docker,
> > docker is the management and orchestration system for the containers.
> 
> no, docker is a local tool for pulling images and launching containers.
> Docker is not the distributed resource manager in charge of overseeing
> what machines launch what containers and how those containers are linked
> together.

Well, neither is systemd: fleet management has a variety of solutions.

> > There's no dogmatic answer to the question should you run init in the
> > container.
> 
> an init daemon might make sense to put in some containers where we have
> a tightly coupled resource stack. There could be a use case where it would
> make more sense to put these resources in a single container.
> 
> I don't think systemd is a good solution for the init daemon though. Systemd
> attempts to handle recovery itself as if it has the entire view of the 
> system. With containers, the system view exists outside of the containers.
> If we put an internal init daemon within the containers, that daemon needs
> to escalate internal failures. The easiest way to do this is to
> have init die if it encounters a resource failure (init is pid 1, pid 1 
> exiting
> causes container to exit, container exiting gets the attention of whatever
> is managing the containers)

I won't comment on what init should be.  However, as I have said, init
should not be running in application containers, because it complicates
the situation.  Application containers are more compelling the simpler
they are constructed, because they're easier to describe in XML +
templates.

> > The reason for not running init inside a container managed by docker is
> > that you want the template to be thin for ease of orchestration and
> > transfer, so you want to share as much as possible with the host.  The
> > more junk you put into the container, the fatter and less agile it
> > becomes, so you should probably share the init system with the host in
> > this paradigm.
> 
> I don't think the local init system and containers should have anything
> to do with one another.  I said this in a previous reply, I'm approaching
> this problem from a distributed management perspective. The host's
> init daemon only has a local view of the world. 

If the container is an OS container, what you run inside is a full OS
stack; the only sharing is the kernel, so you get whatever the distro is
using as init, and for some of them that's systemd.  You have no choice
with OS containers.

> > 
> > Conversely, containers can be used to virtualize full operating systems.
> > This isn't the standard way of doing docker, but LXC and OpenVZ by
> > default do containers this way.  For this type of container, because you
> > have a full OS running inside the container, you have to also have
> > systemd (assuming it's the init system) running within the container.
> 
> sure, if you want to do this use systemd. I don't understand the use case
> where this makes any sense though. For me this falls in the "yeah you can do 
> it,
> but why?" category.

It's the standard Service Provider use case: containers are used as
dense, lightweight Virtual Environments for Virtual Private Servers.
The customer can be provisioned with whatever OS they like, but you
still get 3x the density and 

Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-15 Thread James Bottomley
On Tue, 2014-10-14 at 19:52 -0400, David Vossel wrote:
> 
> - Original Message -
> > Ok, why are you so down on running systemd in a container?
> 
> It goes against the grain.
> 
> From a distributed systems view, we gain quite a bit of control by maintaining
> "one service per container". Containers can be re-organised and re-purposed 
> dynamically.
> If we have systemd trying to manage an entire stack of resources within a 
> container,
> we lose this control.
> 
> From my perspective a containerized application stack needs to be managed 
> externally
> by whatever is orchestrating the containers to begin with. When we take a 
> step back
> and look at how we actually want to deploy containers, systemd doesn't make 
> much sense.
> It actually limits us in the long run.
> 
> Also... recovery. Using systemd to manage a stack of resources within a 
> single container
> makes it difficult for whatever is externally enforcing the availability of 
> that container
> to detect the health of the container.  As it is now, the actual service is 
> pid 1 of a
> container. If that service dies, the container dies. If systemd is pid 1, 
> there can
> be all kinds of chaos occurring within the container, but the external 
> distributed
> orchestration system won't have a clue (unless it invokes some custom health 
> monitoring
> tools within the container itself, which will likely be the case someday.)

I don't really think this is a good argument.  If you're using docker,
docker is the management and orchestration system for the containers.
There's no dogmatic answer to the question should you run init in the
container.

The reason for not running init inside a container managed by docker is
that you want the template to be thin for ease of orchestration and
transfer, so you want to share as much as possible with the host.  The
more junk you put into the container, the fatter and less agile it
becomes, so you should probably share the init system with the host in
this paradigm.

Conversely, containers can be used to virtualize full operating systems.
This isn't the standard way of doing docker, but LXC and OpenVZ by
default do containers this way.  For this type of container, because you
have a full OS running inside the container, you have to also have
systemd (assuming it's the init system) running within the container.

James





Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread James Bottomley
On Thu, 2014-09-11 at 16:20 +0100, Duncan Thomas wrote:
> On 11 September 2014 15:35, James Bottomley
>  wrote:
> 
> > OK, so look at a concrete example: in 2002, the Linux kernel went with
> > bitkeeper precisely because we'd reached the scaling limit of a single
> > integration point, so we took the kernel from a single contributing team
> > to a bunch of them.  This was expanded with git in 2005 and leads to the
> > hundreds of contributing teams we have today.
> 
> 
> One thing the kernel has that Openstack doesn't, that alter the way
> this model plays out, is a couple of very strong, forthright and frank
> personalities at the top who are pretty well respected. Both Andrew
> and Linus (and others) regularly if not frequently rip into ideas
> quite scathingly, even after they have passed other barriers and
> gauntlets and just say no to things. Openstack has nothing of this
> sort, and there is no evidence that e.g. the TC can, should or desire
> to fill this role.

Linus is the court of last appeal.  It's already a team negotiation
failure if stuff bubbles up to him.  The somewhat abrasive response
you'll get if you're being stupid acts as strong downward incentive on
the teams to sort out their own API squabbles *before* they get this
type of visibility.

The whole point of open source is aligning the structures with the
desire to fix it yourself.  In an ideal world, everything would get
sorted at the local level and nothing would bubble up.  Of course, the
world isn't ideal, so you need some court of last appeal, but it doesn't
have to be an individual ... it just has to be something that's
daunting, to encourage local settlement, and decisive.

Every process has to have something like this anyway.  If there's no
process for sorting out intractable disputes, they go on forever and
damage the project.

James





Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread James Bottomley
On Thu, 2014-09-11 at 07:36 -0400, Sean Dague wrote:
> >>> b) The conflict Dan is speaking of is around the current situation where 
> >>> we
> >>> have a limited core review team bandwidth and we have to pick and choose
> >>> which virt driver-specific features we will review. This leads to bad
> >>> feelings and conflict.
> >>
> >> The way this worked in the past is we had cores who were subject
> >> matter experts in various parts of the code -- there is a clear set of
> >> cores who "get" xen or libvirt for example and I feel like those
> >> drivers get reasonable review times. What's happened though is that
> >> we've added a bunch of drivers without adding subject matter experts
> >> to core to cover those drivers. Those newer drivers therefore have a
> >> harder time getting things reviewed and approved.
> > 
> > FYI, for Juno at least I really don't consider that even the libvirt
> > driver got acceptable review times in any sense. The pain of waiting
> > for reviews in libvirt code I've submitted this cycle is what prompted
> > me to start this thread. All the virt drivers are suffering way more
> > than they should be, but those without core team representation suffer
> > to an even greater degree.  And this is ignoring the point Jay & I
> > were making about how the use of a single team means that there is
> > always contention for feature approval, so much work gets cut right
> > at the start even if maintainers of that area felt it was valuable
> > and worth taking.
> 
> I continue to not understand how N non-overlapping teams make this any
> better. You have to pay the integration cost somewhere. Right now we're
> trying to pay it 1 patch at a time. This model means the integration
> units get much bigger, and with less common ground.

OK, so look at a concrete example: in 2002, the Linux kernel went with
bitkeeper precisely because we'd reached the scaling limit of a single
integration point, so we took the kernel from a single contributing team
to a bunch of them.  This was expanded with git in 2005 and leads to the
hundreds of contributing teams we have today.

The reason this scales nicely is precisely because the integration costs
are lower.  However, there are a couple of principles that really assist
us getting there.  The first is internal API management: an internal API
is a contract between two teams (may be more, but usually two).  If
someone wants to change this API they have to negotiate between the two
(or more) teams.  This naturally means that only the affected components
review this API change, but *only* they need to review it, so it doesn't
bubble up to the whole kernel community.  The second is automation:
linux-next and the zero day test programme build and smoke test an
integration of all our development trees.  If one team does something
that impacts another in their development tree, this system gives us
immediate warning.  Basically we run continuous integration, so when
Linus does his actual integration pull, everything goes smoothly (that's
how we integrate all the 300 or so trees for a kernel release in about
ten days).  We also now have a lot of review automation (checkpatch.pl
for instance), but that's independent of the number of teams.

In this model the scaling comes from the local reviews and integration.
The more teams the greater the scaling.  The factor which obstructs
scaling is the internal API ... it usually doesn't make sense to
separate a component where there's no API between the two pieces ...
however, if you think there should be, separating and telling the teams
to figure it out is a great way to generate the API.   The point here is
that since an API is a contract, forcing people to negotiate and abide
by the contract tends to make them think much more carefully about it.
The internal API thus moves from being a global issue to a local one.

By the way, the extra link work is actually time well spent, because it
means the link APIs are negotiated by teams with use cases, not just
designed by abstract architecture.  The greater the link pain, the
greater the indication that there's an API problem, and the greater the
pressure on the teams at either end to fix it.  Once the link pain is
minimised, the API is likely a good one.
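
To put the "API is a contract" point in code terms, here's a toy Python
sketch (all names invented): the abstract interface is the negotiated
contract, and a change to it forces a conversation between exactly the
teams on either side of it, not the whole project:

    # The contract between two teams (names invented): team A calls it,
    # team B implements it.  Changing these signatures is a negotiation
    # between A and B only, not a project-wide review.
    import abc

    class VolumeDriver(abc.ABC):
        @abc.abstractmethod
        def attach(self, volume_id: str, host: str) -> None: ...

        @abc.abstractmethod
        def detach(self, volume_id: str, host: str) -> None: ...

    class LoopbackDriver(VolumeDriver):
        # Team B's side of the contract.
        def attach(self, volume_id, host):
            print(f"attach {volume_id} on {host}")

        def detach(self, volume_id, host):
            print(f"detach {volume_id} on {host}")

    def migrate(driver: VolumeDriver, volume_id, src, dst):
        # Team A programs against the contract, never the implementation.
        driver.detach(volume_id, src)
        driver.attach(volume_id, dst)

    migrate(LoopbackDriver(), "vol-1", "host-a", "host-b")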

> Look at how much active work in crossing core teams we've had to do to
> make any real progress on the neutron replacing nova-network front. And
> how slow that process is. I think you'll see that hugely show up here.

Well, as I said, separating the components leads to API negotiation
between the teams.  Because of the API negotiation, taking one thing and
making it two does cause more work, and it's visible work, because the
two new teams get to do the API negotiation which didn't exist before.
The trick to getting the model to scale is the network effect.  The
scaling comes from splitting out into high numbers of teams (say N); the
added work comes in the links (the API contracts) between the N teams.
If the network is star shaped (ev

Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-09 Thread James Bottomley
On Mon, 2014-09-08 at 17:20 -0700, Stefano Maffulli wrote:
> On 09/05/2014 07:07 PM, James Bottomley wrote:
> > Actually, I don't think this analysis is accurate.  Some people are
> > simply interested in small aspects of a project.  It's the "scratch your
> > own itch" part of open source.  The thing which makes itch scratchers
> > not lone wolfs is the desire to go the extra mile to make what they've
> > done useful to the community.  If they never do this, they likely have a
> > forked repo with only their changes (and are the epitome of a lone
> > wolf).  If you scratch your own itch and make the effort to get it
> > upstream, you're assisting the community (even if that's the only piece
> > of code you do) and that assistance makes you (at least for a time) part
> > of the community.
> 
> I'm starting to think that the processes we have implemented are slowing
> down (if not preventing) "scratch your own itch" contributions. The CLA
> has been identified as the cause for this but after carefully looking at
> our development processes and the documentation, I think that's only one
> part of the problem (and maybe not even as big as initially thought).

CLAs are a well known and documented barrier to casual contributions
(just look at all the Project Harmony discussion); they affect one-offs
disproportionately, since they require an investment of effort to
understand, and legal resources are often unavailable to individuals.
The key problem for individuals in the US is usually: do I or my
employer own my contribution?  That makes a huge difference to the
process for signing.

> The gerrit workflow for example is something that requires quite an
> investment in time and energy and casual developers (think operators
> fixing small bugs in code, or documentation) have little incentive to go
> through the learning curve.

I've done both ... I do prefer the patch workflow to the gerrit one, but
I think that's just because the former is what I used for ten years and
I'm very comfortable with it.  The good thing about the patch workflow
is that the initial barrier is very low.  However, the later barriers
can be as high or higher.

> To go back in topic, to the proposal to split drivers out of tree, I
> think we may want to evaluate other, simpler, paths before we embark in
> a huge task which is already quite clear will require more cross-project
> coordination.
> 
> From conversations with PTLs and core reviewers I get the impression
> that lots of drivers contributions come with bad code.

Bad code is a bit of a pejorative term.  However, I can sympathize with
the view: in the Linux kernel, drivers are often the biggest source of
coding style and maintenance issues.  I maintain a driver subsystem, and
I would have to admit that a lot of the code that goes into those
drivers wouldn't be of sufficient quality to be admitted to the core
kernel without a lot more clean-up and flow changes.  However, is this
bad code?  It mostly works, so it does the job it's designed for.
Usually the company producing the device is the one maintaining the
driver, so as long as they carry the maintenance burden and do their
job, there's no real harm.  It's a balance, and sometimes I get it
wrong, but I do know from bitter experience that there's a limit to what
you can get busy developers to do in the driver space.

>  These require a
> lot of time and reviewers energy to be cleaned up, causing burn out and
> bad feelings on all sides. What if we establish a new 'place' of some
> sort where we can send people to improve their code (or dump it without
> interfering with core?) Somewhere there may be a workflow
> "go-improve-over-there" where a Community Manager (or mentors or some
> other program we may invent) takes over and does what core reviewers
> have been trying to do 'on the side'? The advantage is that this way we
> don't have to change radically how current teams operate, we may be able
> to start this immediately with Kilo. Thoughts?

I think it's a question of communities, like Daniel said.  In the
kernel, the driver reviewers are a different community from the core
kernel code reviewers.  Most core reviewers would probably fry their own
eyeballs before they'd review device driver code.  So the solution is
not to make them; instead we set up a review community of people who
understand driver code and make allowances for some of its
eccentricities.  At the end of the day, bad code is measured by defect
count, which impacts usability for drivers, and the reputation of that
driver is what suffers.  I'm sure in OpenStack, driver reputation is an
easy way to encourage better drivers ... after all hypervisors are
pretty fungible: if the Bar 

Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread James Bottomley
On Fri, 2014-09-05 at 14:14 +0200, Thierry Carrez wrote:
> Daniel P. Berrange wrote:
> > For a long time I've used the LKML 'subsystem maintainers' model as the
> > reference point for ideas. In a more LKML like model, each virt team
> > (or other subsystem team) would have their own separate GIT repo with
> > a complete Nova codebase, where they did they day to day code submissions,
> > reviews and merges. Periodically the primary subsystem maintainer would
> > submit a large pull / merge requests to the overall Nova maintainer.
> > The $1,000,000 question in such a model is what kind of code review
> > happens during the big pull requests to integrate subsystem trees. 
> 
> Please note that the Kernel subsystem model is actually a trust tree
> based on 20 years of trust building. OpenStack is only 4 years old, so
> it's difficult to apply the same model as-is.

That's true but not entirely accurate.  The kernel maintainership is a
trust tree, but not every person in that tree has been in the position
for 20 years.  We have one or two who have (Dave Miller, net maintainer,
for instance), but we have some newcomers: Sarah Sharp has only been on
USB3.0 for a year.  People pass in and out of the maintainer tree all
the time.

In many ways, the OpenStack core model is also a trust tree (you elect
people to the core and support their nominations because you trust them
to do the required job).  It's not a one-for-one conversion, but it
should be possible to derive the trust you need from the model you
already have, should you wish to make OpenStack function more like the
Linux kernel.

Essentially Daniel's proposal boils down to making the trust boundaries
align with separated community interests to get more scaling in the
model.  This is very similar to the way the kernel operates: most
maintainers only have expertise in their own areas.  We have a few
people with broad reach, like Andrew and Linus, but by and large most
people settle down in a much smaller area.  However, you don't have to
follow the kernel model to get this to happen, you just have to identify
the natural interest boundaries of the contributors and align around
them (provided they have enough mass to form their own community).

James





Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread James Bottomley

On Fri, 2014-09-05 at 08:02 -0400, Sean Dague wrote:
> On 09/05/2014 07:40 AM, Daniel P. Berrange wrote:
> > On Fri, Sep 05, 2014 at 07:12:37AM -0400, Sean Dague wrote:
> >> On 09/05/2014 06:40 AM, Nikola Đipanov wrote:
> >>> A handy example of this I can think of is the currently granted FFE for
> >>> serial consoles - consider how much of the code went into the common
> >>> part vs. the libvirt specific part, I would say the ratio is very close
> >>> to 1 if not even in favour of the common part (current 4 outstanding
> >>> patches are all for core, and out of the 5 merged - only one of them was
> >>> purely libvirt specific, assuming virt/ will live in nova-common).
> >>>
> >>> Joe asked a similar question elsewhere on the thread.
> >>>
> >>> Once again - I am not against doing it - what I am saying is that we
> >>> need to look into this closer as it may not be as big of a win from the
> >>> number of changes needed per feature as we may think.
> >>>
> >>> Just some things to think about with regards to the whole idea, by no
> >>> means exhaustive.
> >>
> >> So maybe the better question is: what are the top sources of technical
> >> debt in Nova that we need to address? And if we did, everyone would be
> >> more sane, and feel less burnt.
> >>
> >> Maybe the drivers are the worst debt, and jettisoning them makes them
> >> someone else's problem, so that helps some. I'm not entirely convinced
> >> right now.
> >>
> >> I think Cells represents a lot of debt right now. It doesn't fully work
> >> with the rest of Nova, and produces a ton of extra code paths special
> >> cased for the cells path.
> >>
> >> The Scheduler has a ton of debt as has been pointed out by the efforts
> >> in and around Gannt. The focus has been on the split, but realistically
> >> I'm with Jay is that we should focus on the debt, and exposing a REST
> >> interface in Nova.
> >>
> >> What about the Nova objects transition? That continues to be slow
> >> because it's basically Dan (with a few other helpers from time to time).
> >> Would it be helpful if we did an all hands on deck transition of the
> >> rest of Nova for K1 and just get it done? Would be nice to have the bulk
> >> of Nova core working on one thing like this and actually be in shared
> >> context with everyone else for a while.
> > 
> > I think the idea that we can tell everyone in Nova what they should
> > focus on for a cycle, or more generally, is doomed to failure. This
> > isn't a closed source company controlled project where you can dictate
> > what everyone's priority must be. We must accept that we rely on all
> > our contributors' good will in voluntarily giving their time & resources
> > to the project, to scratch whatever itch they have in the project. We
> > have to encourage them to want to work on nova and demonstrate that we
> > value whatever form of contribution they choose to make. If we have
> > technical debt that we think is important to address, we need to
> > illustrate / show people why they should care about helping. If they
> > nonetheless decide that work isn't for them, we can't just cast them
> > aside and/or ignore their contributions while we get on with other
> > things. This is why I think it is important that we split up nova to
> > allow each area to self-organize around what they consider to be
> > priorities in their area of interest / motivation. Not enabling that is
> > going to continue to kill our community
> 
> I'm getting tired of the refrain that because we are an Open Source
> project declaring priorities is pointless, because it's not. I would say
> it's actually the exception that a developer wakes up in the morning and
> says "I completely disregard what anyone else thinks is important in
> this project, this is what I'm going to do today". Because if that's how
> they felt they wouldn't choose to be part of a community, they would
> just go do their own thing. Lone wolfs by definition don't form
> communities.

Actually, I don't think this analysis is accurate.  Some people are
simply interested in small aspects of a project.  It's the "scratch your
own itch" part of open source.  The thing which makes itch scratchers
not lone wolfs is the desire to go the extra mile to make what they've
done useful to the community.  If they never do this, they likely have a
forked repo with only their changes (and are the epitome of a lone
wolf).  If you scratch your own itch and make the effort to get it
upstream, you're assisting the community (even if that's the only piece
of code you do) and that assistance makes you (at least for a time) part
of the community.

A community doesn't necessarily require continuity from all its
elements.  It requires continuity from some (the core, if you will), but
it also allows for contributions from people who only have one or two
things they need doing.  For OpenStack to convert its users into its
contributors, it is going to have to embrace this, because they likely
only need a couple of things fixi

Re: [openstack-dev] [Containers][Nova] Containers Team Mid-Cycle Meetup to join Nova Meetup

2014-07-14 Thread James Bottomley
On Fri, 2014-07-11 at 22:31 +, Adrian Otto wrote:
> CORRECTION: This event happens July 28-31. Sorry for any confusion!
> Corrected Announcement:

I'm afraid all the Parallels guys (including me) will be in Moscow on
these dates for an already booked company meet up.

James





Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-10 Thread James Bottomley
On Thu, 2014-07-10 at 14:47 +0100, Daniel P. Berrange wrote:
> On Thu, Jul 10, 2014 at 05:36:59PM +0400, Dmitry Guryanov wrote:
> > I have a question about mounts - in the OpenVZ project each container
> > has its own filesystem in an image file. So to start a container we
> > mount this filesystem in the host OS (because all containers share the
> > same Linux kernel). Is it a security problem from the OpenStack
> > developers' point of view?
> > 
> > I have this question because libvirt's driver uses libguestfs to copy
> > files into the guest filesystem instead of a simple mount on the host.
> > Mounting with libguestfs is slower than mounting on the host, so there
> > should be strong reasons why the libvirt driver does it.
> 
> We consider mounting untrusted filesystems on the host kernel to be
> an unacceptable security risk. A user can craft a malicious filesystem
> that exploits bugs in the kernel filesystem drivers. This is particularly
> bad if you allow the kernel to probe for filesystem type since Linux
> has many many many filesystem drivers most of which are likely not
> audited enough to be considered safe against malicious data. Even the
> mainstream ext4 driver had a crasher bug present for many years
> 
>   https://lwn.net/Articles/538898/
>   http://libguestfs.org/guestfs.3.html#security-of-mounting-filesystems

Actually, there's a hidden assumption here that makes this statement
not necessarily correct for containers.  You're assuming the container
has to have raw access to the device it's mounting.  For hypervisors,
this is true, but it doesn't have to be for containers, because the
mount operation is separate from raw read and write, so we can allow or
deny them granularly.

Consider the old use case, where the container root is actually a
subdirectory of the host filesystem which gets bind mounted.  The
container has no possibility of altering the underlying block device
there.  As for block roots, which we also do, at least in the VPS world
they're mostly initialised by the hosting provider, and the VPS
environment doesn't actually get to read or write directly to them
(there's often a block on this).  Of course, they *can* be set up so the
VPS has raw access and I believe some are, but it's a choice, not a
requirement.

James





Re: [openstack-dev] [containers][nova][cinder] Cinder support in containers and unprivileged container-in-container

2014-06-13 Thread James Bottomley
On Fri, 2014-06-13 at 17:55 -0400, Eric Windisch wrote:
> >
> >
> > Why would you mount it from within the container?  CAP_SYS_ADMIN is a
> > per process property, so you use nsenter to execute the mount in the
> > required mount namespace with CAP_SYS_ADMIN from outside of the
> > container (i.e. the host).  I assume this requires changes to cinder so
> > it executes a mount rather than presenting a mountable device node, but
> > it's the same type of change we have to do for mounts which have no
> > node, like bind mounts.
> >
> 
> It's a matter of API adherence. You're right, however, another option for
> this etherpad is, "extend the API". We could add an extension to OpenStack
> that allows the host to initiate a mount inside an instance.  That isn't
> much different than the existing suggestion of a container-level API for
> speaking back to the host to initiate a mount, other than this suggestion
> being at the orchestration layer, rather than at the host-level.

OK, but this argument is effectively saying hypervisors can't do this,
so our API doesn't allow it ... it's true but a bit useless.  Containers
have all sorts of great capabilities that hypervisors don't.  The number
one great one from a security point of view is being able to reach into
the container from the host and do or configure things that the
container itself is prevented from doing.

This allows you to set up a completely secure babysat environment where
any dangerous action by the container gets referred up to the host to
perform.

> In part, this discussion and the exercise of writing this etherpad is to
> explore alternatives to "this isn't a valid use-case".  At a high-level,
> the alternatives seem to be to have an API the containers can use speak
> back to the host to initiate mounts or finding some configuration of the
> kernel (possibly with new features) that would provide a long-term solution.
> 
> I'm not fond of an API based solution because it means baking in
> expectations of a specific containers-service API such as the Docker API,
> or of a specific orchestration API such as the OpenStack Compute API. It
> might, however, be a good short-term option.

This is saying we (the container implementations) all do this in
different ways, which is true, but there's no reason we couldn't all
agree on a granular way of doing it that we could then translate to an
OpenStack API ... we just need the action performed; I don't think any
of us has a great attachment to *how* the action is performed.  I think
the recently announced libcontainers effort will help us here because it
actually has a mount API ... we could 

> Daniel also brings up an interesting point about user namespaces, although
> I'm somewhat worried about that approach given that we can exploit the host
> with crafty filesystems. It had been considered that we could provide
> configurations that only allow FUSE. Granted, there might be some
> possibility of implementing a solution that would limit containers to
> mounting specific types of filesystems, such as only allowing FUSE mounts.

I replied in the other thread, but I think CAP_SYS_ADMIN is too
dangerous for a really secure container.

James





Re: [openstack-dev] [containers][nova][cinder] Cinder support in containers and unprivileged container-in-container

2014-06-13 Thread James Bottomley
On Fri, 2014-06-13 at 09:09 +0100, Daniel P. Berrange wrote:
> On Thu, Jun 12, 2014 at 09:57:41PM +, Adrian Otto wrote:
> > Containers Team,
> > 
> > The nova-docker developers are currently discussing options for
> > implementation for supporting mounting of Cinder volumes in
> > containers, and creation of unprivileged containers-in-containters.
> > Both of these currently require CAP_SYS_ADMIN[1] which is problematic
> > because if granted within a container, can lead to an escape from the
> > container back into the host.
> 
> NB it is fine for a container to have CAP_SYS_ADMIN if user namespaces
> are enabled and the root user remapped.

Not if you want a truly secure container, but this is more of a
judgement call as to how secure the container should be.  CAP_SYS_ADMIN
is a nasty sinkhole of miscellaneous privileges, which makes it a pretty
dangerous capability for an ordinary user.  You have to be really
careful, because there are lots of ways an ordinary user with
CAP_SYS_ADMIN can actually become root.  What we did for OpenVZ was
break CAP_SYS_ADMIN up into more granular capabilities and put guards on
the dangerous ones, but even just mount can be problematic: you have to
forbid suid executables, etc., and you have to beware of fuzzed
filesystems.
James

> Also, we should remember that mounting filesystems is not the only use
> case for exposing block devices to containers. Some applications will
> happily use raw block devices directly without needing to format and
> mount any filesystem on them (eg databases).
> 
> Regards,
> Daniel






Re: [openstack-dev] [containers][nova][cinder] Cinder support in containers and unprivileged container-in-container

2014-06-12 Thread James Bottomley
On Thu, 2014-06-12 at 21:57 +, Adrian Otto wrote:
> Containers Team,
> 
> The nova-docker developers are currently discussing options for
> implementation for supporting mounting of Cinder volumes in
> containers, and creation of unprivileged containers-in-containters.
> Both of these currently require CAP_SYS_ADMIN[1] which is problematic
> because if granted within a container, can lead to an escape from the
> container back into the host.

Why would you mount it from within the container?  CAP_SYS_ADMIN is a
per-process property, so you use nsenter to execute the mount in the
required mount namespace with CAP_SYS_ADMIN from outside of the
container (i.e. the host).  I assume this requires changes to cinder so
it executes a mount rather than presenting a mountable device node, but
it's the same type of change we have to do for mounts which have no
node, like bind mounts.
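
As a concrete sketch of the host-side operation (the PID, device and
target paths are invented for illustration): nsenter, from util-linux,
runs the mount with the host's privileges but inside the container's
mount namespace, so the container itself never needs CAP_SYS_ADMIN:

    # Host-side sketch: mount a volume inside a container's mount
    # namespace.  PID and paths are invented for illustration.
    import subprocess

    def mount_in_container(container_pid, device, target):
        # --target picks the namespace owner; --mount enters its mount
        # namespace; nosuid/nodev blunt the obvious escalation routes.
        subprocess.run(
            ["nsenter", "--target", str(container_pid), "--mount",
             "mount", "-o", "nosuid,nodev", device, target],
            check=True)

    # e.g. the PID of the container's init, as seen from the host:
    mount_in_container(12345, "/dev/mapper/vol-1", "/mnt/data")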

James





Re: [openstack-dev] Process for proposing patches attached to launchpad bugs?

2013-12-23 Thread James Bottomley
On Fri, 2013-12-20 at 14:07 -0500, Russell Bryant wrote:
> On 12/20/2013 09:32 AM, Dolph Mathews wrote:
> > In the past, I've been able to get authors of bug fixes attached to
> > Launchpad bugs to sign the CLA and submit the patch through gerrit...
> > although, in one case it took quite a bit of time (and thankfully it
> > wasn't a critical fix or anything).
> > 
> > This scenario just came up again (example: [1]), so I'm asking
> > preemptively... what if the author is unwilling / unable to sign the
> > CLA and propose through gerrit, or it's a critical bug fix and waiting
> > on an author to go through the CLA process is undesirable for the
> > community? Obviously that's a bit of a fail on our part, but what's the
> > most appropriate & expedient way to handle it?
> > 
> > Can we propose the patch to gerrit ourselves?
> > 
> > If so, who should appear as the --author of the commit? Who should
> > appear as Co-Authored-By, especially when the committer helps the
> > patch evolve further in review?
> > 
> > Alternatively, am I going about this all wrong?
> > 
> > Thanks!
> > 
> > [1]: https://bugs.launchpad.net/keystone/+bug/1198171/comments/8
> 
> It's not your code, so you really can't propose it without them having
> signed the CLA, or propose it as your own.
> 
> Ideally have someone else fix the same bug that hasn't looked at the patch.
> 
> From a quick look, it seems likely that this fix is small and
> straightforward enough that the clean new implementation is going to
> end up looking very similar.  Still, I think it's the right thing to do.

What is the actual point of the CLA, given that it seems to be a barrier
to contribution?  Why not do something affirmative like a DCO instead?
The Signed-off-by on a patch submitted by any mechanism (email list,
patch to bugzilla, Launchpad, Gerrit) can then be included because it
carries the DCO affirmation with it.

The reason CLAs are such a barrier is that if you're an employee of a
company, you have to get the approval of that company to sign them.  If
the company is used to this type of thing, it's usually a couple of
days' turnaround.  If it's not, it can be weeks of pain trying to
explain to the corporate counsel what you're trying to do (and why it
won't affect the company too much) ... a large number of developers
simply give up in the middle of this.

The only point the current CLA would serve would be to allow the
OpenStack Foundation to change the licence from ASL-2.0 to an
ASL-incompatible licence, since DCO + ASL-2.0 seems to cover all the
other terms in the current CLA.  Is that really worth all the pain?  I
have a hard time coming up with another licence besides GPLv2 that is
actually ASL-2.0 incompatible, so any licensing switch could apparently
be made without the need for a copyright grant.

James





Re: [openstack-dev] Generic question: Any tips for 'keeping up' with the mailing lists?

2013-12-14 Thread James Bottomley
On Thu, 2013-12-12 at 16:23 +, Justin Hammond wrote:
> I am a developer who is currently having trouble keeping up with the
> mailing list due to volume, and my inability to organize it in my client.
> I am nearly forced to use Outlook 2011 for Mac and I have read and
> attempted to implement
> https://wiki.openstack.org/wiki/MailingListEtiquette but it is still a lot
> to deal with. I read once a topic or wiki page on using X-Topics but I
> have no idea how to set that in outlook (google has told me that the
> feature was removed).
> 
> I'm not sure if this is a valid place for this question, but I *am* having
> difficulty as a developer.
> 
> Thank you for anyone who takes the time to read this.

I currently use the same technique I used with LKML (the Linux Kernel
Mailing List) before we (finally) got subject-specific lists and I could
just subscribe to those instead.

The essential requirement is to have a threaded mail reader with a "mark
thread read" function.  I use Evolution, where this is Ctrl-H / Ctrl-K.
I skip through the threads, marking those I'm not interested in as read,
sometimes by subject and other times by skimming the thread-head email.
When I'm finished, everything I didn't mark as read is usually the
interesting stuff.

Threads I'm really interested in, I tag with an IMAP flag; I have an
Evolution search folder set on flag+children (which pops up in my mail
notification window), so I immediately see any activity on interesting
threads.

I also use procmail on the mail server, with a set of heuristics (mostly
on subject) for things I may be interested in; these get flagged as
well, so they appear in my email notifications in Evolution.  Likewise,
things I'm definitely not interested in get marked as read by the
heuristics.

The procmail thing is server-side and means you need a sophisticated
mail server, but anything that runs Linux should have Sieve, which
should be able to do this (I'm not sure about Gmail, because I've never
used it).
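
For anyone without procmail or Sieve on the server, the same subject
heuristics are easy enough to approximate client-side; here's a rough
Python/imaplib sketch (server, credentials, folder name and subject
patterns are all placeholders):

    # Rough client-side version of the heuristics: flag interesting
    # subjects, mark boring ones read.  All values are placeholders.
    import imaplib

    INTERESTING = ("[nova]", "[containers]")
    BORING = ("[horizon]",)

    imap = imaplib.IMAP4_SSL("mail.example.com")
    imap.login("me", "secret")
    imap.select("openstack-dev")

    for patterns, flag in ((INTERESTING, "\\Flagged"),
                           (BORING, "\\Seen")):
        for subj in patterns:
            # SEARCH returns space-separated message numbers.
            _, data = imap.search(None, "UNSEEN", "SUBJECT", f'"{subj}"')
            for num in data[0].split():
                imap.store(num, "+FLAGS", flag)

    imap.logout()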

James





Re: [openstack-dev] Introducing the new OpenStack service for Containers

2013-11-19 Thread James Bottomley
On Tue, 2013-11-19 at 13:46 -0500, Eric Windisch wrote:
> On Tue, Nov 19, 2013 at 1:02 PM, James Bottomley
>  wrote:
> > On Mon, 2013-11-18 at 14:28 -0800, Stuart Fox wrote:
> >> Hey all
> >>
> >> Not having been at the summit (maybe the next one), could somebody
> >> give a really short explanation as to why it needs to be a separate
> >> service?
> >> It sounds like it should fit within the Nova area. It is, after all,
> >> just another hypervisor type, or so it seems.
> >
> > I can take a stab at this:  Firstly, a container is *not* a hypervisor.
> > Hypervisor based virtualisation is done at the hardware level (so with
> > hypervisors you boot a second kernel on top of the virtual hardware),
> > container based virtualisation is done at the OS (kernel) level (so all
> > containers share the same kernel ... and sometimes even huge chunks of
> > the OS). With recent advances in the Linux Kernel, we can make a
> > container behave like a hypervisor (full OS/IaaS virtualisation), but
> > quite a bit of the utility of containers is that they can do much more
> > than hypervisors, so they shouldn't be constrained by hypervisor APIs
> > (which are effectively virtual hardware APIs).
> >
> > It is possible to extend the Nova APIs to control containers more fully,
> > but there was resistance to doing this on the grounds that it's
> > expanding the scope of Nova, hence the new project.
> 
> It might be worth noting that it was also brought up that
> hypervisor-based virtualization can offer a number of features that
> bridge some of these gaps, but are not supported in, nor may ever be
> supported in Nova.
> 
> For example, Daniel brings up an interesting point with the
> libvirt-sandbox feature. This is one of those features that bridges
> some of the gaps. There are also solutions, however brittle, for
> introspection that work on hypervisor-driven VMs. It is not clear what
> the scope or desire for these features might be, how they might be
> sufficiently abstracted between hypervisors and guest OSes, nor how
> these would fit into any of the existing or planned compute API
> buckets.

It's certainly possible, but some of them are possible in the same way
as it's possible to get a square peg into a round hole by beating the
corners flat with a sledgehammer ... it works, but it's much less hassle
just to use a round peg, because it actually fits the job.

> Having a separate service for managing containers draws a thick line
> in the sand that will somewhat stifle innovation around
> hypervisor-based virtualization. That isn't necessarily a bad thing,
> it will help maintain stability in the project. However, the choice
> and the implications shouldn't be ignored.

How about this: we get the container API agreed and use a driver model
like Nova's (we have to anyway, since there are about four different
container technologies interested in this), then we see if someone wants
to do a hypervisor driver emulating the features.
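
To make that concrete, a minimal sketch (invented names throughout, not
a real or proposed OpenStack API) of what a Nova-style driver model for
containers might look like:

    from abc import ABC, abstractmethod

    class ContainerDriver(ABC):
        """Interface each container technology (LXC, OpenVZ, ...)
        would implement."""

        @abstractmethod
        def create(self, name: str, image: str) -> None: ...

        @abstractmethod
        def start(self, name: str) -> None: ...

        # A container-specific operation with no virtual-hardware
        # analogue: run a command inside the shared-kernel environment.
        @abstractmethod
        def exec_in(self, name: str, command: list[str]) -> None: ...

    class LXCDriver(ContainerDriver):
        """Illustrative driver shelling out to the LXC userspace tools."""

        def create(self, name: str, image: str) -> None:
            print(f"lxc-create -n {name} -t {image}")

        def start(self, name: str) -> None:
            print(f"lxc-start -n {name} -d")

        def exec_in(self, name: str, command: list[str]) -> None:
            print("lxc-attach -n", name, "--", *command)

A hypervisor driver emulating the same interface could then be slotted
in behind the API without the API ever having been constrained to
virtual hardware.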

James



Re: [openstack-dev] Introducing the new OpenStack service for Containers

2013-11-19 Thread James Bottomley
On Mon, 2013-11-18 at 14:28 -0800, Stuart Fox wrote:
> Hey all
> 
> Not having been at the summit (maybe the next one), could somebody
> give a really short explanation as to why it needs to be a separate
> service?
> It sounds like it should fit within the Nova area. It is, after all,
> just another hypervisor type, or so it seems.

I can take a stab at this:  Firstly, a container is *not* a hypervisor.
Hypervisor based virtualisation is done at the hardware level (so with
hypervisors you boot a second kernel on top of the virtual hardware),
container based virtualisation is done at the OS (kernel) level (so all
containers share the same kernel ... and sometimes even huge chunks of
the OS). With recent advances in the Linux Kernel, we can make a
container behave like a hypervisor (full OS/IaaS virtualisation), but
quite a bit of the utility of containers is that they can do much more
than hypervisors, so they shouldn't be constrained by hypervisor APIs
(which are effectively virtual hardware APIs).
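
As a small demonstration of the shared-kernel point, here's a sketch
(Linux-only, needs root; nothing to do with any OpenStack API) that
creates one of the kernel namespaces containers are built from and shows
that no second kernel appears:

    # unshare() into a new UTS namespace and observe the kernel release
    # never changes: the "container" gets a private hostname but runs
    # on the very same kernel instance as the host.
    import ctypes
    import os

    CLONE_NEWUTS = 0x04000000  # from <linux/sched.h>

    libc = ctypes.CDLL(None, use_errno=True)
    print("host kernel:     ", os.uname().release)

    if libc.unshare(CLONE_NEWUTS) != 0:
        raise OSError(ctypes.get_errno(), "unshare failed (not root?)")

    name = b"container"
    libc.sethostname(name, len(name))  # private to our new namespace

    print("container kernel:", os.uname().release)   # identical
    print("container host:  ", os.uname().nodename)  # now "container"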

It is possible to extend the Nova APIs to control containers more fully,
but there was resistance to doing this on the grounds that it's
expanding the scope of Nova, hence the new project.

James



Re: [openstack-dev] Bad review patterns

2013-11-06 Thread James Bottomley
On Thu, 2013-11-07 at 00:21 +, Day, Phil wrote:
> > 
> > Leaving a mark.
> > ===
> > 
> > You review a change and see that it is mostly fine, but you feel
> > that since you did so much work reviewing it, you should at least
> > find *something* wrong. So you find some nitpick and -1 the change
> > just so that they know you reviewed it.
> > 
> > This is quite obvious. Just don't do it. It's OK to spend an hour
> > reviewing something and then leave no comments on it, because it's
> > simply fine, or because we had no means to test something (see the
> > first pattern).
> > 
> > 
> 
> Another one that comes into this category is adding a -1 which just
> says "I agree with the other -1s in here".  If you have some
> additional perspective and can expand on it then that's fine -
> otherwise it adds very little and is just review-count chasing.
> 
> It's an unfortunate consequence of counting and publishing review
> stats that having such a measure will inevitably also drive behaviour.

Perhaps a source of the problem is early voting.  Feeling pressure to
add a +/-1 (or even 2) without first asking what's going on in the code
leads to premature adjudication.

For instance, I have to do a lot of reviews in SCSI; I'm a
subject-matter expert, so I do recognise some bad coding patterns and
ask for them to be changed unconditionally, but a lot of the time I
don't necessarily understand why the code was done the way it was, so I
ask.  Before I get the answer, I'm not really qualified to judge the
merits.

James


