On 08/14/2014 09:21 AM, Devananda van der Veen wrote:
> On Aug 14, 2014 2:04 AM, "Eoghan Glynn" <egl...@redhat.com
> <mailto:egl...@redhat.com>> wrote:
>> > >> Letting the industry field-test a project and feed their experience
>> > >> back into the community is a slow process, but that is the best
>> > >> measure of a project's success. I seem to recall this being an
>> > >> implicit expectation a few years ago, but haven't seen it
>> > >> discussed in a while.
>> > >
>> > > I think I recall us discussing a "must have feedback that it's
>> > > successfully deployed" requirement in the last cycle, but we
>> > > recognized that deployers often wait until a project is integrated.
>> >
>> > In the early discussions about incubation, we respected the need to
>> > officially recognize a project as part of OpenStack just to create
>> > the uptick in adoption necessary to mature projects. Similarly,
>> > integration is a recognition of the maturity of a project, but I
>> > think we have graduated several projects long before they actually
>> > reached that level of maturity. Actually running a project at scale
>> > for a period of time is the only way to know it is mature enough to
>> > run it in production at scale.
>> >
>> > I'm just going to toss this out there. What if we set the graduation
>> > bar to "is in production in at least two sizeable clouds" (note that
>> > I'm not saying "public clouds"). Trove is the only project that has,
>> > to my knowledge, met that bar prior to graduation, and it's the only
>> > project that graduated since Havana that I can, off hand, point at
>> > as clearly successful. Heat and Ceilometer both graduated prior to
>> > being in production; a few cycles later, they're still having
>> > adoption problems and looking at large architectural changes. I
>> > think the added cost to OpenStack when we integrate immature or
>> > unstable projects is significant enough at this point to justify a
>> > more defensive posture.
>> >
>> > FWIW, Ironic currently doesn't meet that bar either - it's in
>> > production in only one public cloud. I'm not aware of large private
>> > installations yet, though I suspect there are some large private
>> > deployments being spun up right now, planning to hit production with
>> > the Juno release.
>> We have some hard data from the user survey presented at the Juno
>> summit, with respectively 26 & 53 production deployments of Heat and
>> Ceilometer reported.
>> There's no cross-referencing of deployment size with services in
>> production in those data presented, though it may be possible to mine
>> that out of the raw survey responses.
> Indeed, and while that would be useful information, I was referring to
> the deployment of those services at scale prior to graduation, not post
> graduation.

We have a tough messaging problem here though.  I suspect many users
wait until graduation to consider a real deployment.  "Incubated" is
viewed as immature / WIP / etc.  That won't change quickly, even if we
want it to.

I think our intention is already not to graduate something that isn't
ready for production.  That doesn't mean we haven't made mistakes, but
we're trying to learn and improve.  We developed a set of *written*
guidelines to stick to, and have been holding all projects to them.
Teams like Ceilometer have been very receptive to the process, have
developed plans to fill gaps, and have been working hard on the issues.

A hard rule requiring production deployments seems too heavy to me.  I'd
rather just say that we should be confident a project is a production
ready component, and known deployments are one piece of input that can
provide that confidence.  Extraordinary testing that demonstrates both
scale and quality could be another.

Russell Bryant

OpenStack-dev mailing list