On Wed, Aug 13, 2014 at 5:37 AM, Mark McLoughlin <mar...@redhat.com> wrote:
> On Fri, 2014-08-08 at 15:36 -0700, Devananda van der Veen wrote:
>> On Tue, Aug 5, 2014 at 10:02 AM, Monty Taylor <mord...@inaugust.com> wrote:
>
>> > Yes.
>> >
>> > Additionally, and I think we've been getting better at this in the two
>> > cycles that we've had an all-elected TC, I think we need to learn how
>> > to say no on technical merit - and we need to learn how to say "thank
>> > you for your effort, but this isn't working out." Breaking up with
>> > someone is hard to do, but sometimes it's best for everyone involved.
>> >
>>
>> I agree.
>>
>> The challenge is scaling the technical assessment of projects. We're
>> all busy, and digging deeply enough into a new project to make an
>> accurate assessment of it is time-consuming. Sometimes there are
>> impartial subject-matter experts who can spot problems very quickly,
>> but how do we actually gauge fitness?
>
> Yes, it's important that the TC does this, and it's obvious we need to
> get a lot better at it.
>
> The Marconi architecture threads are an example of us trying harder (and
> kudos to you for taking the time), but it's a little disappointing how
> it has turned out. On the one hand there's what seems like a "this
> doesn't make any sense" gut feeling, and on the other hand an earnest,
> but hardly bite-sized, justification for how the API was chosen and how
> it led to the architecture. It's frustrating that this appears to be
> resulting in neither improved shared understanding nor improved
> architecture. Yet everyone is trying really hard.
Sometimes "trying really hard" is not enough. Saying goodbye is hard, but
as has been pointed out already in this thread, sometimes it's necessary.

>
>> Letting the industry field-test a project and feed their experience
>> back into the community is a slow process, but it is the best measure
>> of a project's success. I seem to recall this being an implicit
>> expectation a few years ago, but I haven't seen it discussed in a
>> while.
>
> I think I recall us discussing a "must have feedback that it's
> successfully deployed" requirement in the last cycle, but we recognized
> that deployers often wait until a project is integrated.

In the early discussions about incubation, we accepted the need to
officially recognize a project as part of OpenStack just to create the
uptick in adoption necessary to mature it. Similarly, integration is a
recognition of a project's maturity, but I think we have graduated
several projects long before they actually reached that level of
maturity. Actually running a project at scale for a period of time is the
only way to know it is mature enough to run in production at scale.

I'm just going to toss this out there. What if we set the graduation bar
at "is in production in at least two sizeable clouds" (note that I'm not
saying "public clouds")? Trove is the only project that has, to my
knowledge, met that bar prior to graduation, and it's the only project
graduated since Havana that I can, offhand, point to as clearly
successful. Heat and Ceilometer both graduated prior to being in
production; a few cycles later, they're still having adoption problems
and looking at large architectural changes.

I think the added cost to OpenStack when we integrate immature or
unstable projects is significant enough at this point to justify a more
defensive posture.

FWIW, Ironic currently doesn't meet that bar either - it's in production
in only one public cloud.
I'm not aware of any large private installations yet, though I suspect
some are being spun up right now, planning to hit production with the
Juno release.

-Devananda
_______________________________________________
OpenStack-dev mailing list
OpenStackemail@example.com
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev