On Thu, Dec 19, 2013 at 2:34 PM, Andrew Purtell <[email protected]> wrote:
> > You are not in favor of what is doc'd as community decision?
>
> No, and as a member of the community let me indicate that and suggest
> reconsideration.

Let's start up a formal discussion then, Andrew, with a DISCUSSION subject
rather than do it down here on the tail of an unrelated thread.

> > The policy has a mechanism for skirting absent owners; i.e. two +1s by
> > random committers == an owner's +1.
>
> The ownership idea looks good on paper, but has it worked out? People on
> the project come and go, and volunteer the time they can, which is
> variable - this is normal and expected.

The lieutenant idea is 'working' IMO: witness Jimmy's input on all to do w/
the AM, you or Gary on security/CPs, and so on. It would be better if it
were more comprehensive, with more components having more active guardians.

> Perhaps we can address the issue of crap commits more directly? Is that
> possible?

If you have suggestions, I'm all ears. Other means have been tried --
education and shaming, to name two -- but these work sporadically if at
all. Even afterward, we still have unwanted 'testing infrastructure'
committed and caught only because a second reviewer showed up after the
commit, changes to critical sections made without proofing on a 'real'
cluster under 'real' loads (where the settings that work for unit tests
can, as you might imagine, fail miserably), and incompatible changes
committed by 'veterans' even up until recently (I have been guilty of all
of the above myself). I'm kinda stumped on figuring out another means of
upping the quality of what goes in other than upping the friction. Fixing
incompatibilities after a release is paid for not by the committer but by
some other poor, unfortunate downstream soul, and the price is usually way
in excess of letting the patch steam a while until a second or third
reviewer has had a looksee (I was recently such an unfortunate myself).

We could spend more time on our testing infra. That would help w/ perf
regressions and unexpected side-effect bugs. We could build a public rig
for incompatibility checking (our Aleks has made a start). We've made a
bunch of progress in the testing area over the 0.96 period w/ the hbase-it
set, but we could do more: surface a public cluster w/ folks 'concerned'
about errors, fall-off in perf, or incompatibilities, taking up the fixing
of the regression as a priority. If we had this in place, it would
substitute for our introducing more friction (though we should have the
friction anyways, because better test infra won't catch new unwanted
'testing infrastructure' or plain bad design).

St.Ack
