On Tue, 10 Apr 2012 14:08:34 -0400 Jeffrey Altman <jalt...@secure-endpoints.com> wrote:
> What I said in reply to the "add as a non-failing step" proposal is
> that nobody is going to look for a failing step if it doesn't break
> the verification process. As such, doing so is pointless. If "make
> check" is going to be added, it must be added so that it fails the
> verification if it doesn't complete successfully.

...and I responded that it makes it much easier to see when the 'make
check' tests no longer fail for a platform, so we know when we can
change it into a "failing step" (or "required to verify" step, or
whatever). Instead of asking someone to manually run the tests, I can
submit a change and look at the buildbot output to see if it worked
(see the sketch at the end of this message). I have not yet heard any
downside to doing that.

> > Asking people to manually run the tests doesn't scale well, if
> > there are a bunch of things to fix and we keep asking them to try
> > new patches.
>
> That is not what I said. To repeat it:
>
> Before "make check" is turned on for any builder, that builder's
> owner must perform "make check" manually to ensure that "make check"
> succeeds.
>
> This process doesn't have to scale. It is done at most once per
> active branch. It's no different than what I hope is done before a
> new builder is added to the requirements list. Adding a builder that
> doesn't build successfully because of a broken build environment has
> exactly the same impact.

You have made no guarantee that this is done at most once per branch:
say someone runs a builder, and 'make check' doesn't work for it
because of issues in the tree (not in the build environment). The
person running the builder isn't a developer. Now a developer has to
send them proposed changes for them to run manually until it works.
That person is now a human buildbot.

-- 
Andrew Deason
adea...@sinenomine.net
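To make the "non-failing step" proposal concrete, here is roughly what
it would look like in buildbot terms. This is only a sketch against
the 0.8-era ShellCommand API, not our actual master.cfg; the factory
and step names here are illustrative:

    # master.cfg fragment (sketch): run "make check" as a non-failing step
    from buildbot.process.factory import BuildFactory
    from buildbot.steps.shell import ShellCommand

    f = BuildFactory()

    # The build itself still gates verification: a compile failure
    # fails the builder, exactly as it does today.
    f.addStep(ShellCommand(name="build",
                           command=["make", "all"],
                           haltOnFailure=True))

    # Run the test suite, but downgrade a failure to a warning. The
    # step shows up as a warning in the waterfall instead of failing
    # the build, so anyone watching can see when the tests start
    # passing on a platform, without blocking verification meanwhile.
    f.addStep(ShellCommand(name="make check",
                           command=["make", "check"],
                           flunkOnFailure=False,
                           warnOnFailure=True))

Promoting it to a required step later would then be a one-line change
per builder: drop flunkOnFailure=False (failing the build on a failed
step is the default), or set haltOnFailure=True.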