On Tue, 10 Apr 2012 14:37:14 -0400 Jeffrey Altman <jalt...@secure-endpoints.com> wrote:
> > ...and I responded that it makes it much easier to see when the
> > 'make check' tests no longer fail for a platform, so we know when
> > we can change it into a "failing step" (or "required to verify"
> > step, or whatever). Instead of asking someone to manually run the
> > tests, I can submit a change and look at the buildbot output to
> > see if it worked. I have not yet heard any downside to doing that.
>
> As an intermediary step this is fine. This is not acceptable for
> long term use.

Yes, sorry I wasn't clear on that. I just meant doing that until
'make check' works on the platform in question: something temporary
until 'make check' becomes more robust/official, or for a short
period if we add a new builder platform and 'make check' doesn't work
on it right away. (A rough sketch of the step configuration I have in
mind follows at the end of this message.)

> The tradeoff of your approach is that not only does the failing
> builder have to build each test submission but every builder does
> until the developer gets it right. I would prefer that the builder
> owner and the developer work together out of band.

Yeah, that is an issue, but it's not really a new one. I think it
falls under the general issue that submitting a fix for any kind of
platform-specific issue causes all of the builders to build again.
But sure, this does make that problem worse. I think it's not so bad
because, in theory, it's only for a rather short period of time, but
that's obviously just my opinion. If that is the reason for not
wanting to do this, then I will not push further on it.

-- 
Andrew Deason
adea...@sinenomine.net
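For concreteness, a minimal sketch of the non-failing vs. failing
step distinction, assuming a Buildbot 0.8.x-era master.cfg; the
'factory' variable and step options here are illustrative
assumptions, not the actual OpenAFS buildbot configuration:

from buildbot.process.factory import BuildFactory
from buildbot.steps.shell import ShellCommand

factory = BuildFactory()

# Non-failing step: a 'make check' failure shows up as a warning in
# the buildbot output but does not fail (or halt) the build.
factory.addStep(ShellCommand(command=["make", "check"],
                             description="running tests",
                             descriptionDone="tests",
                             haltOnFailure=False,
                             flunkOnFailure=False,
                             warnOnFailure=True))

# Once 'make check' is reliable on the platform, flip it to a
# required ("failing") step instead:
#
# factory.addStep(ShellCommand(command=["make", "check"],
#                              haltOnFailure=True,
#                              flunkOnFailure=True))

The point of the non-failing variant is exactly what was quoted
above: the step output stays visible on every build, so you can see
from the buildbot results when the tests stop failing on a platform,
without the failures blocking anything in the meantime.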