I think I see what you mean. A standard Jenkins plugin sets one build status and does not set any label. The GitHub web UI then shows the combined status (e.g., if one build status is "failure", the combined build status is also "failure").
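For concreteness, here is a minimal sketch of setting statuses through the API (https://developer.github.com/v3/repos/statuses/), one per test category, each under its own "context". This is not what the actual plugin does; the owner/repo, sha, token, and context names below are placeholders:

# Minimal sketch, not the real Jenkins plugin: report one status per
# test category via the GitHub statuses API.
# owner/repo, sha, GITHUB_TOKEN and context names are placeholders.
import os
import requests

API = "https://api.github.com/repos/%s/%s/statuses/%s"

def set_status(owner, repo, sha, state, context, description):
    # "state" must be one of: pending, success, error, failure
    resp = requests.post(
        API % (owner, repo, sha),
        headers={"Authorization": "token " + os.environ["GITHUB_TOKEN"]},
        json={"state": state, "context": context, "description": description},
    )
    resp.raise_for_status()

sha = "0" * 40  # placeholder commit sha
# Each category reports under its own context, so the web UI shows one
# line per category; GitHub combines them, and any single "failure"
# makes the combined status "failure".
set_status("open-mpi", "ompi", sha, "success", "jenkins/build", "configure + make OK")
set_status("open-mpi", "ompi", sha, "failure", "jenkins/valgrind", "new leaks detected")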
With a bit of tweaking, a Jenkins plugin can set several labels. Also, strictly speaking, one Jenkins server can set several statuses from the same run. I like the idea of having several statuses (regardless of how many servers are involved), since it makes it easier to go straight to the root cause of a failure. My next point was that the build status should not be "failure" for minor annoyances (such as a valgrind failure), since that would make the combined status "failure" as well. Ideally, GitHub would accept a "success with warning" build status, but it does not.
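A "success with warning" could be approximated today by keeping the status green and attaching a label to the PR instead. A minimal sketch against the issues API, with a made-up PR number and label name (again, not what the current plugin does):

# Sketch: flag a minor issue (e.g. a valgrind warning) with a label
# instead of a failed status, so the combined status stays green.
# owner/repo, PR number, label name and GITHUB_TOKEN are placeholders.
import os
import requests

def add_labels(owner, repo, pr_number, labels):
    # A PR is an issue as far as the labels API is concerned; the v3
    # endpoint takes a JSON array of label names.
    url = "https://api.github.com/repos/%s/%s/issues/%d/labels" % (owner, repo, pr_number)
    resp = requests.post(
        url,
        headers={"Authorization": "token " + os.environ["GITHUB_TOKEN"]},
        json=labels,
    )
    resp.raise_for_status()

add_labels("open-mpi", "ompi", 1234, ["jenkins:valgrind-warning"])

Whether such a label should mark the failure or the success is still the open question raised below.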
Cheers,

Gilles

On Wednesday, November 25, 2015, Ralph Castain <r...@open-mpi.org> wrote:

> I agree about the limitation. However, what Howard is doing helps resolve
> it by breaking out the Jenkins runs into categories. So instead of one
> massive test session, set up one Jenkins server for each category. Then we
> can set the specific tags according to the test category.
>
> Make sense?
> Ralph
>
> On Nov 25, 2015, at 3:54 AM, Gilles Gouaillardet
> <gilles.gouaillar...@gmail.com> wrote:
>
> Ralph and all,
>
> My 0.02US$
>
> We are kind of limited by the GitHub API:
> https://developer.github.com/v3/repos/statuses/
> Basically, a status is pending, success, error, or failure, plus a string.
>
> A possible workaround is to have Jenkins set labels on the PR.
>
> If only valgrind fails, the status could be success, and the valgrind
> failure could be reported via the status string (which no one might bother
> reading, but that is another story) or via a label.
> (Should the label be for success or failure?)
>
> I agree it is not obvious (not to say impossible) to fully understand what
> Jenkins is doing under the hood. That could/should be documented, or at
> least the Jenkins plugin could be published (in a public repository, like
> the bot used to set labels/milestones/assignees, or privately in the
> ompi-tests repository).
>
> I will give some more thought to the testing part.
>
> Cheers,
>
> Gilles
>
> On Wednesday, November 25, 2015, Ralph Castain <r...@open-mpi.org> wrote:
>
>> Hi folks
>>
>> I wanted to pull this conversation out from the specific issue where it
>> was being conducted, as I think it merits a broader discussion. I
>> understand and appreciate the role of the Jenkins testing - what I am
>> trying to find is a way to make that testing more usable.
>>
>> There are two things that I think would help:
>>
>> 1. Separating the tests being conducted into different “buckets”. We now
>> have several types of testing being conducted:
>>
>> * Simple build tests. These don’t involve any execution. If something
>> fails a build test, it would be very helpful to clearly state exactly
>> what configure options were being used, and what compiler. Ideally, such
>> failures would be labeled as “build”.
>>
>> * Valgrind tests. These are problematic in that they are not necessarily
>> PR-specific - if anything causes a leak or valgrind issue, every PR is
>> marked as “failed”, which can lead to wasted time chasing non-existent
>> problems with a specific PR. Unfortunately, I can’t think of a way to get
>> Jenkins to properly deal with the issue other than to mark such test
>> results as “valgrind” so they are clearly called out as being in that
>> category.
>>
>> * Distribution tests that build tarballs, run “make distcheck”, etc.
>> These usually fail due to something not being included in the tarball,
>> or some directory not being completely cleaned. This is another case
>> where it is really important to know, for example, that someone used a
>> platform file when building the tarball, so it would really help to know
>> exactly how this test was conducted. Ideally, any distribution test
>> failure would be marked as “distribution” so we know what happened.
>>
>> * Run tests that execute various programs. Lots of things can go wrong
>> here, many of them dependent on exactly how the code was built (so we
>> know which components were around) and how it was run (e.g., default MCA
>> param file). Ideally, these failures would be marked as “run”.
>>
>> Please note: when I ask for a clear statement of configuration options
>> etc., what I’m saying is that it is very hard to sift through hundreds of
>> lines of output to find, for example, the cmd line that failed. A more
>> concise test output would make debugging much faster and easier, and
>> therefore make Jenkins testing much more usable.
>>
>> 2. Having the Jenkins testers clearly tell us what tests they are
>> expecting us to pass. Perhaps a list of these could be posted somewhere,
>> and some notification given as to when those lists are being changed? It
>> would help avoid surprises and allow developers a chance to test things
>> themselves before posting PRs.
>>
>> I know I’m asking for some effort on behalf of those running these
>> servers. However, I think it would make those efforts much more useful.
>> Ralph