Hi folks

I wanted to pull this conversation out of the specific issue where it was
taking place, as I think it merits a broader discussion. I understand and
appreciate the role of the Jenkins testing - what I am trying to find is a way
to make that testing more usable.

There are two things that I think would help:

1. separating the tests into different “buckets”. We now have several types
of testing being run:

    * simple build tests. These don’t involve any execution. If something
      fails a build test, it would be very helpful to state clearly which
      configure options and which compiler were used. Ideally, such failures
      would be labeled as “build”.

    * valgrind tests. These are problematic in that they are not necessarily
      PR-specific - if anything causes a leak or other valgrind issue, every
      PR is marked as “failed”, which can lead to wasted time chasing
      non-existent problems in a specific PR. Unfortunately, I can’t think of
      a way to get Jenkins to deal properly with this other than to mark such
      test results as “valgrind” so they are clearly called out as being in
      that category.

    * distribution tests that build tarballs, run “make distcheck”, etc.
      These usually fail because something was not included in the tarball,
      or some directory was not completely cleaned. This is another case
      where it is really important to know, for example, that someone used a
      platform file when building the tarball, so it would really help to
      know exactly how the test was conducted. Ideally, any distribution test
      failure would be marked as “distribution” so we know what happened.

    * run tests that execute various programs. Lots of things can go wrong
      here, many of them dependent on exactly how the code was built (so we
      know which components were present) and how it was run (e.g., the
      default MCA param file). Ideally, these failures would be marked as
      “run”.

Please note: when I ask for a clear statement of configuration options,
etc., what I’m saying is that it is very hard to sift through hundreds of
lines of output to find, for example, the command line that failed. More
concise test output would make debugging much faster and easier, and
therefore make Jenkins testing much more usable. One possible way to
implement the labeling is sketched below.
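
For what it’s worth, here is a minimal sketch of how that labeling might be
done, assuming the Jenkins jobs report results through GitHub’s commit
status API. The repository name, the “jenkins/...” context names, the
report() helper, and the example configure line are all hypothetical - this
is not what the testers actually run, just an illustration:

    # Hypothetical post-build step: report each test "bucket" as its own
    # GitHub commit status so a failure is labeled build/valgrind/
    # distribution/run. All names here are illustrative.
    import os
    import requests

    REPO = "open-mpi/ompi"              # assumption: the repo under test
    SHA = os.environ["GIT_COMMIT"]      # set by the Jenkins git plugin
    TOKEN = os.environ["GITHUB_TOKEN"]  # credential injected by Jenkins

    def report(bucket, state, summary, log_url):
        """Post one status per bucket; 'context' is the visible label."""
        resp = requests.post(
            "https://api.github.com/repos/%s/statuses/%s" % (REPO, SHA),
            headers={"Authorization": "token " + TOKEN},
            json={
                "state": state,                  # "success", "failure", or "error"
                "context": "jenkins/" + bucket,  # e.g. jenkins/build, jenkins/valgrind
                "description": summary[:140],    # keep it concise: e.g. just
                                                 # the command line that failed
                "target_url": log_url,           # link to the full console log
            },
            timeout=30,
        )
        resp.raise_for_status()

    # Example: a build-bucket failure stating the exact configure invocation
    report("build", "failure",
           "./configure --disable-shared CC=icc failed at link stage",
           os.environ.get("BUILD_URL", ""))

The key point is the “context” field: GitHub shows each context as its own
line on the PR, so a “jenkins/valgrind” failure is immediately
distinguishable from a “jenkins/build” one, and the short description can
carry exactly the command line that failed.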


2. having the Jenkins testers clearly tell us what tests they expect us to
pass. Perhaps a list of these could be posted somewhere, with some
notification given when the list changes? That would help avoid surprises
and give developers a chance to test things themselves before posting PRs.
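
Until such a list exists, one rough way to see which checks currently run -
again assuming the results are posted as GitHub statuses along the lines
sketched above, with the repo name and ref here being placeholders - is to
list the status contexts on a recent commit:

    # Hypothetical helper: list the status contexts reported on a recent
    # commit; the set of contexts approximates the tests a PR must pass.
    import requests

    REPO = "open-mpi/ompi"  # assumption: the repo under test
    REF = "master"          # any branch name, tag, or commit SHA works

    resp = requests.get(
        "https://api.github.com/repos/%s/commits/%s/statuses" % (REPO, REF),
        timeout=30)
    resp.raise_for_status()

    for context in sorted({s["context"] for s in resp.json()}):
        print(context)  # e.g. jenkins/build, jenkins/valgrind, ...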

I know I’m asking for some effort on the part of those running these
servers. However, I think it would make those efforts much more useful.
Ralph
