In our MTT testing we have noticed a problem that has cropped up on
the trunk: some tests that are supposed to return a non-zero exit
status, because they deliberately generate errors, are instead
returning 0.
This problem does not exist in r23022 but does exist in r23023.
The ibm/final test can be used to reproduce the problem. A passing
case (r23022) followed by a failing case (r23023) is shown below.
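For reference, here is a minimal sketch of the kind of program the
ibm/final test runs. This is not the actual test source, just an
illustration pieced together from the error output below: the test
calls an MPI function after MPI_Finalize(), which MPI must flag as an
error, and mpirun should then exit with a non-zero status.

  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[])
  {
      MPI_Init(&argc, &argv);
      printf("ERRORS ARE EXPECTED AND NORMAL IN THIS PROGRAM!!\n");
      MPI_Finalize();
      /* Erroneous on purpose: MPI has already been finalized, so
       * this call should abort the job, and mpirun should report
       * a non-zero exit status. */
      MPI_Barrier(MPI_COMM_WORLD);
      return 0;
  }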
Ralph, do you want me to open a ticket on this, or do you just want
to take a look? I am asking you since you made the r23023 commit.
Rolf
TRUNK VERSION r23022:
[rolfv@burl-ct-x2200-6 environment]$ mpirun -np 1 -mca btl sm,self final
**************************************************************************
This test should generate a message about MPI is either not initialized or
has already been finialized.
ERRORS ARE EXPECTED AND NORMAL IN THIS PROGRAM!!
**************************************************************************
*** The MPI_Barrier() function was called after MPI_FINALIZE was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[burl-ct-x2200-6:6072] Abort after MPI_FINALIZE completed successfully;
not able to guarantee that all other processes were killed!
--------------------------------------------------------------------------
mpirun noticed that the job aborted, but has no info as to the process
that caused that situation.
--------------------------------------------------------------------------
[rolfv@burl-ct-x2200-6 environment]$ echo $status
1
[rolfv@burl-ct-x2200-6 environment]$
TRUNK VERSION r23023:
[rolfv@burl-ct-x2200-6 environment]$ mpirun -np 1 -mca btl sm,self final
**************************************************************************
This test should generate a message about MPI is either not initialized or
has already been finialized.
ERRORS ARE EXPECTED AND NORMAL IN THIS PROGRAM!!
**************************************************************************
*** The MPI_Barrier() function was called after MPI_FINALIZE was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[burl-ct-x2200-6:4089] Abort after MPI_FINALIZE completed successfully;
not able to guarantee that all other processes were killed!
[rolfv@burl-ct-x2200-6 environment]$ echo $status
0
[rolfv@burl-ct-x2200-6 environment]$