Hello all,

[Background]
I have a Jenkins job where Jenkins is mainly *automating*; the tests 
themselves are not fully automated.  They aren't unit tests but rather 
comparisons between previous result files and current result files.  The 
comparison of results is complex enough that it will sometimes need human 
review, and the reviewer will sometimes determine that any differences 
found in the comparison are valid (they're due to intentional code or 
data changes).

We understand that this is decidedly suboptimal, but it's legacy and what 
we have to work with until we are able to improve the tests.
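
To make the setup concrete, the comparison step is roughly shaped like the 
sketch below (file names and contents are made up for illustration; the 
real comparison logic is far more complex than a plain diff):

```shell
#!/bin/sh
# Simplified sketch of the legacy comparison step described above.
# Paths and result contents are hypothetical.
mkdir -p previous current
printf 'metric_a=1\nmetric_b=2\n' > previous/results.txt
printf 'metric_a=1\nmetric_b=3\n' > current/results.txt

if diff -u previous/results.txt current/results.txt > results.diff; then
    echo "results match: build passes"
else
    echo "results differ: needs human review"
    # In the real Jenkins job, a non-zero exit status here is what
    # marks the build as failed:
    # exit 1
fi
```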

[Question]
Given the above, if a specific build fails due to the automated 
comparison, is there a way to *UNFAIL* the build, i.e., mark it as 
successful after human review?
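
To illustrate the behavior I'm after, something like the Pipeline sketch 
below is what I have in mind (purely hypothetical -- the script name and 
messages are made up, and I don't know whether this is the right 
mechanism, or whether it can be done for already-completed builds):

```groovy
// Hypothetical declarative Pipeline sketch -- not our actual job.
pipeline {
    agent any
    stages {
        stage('Compare results') {
            steps {
                script {
                    // Run the comparison without failing the build outright.
                    def rc = sh(script: './compare_results.sh', returnStatus: true)
                    if (rc != 0) {
                        // Pause and wait for a human to review the diff.
                        // Rejecting the input aborts the build; approving
                        // lets it continue and stay SUCCESS.
                        input message: 'Differences found. Are they valid?'
                    }
                }
            }
        }
    }
}
```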

Many thanks for any pointers.

Gerald Quimpo
