Paul Rogers created IMPALA-8055:
-----------------------------------

             Summary: run-tests.py reports tests as passed even if they did not
                 Key: IMPALA-8055
                 URL: https://issues.apache.org/jira/browse/IMPALA-8055
             Project: IMPALA
          Issue Type: Bug
          Components: Infrastructure
    Affects Versions: Impala 3.1.0
            Reporter: Paul Rogers


I've been mucking about with the EXPLAIN output format, which required rebasing 
a bunch of tests on the new format. PlannerTest is fine: it clearly fails when 
the expected ".test" files don't match the new "actual" files.

When run on Jenkins in "pre-review" mode, the build does fail if a Python 
end-to-end test fails. But the job seems to give up at that point rather than 
running the other tests and finding more problems. (There were three separate 
test cases that needed fixing; it took multiple runs to find them all.)

When run on my dev box, I get the following (highly abbreviated) output:

{noformat}
'|  in pipelines: 00(GETNEXT)' != '|  row-size=402B cardinality=5.76M'
...
[gw3] PASSED 
metadata/test_explain.py::TestExplain::test_explain_level0[protocol: beeswax | 
exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 
0} | table_format: text/none] 
...
==== 6 passed in 68.63 seconds =====
{noformat}

I've learned that "passed" really means "maybe failed," and that I have to go 
back and inspect the actual output to figure out whether the test did, indeed, 
fail. I suspect "passed" means "didn't crash" rather than "the tests worked."

It would be very helpful to plumb the failure through to the summary line so it 
said "3 passed, 3 failed" or whatever. That would be a huge time-saver.



