[ https://issues.apache.org/jira/browse/IMPALA-8055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750438#comment-16750438 ]

Philip Zeyliger commented on IMPALA-8055:
-----------------------------------------

I think this is the key:

bq. That is, run-tests.py when run with only this one test, produces both lines of output.

run-tests.py calls pytest multiple times, so the "==== 2 passed ====" summary at the bottom of the log only covers the last invocation. The earlier summaries are probably there too, but they're easy to miss. To give you an example from a recent run:
{code}
$curl --silent 'https://jenkins.impala.io/job/gerrit-verify-dryrun/3660/consoleText' | grep -i '===.*in.*seconds'
] =================== 70 skipped, 140 xfailed in 98.21 seconds ===================
] ======= 1 failed, 2225 passed, 87 skipped, 51 xfailed in 2527.44 seconds =======
] =========================== 2 passed in 0.05 seconds ===========================
{code}
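
To make that concrete, here is a minimal illustrative sketch (not the real run-tests.py; the suite groupings below are made up): each pytest.main() invocation prints its own "=== ... in ... seconds ===" footer, so a log that covers several invocations ends with only the last summary visible at the bottom.
{code}
# Illustrative only: a loop like this is roughly why several summary lines
# show up in the console log.  The directory names below are made up.
import pytest

suite_args = [
  ["tests/metadata"],
  ["tests/query_test"],
]

for args in suite_args:
  # Each call prints its own "=== N passed in X seconds ===" footer.
  exit_code = pytest.main(args)
  print("pytest exit code %d for %s" % (exit_code, args))
{code}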

I followed in your footsteps, and I think what you're seeing is the multiple runs of py.test within run-tests.py, with the "metrics" tests failing. The following patch identifies the failing tests at the end by collecting them, but I'm not entirely sure that's what we want. You can also pass {{-x}} to run-tests.py to have it stop at the first failure, which makes things easier to spot.

{code}
diff --git testdata/workloads/functional-query/queries/QueryTest/explain-level1.test testdata/workloads/functional-query/queries/QueryTest/explain-level1.test
index 9a6dea3..23365ae 100644
--- testdata/workloads/functional-query/queries/QueryTest/explain-level1.test
+++ testdata/workloads/functional-query/queries/QueryTest/explain-level1.test
@@ -14,7 +14,7 @@ row_regex:.*Per-Host Resource Estimates: Memory=[0-9.]*MB.*
 '|'
 '02:HASH JOIN [INNER JOIN, BROADCAST]'
 '|  hash predicates: l_orderkey = o_orderkey'
-'|  runtime filters: RF000 <- o_orderkey'
+'|  runtime filters: RF000 <- bogus'
 row_regex:.*row-size=.* cardinality=.*
 '|'
 '|--03:EXCHANGE [BROADCAST]'
diff --git tests/run-tests.py tests/run-tests.py
index 0de9ce9..8ea9b0c 100755
--- tests/run-tests.py
+++ tests/run-tests.py
@@ -83,6 +83,7 @@ class TestCounterPlugin(object):
   def __init__(self):
     self.tests_collected = set()
     self.tests_executed = set()
+    self.failed_tests = []

   # pytest hook to handle test collection when xdist is used (parallel tests)
  # https://github.com/pytest-dev/pytest-xdist/pull/35/commits (No official documentation available)
@@ -100,11 +101,14 @@ class TestCounterPlugin(object):
   def pytest_runtest_logreport(self, report):
     if report.passed:
        self.tests_executed.add(report.nodeid)
+    if report.failed:
+      self.failed_tests.append(report.nodeid)

 class TestExecutor(object):
   def __init__(self, exit_on_error=True):
     self._exit_on_error = exit_on_error
     self.tests_failed = False
+    self.failed_tests = []
     self.total_executed = 0

   def run_tests(self, args):
@@ -121,10 +125,13 @@ class TestExecutor(object):
         print(test)

     self.total_executed += len(testcounterplugin.tests_executed)
+    self.failed_tests.extend(testcounterplugin.failed_tests)

     if 0 < pytest_exit_code < EXIT_NOTESTSCOLLECTED and self._exit_on_error:
       sys.exit(pytest_exit_code)
    self.tests_failed = 0 < pytest_exit_code < EXIT_NOTESTSCOLLECTED or self.tests_failed
+    if len(self.failed_tests) > 0:
+      assert self.tests_failed

 def build_test_args(base_name, valid_dirs=VALID_TEST_DIRS):
   """
@@ -305,4 +312,7 @@ if __name__ == "__main__":
     run(args)

   if test_executor.tests_failed:
+    print("Failed tests:")
+    for t in test_executor.failed_tests:
+      print("  " + t)
     sys.exit(1)
{code}
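
If you want to play with the idea outside of run-tests.py, the same mechanism works with a bare plugin object. This is my own standalone sketch, not part of the patch, and the test path is just an example: it records failed node ids via the pytest_runtest_logreport hook and prints them after pytest.main() returns.
{code}
# Standalone sketch of the approach in the patch above (illustrative only).
import pytest

class FailureCollector(object):
  def __init__(self):
    self.failed_tests = []

  def pytest_runtest_logreport(self, report):
    # report.failed is True for a failure in any phase (setup/call/teardown).
    if report.failed:
      self.failed_tests.append(report.nodeid)

if __name__ == "__main__":
  collector = FailureCollector()
  exit_code = pytest.main(["metadata/test_explain.py"], plugins=[collector])
  if collector.failed_tests:
    print("Failed tests:")
    for nodeid in collector.failed_tests:
      print("  " + nodeid)
{code}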

> run-tests.py reports tests as passed even if they did not
> ---------------------------------------------------------
>
>                 Key: IMPALA-8055
>                 URL: https://issues.apache.org/jira/browse/IMPALA-8055
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Infrastructure
>    Affects Versions: Impala 3.1.0
>            Reporter: Paul Rogers
>            Priority: Minor
>
> Been mucking about with the EXPLAIN output format which required rebasing a 
> bunch of tests on the new format. PlannerTest is fine: it clearly fails when 
> the expected ".test" files don't match the new "actual" files.
> When run on Jenkins in "pre-review" mode, the build does fail if a Python 
> end-to-end test fails. But the job seems to give up at that point rather than 
> running the other tests and finding more problems. (There were three separate 
> test cases that needed fixing; it took multiple runs to find them.)
> When run on my dev box, I get the following (highly abbreviated) output:
> {noformat}
> '|  in pipelines: 00(GETNEXT)' != '|  row-size=402B cardinality=5.76M'
> ...
> [gw3] PASSED metadata/test_explain.py::TestExplain::test_explain_level0[protocol: beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 0} | table_format: text/none]
> ...
> ==== 6 passed in 68.63 seconds =====
> {noformat}
> I've learned that "passed" means "maybe failed" and to go back and inspect 
> the actual output to figure out if the test did, indeed, fail. I suspect 
> "passed" means "didn't crash" rather than "tests worked."
> Would be very helpful to plumb the failure through to the summary line so it 
> said "3 passed, 3 failed" or whatever. Would be a huge time-saver.


