Title: [90413] trunk
Revision: 90413
Author: [email protected]
Date: 2011-07-05 16:16:18 -0700 (Tue, 05 Jul 2011)

Log Message

2011-07-05  Eric Seidel  <[email protected]>

        buildbot needs to understand whether NRWT exited early after having too many failures
        https://bugs.webkit.org/show_bug.cgi?id=63839

        Reviewed by Adam Barth.

        Teach the new results.html how to display a warning when testing exited early.
        The warning isn't quite as nice as ORWT since I couldn't figure out how to
        find the total tests run, or total unexpected crashes.  I figure
        this is enough to get us going and we can refine it further.

        * fast/harness/results.html:

2011-07-05  Eric Seidel  <[email protected]>

        buildbot needs to understand whether NRWT exited early after having too many failures
        https://bugs.webkit.org/show_bug.cgi?id=63839

        Reviewed by Adam Barth.

        Fix-up the exited early messages printed by NRWT so that
        the buildbot can parse them as expected.
        It looks for lines using "if line.find('Exiting early') >= 0:"

        I also plumbed the "early exit" status through to results.json
        in the form of an "interrupted" bool.  It was unclear to me
        if results.json already had enough information to compute this bool
        itself.  It's possible Ojan could come up with a better fix.

        * Scripts/webkitpy/layout_tests/layout_package/manager.py:
        * Scripts/webkitpy/layout_tests/layout_package/manager_unittest.py:
        * Scripts/webkitpy/layout_tests/layout_package/printing_unittest.py:
        * Scripts/webkitpy/layout_tests/layout_package/result_summary.py:

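For context, the buildbot check quoted in the log message ("if line.find('Exiting early') >= 0:") is just a substring scan over NRWT's output. A minimal standalone sketch (the function name is hypothetical; the real master-side code is not part of this change):

```python
def run_exited_early(output):
    # Mirrors the buildbot grep quoted in the log message: any output
    # line containing 'Exiting early' marks the run as interrupted.
    for line in output.splitlines():
        if line.find('Exiting early') >= 0:
            return True
    return False
```

This is why the patch below rewords NRWT's messages to start with "Exiting early": the substring must appear verbatim for existing grep rules to keep working.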
Modified Paths

trunk/LayoutTests/ChangeLog
trunk/LayoutTests/fast/harness/results.html
trunk/Tools/ChangeLog
trunk/Tools/Scripts/webkitpy/layout_tests/layout_package/manager.py
trunk/Tools/Scripts/webkitpy/layout_tests/layout_package/manager_unittest.py
trunk/Tools/Scripts/webkitpy/layout_tests/layout_package/printing_unittest.py
trunk/Tools/Scripts/webkitpy/layout_tests/layout_package/result_summary.py

Diff

Modified: trunk/LayoutTests/ChangeLog (90412 => 90413)


--- trunk/LayoutTests/ChangeLog	2011-07-05 22:59:53 UTC (rev 90412)
+++ trunk/LayoutTests/ChangeLog	2011-07-05 23:16:18 UTC (rev 90413)
@@ -1,3 +1,17 @@
+2011-07-05  Eric Seidel  <[email protected]>
+
+        buildbot needs to understand whether NRWT exited early after having too many failures
+        https://bugs.webkit.org/show_bug.cgi?id=63839
+
+        Reviewed by Adam Barth.
+
+        Teach the new results.html how to display a warning when testing exited early.
+        The warning isn't quite as nice as ORWT since I couldn't figure out how to
+        find the total tests run, or total unexpected crashes.  I figure
+        this is enough to get us going and we can refine it further.
+
+        * fast/harness/results.html:
+
 2011-07-05  Sam Weinig  <[email protected]>
 
         Null deref accessing CustomEvent.detail

Modified: trunk/LayoutTests/fast/harness/results.html (90412 => 90413)


--- trunk/LayoutTests/fast/harness/results.html	2011-07-05 22:59:53 UTC (rev 90412)
+++ trunk/LayoutTests/fast/harness/results.html	2011-07-05 23:16:18 UTC (rev 90413)
@@ -76,6 +76,13 @@
     left: 1px;
 }
 
+.stopped-running-early-message {
+    border: 3px solid #d00;
+    font-weight: bold;
+    display: inline-block;
+    padding: 3px;
+}
+
 .result-container {
     display: inline-block;
     border: 1px solid gray;
@@ -1016,6 +1023,9 @@
             '<label><input id="toggle-images" type=checkbox checked _onchange_="handleToggleImagesChange()">Toggle images</label>' +
         '</div></div>';
 
+    if (globalState().results.interrupted)
+        html += "<p class='stopped-running-early-message'>Testing exited early.</p>"
+
     html += failingTestsTable(globalState().nonFlakyFailingTests,
         'Tests where results did not match expected results', 'results-table');
 

Modified: trunk/Tools/ChangeLog (90412 => 90413)


--- trunk/Tools/ChangeLog	2011-07-05 22:59:53 UTC (rev 90412)
+++ trunk/Tools/ChangeLog	2011-07-05 23:16:18 UTC (rev 90413)
@@ -1,5 +1,26 @@
 2011-07-05  Eric Seidel  <[email protected]>
 
+        buildbot needs to understand whether NRWT exited early after having too many failures
+        https://bugs.webkit.org/show_bug.cgi?id=63839
+
+        Reviewed by Adam Barth.
+
+        Fix-up the exited early messages printed by NRWT so that
+        the buildbot can parse them as expected.
+        It looks for lines using "if line.find('Exiting early') >= 0:"
+
+        I also plumbed the "early exit" status through to results.json
+        in the form of an "interrupted" bool.  It was unclear to me
+        if results.json already had enough information to compute this bool
+        itself.  It's possible Ojan could come up with a better fix.
+
+        * Scripts/webkitpy/layout_tests/layout_package/manager.py:
+        * Scripts/webkitpy/layout_tests/layout_package/manager_unittest.py:
+        * Scripts/webkitpy/layout_tests/layout_package/printing_unittest.py:
+        * Scripts/webkitpy/layout_tests/layout_package/result_summary.py:
+
+2011-07-05  Eric Seidel  <[email protected]>
+
         new-run-webkit-tests fails to start http server if one is already running
         https://bugs.webkit.org/show_bug.cgi?id=63956
 

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/layout_package/manager.py (90412 => 90413)


--- trunk/Tools/Scripts/webkitpy/layout_tests/layout_package/manager.py	2011-07-05 22:59:53 UTC (rev 90412)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/layout_package/manager.py	2011-07-05 23:16:18 UTC (rev 90413)
@@ -73,7 +73,11 @@
 TestExpectations = test_expectations.TestExpectations
 
 
-def summarize_results(port_obj, expectations, result_summary, retry_summary, test_timings, only_unexpected):
+# FIXME: This should be on the Manager class (since that's the only caller)
+# or split off from Manager onto another helper class, but should not be a free function.
+# Most likely this should be made into its own class, and this super-long function
+# split into many helper functions.
+def summarize_results(port_obj, expectations, result_summary, retry_summary, test_timings, only_unexpected, interrupted):
     """Summarize failing results as a dict.
 
     FIXME: split this data structure into a separate class?
@@ -101,10 +105,8 @@
 
     tbe = result_summary.tests_by_expectation
     tbt = result_summary.tests_by_timeline
-    results['fixable'] = len(tbt[test_expectations.NOW] -
-                                tbe[test_expectations.PASS])
-    results['skipped'] = len(tbt[test_expectations.NOW] &
-                                tbe[test_expectations.SKIP])
+    results['fixable'] = len(tbt[test_expectations.NOW] - tbe[test_expectations.PASS])
+    results['skipped'] = len(tbt[test_expectations.NOW] & tbe[test_expectations.SKIP])
 
     num_passes = 0
     num_flaky = 0
@@ -207,6 +209,7 @@
     results['num_flaky'] = num_flaky
     results['num_regressions'] = num_regressions
     results['uses_expectations_file'] = port_obj.uses_test_expectations_file()
+    results['interrupted'] = interrupted  # Does results.html have enough information to compute this itself? (by checking total number of results vs. total number of tests?)
     results['layout_tests_dir'] = port_obj.layout_tests_dir()
     results['has_wdiff'] = port_obj.wdiff_available()
     results['has_pretty_patch'] = port_obj.pretty_patch_available()
@@ -223,6 +226,7 @@
     """Raised when a test run should be stopped immediately."""
     def __init__(self, reason):
         self.reason = reason
+        self.msg = reason
 
     def __reduce__(self):
         return self.__class__, (self.reason,)
@@ -815,8 +819,7 @@
                                              result_summary.expected,
                                              result_summary.unexpected)
 
-        unexpected_results = summarize_results(self._port,
-            self._expectations, result_summary, retry_summary, individual_test_timings, _only_unexpected_=True)
+        unexpected_results = summarize_results(self._port, self._expectations, result_summary, retry_summary, individual_test_timings, _only_unexpected_=True, interrupted=interrupted)
         self._printer.print_unexpected_results(unexpected_results)
 
         # Re-raise a KeyboardInterrupt if necessary so the caller can handle it.
@@ -829,10 +832,8 @@
             not keyboard_interrupted):
             # Write the same data to log files and upload generated JSON files
             # to appengine server.
-            summarized_results = summarize_results(self._port,
-                self._expectations, result_summary, retry_summary, individual_test_timings, _only_unexpected_=False)
-            self._upload_json_files(unexpected_results, summarized_results, result_summary,
-                                    individual_test_timings)
+            summarized_results = summarize_results(self._port, self._expectations, result_summary, retry_summary, individual_test_timings, _only_unexpected_=False, interrupted=interrupted)
+            self._upload_json_files(unexpected_results, summarized_results, result_summary, individual_test_timings)
 
         # Write the summary to disk (results.html) and display it if requested.
         if not self._options.dry_run:
@@ -884,33 +885,38 @@
 
             self._update_summary_with_result(result_summary, result)
 
+    def _interrupt_if_at_failure_limits(self, result_summary):
+        # Note: The messages in this method are constructed to match old-run-webkit-tests
+        # so that existing buildbot grep rules work.
+        def interrupt_if_at_failure_limit(limit, failure_count, result_summary, message):
+            if limit and failure_count >= limit:
+                message += " %d tests run." % (result_summary.expected + result_summary.unexpected)
+                raise TestRunInterruptedException(message)
+
+        interrupt_if_at_failure_limit(
+            self._options.exit_after_n_failures,
+            result_summary.unexpected_failures,
+            result_summary,
+            "Exiting early after %d failures." % result_summary.unexpected_failures)
+        interrupt_if_at_failure_limit(
+            self._options.exit_after_n_crashes_or_timeouts,
+            result_summary.unexpected_crashes + result_summary.unexpected_timeouts,
+            result_summary,
+            # This differs from ORWT because it does not include WebProcess crashes.
+            "Exiting early after %d crashes and %d timeouts." % (result_summary.unexpected_crashes, result_summary.unexpected_timeouts))
+
     def _update_summary_with_result(self, result_summary, result):
         if result.type == test_expectations.SKIP:
             result_summary.add(result, expected=True)
         else:
-            expected = self._expectations.matches_an_expected_result(
-                result.filename, result.type, self._options.pixel_tests)
+            expected = self._expectations.matches_an_expected_result(result.filename, result.type, self._options.pixel_tests)
             result_summary.add(result, expected)
-            exp_str = self._expectations.get_expectations_string(
-                result.filename)
+            exp_str = self._expectations.get_expectations_string(result.filename)
             got_str = self._expectations.expectation_to_string(result.type)
             self._printer.print_test_result(result, expected, exp_str, got_str)
-        self._printer.print_progress(result_summary, self._retrying,
-                                     self._test_files_list)
+        self._printer.print_progress(result_summary, self._retrying, self._test_files_list)
+        self._interrupt_if_at_failure_limits(result_summary)
 
-        def interrupt_if_at_failure_limit(limit, count, message):
-            if limit and count >= limit:
-                raise TestRunInterruptedException(message % count)
-
-        interrupt_if_at_failure_limit(
-            self._options.exit_after_n_failures,
-            result_summary.unexpected_failures,
-            "Aborting run since %d failures were reached")
-        interrupt_if_at_failure_limit(
-            self._options.exit_after_n_crashes_or_timeouts,
-            result_summary.unexpected_crashes_or_timeouts,
-            "Aborting run since %d crashes or timeouts were reached")
-
     def _clobber_old_results(self):
         # Just clobber the actual test results directories since the other
         # files in the results directory are explicitly used for cross-run
@@ -1369,6 +1375,8 @@
     return tests
 
 
+# FIXME: These two free functions belong either on manager (since it's the only one
+# which uses them) or in a different file (if they need to be re-used).
 def path_key(filesystem, path):
     """Turns a path into a list with two sublists, the natural key of the
     dirname, and the natural key of the basename.

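The refactored check in `_interrupt_if_at_failure_limits` above reduces to the following standalone sketch (names and signature simplified; this is not the exact code in manager.py):

```python
class TestRunInterruptedException(Exception):
    """Raised when a test run should be stopped immediately."""
    def __init__(self, reason):
        Exception.__init__(self, reason)
        self.reason = reason

def interrupt_if_at_failure_limit(limit, failure_count, tests_run, message):
    # A limit of None (or 0) disables the check. The message is suffixed
    # with the test count so the whole line still begins with the
    # 'Exiting early' prefix that the buildbot greps for.
    if limit and failure_count >= limit:
        raise TestRunInterruptedException("%s %d tests run." % (message, tests_run))
```

The design point is that the exception carries the already-formatted message, so the caller can print exactly the ORWT-compatible line and exit.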
Modified: trunk/Tools/Scripts/webkitpy/layout_tests/layout_package/manager_unittest.py (90412 => 90413)


--- trunk/Tools/Scripts/webkitpy/layout_tests/layout_package/manager_unittest.py	2011-07-05 22:59:53 UTC (rev 90412)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/layout_package/manager_unittest.py	2011-07-05 23:16:18 UTC (rev 90413)
@@ -35,10 +35,12 @@
 from webkitpy.common.system import filesystem_mock
 from webkitpy.thirdparty.mock import Mock
 
-from webkitpy.layout_tests.layout_package import manager
+from webkitpy.layout_tests.layout_package.manager import Manager, natural_sort_key, path_key, TestRunInterruptedException
+from webkitpy.layout_tests.layout_package.result_summary import ResultSummary
+from webkitpy.tool.mocktool import MockOptions
 
 
-class ManagerWrapper(manager.Manager):
+class ManagerWrapper(Manager):
     def _get_test_input_for_file(self, test_file):
         return test_file
 
@@ -79,10 +81,36 @@
         self.assertEqual("tests_to_http_lock", multi_thread_results[0][0])
         self.assertEqual(expected_tests_to_http_lock, set(multi_thread_results[0][1]))
 
+    def test_interrupt_if_at_failure_limits(self):
+        port = Mock()
+        port._filesystem = filesystem_mock.MockFileSystem()
+        manager = Manager(port=port, options=MockOptions(), printer=Mock())
 
+        manager._options = MockOptions(exit_after_n_failures=None, exit_after_n_crashes_or_timeouts=None)
+        result_summary = ResultSummary(expectations=Mock(), test_files=[])
+        result_summary.unexpected_failures = 100
+        result_summary.unexpected_crashes = 50
+        result_summary.unexpected_timeouts = 50
+        # No exception when the exit_after* options are None.
+        manager._interrupt_if_at_failure_limits(result_summary)
+
+        # No exception when we haven't hit the limit yet.
+        manager._options.exit_after_n_failures = 101
+        manager._options.exit_after_n_crashes_or_timeouts = 101
+        manager._interrupt_if_at_failure_limits(result_summary)
+
+        # Interrupt if we've exceeded either limit:
+        manager._options.exit_after_n_crashes_or_timeouts = 10
+        self.assertRaises(TestRunInterruptedException, manager._interrupt_if_at_failure_limits, result_summary)
+
+        manager._options.exit_after_n_crashes_or_timeouts = None
+        manager._options.exit_after_n_failures = 10
+        exception = self.assertRaises(TestRunInterruptedException, manager._interrupt_if_at_failure_limits, result_summary)
+
+
 class NaturalCompareTest(unittest.TestCase):
     def assert_cmp(self, x, y, result):
-        self.assertEquals(cmp(manager.natural_sort_key(x), manager.natural_sort_key(y)), result)
+        self.assertEquals(cmp(natural_sort_key(x), natural_sort_key(y)), result)
 
     def test_natural_compare(self):
         self.assert_cmp('a', 'a', 0)
@@ -107,7 +135,7 @@
         self.filesystem = filesystem_mock.MockFileSystem()
 
     def path_key(self, k):
-        return manager.path_key(self.filesystem, k)
+        return path_key(self.filesystem, k)
 
     def assert_cmp(self, x, y, result):
         self.assertEquals(cmp(self.path_key(x), self.path_key(y)), result)

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/layout_package/printing_unittest.py (90412 => 90413)


--- trunk/Tools/Scripts/webkitpy/layout_tests/layout_package/printing_unittest.py	2011-07-05 22:59:53 UTC (rev 90412)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/layout_package/printing_unittest.py	2011-07-05 23:16:18 UTC (rev 90413)
@@ -445,36 +445,27 @@
             """
             paths, rs, exp = self.get_result_summary(tests, expectations)
             if expected:
-                rs.add(self.get_result('passes/text.html', test_expectations.PASS),
-                       expected)
-                rs.add(self.get_result('failures/expected/timeout.html',
-                       test_expectations.TIMEOUT), expected)
-                rs.add(self.get_result('failures/expected/crash.html', test_expectations.CRASH),
-                   expected)
+                rs.add(self.get_result('passes/text.html', test_expectations.PASS), expected)
+                rs.add(self.get_result('failures/expected/timeout.html', test_expectations.TIMEOUT), expected)
+                rs.add(self.get_result('failures/expected/crash.html', test_expectations.CRASH), expected)
             elif passing:
                 rs.add(self.get_result('passes/text.html'), expected)
                 rs.add(self.get_result('failures/expected/timeout.html'), expected)
                 rs.add(self.get_result('failures/expected/crash.html'), expected)
             else:
-                rs.add(self.get_result('passes/text.html', test_expectations.TIMEOUT),
-                       expected)
-                rs.add(self.get_result('failures/expected/timeout.html',
-                       test_expectations.CRASH), expected)
-                rs.add(self.get_result('failures/expected/crash.html',
-                                  test_expectations.TIMEOUT),
-                   expected)
+                rs.add(self.get_result('passes/text.html', test_expectations.TIMEOUT), expected)
+                rs.add(self.get_result('failures/expected/timeout.html', test_expectations.CRASH), expected)
+                rs.add(self.get_result('failures/expected/crash.html', test_expectations.TIMEOUT), expected)
             retry = rs
             if flaky:
-                paths, retry, exp = self.get_result_summary(tests,
-                                                expectations)
+                paths, retry, exp = self.get_result_summary(tests, expectations)
                 retry.add(self.get_result('passes/text.html'), True)
                 retry.add(self.get_result('failures/expected/timeout.html'), True)
                 retry.add(self.get_result('failures/expected/crash.html'), True)
-            unexpected_results = manager.summarize_results(self._port, exp, rs, retry, test_timings={}, _only_unexpected_=True)
+            unexpected_results = manager.summarize_results(self._port, exp, rs, retry, test_timings={}, _only_unexpected_=True, interrupted=False)
             return unexpected_results
 
-        tests = ['passes/text.html', 'failures/expected/timeout.html',
-                 'failures/expected/crash.html']
+        tests = ['passes/text.html', 'failures/expected/timeout.html', 'failures/expected/crash.html']
         expectations = ''
 
         printer, err, out = self.get_printer(['--print', 'nothing'])
@@ -483,8 +474,7 @@
         self.assertTrue(err.empty())
         self.assertTrue(out.empty())
 
-        printer, err, out = self.get_printer(['--print',
-                                              'unexpected-results'])
+        printer, err, out = self.get_printer(['--print', 'unexpected-results'])
 
         # test everything running as expected
         ur = get_unexpected_results(expected=True, passing=False, flaky=False)

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/layout_package/result_summary.py (90412 => 90413)


--- trunk/Tools/Scripts/webkitpy/layout_tests/layout_package/result_summary.py	2011-07-05 22:59:53 UTC (rev 90412)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/layout_package/result_summary.py	2011-07-05 23:16:18 UTC (rev 90413)
@@ -52,7 +52,8 @@
         self.expected = 0
         self.unexpected = 0
         self.unexpected_failures = 0
-        self.unexpected_crashes_or_timeouts = 0
+        self.unexpected_crashes = 0
+        self.unexpected_timeouts = 0
         self.tests_by_expectation = {}
         self.tests_by_timeline = {}
         self.results = {}
@@ -62,8 +63,7 @@
         for expectation in TestExpectations.EXPECTATIONS.values():
             self.tests_by_expectation[expectation] = set()
         for timeline in TestExpectations.TIMELINES.values():
-            self.tests_by_timeline[timeline] = (
-                expectations.get_tests_with_timeline(timeline))
+            self.tests_by_timeline[timeline] = expectations.get_tests_with_timeline(timeline)
 
     def add(self, result, expected):
         """Add a TestResult into the appropriate bin.
@@ -85,5 +85,7 @@
             self.unexpected += 1
             if len(result.failures):
                 self.unexpected_failures += 1
-            if result.type == test_expectations.CRASH or result.type == test_expectations.TIMEOUT:
-                self.unexpected_crashes_or_timeouts += 1
+            if result.type == test_expectations.CRASH:
+                self.unexpected_crashes += 1
+            elif result.type == test_expectations.TIMEOUT:
+                self.unexpected_timeouts += 1
_______________________________________________
webkit-changes mailing list
[email protected]
http://lists.webkit.org/mailman/listinfo.cgi/webkit-changes
