Title: [277781] trunk/Tools
Revision: 277781
Author: [email protected]
Date: 2021-05-20 06:48:42 -0700 (Thu, 20 May 2021)

Log Message

Store whether a test is slow on TestInput
https://bugs.webkit.org/show_bug.cgi?id=224563

Reviewed by Jonathan Bedard.

Notably, this also makes a TestResult store a TestInput rather than a
test_name string, so we no longer need to punch through multiple layers to
find out whether a test is slow. Note that replacing the test_name with a
Test or TestInput is part of removing the 1:1 relationship between files
and tests.

With this done, we don't have to pass around a test_is_slow_fn, as we can directly
look at the result to determine whether or not it is slow.
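
In code terms, the call-site change is roughly the following (a condensed
sketch of the layout_test_runner.py diff below; names as in that file):

    # Before: slowness was threaded through every layer as a callback/flag.
    result = test_results.TestResult(test_input.test_name, failures=failures)
    run_results.add(result, expected=False,
                    test_is_slow=self._test_is_slow(test_input.test_name))

    # After: the TestResult carries its TestInput, which already knows is_slow.
    result = test_results.TestResult(test_input, failures=failures)
    run_results.add(result, expected=False)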

* Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py:
(LayoutTestRunner.__init__): Remove test_is_slow_fn argument
(LayoutTestRunner._mark_interrupted_tests_as_skipped): Remove test_is_slow argument
(LayoutTestRunner._update_summary_with_result): Remove test_is_slow argument
(Worker._run_test_in_another_thread): Remove test_is_slow argument
* Scripts/webkitpy/layout_tests/controllers/layout_test_runner_unittest.py:
(LayoutTestRunnerTests._runner): Remove test_is_slow_fn argument
(LayoutTestRunnerTests.test_update_summary_with_result): TestResult arg rename
* Scripts/webkitpy/layout_tests/controllers/manager.py:
(Manager): Improve docstring
(Manager.__init__): Tidy up reading tests-options.json
(Manager._test_input_for_file): Set is_slow
(Manager.run): Remove test_is_slow_fn argument
(Manager._look_for_new_crash_logs): Remove test_is_slow_fn/test_is_slow argument
* Scripts/webkitpy/layout_tests/controllers/single_test_runner.py:
(SingleTestRunner.__init__): Store TestInput object
(SingleTestRunner._test_name): Replacement getter
(SingleTestRunner._should_run_pixel_test): Replacement getter
(SingleTestRunner._should_dump_jsconsolelog_in_stderr): Replacement getter
(SingleTestRunner._reference_files): Replacement getter
(SingleTestRunner._timeout): Replacement getter
(SingleTestRunner._compare_output): Pass TestInput to TestResult
(SingleTestRunner._run_reftest): Pass TestInput to TestResult
(SingleTestRunner._compare_output_with_reference): Pass TestInput to TestResult
* Scripts/webkitpy/layout_tests/models/test_input.py:
(TestInput): Add is_slow boolean
* Scripts/webkitpy/layout_tests/models/test_results.py:
(TestResult.__init__): Rename test_name -> test_input, construct TestInput if we must
(TestResult.test_name): Replacement getter
* Scripts/webkitpy/layout_tests/models/test_results_unittest.py:
(TestResultsTest.test_pickle_roundtrip): TestResult arg rename
* Scripts/webkitpy/layout_tests/models/test_run_results.py:
(TestRunResults.add): Remove test_is_slow argument, look at TestResult
* Scripts/webkitpy/layout_tests/models/test_run_results_unittest.py:
(summarized_results): Remove test_is_slow argument
* Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py:
(RunTest.test_tests_options): Add a test that tests-options.json works
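
For reference, tests-options.json maps test names to lists of option strings;
the new integration test marks a test as slow with a file like this (copied
from the test below):

    {
        "failures/unexpected/timeout.html": ["slow"]
    }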

Modified Paths

trunk/Tools/ChangeLog
trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py
trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner_unittest.py
trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py
trunk/Tools/Scripts/webkitpy/layout_tests/controllers/single_test_runner.py
trunk/Tools/Scripts/webkitpy/layout_tests/models/test_input.py
trunk/Tools/Scripts/webkitpy/layout_tests/models/test_results.py
trunk/Tools/Scripts/webkitpy/layout_tests/models/test_results_unittest.py
trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results.py
trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results_unittest.py
trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py

Diff

Modified: trunk/Tools/ChangeLog (277780 => 277781)


--- trunk/Tools/ChangeLog	2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/ChangeLog	2021-05-20 13:48:42 UTC (rev 277781)
@@ -1,3 +1,57 @@
+2021-05-20  Sam Sneddon  <[email protected]>
+
+        Store whether a test is slow on TestInput
+        https://bugs.webkit.org/show_bug.cgi?id=224563
+
+        Reviewed by Jonathan Bedard.
+
+        Notably, this also makes a TestResult store a TestInput rather than a
+        test_name string, so we no longer need to punch through multiple layers
+        to find out whether a test is slow. Note that replacing the test_name
+        with a Test or TestInput is part of removing the 1:1 relationship
+        between files and tests.
+
+        With this done, we don't have to pass around a test_is_slow_fn, as we can directly
+        look at the result to determine whether or not it is slow.
+
+        * Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py:
+        (LayoutTestRunner.__init__): Remove test_is_slow_fn argument
+        (LayoutTestRunner._mark_interrupted_tests_as_skipped): Remove test_is_slow argument
+        (LayoutTestRunner._update_summary_with_result): Remove test_is_slow argument
+        (Worker._run_test_in_another_thread): Remove test_is_slow argument
+        * Scripts/webkitpy/layout_tests/controllers/layout_test_runner_unittest.py:
+        (LayoutTestRunnerTests._runner): Remove test_is_slow_fn argument
+        (LayoutTestRunnerTests.test_update_summary_with_result): TestResult arg rename
+        * Scripts/webkitpy/layout_tests/controllers/manager.py:
+        (Manager): Improve docstring
+        (Manager.__init__): Tidy up reading tests-options.json
+        (Manager._test_input_for_file): Set is_slow
+        (Manager.run): Remove test_is_slow_fn argument
+        (Manager._look_for_new_crash_logs): Remove test_is_slow_fn/test_is_slow argument
+        * Scripts/webkitpy/layout_tests/controllers/single_test_runner.py:
+        (SingleTestRunner.__init__): Store TestInput object
+        (SingleTestRunner._test_name): Replacement getter
+        (SingleTestRunner._should_run_pixel_test): Replacement getter
+        (SingleTestRunner._should_dump_jsconsolelog_in_stderr): Replacement getter
+        (SingleTestRunner._reference_files): Replacement getter
+        (SingleTestRunner._timeout): Replacement getter
+        (SingleTestRunner._compare_output): Pass TestInput to TestResult
+        (SingleTestRunner._run_reftest): Pass TestInput to TestResult
+        (SingleTestRunner._compare_output_with_reference): Pass TestInput to TestResult
+        * Scripts/webkitpy/layout_tests/models/test_input.py:
+        (TestInput): Add is_slow boolean
+        * Scripts/webkitpy/layout_tests/models/test_results.py:
+        (TestResult.__init__): Rename test_name -> test_input, construct TestInput if we must
+        (TestResult.test_name): Replacement getter
+        * Scripts/webkitpy/layout_tests/models/test_results_unittest.py:
+        (TestResultsTest.test_pickle_roundtrip): TestResult arg rename
+        * Scripts/webkitpy/layout_tests/models/test_run_results.py:
+        (TestRunResults.add): Remove test_is_slow argument, look at TestResult
+        * Scripts/webkitpy/layout_tests/models/test_run_results_unittest.py:
+        (summarized_results): Remove test_is_slow argument
+        * Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py:
+        (RunTest.test_tests_options): Add a test that tests-options.json works
+
 2021-05-19  Devin Rousso  <[email protected]>
 
         Add a way to create `"wheel"` events from gesture/touch events

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py (277780 => 277781)


--- trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py	2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py	2021-05-20 13:48:42 UTC (rev 277781)
@@ -63,12 +63,11 @@
 
 
 class LayoutTestRunner(object):
-    def __init__(self, options, port, printer, results_directory, test_is_slow_fn, needs_http=False, needs_websockets=False, needs_web_platform_test_server=False):
+    def __init__(self, options, port, printer, results_directory, needs_http=False, needs_websockets=False, needs_web_platform_test_server=False):
         self._options = options
         self._port = port
         self._printer = printer
         self._results_directory = results_directory
-        self._test_is_slow = test_is_slow_fn
         self._needs_http = needs_http
         self._needs_websockets = needs_websockets
         self._needs_web_platform_test_server = needs_web_platform_test_server
@@ -146,11 +145,11 @@
     def _mark_interrupted_tests_as_skipped(self, run_results):
         for test_input in self._test_inputs:
             if test_input.test_name not in run_results.results_by_name:
-                result = test_results.TestResult(test_input.test_name, [test_failures.FailureEarlyExit()])
+                result = test_results.TestResult(test_input, [test_failures.FailureEarlyExit()])
                 # FIXME: We probably need to loop here if there are multiple iterations.
                 # FIXME: Also, these results are really neither expected nor unexpected. We probably
                 # need a third type of result.
-                run_results.add(result, expected=False, test_is_slow=self._test_is_slow(test_input.test_name))
+                run_results.add(result, expected=False)
 
     def _interrupt_if_at_failure_limits(self, run_results):
         # Note: The messages in this method are constructed to match old-run-webkit-tests
@@ -183,7 +182,7 @@
             exp_str = self._expectations.model().expectations_to_string(expectations)
             got_str = self._expectations.model().expectation_to_string(result.type)
 
-        run_results.add(result, expected, self._test_is_slow(result.test_name))
+        run_results.add(result, expected)
 
         self._printer.print_finished_test(result, expected, exp_str, got_str)
 
@@ -436,7 +435,7 @@
         driver.stop()
 
         if not result:
-            result = test_results.TestResult(test_input.test_name, failures=failures, test_run_time=0)
+            result = test_results.TestResult(test_input, failures=failures, test_run_time=0)
         return result
 
     def _run_test_in_this_thread(self, test_input, stop_when_done):

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner_unittest.py (277780 => 277781)


--- trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner_unittest.py	2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner_unittest.py	2021-05-20 13:48:42 UTC (rev 277781)
@@ -79,7 +79,7 @@
 
         host = MockHost()
         port = port or host.port_factory.get(options.platform, options=options)
-        return LayoutTestRunner(options, port, FakePrinter(), port.results_directory(), lambda test_name: False)
+        return LayoutTestRunner(options, port, FakePrinter(), port.results_directory())
 
     def _run_tests(self, runner, tests):
         test_inputs = [TestInput(Test(test), 6000) for test in tests]
@@ -131,19 +131,19 @@
         runner._expectations = expectations
 
         run_results = TestRunResults(expectations, 1)
-        result = TestResult(test_name=test, failures=[test_failures.FailureReftestMismatchDidNotOccur()], reftest_type=['!='])
+        result = TestResult(test, failures=[test_failures.FailureReftestMismatchDidNotOccur()], reftest_type=['!='])
         runner._update_summary_with_result(run_results, result)
         self.assertEqual(1, run_results.expected)
         self.assertEqual(0, run_results.unexpected)
 
         run_results = TestRunResults(expectations, 1)
-        result = TestResult(test_name=test, failures=[], reftest_type=['=='])
+        result = TestResult(test, failures=[], reftest_type=['=='])
         runner._update_summary_with_result(run_results, result)
         self.assertEqual(0, run_results.expected)
         self.assertEqual(1, run_results.unexpected)
 
         run_results = TestRunResults(expectations, 1)
-        result = TestResult(test_name=leak_test, failures=[])
+        result = TestResult(leak_test, failures=[])
         runner._update_summary_with_result(run_results, result)
         self.assertEqual(1, run_results.expected)
         self.assertEqual(0, run_results.unexpected)

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py (277780 => 277781)


--- trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py	2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py	2021-05-20 13:48:42 UTC (rev 277781)
@@ -65,9 +65,14 @@
 
 
 class Manager(object):
-    """A class for managing running a series of tests on a series of layout
-    test files."""
+    """Test execution manager
 
+    This class has the main entry points for run-webkit-tests; the ..run_webkit_tests module almost
+    exclusively just handles CLI options. It orchestrates collecting the tests (through
+    LayoutTestFinder), running them (LayoutTestRunner), and then displaying the results
+    (TestResultWriter/Printer).
+    """
+
     def __init__(self, port, options, printer):
         """Initialize test runner data structures.
 
@@ -77,17 +82,23 @@
           printer: a Printer object to record updates to.
         """
         self._port = port
-        self._filesystem = port.host.filesystem
+        fs = port.host.filesystem
+        self._filesystem = fs
         self._options = options
         self._printer = printer
         self._expectations = OrderedDict()
-        self.LAYOUT_TESTS_DIRECTORY = 'LayoutTests'
         self._results_directory = self._port.results_directory()
         self._finder = LayoutTestFinder(self._port, self._options)
         self._runner = None
 
-        test_options_json_path = self._port.path_from_webkit_base(self.LAYOUT_TESTS_DIRECTORY, "tests-options.json")
-        self._tests_options = json.loads(self._filesystem.read_text_file(test_options_json_path)) if self._filesystem.exists(test_options_json_path) else {}
+        self._tests_options = {}
+        test_options_json_path = fs.join(self._port.layout_tests_dir(), "tests-options.json")
+        if fs.exists(test_options_json_path):
+            with fs.open_binary_file_for_reading(test_options_json_path) as fd:
+                try:
+                    self._tests_options = json.load(fd)
+                except (ValueError, IOError):
+                    pass
 
     def _collect_tests(self,
                        paths,  # type: List[str]
@@ -214,12 +225,13 @@
         return tests_to_run
 
     def _test_input_for_file(self, test_file, device_type):
+        test_is_slow = self._test_is_slow(test_file.test_path, device_type=device_type)
         reference_files = self._port.reference_files(
             test_file.test_path, device_type=device_type
         )
         timeout = (
             self._options.slow_time_out_ms
-            if self._test_is_slow(test_file.test_path, device_type=device_type)
+            if test_is_slow
             else self._options.time_out_ms
         )
         should_dump_jsconsolelog_in_stderr = (
@@ -243,6 +255,7 @@
         return TestInput(
             test_file,
             timeout=timeout,
+            is_slow=test_is_slow,
             needs_servers=test_file.needs_any_server,
             should_dump_jsconsolelog_in_stderr=should_dump_jsconsolelog_in_stderr,
             reference_files=reference_files,
@@ -353,7 +366,7 @@
         needs_http = any(test.needs_http_server for tests in itervalues(tests_to_run_by_device) for test in tests)
         needs_web_platform_test_server = any(test.needs_wpt_server for tests in itervalues(tests_to_run_by_device) for test in tests)
         needs_websockets = any(test.needs_websocket_server for tests in itervalues(tests_to_run_by_device) for test in tests)
-        self._runner = LayoutTestRunner(self._options, self._port, self._printer, self._results_directory, self._test_is_slow,
+        self._runner = LayoutTestRunner(self._options, self._port, self._printer, self._results_directory,
                                         needs_http=needs_http, needs_web_platform_test_server=needs_web_platform_test_server, needs_websockets=needs_websockets)
 
         initial_results = None
@@ -365,7 +378,6 @@
         uploads = []
 
         for device_type in device_type_list:
-            self._runner._test_is_slow = lambda test_file: self._test_is_slow(test_file, device_type=device_type)
             self._options.child_processes = min(self._port.max_child_processes(device_type=device_type), int(child_processes_option_value or self._port.default_child_processes(device_type=device_type)))
 
             _log.info('')
@@ -399,7 +411,7 @@
             for skipped_test in set(aggregate_tests_to_skip):
                 skipped_result = test_results.TestResult(skipped_test.test_path)
                 skipped_result.type = test_expectations.SKIP
-                skipped_results.add(skipped_result, expected=True, test_is_slow=self._test_is_slow(skipped_test.test_path, device_type=device_type))
+                skipped_results.add(skipped_result, expected=True)
             temp_initial_results = temp_initial_results.merge(skipped_results)
 
             if self._options.report_urls:
@@ -601,7 +613,7 @@
                     result = test_results.TestResult(test)
                     result.type = test_expectations.CRASH
                     result.is_other_crash = True
-                    run_results.add(result, expected=False, test_is_slow=False)
+                    run_results.add(result, expected=False)
                     _log.debug("Adding results for other crash: " + str(test))
 
     def _clobber_old_results(self):
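
Restated outside the class, the new tests-options.json handling follows this
defensive pattern (fs here stands for the port's filesystem object, as in the
diff above):

    import json

    def read_tests_options(fs, path):
        # Return per-test options, or {} if the file is absent or malformed.
        options = {}
        if fs.exists(path):
            with fs.open_binary_file_for_reading(path) as fd:
                try:
                    options = json.load(fd)
                except (ValueError, IOError):
                    pass
        return options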

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/controllers/single_test_runner.py (277780 => 277781)


--- trunk/Tools/Scripts/webkitpy/layout_tests/controllers/single_test_runner.py	2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/controllers/single_test_runner.py	2021-05-20 13:48:42 UTC (rev 277781)
@@ -56,12 +56,8 @@
         self._results_directory = results_directory
         self._driver = driver
         self._worker_name = worker_name
-        self._test_name = test_input.test_name
-        self._should_run_pixel_test = test_input.should_run_pixel_test
-        self._should_dump_jsconsolelog_in_stderr = test_input.should_dump_jsconsolelog_in_stderr
-        self._reference_files = test_input.reference_files
+        self._test_input = test_input
         self._stop_when_done = stop_when_done
-        self._timeout = test_input.timeout
 
         if self._reference_files:
             # Detect and report a test which has a wrong combination of expectation files.
@@ -73,6 +69,26 @@
                 if self._filesystem.exists(expected_filename):
                     _log.error('%s is a reftest, but has an unused expectation file. Please remove %s.', self._test_name, expected_filename)
 
+    @property
+    def _test_name(self):
+        return self._test_input.test_name
+
+    @property
+    def _should_run_pixel_test(self):
+        return self._test_input.should_run_pixel_test
+
+    @property
+    def _should_dump_jsconsolelog_in_stderr(self):
+        return self._test_input.should_dump_jsconsolelog_in_stderr
+
+    @property
+    def _reference_files(self):
+        return self._test_input.reference_files
+
+    @property
+    def _timeout(self):
+        return self._test_input.timeout
+
     def _expected_driver_output(self):
         return DriverOutput(self._port.expected_text(self._test_name, device_type=self._driver.host.device_type),
                                  self._port.expected_image(self._test_name, device_type=self._driver.host.device_type),
@@ -96,7 +112,7 @@
         if self._reference_files:
             if self._port.get_option('no_ref_tests') or self._options.reset_results:
                 reftest_type = set([reference_file[0] for reference_file in self._reference_files])
-                result = TestResult(self._test_name, reftest_type=reftest_type)
+                result = TestResult(self._test_input, reftest_type=reftest_type)
                 result.type = test_expectations.SKIP
                 return result
             return self._run_reftest()
@@ -131,7 +147,7 @@
         # FIXME: It the test crashed or timed out, it might be better to avoid
         # to write new baselines.
         self._overwrite_baselines(driver_output)
-        return TestResult(self._test_name, failures, driver_output.test_time, driver_output.has_stderr(), pid=driver_output.pid)
+        return TestResult(self._test_input, failures, driver_output.test_time, driver_output.has_stderr(), pid=driver_output.pid)
 
     _render_tree_dump_pattern = re.compile(r"^layer at \(\d+,\d+\) size \d+x\d+\n")
 
@@ -223,13 +239,13 @@
         if driver_output.crash:
             # Don't continue any more if we already have a crash.
             # In case of timeouts, we continue since we still want to see the text and image output.
-            return TestResult(self._test_name, failures, driver_output.test_time, driver_output.has_stderr(), pid=driver_output.pid)
+            return TestResult(self._test_input, failures, driver_output.test_time, driver_output.has_stderr(), pid=driver_output.pid)
 
         failures.extend(self._compare_text(expected_driver_output.text, driver_output.text))
         failures.extend(self._compare_audio(expected_driver_output.audio, driver_output.audio))
         if self._should_run_pixel_test:
             failures.extend(self._compare_image(expected_driver_output, driver_output))
-        return TestResult(self._test_name, failures, driver_output.test_time, driver_output.has_stderr(), pid=driver_output.pid)
+        return TestResult(self._test_input, failures, driver_output.test_time, driver_output.has_stderr(), pid=driver_output.pid)
 
     def _compare_text(self, expected_text, actual_text):
         failures = []
@@ -317,7 +333,7 @@
         assert(reference_output)
         test_result_writer.write_test_result(self._filesystem, self._port, self._results_directory, self._test_name, test_output, reference_output, test_result.failures)
         reftest_type = set([reference_file[0] for reference_file in self._reference_files])
-        return TestResult(self._test_name, test_result.failures, total_test_time + test_result.test_run_time, test_result.has_stderr, reftest_type=reftest_type, pid=test_result.pid, references=reference_test_names)
+        return TestResult(self._test_input, test_result.failures, total_test_time + test_result.test_run_time, test_result.has_stderr, reftest_type=reftest_type, pid=test_result.pid, references=reference_test_names)
 
     def _compare_output_with_reference(self, reference_driver_output, actual_driver_output, reference_filename, mismatch):
         total_test_time = reference_driver_output.test_time + actual_driver_output.test_time
@@ -326,10 +342,10 @@
         failures.extend(self._handle_error(actual_driver_output))
         if failures:
             # Don't continue any more if we already have crash or timeout.
-            return TestResult(self._test_name, failures, total_test_time, has_stderr)
+            return TestResult(self._test_input, failures, total_test_time, has_stderr)
         failures.extend(self._handle_error(reference_driver_output, reference_filename=reference_filename))
         if failures:
-            return TestResult(self._test_name, failures, total_test_time, has_stderr, pid=actual_driver_output.pid)
+            return TestResult(self._test_input, failures, total_test_time, has_stderr, pid=actual_driver_output.pid)
 
         if not reference_driver_output.image_hash and not actual_driver_output.image_hash:
             failures.append(test_failures.FailureReftestNoImagesGenerated(reference_filename))
@@ -348,4 +364,4 @@
             elif diff_result[0]:
                 failures.append(test_failures.FailureReftestMismatch(reference_filename))
 
-        return TestResult(self._test_name, failures, total_test_time, has_stderr, pid=actual_driver_output.pid)
+        return TestResult(self._test_input, failures, total_test_time, has_stderr, pid=actual_driver_output.pid)

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/models/test_input.py (277780 => 277781)


--- trunk/Tools/Scripts/webkitpy/layout_tests/models/test_input.py	2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/models/test_input.py	2021-05-20 13:48:42 UTC (rev 277781)
@@ -43,6 +43,7 @@
     """
     test = attr.ib(type=Test)
     timeout = attr.ib(default=None)  # type: Union[None, int, str]
+    is_slow = attr.ib(default=None)  # type: Optional[bool]
     needs_servers = attr.ib(default=None)  # type: Optional[bool]
     should_dump_jsconsolelog_in_stderr = attr.ib(default=None)  # type: Optional[bool]
     reference_files = attr.ib(default=None)  # type: Optional[List[Tuple[str, str]]]
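
A minimal usage sketch of the new field (the timeout value is illustrative;
Manager._test_input_for_file derives it from slow_time_out_ms for slow tests):

    from webkitpy.layout_tests.models.test import Test
    from webkitpy.layout_tests.models.test_input import TestInput

    # is_slow defaults to None ("unknown"); Manager now sets it explicitly
    # when building each input.
    test_input = TestInput(Test('failures/expected/timeout.html'),
                           timeout=30000, is_slow=True)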

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/models/test_results.py (277780 => 277781)


--- trunk/Tools/Scripts/webkitpy/layout_tests/models/test_results.py	2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/models/test_results.py	2021-05-20 13:48:42 UTC (rev 277781)
@@ -27,13 +27,21 @@
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
 from webkitpy.layout_tests.models import test_failures
+from webkitpy.layout_tests.models.test import Test
+from webkitpy.layout_tests.models.test_input import TestInput
 
 
 class TestResult(object):
     """Data object containing the results of a single test."""
 
-    def __init__(self, test_name, failures=None, test_run_time=None, has_stderr=False, reftest_type=None, pid=None, references=None):
-        self.test_name = test_name
+    def __init__(self, test_input, failures=None, test_run_time=None, has_stderr=False, reftest_type=None, pid=None, references=None):
+        # this takes a TestInput, and not a Test, as running the same Test with
+        # different input options can result in differing results
+        if not isinstance(test_input, TestInput):
+            # FIXME: figure out something better
+            # Changing all callers will be hard but probably worth it?
+            test_input = TestInput(Test(test_input))
+        self.test_input = test_input
         self.failures = failures or []
         self.test_run_time = test_run_time or 0  # The time taken to execute the test itself.
         self.has_stderr = has_stderr
@@ -51,6 +59,10 @@
         self.test_number = None
         self.is_other_crash = False
 
+    @property
+    def test_name(self):
+        return self.test_input.test_name
+
     def __eq__(self, other):
         return (self.test_name == other.test_name and
                 self.failures == other.failures and
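
In practice both calling conventions now work; a short sketch of the
compatibility shim above:

    from webkitpy.layout_tests.models.test import Test
    from webkitpy.layout_tests.models.test_input import TestInput
    from webkitpy.layout_tests.models.test_results import TestResult

    # Preferred: pass a TestInput, which carries is_slow with it.
    result = TestResult(TestInput(Test('passes/text.html'), is_slow=True))

    # Legacy: a bare test name still works; the shim wraps it in
    # TestInput(Test(...)), leaving is_slow as None ("unknown").
    legacy = TestResult('passes/text.html')

    assert result.test_name == legacy.test_name == 'passes/text.html'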

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/models/test_results_unittest.py (277780 => 277781)


--- trunk/Tools/Scripts/webkitpy/layout_tests/models/test_results_unittest.py	2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/models/test_results_unittest.py	2021-05-20 13:48:42 UTC (rev 277781)
@@ -44,7 +44,7 @@
         self.assertEqual(result.test_run_time, 0)
 
     def test_pickle_roundtrip(self):
-        result = TestResult(test_name='foo', failures=[], test_run_time=1.1)
+        result = TestResult('foo', failures=[], test_run_time=1.1)
         s = pickle.dumps(result)  # multiprocessing uses the default protocol version
         new_result = pickle.loads(s)
         self.assertIsInstance(new_result, TestResult)

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results.py (277780 => 277781)


--- trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results.py	2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results.py	2021-05-20 13:48:42 UTC (rev 277781)
@@ -67,7 +67,7 @@
         self.interrupted = False
         self.keyboard_interrupted = False
 
-    def add(self, test_result, expected, test_is_slow):
+    def add(self, test_result, expected):
         self.tests_by_expectation[test_result.type].add(test_result.test_name)
         self.results_by_name[test_result.test_name] = test_result
         if test_result.is_other_crash:
@@ -91,7 +91,7 @@
                 self.unexpected_crashes += 1
             elif test_result.type == test_expectations.TIMEOUT:
                 self.unexpected_timeouts += 1
-        if test_is_slow:
+        if test_result.test_input.is_slow:
             self.slow_tests.add(test_result.test_name)
 
     def change_result_to_failure(self, existing_result, new_result, existing_expected, new_expected):
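
A short sketch of the new bookkeeping (imports as in the earlier sketch;
run_results stands in for a TestRunResults instance, constructed as in the
unit tests elsewhere in this change):

    result = TestResult(TestInput(Test('failures/expected/timeout.html'),
                                  is_slow=True))
    run_results.add(result, expected=True)
    assert 'failures/expected/timeout.html' in run_results.slow_tests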

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results_unittest.py (277780 => 277781)


--- trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results_unittest.py	2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results_unittest.py	2021-05-20 13:48:42 UTC (rev 277781)
@@ -60,58 +60,56 @@
 
 
 def summarized_results(port, expected, passing, flaky, include_passes=False):
-    test_is_slow = False
-
     initial_results = run_results(port)
     if expected:
-        initial_results.add(get_result('passes/text.html', test_expectations.PASS), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/audio.html', test_expectations.AUDIO), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/timeout.html', test_expectations.TIMEOUT), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/crash.html', test_expectations.CRASH), expected, test_is_slow)
+        initial_results.add(get_result('passes/text.html', test_expectations.PASS), expected)
+        initial_results.add(get_result('failures/expected/audio.html', test_expectations.AUDIO), expected)
+        initial_results.add(get_result('failures/expected/timeout.html', test_expectations.TIMEOUT), expected)
+        initial_results.add(get_result('failures/expected/crash.html', test_expectations.CRASH), expected)
 
         if port._options.pixel_tests:
-            initial_results.add(get_result('failures/expected/pixel-fail.html', test_expectations.IMAGE), expected, test_is_slow)
+            initial_results.add(get_result('failures/expected/pixel-fail.html', test_expectations.IMAGE), expected)
         else:
-            initial_results.add(get_result('failures/expected/pixel-fail.html', test_expectations.PASS), expected, test_is_slow)
+            initial_results.add(get_result('failures/expected/pixel-fail.html', test_expectations.PASS), expected)
 
         if port._options.world_leaks:
-            initial_results.add(get_result('failures/expected/leak.html', test_expectations.LEAK), expected, test_is_slow)
+            initial_results.add(get_result('failures/expected/leak.html', test_expectations.LEAK), expected)
         else:
-            initial_results.add(get_result('failures/expected/leak.html', test_expectations.PASS), expected, test_is_slow)
+            initial_results.add(get_result('failures/expected/leak.html', test_expectations.PASS), expected)
 
     elif passing:
-        initial_results.add(get_result('passes/text.html'), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/audio.html'), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/timeout.html'), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/crash.html'), expected, test_is_slow)
+        initial_results.add(get_result('passes/text.html'), expected)
+        initial_results.add(get_result('failures/expected/audio.html'), expected)
+        initial_results.add(get_result('failures/expected/timeout.html'), expected)
+        initial_results.add(get_result('failures/expected/crash.html'), expected)
 
         if port._options.pixel_tests:
-            initial_results.add(get_result('failures/expected/pixel-fail.html'), expected, test_is_slow)
+            initial_results.add(get_result('failures/expected/pixel-fail.html'), expected)
         else:
-            initial_results.add(get_result('failures/expected/pixel-fail.html', test_expectations.IMAGE), expected, test_is_slow)
+            initial_results.add(get_result('failures/expected/pixel-fail.html', test_expectations.IMAGE), expected)
 
         if port._options.world_leaks:
-            initial_results.add(get_result('failures/expected/leak.html'), expected, test_is_slow)
+            initial_results.add(get_result('failures/expected/leak.html'), expected)
         else:
-            initial_results.add(get_result('failures/expected/leak.html', test_expectations.PASS), expected, test_is_slow)
+            initial_results.add(get_result('failures/expected/leak.html', test_expectations.PASS), expected)
     else:
-        initial_results.add(get_result('passes/text.html', test_expectations.TIMEOUT), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/audio.html', test_expectations.AUDIO), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/timeout.html', test_expectations.CRASH), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/crash.html', test_expectations.TIMEOUT), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/pixel-fail.html', test_expectations.TIMEOUT), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/leak.html', test_expectations.CRASH), expected, test_is_slow)
+        initial_results.add(get_result('passes/text.html', test_expectations.TIMEOUT), expected)
+        initial_results.add(get_result('failures/expected/audio.html', test_expectations.AUDIO), expected)
+        initial_results.add(get_result('failures/expected/timeout.html', test_expectations.CRASH), expected)
+        initial_results.add(get_result('failures/expected/crash.html', test_expectations.TIMEOUT), expected)
+        initial_results.add(get_result('failures/expected/pixel-fail.html', test_expectations.TIMEOUT), expected)
+        initial_results.add(get_result('failures/expected/leak.html', test_expectations.CRASH), expected)
 
         # we only list hang.html here, since normally this is WontFix
-        initial_results.add(get_result('failures/expected/hang.html', test_expectations.TIMEOUT), expected, test_is_slow)
+        initial_results.add(get_result('failures/expected/hang.html', test_expectations.TIMEOUT), expected)
 
     if flaky:
         retry_results = run_results(port)
-        retry_results.add(get_result('passes/text.html'), True, test_is_slow)
-        retry_results.add(get_result('failures/expected/timeout.html'), True, test_is_slow)
-        retry_results.add(get_result('failures/expected/crash.html'), True, test_is_slow)
-        retry_results.add(get_result('failures/expected/pixel-fail.html'), True, test_is_slow)
-        retry_results.add(get_result('failures/expected/leak.html'), True, test_is_slow)
+        retry_results.add(get_result('passes/text.html'), True)
+        retry_results.add(get_result('failures/expected/timeout.html'), True)
+        retry_results.add(get_result('failures/expected/crash.html'), True)
+        retry_results.add(get_result('failures/expected/pixel-fail.html'), True)
+        retry_results.add(get_result('failures/expected/leak.html'), True)
     else:
         retry_results = None
 

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py (277780 => 277781)


--- trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py	2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py	2021-05-20 13:48:42 UTC (rev 277781)
@@ -819,6 +819,21 @@
         self.assertTrue(passing_run(['--additional-expectations', '/tmp/overrides.txt', 'failures/unexpected/mismatch.html'],
                                     tests_included=True, host=host))
 
+    def test_tests_options(self):
+        host = MockHost()
+        host.filesystem.write_text_file(
+            '/test.checkout/LayoutTests/tests-options.json',
+            '{"failures/unexpected/timeout.html":["slow"]}'
+        )
+
+        details, _, _ = logging_run(['failures/expected/timeout.html',
+                                     'failures/unexpected/timeout.html'],
+                                    host=host)
+        self.assertEquals(details.initial_results.slow_tests,
+                          {'failures/unexpected/timeout.html'})
+        self.assertEquals(details.retry_results.slow_tests,
+                          {'failures/unexpected/timeout.html'})
+
     def test_no_http_and_force(self):
         # See test_run_force, using --force raises an exception.
         # FIXME: We would like to check the warnings generated.