Title: [275773] trunk/Tools
Revision: 275773
Author: gsnedd...@apple.com
Date: 2021-04-09 12:49:58 -0700 (Fri, 09 Apr 2021)

Log Message

Make TestInput take a Test object, not a path
https://bugs.webkit.org/show_bug.cgi?id=224329

Reviewed by Jonathan Bedard.

* Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py:
(LayoutTestRunner.run_tests): Remove tests_to_skip argument; skipped tests aren't for the runner.
* Scripts/webkitpy/layout_tests/controllers/layout_test_runner_unittest.py:
(LayoutTestRunnerTests._run_tests): TestInput expects a Test.
(LayoutTestRunnerTests.test_interrupt_if_at_failure_limits): TestInput expects a Test.
(SharderTests.get_test_input): TestInput expects a Test.
* Scripts/webkitpy/layout_tests/controllers/manager.py:
(Manager._collect_tests):
Move duplicated code from Manager.run and Manager.print_expectations here.
(Manager._tests_to_run):
Renamed from Manager._prepare_lists; now just gives tests to run.
(Manager._test_input_for_file): Now takes a Test, not a str; device_type changes behaviour and shouldn't have any default.
(Manager._test_is_slow): device_type changes behaviour and shouldn't have any default.
(Manager._test_should_dump_jsconsolelog_in_stderr): device_type changes behaviour and shouldn't have any default.
(Manager._get_test_inputs): device_type changes behaviour and shouldn't have any default.
(Manager._update_worker_count): device_type changes behaviour and shouldn't have any default.
(Manager._set_up_run): device_type changes behaviour and shouldn't have any default.
(Manager.run):
Largely just change for Manager._collect_tests, also renaming the sets to match their
clearer names in print_expectations.
Additionally, now generate skip results here, as they aren't to do with actually running
tests; they're about test selection.
(Manager._run_test_subset): Remove tests_to_skip argument; device_type is now required.
(Manager._run_tests): Remove tests_to_skip argument; device_type is now required.
(Manager._print_expectation_line_for_test): Now takes a Test, not a str; device_type changes behaviour and shouldn't have any default.
(Manager._print_expectations_for_subset): Change default of tests_to_skip to avoid set/dict oddness.
(Manager.print_expectations): Largely change for Manager._collect_tests.
* Scripts/webkitpy/layout_tests/models/test.py:
(Test): Use attrs to avoid having to define cmp, hash, repr, etc.
* Scripts/webkitpy/layout_tests/models/test_input.py:
(TestInput.__init__): Take a Test, not a str.
(TestInput.test_name): Maintain compatibility with code reading TestInput.test_name for now.
* Scripts/webkitpy/layout_tests/models/test_run_results.py:
(TestRunResults.merge): Fix merging of dicts-of-sets, rather than overwriting the sets.
(TestRunResults.merge.merge_dict_sets): Implement the merging.
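
For illustration only (not part of the patch), a minimal sketch of the call-shape change described above: TestInput now wraps a Test object instead of a bare path string, and the TestInput.test_name property keeps existing readers working. The path literal is made up.

    from webkitpy.layout_tests.models.test import Test
    from webkitpy.layout_tests.models.test_input import TestInput

    test = Test('fast/dom/example.html')        # hypothetical test path
    test_input = TestInput(test, timeout=6000)  # previously: TestInput('fast/dom/example.html', 6000)

    assert test_input.test is test                           # the full Test object is now available
    assert test_input.test_name == 'fast/dom/example.html'   # compatibility property still returns the path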

Modified Paths

trunk/Tools/ChangeLog
trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py
trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner_unittest.py
trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py
trunk/Tools/Scripts/webkitpy/layout_tests/models/test.py
trunk/Tools/Scripts/webkitpy/layout_tests/models/test_input.py
trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results.py

Diff

Modified: trunk/Tools/ChangeLog (275772 => 275773)


--- trunk/Tools/ChangeLog	2021-04-09 19:05:07 UTC (rev 275772)
+++ trunk/Tools/ChangeLog	2021-04-09 19:49:58 UTC (rev 275773)
@@ -1,3 +1,46 @@
+2021-04-09  Sam Sneddon  <gsnedd...@apple.com>
+
+        Make TestInput take a Test object, not a path
+        https://bugs.webkit.org/show_bug.cgi?id=224329
+
+        Reviewed by Jonathan Bedard.
+
+        * Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py:
+        (LayoutTestRunner.run_tests): Remove tests_to_skip argument; skipped tests aren't for the runner.
+        * Scripts/webkitpy/layout_tests/controllers/layout_test_runner_unittest.py:
+        (LayoutTestRunnerTests._run_tests): TestInput expects a Test.
+        (LayoutTestRunnerTests.test_interrupt_if_at_failure_limits): TestInput expects a Test.
+        (SharderTests.get_test_input): TestInput expects a Test.
+        * Scripts/webkitpy/layout_tests/controllers/manager.py:
+        (Manager._collect_tests):
+        Move duplicated code from Manager.run and Manager.print_expectations here.
+        (Manager._tests_to_run):
+        Renamed from Manager._prepare_lists; now just gives tests to run.
+        (Manager._test_input_for_file): Now takes a Test, not a str; device_type changes behaviour and shouldn't have any default.
+        (Manager._test_is_slow): device_type changes behaviour and shouldn't have any default.
+        (Manager._test_should_dump_jsconsolelog_in_stderr): device_type changes behaviour and shouldn't have any default.
+        (Manager._get_test_inputs): device_type changes behaviour and shouldn't have any default.
+        (Manager._update_worker_count): device_type changes behaviour and shouldn't have any default.
+        (Manager._set_up_run): device_type changes behaviour and shouldn't have any default.
+        (Manager.run):
+        Largely just change for Manager._collect_tests, also renaming the sets to match their
+        clearer names in print_expectations.
+        Additionally, now generate skip results here, as they aren't to do with actually running
+        tests; they're about test selection.
+        (Manager._run_test_subset): Remove tests_to_skip argument; device_type is now required.
+        (Manager._run_tests): Remove tests_to_skip argument; device_type is now required.
+        (Manager._print_expectation_line_for_test): Now takes a Test, not a str; device_type changes behaviour and shouldn't have any default.
+        (Manager._print_expectations_for_subset): Change default of tests_to_skip to avoid set/dict oddness.
+        (Manager.print_expectations): Largely change for Manager._collect_tests.
+        * Scripts/webkitpy/layout_tests/models/test.py:
+        (Test): Use attrs to avoid having to define cmp, hash, repr, etc.
+        * Scripts/webkitpy/layout_tests/models/test_input.py:
+        (TestInput.__init__): Take a Test, not a str.
+        (TestInput.test_name): Maintain compatibility with code reading TestInput.test_name for now.
+        * Scripts/webkitpy/layout_tests/models/test_run_results.py:
+        (TestRunResults.merge): Fix merging of dicts-of-sets, rather than overwriting the sets.
+        (TestRunResults.merge.merge_dict_sets): Implement the merging.
+
 2021-04-09  Aditya Keerthi  <akeer...@apple.com>
 
         [iOS][FCR] Use context menus for text input datalist dropdowns

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py (275772 => 275773)


--- trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py	2021-04-09 19:05:07 UTC (rev 275772)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py	2021-04-09 19:49:58 UTC (rev 275773)
@@ -92,7 +92,7 @@
         all_shards = self._sharder.shard_tests(test_inputs, child_process_count, self._options.fully_parallel)
         return min(child_process_count, len(all_shards))
 
-    def run_tests(self, expectations, test_inputs, tests_to_skip, num_workers, retrying, device_type=None):
+    def run_tests(self, expectations, test_inputs, num_workers, retrying, device_type=None):
         self._expectations = expectations
         self._test_inputs = []
         for test_input in test_inputs:
@@ -102,7 +102,7 @@
         self._retrying = retrying
 
         # FIXME: rename all variables to test_run_results or some such ...
-        run_results = TestRunResults(self._expectations, len(test_inputs) + len(tests_to_skip))
+        run_results = TestRunResults(self._expectations, len(test_inputs))
         self._current_run_results = run_results
         self._printer.num_tests = len(test_inputs)
         self._printer.num_started = 0
@@ -110,11 +110,6 @@
         if not retrying:
             self._printer.print_expected(run_results, self._expectations.model().get_tests_with_result_type)
 
-        for test_name in set(tests_to_skip):
-            result = test_results.TestResult(test_name)
-            result.type = test_expectations.SKIP
-            run_results.add(result, expected=True, test_is_slow=self._test_is_slow(test_name))
-
         self._printer.write_update('Sharding tests ...')
         all_shards = self._sharder.shard_tests(test_inputs, int(self._options.child_processes), self._options.fully_parallel)
 

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner_unittest.py (275772 => 275773)


--- trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner_unittest.py	2021-04-09 19:05:07 UTC (rev 275772)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner_unittest.py	2021-04-09 19:49:58 UTC (rev 275773)
@@ -35,9 +35,10 @@
 from webkitpy.layout_tests.controllers.layout_test_runner import LayoutTestRunner, Sharder, TestRunInterruptedException
 from webkitpy.layout_tests.models import test_expectations
 from webkitpy.layout_tests.models import test_failures
-from webkitpy.layout_tests.models.test_run_results import TestRunResults
+from webkitpy.layout_tests.models.test import Test
 from webkitpy.layout_tests.models.test_input import TestInput
 from webkitpy.layout_tests.models.test_results import TestResult
+from webkitpy.layout_tests.models.test_run_results import TestRunResults
 from webkitpy.port.test import TestPort
 
 
@@ -81,10 +82,10 @@
         return LayoutTestRunner(options, port, FakePrinter(), port.results_directory(), lambda test_name: False)
 
     def _run_tests(self, runner, tests):
-        test_inputs = [TestInput(test, 6000) for test in tests]
+        test_inputs = [TestInput(Test(test), 6000) for test in tests]
         expectations = TestExpectations(runner._port, tests)
         expectations.parse_all_expectations()
-        runner.run_tests(expectations, test_inputs, set(),
+        runner.run_tests(expectations, test_inputs,
             num_workers=1, needs_http=any('http' in test for test in tests), needs_websockets=any(['websocket' in test for test in tests]), needs_web_platform_test_server=any(['imported/w3c' in test for test in tests]), retrying=False)
 
     def test_interrupt_if_at_failure_limits(self):
@@ -92,7 +93,7 @@
         runner._options.exit_after_n_failures = None
         runner._options.exit_after_n_crashes_or_times = None
         test_names = ['passes/text.html', 'passes/image.html']
-        runner._test_inputs = [TestInput(test_name, 6000) for test_name in test_names]
+        runner._test_inputs = [TestInput(Test(test_name), 6000) for test_name in test_names]
 
         expectations = TestExpectations(runner._port, test_names)
         expectations.parse_all_expectations()
@@ -261,7 +262,7 @@
     ]
 
     def get_test_input(self, test_file):
-        return TestInput(test_file, needs_servers=(test_file.startswith('http')))
+        return TestInput(Test(test_file), needs_servers=(test_file.startswith('http')))
 
     def get_shards(self, num_workers, fully_parallel, test_list=None):
         port = TestPort(MockSystemHost())

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py (275772 => 275773)


--- trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py	2021-04-09 19:05:07 UTC (rev 275772)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py	2021-04-09 19:49:58 UTC (rev 275773)
@@ -55,7 +55,7 @@
 from webkitpy.layout_tests.models import test_results
 from webkitpy.layout_tests.models import test_run_results
 from webkitpy.layout_tests.models.test_input import TestInput
-from webkitpy.layout_tests.models.test_run_results import INTERRUPTED_EXIT_STATUS
+from webkitpy.layout_tests.models.test_run_results import INTERRUPTED_EXIT_STATUS, TestRunResults
 from webkitpy.tool.grammar import pluralize
 from webkitpy.results.upload import Upload
 from webkitpy.xcode.device_type import DeviceType
@@ -94,9 +94,35 @@
         test_options_json_path = self._port.path_from_webkit_base(self.LAYOUT_TESTS_DIRECTORY, "tests-options.json")
         self._tests_options = json.loads(self._filesystem.read_text_file(test_options_json_path)) if self._filesystem.exists(test_options_json_path) else {}
 
-    def _collect_tests(self, args, device_type=None):
-        return self._finder.find_tests(self._options, args, device_type=device_type)
+    def _collect_tests(self,
+                       paths,  # type: List[str]
+                       device_type_list,  # type: List[Optional[DeviceType]]
+                       ):
+        aggregate_tests = set()  # type: Set[Test]
+        aggregate_tests_to_run = set()  # type: Set[Test]
+        tests_to_run_by_device = {}  # type: Dict[Optional[DeviceType], List[Test]]
 
+        device_type_list = self._port.supported_device_types()
+        for device_type in device_type_list:
+            for_device_type = u'for {} '.format(device_type) if device_type else ''
+            self._printer.write_update(u'Collecting tests {}...'.format(for_device_type))
+            paths, tests = self._finder.find_tests(self._options, paths, device_type=device_type)
+            aggregate_tests.update(tests)
+
+            test_names = [test.test_path for test in tests]
+
+            self._printer.write_update(u'Parsing expectations {}...'.format(for_device_type))
+            self._expectations[device_type] = test_expectations.TestExpectations(self._port, test_names, force_expectations_pass=self._options.force, device_type=device_type)
+            self._expectations[device_type].parse_all_expectations()
+
+            tests_to_run = self._tests_to_run(tests, device_type=device_type)
+            tests_to_run_by_device[device_type] = [test for test in tests_to_run if test not in aggregate_tests_to_run]
+            aggregate_tests_to_run.update(tests_to_run_by_device[device_type])
+
+        aggregate_tests_to_skip = aggregate_tests - aggregate_tests_to_run
+
+        return tests_to_run_by_device, aggregate_tests_to_skip
+
     def _is_http_test(self, test):
         return self.HTTP_SUBDIR in test or self._is_websocket_test(test) or self._needs_web_platform_test(test)
 
@@ -187,41 +213,40 @@
 
         return (tests_to_run, set(test_names) - set(tests_to_run))
 
-    def _prepare_lists(self, paths, test_names, device_type=None):
-        tests_to_skip = self._skip_tests(test_names, self._expectations[device_type], self._http_tests(test_names))
-        tests_to_run = [test for test in test_names if test not in tests_to_skip]
+    def _tests_to_run(self, tests, device_type):
+        test_names = {test.test_path for test in tests}
+        test_names_to_skip = self._skip_tests(test_names, self._expectations[device_type], self._http_tests(test_names))
+        tests_to_run = [test for test in tests if test.test_path not in test_names_to_skip]
 
         # Create a sorted list of test files so the subset chunk,
         # if used, contains alphabetically consecutive tests.
         if self._options.order == 'natural':
-            tests_to_run.sort(key=self._port.test_key)
+            tests_to_run.sort(key=lambda x: self._port.test_key(x.test_path))
         elif self._options.order == 'random':
             random.shuffle(tests_to_run)
 
-        tests_to_run, tests_in_other_chunks = self._split_into_chunks(tests_to_run)
-        self._expectations[device_type].add_skipped_tests(tests_in_other_chunks)
-        tests_to_skip.update(tests_in_other_chunks)
+        tests_to_run, _ = self._split_into_chunks(tests_to_run)
+        return tests_to_run
 
-        return tests_to_run, tests_to_skip
+    def _test_input_for_file(self, test_file, device_type):
+        return TestInput(
+            test_file,
+            self._options.slow_time_out_ms if self._test_is_slow(test_file.test_path, device_type=device_type) else self._options.time_out_ms,
+            self._is_http_test(test_file.test_path),
+            should_dump_jsconsolelog_in_stderr=self._test_should_dump_jsconsolelog_in_stderr(test_file.test_path, device_type=device_type))
 
-    def _test_input_for_file(self, test_file, device_type=None):
-        return TestInput(test_file,
-            self._options.slow_time_out_ms if self._test_is_slow(test_file, device_type=device_type) else self._options.time_out_ms,
-            self._is_http_test(test_file),
-            should_dump_jsconsolelog_in_stderr=self._test_should_dump_jsconsolelog_in_stderr(test_file, device_type=device_type))
-
-    def _test_is_slow(self, test_file, device_type=None):
+    def _test_is_slow(self, test_file, device_type):
         if self._expectations[device_type].model().has_modifier(test_file, test_expectations.SLOW):
             return True
         return "slow" in self._tests_options.get(test_file, [])
 
-    def _test_should_dump_jsconsolelog_in_stderr(self, test_file, device_type=None):
+    def _test_should_dump_jsconsolelog_in_stderr(self, test_file, device_type):
         return self._expectations[device_type].model().has_modifier(test_file, test_expectations.DUMPJSCONSOLELOGINSTDERR)
 
     def needs_servers(self, test_names):
         return any(self._is_http_test(test_name) for test_name in test_names) and self._options.http
 
-    def _get_test_inputs(self, tests_to_run, repeat_each, iterations, device_type=None):
+    def _get_test_inputs(self, tests_to_run, repeat_each, iterations, device_type):
         test_inputs = []
         for _ in range(iterations):
             for test in tests_to_run:
@@ -229,12 +254,12 @@
                     test_inputs.append(self._test_input_for_file(test, device_type=device_type))
         return test_inputs
 
-    def _update_worker_count(self, test_names, device_type=None):
+    def _update_worker_count(self, test_names, device_type):
         test_inputs = self._get_test_inputs(test_names, self._options.repeat_each, self._options.iterations, device_type=device_type)
         worker_count = self._runner.get_worker_count(test_inputs, int(self._options.child_processes))
         self._options.child_processes = worker_count
 
-    def _set_up_run(self, test_names, device_type=None):
+    def _set_up_run(self, test_names, device_type):
         # This must be started before we check the system dependencies,
         # since the helper may do things to make the setup correct.
         self._printer.write_update("Starting helper ...")
@@ -256,55 +281,48 @@
 
     def run(self, args):
         num_failed_uploads = 0
-        total_tests = set()
-        aggregate_test_names = set()
-        aggregate_tests = set()
-        tests_to_run_by_device = {}
 
         device_type_list = self._port.supported_device_types()
-        for device_type in device_type_list:
-            """Run the tests and return a RunDetails object with the results."""
-            for_device_type = u'for {} '.format(device_type) if device_type else ''
-            self._printer.write_update(u'Collecting tests {}...'.format(for_device_type))
-            try:
-                paths, tests = self._collect_tests(args, device_type=device_type)
-            except IOError:
-                # This is raised if --test-list doesn't exist
-                return test_run_results.RunDetails(exit_code=-1)
+        try:
+            tests_to_run_by_device, aggregate_tests_to_skip = self._collect_tests(args, device_type_list)
+        except IOError:
+            # This is raised if --test-list doesn't exist
+            return test_run_results.RunDetails(exit_code=-1)
 
-            test_names = [test.test_path for test in tests]
+        aggregate_tests_to_run = set()  # type: Set[Test]
+        for v in tests_to_run_by_device.values():
+            aggregate_tests_to_run.update(v)
 
-            self._printer.write_update(u'Parsing expectations {}...'.format(for_device_type))
-            self._expectations[device_type] = test_expectations.TestExpectations(self._port, test_names, force_expectations_pass=self._options.force, device_type=device_type)
-            self._expectations[device_type].parse_all_expectations()
+        skipped_tests_by_path = defaultdict(set)
+        for test in aggregate_tests_to_skip:
+            skipped_tests_by_path[test.test_path].add(test)
 
-            aggregate_test_names.update(test_names)
-            tests_to_run, tests_to_skip = self._prepare_lists(paths, test_names, device_type=device_type)
-
-            total_tests.update(tests_to_run)
-            total_tests.update(tests_to_skip)
-
-            tests_to_run_by_device[device_type] = [test for test in tests_to_run if test not in aggregate_tests]
-            aggregate_tests.update(tests_to_run)
-
         # If a test is marked skipped, but was explicitly requested, run it anyways
         if self._options.skipped != 'always':
             for arg in args:
-                if arg in total_tests and arg not in aggregate_tests:
-                    tests_to_run_by_device[device_type_list[0]].append(arg)
-                    aggregate_tests.add(arg)
+                if arg in skipped_tests_by_path:
+                    tests = skipped_tests_by_path[arg]
+                    tests_to_run_by_device[device_type_list[0]].extend(tests)
+                    aggregate_tests_to_run |= tests
+                    aggregate_tests_to_skip -= tests
+                    del skipped_tests_by_path[arg]
 
-        tests_to_skip = total_tests - aggregate_tests
-        self._printer.print_found(len(aggregate_test_names), len(aggregate_tests), self._options.repeat_each, self._options.iterations)
+        aggregate_tests = aggregate_tests_to_run | aggregate_tests_to_skip
+
+        self._printer.print_found(len(aggregate_tests),
+                                  len(aggregate_tests_to_run),
+                                  self._options.repeat_each,
+                                  self._options.iterations)
         start_time = time.time()
 
         # Check to see if all tests we are running are skipped.
-        if tests_to_skip == total_tests:
+        if aggregate_tests == aggregate_tests_to_skip:
+            # XXX: this is currently identical to the following if, which likely isn't intended
             _log.error("All tests skipped.")
             return test_run_results.RunDetails(exit_code=0, skipped_all_tests=True)
 
         # Check to make sure we have no tests to run that are not skipped.
-        if not sum([len(tests) for tests in itervalues(tests_to_run_by_device)]):
+        if not aggregate_tests_to_run:
             _log.critical('No tests to run.')
             return test_run_results.RunDetails(exit_code=-1)
 
@@ -319,9 +337,9 @@
         # Create the output directory if it doesn't already exist.
         self._port.host.filesystem.maybe_make_directory(self._results_directory)
 
-        needs_http = any((self._is_http_test(test) and not self._needs_web_platform_test(test)) for tests in itervalues(tests_to_run_by_device) for test in tests)
-        needs_web_platform_test_server = any(self._needs_web_platform_test(test) for tests in itervalues(tests_to_run_by_device) for test in tests)
-        needs_websockets = any(self._is_websocket_test(test) for tests in itervalues(tests_to_run_by_device) for test in tests)
+        needs_http = any((self._is_http_test(test.test_path) and not self._needs_web_platform_test(test.test_path)) for tests in itervalues(tests_to_run_by_device) for test in tests)
+        needs_web_platform_test_server = any(self._needs_web_platform_test(test.test_path) for tests in itervalues(tests_to_run_by_device) for test in tests)
+        needs_websockets = any(self._is_websocket_test(test.test_path) for tests in itervalues(tests_to_run_by_device) for test in tests)
         self._runner = LayoutTestRunner(self._options, self._port, self._printer, self._results_directory, self._test_is_slow,
                                         needs_http=needs_http, needs_web_platform_test_server=needs_web_platform_test_server, needs_websockets=needs_websockets)
 
@@ -358,8 +376,15 @@
             configuration = self._port.configuration_for_upload(self._port.target_host(0))
             if not configuration.get('flavor', None):  # The --result-report-flavor argument should override wk1/wk2
                 configuration['flavor'] = 'wk2' if self._options.webkit_test_runner else 'wk1'
-            temp_initial_results, temp_retry_results, temp_enabled_pixel_tests_in_retry = self._run_test_subset(tests_to_run_by_device[device_type], tests_to_skip, device_type=device_type)
+            temp_initial_results, temp_retry_results, temp_enabled_pixel_tests_in_retry = self._run_test_subset(tests_to_run_by_device[device_type], device_type=device_type)
 
+            skipped_results = TestRunResults(self._expectations[device_type], len(aggregate_tests_to_skip))
+            for skipped_test in set(aggregate_tests_to_skip):
+                skipped_result = test_results.TestResult(skipped_test.test_path)
+                skipped_result.type = test_expectations.SKIP
+                skipped_results.add(skipped_result, expected=True, test_is_slow=self._test_is_slow(skipped_test.test_path, device_type=device_type))
+            temp_initial_results = temp_initial_results.merge(skipped_results)
+
             if self._options.report_urls:
                 self._printer.writeln('\n')
                 self._printer.write_update('Preparing upload data ...')
@@ -419,10 +444,13 @@
             result.exit_code = -1
         return result
 
-    def _run_test_subset(self, tests_to_run, tests_to_skip, device_type=None):
+    def _run_test_subset(self,
+                         tests_to_run,  # type: List[Test]
+                         device_type,  # type: Optional[DeviceType]
+                         ):
         try:
             enabled_pixel_tests_in_retry = False
-            initial_results = self._run_tests(tests_to_run, tests_to_skip, self._options.repeat_each, self._options.iterations, int(self._options.child_processes), retrying=False, device_type=device_type)
+            initial_results = self._run_tests(tests_to_run, self._options.repeat_each, self._options.iterations, int(self._options.child_processes), retrying=False, device_type=device_type)
 
             tests_to_retry = self._tests_to_retry(initial_results, include_crashes=self._port.should_retry_crashes())
             # Don't retry failures when interrupted by user or failures limit exception.
@@ -433,7 +461,12 @@
                 _log.info('')
                 _log.info("Retrying %s ..." % pluralize(len(tests_to_retry), "unexpected failure"))
                 _log.info('')
-                retry_results = self._run_tests(tests_to_retry, tests_to_skip=set(), repeat_each=1, iterations=1, num_workers=1, retrying=True, device_type=device_type)
+                retry_results = self._run_tests([test for test in tests_to_run if test.test_path in tests_to_retry],
+                                                repeat_each=1,
+                                                iterations=1,
+                                                num_workers=1,
+                                                retrying=True,
+                                                device_type=device_type)
 
                 if enabled_pixel_tests_in_retry:
                     self._options.pixel_tests = False
@@ -477,10 +510,18 @@
                 exit_code = self._port.exit_code_from_summarized_results(summarized_results)
         return test_run_results.RunDetails(exit_code, summarized_results, initial_results, retry_results, enabled_pixel_tests_in_retry)
 
-    def _run_tests(self, tests_to_run, tests_to_skip, repeat_each, iterations, num_workers, retrying, device_type=None):
+    def _run_tests(self,
+                   tests_to_run,  # type: List[Test]
+                   repeat_each,  # type: int
+                   iterations,  # type: int
+                   num_workers,  # type: int
+                   retrying,  # type: bool
+                   device_type,  # type: Optional[DeviceType]
+                   ):
         test_inputs = self._get_test_inputs(tests_to_run, repeat_each, iterations, device_type=device_type)
 
-        return self._runner.run_tests(self._expectations[device_type], test_inputs, tests_to_skip, num_workers, retrying, device_type)
+        assert self._runner is not None
+        return self._runner.run_tests(self._expectations[device_type], test_inputs, num_workers, retrying, device_type)
 
     def _clean_up_run(self):
         _log.debug("Flushing stdout")
@@ -652,11 +693,15 @@
             json_results_generator.add_path_to_trie(name, value, stats_trie)
         return stats_trie
 
-    def _print_expectation_line_for_test(self, format_string, test, device_type=None):
-        line = self._expectations[device_type].model().get_expectation_line(test)
-        print(format_string.format(test, line.expected_behavior, self._expectations[device_type].readable_filename_and_line_number(line), line.original_string or ''))
+    def _print_expectation_line_for_test(self, format_string, test, device_type):
+        test_path = test.test_path
+        line = self._expectations[device_type].model().get_expectation_line(test_path)
+        print(format_string.format(test_path,
+                                   line.expected_behavior,
+                                   self._expectations[device_type].readable_filename_and_line_number(line),
+                                   line.original_string or ''))
 
-    def _print_expectations_for_subset(self, device_type, test_col_width, tests_to_run, tests_to_skip={}):
+    def _print_expectations_for_subset(self, device_type, test_col_width, tests_to_run, tests_to_skip=None):
         format_string = '{{:{width}}} {{}} {{}} {{}}'.format(width=test_col_width)
         if tests_to_skip:
             print('')
@@ -670,40 +715,22 @@
             self._print_expectation_line_for_test(format_string, test, device_type=device_type)
 
     def print_expectations(self, args):
-        aggregate_test_names = set()
-        aggregate_tests_to_run = set()
-        aggregate_tests_to_skip = set()
-        tests_to_run_by_device = {}
-
         device_type_list = self._port.DEFAULT_DEVICE_TYPES or [self._port.DEVICE_TYPE]
-        for device_type in device_type_list:
-            """Run the tests and return a RunDetails object with the results."""
-            for_device_type = 'for {} '.format(device_type) if device_type else ''
-            self._printer.write_update('Collecting tests {}...'.format(for_device_type))
-            try:
-                paths, tests = self._collect_tests(args, device_type=device_type)
-            except IOError:
-                # This is raised if --test-list doesn't exist
-                return test_run_results.RunDetails(exit_code=-1)
 
-            test_names = [test.test_path for test in tests]
+        try:
+            tests_to_run_by_device, aggregate_tests_to_skip = self._collect_tests(args, device_type_list)
+        except IOError:
+            # This is raised if --test-list doesn't exist
+            return -1
 
-            self._printer.write_update('Parsing expectations {}...'.format(for_device_type))
-            self._expectations[device_type] = test_expectations.TestExpectations(self._port, test_names, force_expectations_pass=self._options.force, device_type=device_type)
-            self._expectations[device_type].parse_all_expectations()
+        aggregate_tests_to_run = set()
+        for v in tests_to_run_by_device.values():
+            aggregate_tests_to_run.update(v)
+        aggregate_tests = aggregate_tests_to_run | aggregate_tests_to_skip
 
-            aggregate_test_names.update(test_names)
-            tests_to_run, tests_to_skip = self._prepare_lists(paths, test_names, device_type=device_type)
-            aggregate_tests_to_skip.update(tests_to_skip)
+        self._printer.print_found(len(aggregate_tests), len(aggregate_tests_to_run), self._options.repeat_each, self._options.iterations)
+        test_col_width = max(len(test.test_path) for test in aggregate_tests) + 1
 
-            tests_to_run_by_device[device_type] = [test for test in tests_to_run if test not in aggregate_tests_to_run]
-            aggregate_tests_to_run.update(tests_to_run)
-
-        aggregate_tests_to_skip = aggregate_tests_to_skip - aggregate_tests_to_run
-
-        self._printer.print_found(len(aggregate_test_names), len(aggregate_tests_to_run), self._options.repeat_each, self._options.iterations)
-        test_col_width = len(max(aggregate_tests_to_run.union(aggregate_tests_to_skip), key=len)) + 1
-
         self._print_expectations_for_subset(device_type_list[0], test_col_width, tests_to_run_by_device[device_type_list[0]], aggregate_tests_to_skip)
 
         for device_type in device_type_list[1:]:

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/models/test.py (275772 => 275773)


--- trunk/Tools/Scripts/webkitpy/layout_tests/models/test.py	2021-04-09 19:05:07 UTC (rev 275772)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/models/test.py	2021-04-09 19:49:58 UTC (rev 275773)
@@ -27,50 +27,17 @@
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
 
+import attr
+
+
+@attr.s(frozen=True)
 class Test(object):
     """Data about a test and its expectations.
 
     Note that this is inherently platform specific, as expectations are platform specific."""
-
-    def __init__(
-        self,
-        test_path,
-        expected_text_path=None,
-        expected_image_path=None,
-        expected_checksum_path=None,
-        expected_audio_path=None,
-        reference_files=None,
-    ):
-        self.test_path = test_path
-        self.expected_text_path = expected_text_path
-        self.expected_image_path = expected_image_path
-        self.expected_checksum_path = expected_checksum_path
-        self.expected_audio_path = expected_audio_path
-        self.reference_files = reference_files
-
-    def __repr__(self):
-        return (
-            "Test(%r, "
-            "expected_text_path=%r, "
-            "expected_image_path=%r, "
-            "expected_checksum_path=%r, "
-            "expected_audio_path=%r, "
-            "reference_files=%r)"
-        ) % (
-            self.test_path,
-            self.expected_text_path,
-            self.expected_image_path,
-            self.expected_checksum_path,
-            self.expected_audio_path,
-            self.reference_files,
-        )
-
-    def __eq__(self, other):
-        return (
-            self.test_path == other.test_path
-            and self.expected_text_path == other.expected_text_path
-            and self.expected_image_path == other.expected_image_path
-            and self.expected_checksum_path == other.expected_checksum_path
-            and self.expected_audio_path == other.expected_audio_path
-            and self.reference_files == other.reference_files
-        )
+    test_path = attr.ib(type=str)
+    expected_text_path = attr.ib(default=None, type=str)
+    expected_image_path = attr.ib(default=None, type=str)
+    expected_checksum_path = attr.ib(default=None, type=str)
+    expected_audio_path = attr.ib(default=None, type=str)
+    reference_files = attr.ib(default=None, type=list)
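
As context for the attrs conversion above (an illustrative sketch, not code taken from the patch): declaring the fields with attr.ib() on a frozen class generates __eq__, __hash__ and __repr__, which is what lets Test instances be deduplicated in the sets that Manager._collect_tests now builds. The class below is a trimmed stand-in with only two of the fields, and the path is hypothetical.

    import attr

    @attr.s(frozen=True)
    class Test(object):
        test_path = attr.ib(type=str)
        expected_text_path = attr.ib(default=None, type=str)

    a = Test('fast/dom/example.html')
    b = Test('fast/dom/example.html')
    assert a == b              # generated __eq__ compares field values
    assert len({a, b}) == 1    # frozen=True makes instances hashable, so they work in sets
    repr(a)                    # "Test(test_path='fast/dom/example.html', expected_text_path=None)"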

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/models/test_input.py (275772 => 275773)


--- trunk/Tools/Scripts/webkitpy/layout_tests/models/test_input.py	2021-04-09 19:05:07 UTC (rev 275772)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/models/test_input.py	2021-04-09 19:49:58 UTC (rev 275773)
@@ -28,6 +28,8 @@
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
 
+from .test import Test
+
 class TestInput(object):
     """Information about a test needed to run it.
 
@@ -35,15 +37,24 @@
     derived from TestExpectations/test execution options (e.g., timeout).
     """
 
-    def __init__(self, test_name, timeout=None, needs_servers=None, should_dump_jsconsolelog_in_stderr=None):
+    def __init__(self,
+                 test,  # type: Test
+                 timeout=None,  # type: Union[None, int, str]
+                 needs_servers=None,  # type: Optional[bool]
+                 should_dump_jsconsolelog_in_stderr=None,  # type: Optional[bool]
+                 ):
         # TestInput objects are normally constructed by the manager and passed
         # to the workers, but some fields are set lazily in the workers where possible
         # because they require us to look at the filesystem and we want to be able to do that in parallel.
-        self.test_name = test_name
+        self.test = test
         self.timeout = timeout  # in msecs; should rename this for consistency
         self.needs_servers = needs_servers
         self.should_dump_jsconsolelog_in_stderr = should_dump_jsconsolelog_in_stderr
         self.reference_files = None
 
+    @property
+    def test_name(self):
+        return self.test.test_path
+
     def __repr__(self):
         return "TestInput('%s', timeout=%s, needs_servers=%s, reference_files=%s, should_dump_jsconsolelog_in_stderr=%s)" % (self.test_name, self.timeout, self.needs_servers, self.reference_files, self.should_dump_jsconsolelog_in_stderr)

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results.py (275772 => 275773)


--- trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results.py	2021-04-09 19:05:07 UTC (rev 275772)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results.py	2021-04-09 19:49:58 UTC (rev 275773)
@@ -131,7 +131,23 @@
     def merge(self, test_run_results):
         if not test_run_results:
             return self
+
         # self.expectations should be the same for both
+        # FIXME: this isn't actually true when we run on multiple device_types
+        # if self.expectations != test_run_results.expectations:
+        #     raise ValueError("different TestExpectations")
+
+        def merge_dict_sets(a, b):
+            merged = {}
+            keys = set(a.keys()) | set(b.keys())
+            for k in keys:
+                v_a = a.get(k, set())
+                assert isinstance(v_a, set)
+                v_b = b.get(k, set())
+                assert isinstance(v_b, set)
+                merged[k] = v_a | v_b
+            return merged
+
         self.total += test_run_results.total
         self.remaining += test_run_results.remaining
         self.expected += test_run_results.expected
@@ -139,8 +155,8 @@
         self.unexpected_failures += test_run_results.unexpected_failures
         self.unexpected_crashes += test_run_results.unexpected_crashes
         self.unexpected_timeouts += test_run_results.unexpected_timeouts
-        self.tests_by_expectation.update(test_run_results.tests_by_expectation)
-        self.tests_by_timeline.update(test_run_results.tests_by_timeline)
+        self.tests_by_expectation = merge_dict_sets(self.tests_by_expectation, test_run_results.tests_by_expectation)
+        self.tests_by_timeline = merge_dict_sets(self.tests_by_timeline, test_run_results.tests_by_timeline)
         self.results_by_name.update(test_run_results.results_by_name)
         self.all_results += test_run_results.all_results
         self.unexpected_results_by_name.update(test_run_results.unexpected_results_by_name)
@@ -147,8 +163,6 @@
         self.failures_by_name.update(test_run_results.failures_by_name)
         self.total_failures += test_run_results.total_failures
         self.expected_skips += test_run_results.expected_skips
-        self.tests_by_expectation.update(test_run_results.tests_by_expectation)
-        self.tests_by_timeline.update(test_run_results.tests_by_timeline)
         self.slow_tests.update(test_run_results.slow_tests)
 
         self.interrupted |= test_run_results.interrupted
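
To illustrate the TestRunResults.merge fix (a standalone sketch, not code from the patch; the keys in the real dicts are expectation constants rather than the strings used here): dict.update replaces the whole set stored under a shared key, whereas the new merge_dict_sets helper takes the per-key union.

    def merge_dict_sets(a, b):
        merged = {}
        for key in set(a) | set(b):
            merged[key] = a.get(key, set()) | b.get(key, set())
        return merged

    first = {'PASS': {'fast/a.html'}, 'SKIP': {'fast/b.html'}}
    second = {'PASS': {'fast/c.html'}}

    overwritten = dict(first)
    overwritten.update(second)                 # old behaviour: 'PASS' now only holds fast/c.html
    assert overwritten['PASS'] == {'fast/c.html'}

    merged = merge_dict_sets(first, second)    # new behaviour: sets are unioned key by key
    assert merged['PASS'] == {'fast/a.html', 'fast/c.html'}
    assert merged['SKIP'] == {'fast/b.html'}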