[205080] trunk
Revision: 205080
Author: [email protected]
Date: 2016-08-27 11:07:52 -0700 (Sat, 27 Aug 2016)

Log Message

Add run-webkit-tests --print-expectations to show expectations for all or a subset of tests
https://bugs.webkit.org/show_bug.cgi?id=161217

Reviewed by Ryosuke Niwa.

Tools:

"run-webkit-tests --print-expectations" runs the same logic as running the tests, but
dumps out the lists of tests that would be run and skipped, and, for each, the entry
in TestExpectations that determines the expected outcome of the test.

This is an improved version of webkit-patch print-expectations.

See bug for sample output.
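
For instance, one might invoke it along these lines (the test path is purely
illustrative):

    Tools/Scripts/run-webkit-tests --print-expectations fast/viewport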

* Scripts/webkitpy/layout_tests/controllers/manager.py:
(Manager._print_expectations_for_subset): Print out the list of tests and expected
outcome for some subset of tests.
(Manager.print_expectations): Do the same splitting by device class that running tests
does, and for each subset of tests, call _print_expectations_for_subset.
* Scripts/webkitpy/layout_tests/models/test_expectations.py:
(TestExpectationParser.expectation_for_skipped_test): Set the flag
expectation_line.not_applicable_to_current_platform
(TestExpectationLine.__init__): Init not_applicable_to_current_platform to False
(TestExpectationLine.expected_behavior): line.expectations is ['PASS'] by default,
even for skipped tests. This function returns a list relevant for display, taking the skipped
modifier into account.
(TestExpectationLine.create_passing_expectation): expectations is normally a list, not a set.
(TestExpectations.readable_filename_and_line_number): Return something printable for
lines with and without filenames
* Scripts/webkitpy/layout_tests/run_webkit_tests.py:
(main): Handle options.print_expectations
(parse_args): Add support for --print-expectations
(_print_expectations):
* Scripts/webkitpy/port/ios.py:
(IOSSimulatorPort.default_child_processes): Make this a debug log.

LayoutTests:

Explicitly skip fast/viewport

* platform/mac/TestExpectations:

Modified Paths

trunk/LayoutTests/ChangeLog
trunk/LayoutTests/platform/mac/TestExpectations
trunk/Tools/ChangeLog
trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py
trunk/Tools/Scripts/webkitpy/layout_tests/models/test_expectations.py
trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests.py
trunk/Tools/Scripts/webkitpy/port/ios.py

Diff

Modified: trunk/LayoutTests/ChangeLog (205079 => 205080)


--- trunk/LayoutTests/ChangeLog	2016-08-27 17:45:12 UTC (rev 205079)
+++ trunk/LayoutTests/ChangeLog	2016-08-27 18:07:52 UTC (rev 205080)
@@ -1,3 +1,14 @@
+2016-08-27  Simon Fraser  <[email protected]>
+
+        Add run-webkit-tests --print-expectations to show expectations for all or a subset of tests
+        https://bugs.webkit.org/show_bug.cgi?id=161217
+
+        Reviewed by Ryosuke Niwa.
+        
+        Explicitly skip fast/viewport
+
+        * platform/mac/TestExpectations:
+
 2016-08-27  Youenn Fablet  <[email protected]>
 
         html/dom/interfaces.html is flaky due to WebSocket test

Modified: trunk/LayoutTests/platform/mac/TestExpectations (205079 => 205080)


--- trunk/LayoutTests/platform/mac/TestExpectations	2016-08-27 17:45:12 UTC (rev 205079)
+++ trunk/LayoutTests/platform/mac/TestExpectations	2016-08-27 18:07:52 UTC (rev 205080)
@@ -223,7 +223,7 @@
 webkit.org/b/43960 scrollbars/custom-scrollbar-with-incomplete-style.html
 
 # viewport meta tag support
-fast/viewport
+fast/viewport [ Skip ]
 
 webkit.org/b/116640 plugins/plugin-initiate-popup-window.html
 

Modified: trunk/Tools/ChangeLog (205079 => 205080)


--- trunk/Tools/ChangeLog	2016-08-27 17:45:12 UTC (rev 205079)
+++ trunk/Tools/ChangeLog	2016-08-27 18:07:52 UTC (rev 205080)
@@ -1,3 +1,40 @@
+2016-08-27  Simon Fraser  <[email protected]>
+
+        Add run-webkit-tests --print-expectations to show expectations for all or a subset of tests
+        https://bugs.webkit.org/show_bug.cgi?id=161217
+
+        Reviewed by Ryosuke Niwa.
+
+        "run-webkit-tests --print-expectations" runs the same logic as running the tests, but
+        dumps out the lists of tests that would be run and skipped, and, for each, the entry
+        in TestExpectations that determines the expected outcome of the test.
+
+        This is an improved version of webkit-patch print-expectations.
+
+        See bug for sample output.
+
+        * Scripts/webkitpy/layout_tests/controllers/manager.py:
+        (Manager._print_expectations_for_subset): Print out the list of tests and expected
+        outcome for some subset of tests.
+        (Manager.print_expectations): Do the same splitting by device class that running tests
+        does, and for each subset of tests, call _print_expectations_for_subset.
+        * Scripts/webkitpy/layout_tests/models/test_expectations.py:
+        (TestExpectationParser.expectation_for_skipped_test): Set the flag
+        expectation_line.not_applicable_to_current_platform
+        (TestExpectationLine.__init__): Init not_applicable_to_current_platform to False
+        (TestExpectationLine.expected_behavior): line.expectations is ['PASS'] by default,
+        even for skipped tests. This function returns a list relevant for display, taking the skipped
+        modifier into account.
+        (TestExpectationLine.create_passing_expectation): expectations is normally a list, not a set.
+        (TestExpectations.readable_filename_and_line_number): Return something printable for 
+        lines with and without filenames
+        * Scripts/webkitpy/layout_tests/run_webkit_tests.py:
+        (main): Handle options.print_expectations
+        (parse_args): Add support for --print-expectations
+        (_print_expectations):
+        * Scripts/webkitpy/port/ios.py:
+        (IOSSimulatorPort.default_child_processes): Make this a debug log.
+
 2016-08-26  Dan Bernstein  <[email protected]>
 
         Keep trying to fix the build after r205057.

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py (205079 => 205080)


--- trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py	2016-08-27 17:45:12 UTC (rev 205079)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py	2016-08-27 18:07:52 UTC (rev 205080)
@@ -524,3 +524,60 @@
         for name, value in stats.iteritems():
             json_results_generator.add_path_to_trie(name, value, stats_trie)
         return stats_trie
+
+    def _print_expectation_line_for_test(self, format_string, test):
+        line = self._expectations.model().get_expectation_line(test)
+        print format_string.format(test, line.expected_behavior, self._expectations.readable_filename_and_line_number(line), line.original_string or '')
+    
+    def _print_expectations_for_subset(self, device_class, test_col_width, tests_to_run, tests_to_skip={}):
+        format_string = '{{:{width}}} {{}} {{}} {{}}'.format(width=test_col_width)
+        if tests_to_skip:
+            print ''
+            print 'Tests to skip ({})'.format(len(tests_to_skip))
+            for test in sorted(tests_to_skip):
+                self._print_expectation_line_for_test(format_string, test)
+
+        print ''
+        print 'Tests to run{} ({})'.format(' for ' + device_class if device_class else '', len(tests_to_run))
+        for test in sorted(tests_to_run):
+            self._print_expectation_line_for_test(format_string, test)
+
+    def print_expectations(self, args):
+        self._printer.write_update("Collecting tests ...")
+        try:
+            paths, test_names = self._collect_tests(args)
+        except IOError:
+            # This is raised if --test-list doesn't exist
+            return -1
+
+        self._printer.write_update("Parsing expectations ...")
+        self._expectations = test_expectations.TestExpectations(self._port, test_names, force_expectations_pass=self._options.force)
+        self._expectations.parse_all_expectations()
+
+        tests_to_run, tests_to_skip = self._prepare_lists(paths, test_names)
+        self._printer.print_found(len(test_names), len(tests_to_run), self._options.repeat_each, self._options.iterations)
+
+        test_col_width = len(max(tests_to_run + list(tests_to_skip), key=len)) + 1
+
+        default_device_tests = []
+
+        # Look for tests with custom device requirements.
+        custom_device_tests = defaultdict(list)
+        for test_file in tests_to_run:
+            custom_device = self._custom_device_for_test(test_file)
+            if custom_device:
+                custom_device_tests[custom_device].append(test_file)
+            else:
+                default_device_tests.append(test_file)
+
+        if custom_device_tests:
+            for device_class in custom_device_tests:
+                _log.debug('{} tests use device {}'.format(len(custom_device_tests[device_class]), device_class))
+
+        self._print_expectations_for_subset(None, test_col_width, tests_to_run, tests_to_skip)
+
+        for device_class in custom_device_tests:
+            device_tests = custom_device_tests[device_class]
+            self._print_expectations_for_subset(device_class, test_col_width, device_tests)
+
+        return 0
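
A rough standalone sketch of the column layout produced by
_print_expectations_for_subset above (the test name, expectation source and
expectation text below are made up for illustration, not taken from this patch):

    # Pad the first column to the longest test name, as the code above does.
    test_col_width = len('fast/viewport/viewport-1.html') + 1
    format_string = '{{:{width}}} {{}} {{}} {{}}'.format(width=test_col_width)
    # Prints: test name (left-aligned), expected behavior, expectation file
    # location, and the original TestExpectations line.
    print(format_string.format('fast/viewport/viewport-1.html', ['SKIP'],
                               'LayoutTests/platform/mac/TestExpectations:226',
                               'fast/viewport [ Skip ]'))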

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/models/test_expectations.py (205079 => 205080)


--- trunk/Tools/Scripts/webkitpy/layout_tests/models/test_expectations.py	2016-08-27 17:45:12 UTC (rev 205079)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/models/test_expectations.py	2016-08-27 18:07:52 UTC (rev 205080)
@@ -108,6 +108,7 @@
         expectation_line.filename = '<Skipped file>'
         expectation_line.line_number = 0
         expectation_line.expectations = [TestExpectationParser.PASS_EXPECTATION]
+        expectation_line.not_applicable_to_current_platform = True
         self._parse_line(expectation_line)
         return expectation_line
 
@@ -380,6 +381,7 @@
         self.comment = None
         self.matching_tests = []
         self.warnings = []
+        self.not_applicable_to_current_platform = False
 
     def is_invalid(self):
         return self.warnings and self.warnings != [TestExpectationParser.MISSING_BUG_WARNING]
@@ -387,6 +389,21 @@
     def is_flaky(self):
         return len(self.parsed_expectations) > 1
 
+    @property
+    def expected_behavior(self):
+        expectations = self.expectations
+        if "SLOW" in self.modifiers:
+            expectations += ["SLOW"]
+
+        if "SKIP" in self.modifiers:
+            expectations = ["SKIP"]
+        elif "WONTFIX" in self.modifiers:
+            expectations = ["WONTFIX"]
+        elif "CRASH" in self.modifiers:
+            expectations += ["CRASH"]
+
+        return expectations
+
     @staticmethod
     def create_passing_expectation(test):
         expectation_line = TestExpectationLine()
@@ -393,7 +410,7 @@
         expectation_line.name = test
         expectation_line.path = test
         expectation_line.parsed_expectations = set([PASS])
-        expectation_line.expectations = set(['PASS'])
+        expectation_line.expectations = ['PASS']
         expectation_line.matching_tests = [test]
         return expectation_line
 
@@ -844,6 +861,15 @@
         self._include_overrides = include_overrides
         self._expectations_to_lint = expectations_to_lint
 
+    def readable_filename_and_line_number(self, line):
+        if line.not_applicable_to_current_platform:
+            return "(skipped for this platform)"
+        if not line.filename:
+            return ''
+        if line.filename.startswith(self._port.path_from_webkit_base()):
+            return '{}:{}'.format(self._port.host.filesystem.relpath(line.filename, self._port.path_from_webkit_base()), line.line_number)
+        return '{}:{}'.format(line.filename, line.line_number)
+
     def parse_generic_expectations(self):
         if self._port.path_to_generic_test_expectations_file() in self._expectations_dict:
             if self._include_generic:
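
In rough terms, expected_behavior lets the SKIP and WONTFIX modifiers override the
parsed expectations when a line is printed. A standalone sketch of that precedence
(the free function below is hypothetical, not part of the patch):

    # Hypothetical free-function version of the precedence implemented by
    # TestExpectationLine.expected_behavior above.
    def display_expectations(expectations, modifiers):
        result = list(expectations)   # work on a copy rather than the property's list
        if "SLOW" in modifiers:
            result += ["SLOW"]
        if "SKIP" in modifiers:
            result = ["SKIP"]         # SKIP replaces everything else
        elif "WONTFIX" in modifiers:
            result = ["WONTFIX"]
        elif "CRASH" in modifiers:
            result += ["CRASH"]
        return result

    print(display_expectations(['PASS'], ['SKIP']))    # ['SKIP']
    print(display_expectations(['IMAGE'], ['SLOW']))   # ['IMAGE', 'SLOW']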

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests.py (205079 => 205080)


--- trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests.py	2016-08-27 17:45:12 UTC (rev 205079)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests.py	2016-08-27 18:07:52 UTC (rev 205080)
@@ -73,6 +73,9 @@
         print >> stderr, str(e)
         return EXCEPTIONAL_EXIT_STATUS
 
+    if options.print_expectations:
+        return _print_expectations(port, options, args, stderr)
+
     try:
         # Force all tests to use a smaller stack so that stack overflow tests can run faster.
         stackSizeInBytes = int(1.5 * 1024 * 1024)
@@ -299,6 +302,9 @@
         optparse.make_option("--lint-test-files", action=""
         default=False, help=("Makes sure the test files parse for all "
                             "configurations. Does not run any tests.")),
+        optparse.make_option("--print-expectations", action=""
+        default=False, help=("Print the expected outcome for the given test, or all tests listed in TestExpectations. "
+                            "Does not run any tests.")),
     ]))
 
     option_group_definitions.append(("Web Platform Test Server Options", [
@@ -338,6 +344,24 @@
     return option_parser.parse_args(args)
 
 
+def _print_expectations(port, options, args, logging_stream):
+    logger = logging.getLogger()
+    logger.setLevel(logging.DEBUG if options.debug_rwt_logging else logging.INFO)
+    try:
+        printer = printing.Printer(port, options, logging_stream, logger=logger)
+
+        _set_up_derived_options(port, options)
+        manager = Manager(port, options, printer)
+
+        exit_code = manager.print_expectations(args)
+        _log.debug("Printing expectations completed, Exit status: %d", exit_code)
+        return exit_code
+    except Exception as error:
+        _log.error('Error printing expectations: {}'.format(error))
+    finally:
+        printer.cleanup()
+        return -1
+
 def _set_up_derived_options(port, options):
     """Sets the options values that depend on other options values."""
     if not options.child_processes:

Modified: trunk/Tools/Scripts/webkitpy/port/ios.py (205079 => 205080)


--- trunk/Tools/Scripts/webkitpy/port/ios.py	2016-08-27 17:45:12 UTC (rev 205079)
+++ trunk/Tools/Scripts/webkitpy/port/ios.py	2016-08-27 18:07:52 UTC (rev 205080)
@@ -159,7 +159,7 @@
         best_child_process_count_for_cpu = self._executive.cpu_count() / 2
         system_process_count_limit = int(subprocess.check_output(["ulimit", "-u"]).strip())
         current_process_count = len(subprocess.check_output(["ps", "aux"]).strip().split('\n'))
-        _log.info('Process limit: %d, current #processes: %d' % (system_process_count_limit, current_process_count))
+        _log.debug('Process limit: %d, current #processes: %d' % (system_process_count_limit, current_process_count))
         maximum_simulator_count_on_this_system = (system_process_count_limit - current_process_count) // self.PROCESS_COUNT_ESTIMATE_PER_SIMULATOR_INSTANCE
         # FIXME: We should also take into account the available RAM.
 