Title: [101845] trunk/Tools
Revision: 101845
Author: [email protected]
Date: 2011-12-02 13:42:32 -0800 (Fri, 02 Dec 2011)

Log Message

[NRWT] reftest should support having multiple references per test
https://bugs.webkit.org/show_bug.cgi?id=73613

Reviewed by Dirk Pranke.

Add support for having multiple reference files for a single test.

Because a reftest succeeds when it matches at least one of its expected matches and fails when it
matches any of its expected mismatches, we compare the expected mismatches first in order to
minimize the number of reference files DRT has to open.
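The mismatch-first short-circuit described above can be sketched as follows. This is a standalone illustration, not the actual single_test_runner code; the `compare` callback and `run_reftest` helper are hypothetical stand-ins for driving DRT and diffing image hashes:

```python
# Sketch of mismatch-first reftest evaluation (hypothetical helper, not webkitpy API).
def run_reftest(test_output, reference_files, compare):
    """reference_files is a list of (expectation, reference) pairs, where
    expectation is '==' (must match) or '!=' (must not match).
    compare(test_output, reference) returns True when the outputs match."""
    # Check every mismatch reference before any match reference, so a failing
    # mismatch short-circuits without opening the remaining references.
    mismatch_first = sorted(reference_files, key=lambda pair: pair[0] != '!=')
    for expectation, reference in mismatch_first:
        matches = compare(test_output, reference)
        if expectation == '!=' and matches:
            return False  # matched a reference it must not match: fail
        if expectation == '==' and matches:
            return True   # matched one expected reference: pass
    # No mismatch matched; pass only if there was no '==' reference left unmatched.
    return not any(exp == '==' for exp, _ in reference_files)
```

For example, a test with two match references passes as soon as either one matches, while a single matching mismatch reference fails it immediately.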

* Scripts/webkitpy/layout_tests/controllers/manager.py:
(interpret_test_failures): Remove checks no longer applicable.
* Scripts/webkitpy/layout_tests/controllers/manager_unittest.py:
(ResultSummaryTest.test_interpret_test_failures): Ditto.
* Scripts/webkitpy/layout_tests/controllers/single_test_runner.py:
(SingleTestRunner.__init__): Remove a bunch of code and just call port.reference_files.
(SingleTestRunner._driver_input):
(SingleTestRunner.run):
(SingleTestRunner._run_reftest): Compare the output of the test to each reference file.
* Scripts/webkitpy/layout_tests/models/test_input.py:
(TestInput.__init__): Remove ref_file and is_mismatch_reftest because they are no longer used.
* Scripts/webkitpy/layout_tests/port/base.py:
* Scripts/webkitpy/layout_tests/port/base.py:
(Port.reference_files): Renamed from _reference_file_for. Returns a list of (expectation, filename) pairs.
(_parse_reftest_list): Now supports parsing multiple entries for a single test.
* Scripts/webkitpy/layout_tests/port/base_unittest.py:
(PortTest.test_parse_reftest_list):
* Scripts/webkitpy/layout_tests/port/test.py:
* Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py:
(MainTest.test_unexpected_failures):
(MainTest.test_reftest_should_not_use_naming_convention_if_not_listed_in_reftestlist): Renamed from test_missing_and_unexpected_results.
(EndToEndTest.test_end_to_end):
(EndToEndTest.test_reftest_with_two_notrefs): Added.

Diff

Modified: trunk/Tools/ChangeLog (101844 => 101845)


--- trunk/Tools/ChangeLog	2011-12-02 21:08:40 UTC (rev 101844)
+++ trunk/Tools/ChangeLog	2011-12-02 21:42:32 UTC (rev 101845)
@@ -1,3 +1,39 @@
+2011-12-01  Ryosuke Niwa  <[email protected]>
+
+        [NRWT] reftest should support having multiple references per test
+        https://bugs.webkit.org/show_bug.cgi?id=73613
+
+        Reviewed by Dirk Pranke.
+
+        Add support for having multiple reference files for a single test.
+
+        Because a reftest succeeds when it matches at least one of its expected matches and fails when it
+        matches any of its expected mismatches, we compare the expected mismatches first in order to
+        minimize the number of reference files DRT has to open.
+
+        * Scripts/webkitpy/layout_tests/controllers/manager.py:
+        (interpret_test_failures): Remove checks no longer applicable.
+        * Scripts/webkitpy/layout_tests/controllers/manager_unittest.py:
+        (ResultSummaryTest.test_interpret_test_failures): Ditto.
+        * Scripts/webkitpy/layout_tests/controllers/single_test_runner.py:
+        (SingleTestRunner.__init__): Remove a bunch of code and just call port.reference_files.
+        (SingleTestRunner._driver_input):
+        (SingleTestRunner.run):
+        (SingleTestRunner._run_reftest): Compare the output of the test to each reference file.
+        * Scripts/webkitpy/layout_tests/models/test_input.py:
+        (TestInput.__init__): Remove ref_file and is_mismatch_reftest because they are no longer used.
+        * Scripts/webkitpy/layout_tests/port/base.py:
+        (Port.reference_files): Renamed from _reference_file_for. Returns a list of (expectation, filename) pairs.
+        (_parse_reftest_list): Now supports parsing multiple entries for a single test.
+        * Scripts/webkitpy/layout_tests/port/base_unittest.py:
+        (PortTest.test_parse_reftest_list):
+        * Scripts/webkitpy/layout_tests/port/test.py:
+        * Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py:
+        (MainTest.test_unexpected_failures):
+        (MainTest.test_reftest_should_not_use_naming_convention_if_not_listed_in_reftestlist): Renamed from test_missing_and_unexpected_results.
+        (EndToEndTest.test_end_to_end):
+        (EndToEndTest.test_reftest_with_two_notrefs): Added.
+
 2011-12-02  Gustavo Noronha Silva  <[email protected]>
 
         Build libsoup without gnome dependencies (like keyring).

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py (101844 => 101845)


--- trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py	2011-12-02 21:08:40 UTC (rev 101844)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py	2011-12-02 21:42:32 UTC (rev 101845)
@@ -89,12 +89,10 @@
             test_dict['image_diff_percent'] = failure.diff_percent
         elif isinstance(failure, test_failures.FailureReftestMismatch):
             test_dict['is_reftest'] = True
-            if failure.reference_filename != port.reftest_expected_filename(test_name):
-                test_dict['ref_file'] = port.relative_test_filename(failure.reference_filename)
+            test_dict['ref_file'] = port.relative_test_filename(failure.reference_filename)
         elif isinstance(failure, test_failures.FailureReftestMismatchDidNotOccur):
             test_dict['is_mismatch_reftest'] = True
-            if failure.reference_filename != port.reftest_expected_mismatch_filename(test_name):
-                test_dict['ref_file'] = port.relative_test_filename(failure.reference_filename)
+            test_dict['ref_file'] = port.relative_test_filename(failure.reference_filename)
 
     if test_failures.FailureMissingResult in failure_types:
         test_dict['is_missing_text'] = True

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager_unittest.py (101844 => 101845)


--- trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager_unittest.py	2011-12-02 21:08:40 UTC (rev 101844)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager_unittest.py	2011-12-02 21:42:32 UTC (rev 101845)
@@ -316,7 +316,6 @@
             [test_failures.FailureReftestMismatch(self.port.abspath_for_test('foo/reftest-expected.html'))])
         self.assertTrue('is_reftest' in test_dict)
         self.assertFalse('is_mismatch_reftest' in test_dict)
-        self.assertFalse('ref_file' in test_dict)
 
         test_dict = interpret_test_failures(self.port, 'foo/reftest.html',
             [test_failures.FailureReftestMismatch(self.port.abspath_for_test('foo/common.html'))])
@@ -328,7 +327,6 @@
             [test_failures.FailureReftestMismatchDidNotOccur(self.port.abspath_for_test('foo/reftest-expected-mismatch.html'))])
         self.assertFalse('is_reftest' in test_dict)
         self.assertTrue(test_dict['is_mismatch_reftest'])
-        self.assertFalse('ref_file' in test_dict)
 
         test_dict = interpret_test_failures(self.port, 'foo/reftest.html',
             [test_failures.FailureReftestMismatchDidNotOccur(self.port.abspath_for_test('foo/common.html'))])

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/controllers/single_test_runner.py (101844 => 101845)


--- trunk/Tools/Scripts/webkitpy/layout_tests/controllers/single_test_runner.py	2011-12-02 21:08:40 UTC (rev 101844)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/controllers/single_test_runner.py	2011-12-02 21:42:32 UTC (rev 101845)
@@ -57,41 +57,18 @@
         self._test_name = test_input.test_name
 
         self._is_reftest = False
-        self._is_mismatch_reftest = False
-        self._reference_filename = None
+        self._reference_files = port.reference_files(self._test_name)
 
-        fs = port._filesystem
-        if test_input.ref_file:
-            self._is_reftest = True
-            self._reference_filename = fs.join(self._port.layout_tests_dir(), test_input.ref_file)
-            self._is_mismatch_reftest = test_input.is_mismatch_reftest
-            return
-
-        reftest_expected_filename = port.reftest_expected_filename(self._test_name)
-        if reftest_expected_filename and fs.exists(reftest_expected_filename):
-            self._is_reftest = True
-            self._reference_filename = reftest_expected_filename
-
-        reftest_expected_mismatch_filename = port.reftest_expected_mismatch_filename(self._test_name)
-        if reftest_expected_mismatch_filename and fs.exists(reftest_expected_mismatch_filename):
-            if self._is_reftest:
-                _log.error('One test file cannot have both match and mismatch references. Please remove either %s or %s',
-                    reftest_expected_filename, reftest_expected_mismatch_filename)
-            else:
-                self._is_reftest = True
-                self._is_mismatch_reftest = True
-                self._reference_filename = reftest_expected_mismatch_filename
-
-        if self._is_reftest:
+        if self._reference_files:
             # Detect and report a test which has a wrong combination of expectation files.
             # For example, if 'foo.html' has two expectation files, 'foo-expected.html' and
             # 'foo-expected.txt', we should warn users. One test file must be used exclusively
             # in either layout tests or reftests, but not in both.
             for suffix in ('.txt', '.png', '.wav'):
                 expected_filename = self._port.expected_filename(self._test_name, suffix)
-                if fs.exists(expected_filename):
-                    _log.error('The reftest (%s) can not have an expectation file (%s).'
-                               ' Please remove that file.', self._test_name, expected_filename)
+                if port.host.filesystem.exists(expected_filename):
+                    _log.error('%s is both a reftest and has an expected output file %s.',
+                        self._test_name, expected_filename)
 
     def _expected_driver_output(self):
         return DriverOutput(self._port.expected_text(self._test_name),
@@ -111,10 +88,10 @@
         image_hash = None
         if self._should_fetch_expected_checksum():
             image_hash = self._port.expected_checksum(self._test_name)
-        return DriverInput(self._test_name, self._timeout, image_hash, self._is_reftest)
+        return DriverInput(self._test_name, self._timeout, image_hash, bool(self._reference_files))
 
     def run(self):
-        if self._is_reftest:
+        if self._reference_files:
             if self._port.get_option('no_ref_tests') or self._options.new_baseline or self._options.reset_results:
                 result = TestResult(self._test_name)
                 result.type = test_expectations.SKIP
@@ -282,15 +259,32 @@
         return failures
 
     def _run_reftest(self):
-        driver_output1 = self._driver.run_test(self._driver_input())
-        reference_test_name = self._port.relative_test_filename(self._reference_filename)
-        driver_output2 = self._driver.run_test(DriverInput(reference_test_name, self._timeout, driver_output1.image_hash, self._is_reftest))
-        test_result = self._compare_output_with_reference(driver_output1, driver_output2)
+        test_output = self._driver.run_test(self._driver_input())
+        total_test_time = 0
+        reference_output = None
+        test_result = None
 
-        test_result_writer.write_test_result(self._port, self._test_name, driver_output1, driver_output2, test_result.failures)
-        return test_result
+        # A reftest can have multiple match references and multiple mismatch references; 
+        # the test fails if any mismatch matches and all of the matches don't match. 
+        # To minimize the number of references we have to check, we run all of the mismatches first,
+        # then the matches, and short-circuit out as soon as we can.
+        # Note that sorting by the expectation sorts "!=" before "==" so this is easy to do.
 
-    def _compare_output_with_reference(self, driver_output1, driver_output2):
+        putAllMismatchBeforeMatch = sorted
+        for expectation, reference_filename in putAllMismatchBeforeMatch(self._reference_files):
+            reference_test_name = self._port.relative_test_filename(reference_filename)
+            reference_output = self._driver.run_test(DriverInput(reference_test_name, self._timeout, test_output.image_hash, is_reftest=True))
+            test_result = self._compare_output_with_reference(test_output, reference_output, reference_filename, expectation == '!=')
+
+            if (expectation == '!=' and test_result.failures) or (expectation == '==' and not test_result.failures):
+                break
+            total_test_time += test_result.test_run_time
+
+        assert(reference_output)
+        test_result_writer.write_test_result(self._port, self._test_name, test_output, reference_output, test_result.failures)
+        return TestResult(self._test_name, test_result.failures, total_test_time + test_result.test_run_time, test_result.has_stderr)
+
+    def _compare_output_with_reference(self, driver_output1, driver_output2, reference_filename, mismatch):
         total_test_time = driver_output1.test_time + driver_output2.test_time
         has_stderr = driver_output1.has_stderr() or driver_output2.has_stderr()
         failures = []
@@ -298,15 +292,15 @@
         if failures:
             # Don't continue any more if we already have crash or timeout.
             return TestResult(self._test_name, failures, total_test_time, has_stderr)
-        failures.extend(self._handle_error(driver_output2, reference_filename=self._reference_filename))
+        failures.extend(self._handle_error(driver_output2, reference_filename=reference_filename))
         if failures:
             return TestResult(self._test_name, failures, total_test_time, has_stderr)
 
         assert(driver_output1.image_hash or driver_output2.image_hash)
 
-        if self._is_mismatch_reftest:
+        if mismatch:
             if driver_output1.image_hash == driver_output2.image_hash:
-                failures.append(test_failures.FailureReftestMismatchDidNotOccur(self._reference_filename))
+                failures.append(test_failures.FailureReftestMismatchDidNotOccur(reference_filename))
         elif driver_output1.image_hash != driver_output2.image_hash:
-            failures.append(test_failures.FailureReftestMismatch(self._reference_filename))
+            failures.append(test_failures.FailureReftestMismatch(reference_filename))
         return TestResult(self._test_name, failures, total_test_time, has_stderr)
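The `putAllMismatchBeforeMatch = sorted` trick in the hunk above relies on tuple ordering: `'!'` (ASCII 0x21) sorts before `'='` (ASCII 0x3D), so a plain `sorted` places every `('!=', …)` pair ahead of every `('==', …)` pair. A quick demonstration with hypothetical filenames:

```python
# Tuples compare element-wise, so the expectation string decides the order:
# '!' < '=' in ASCII, hence all '!=' entries sort before all '==' entries.
refs = [('==', 'a-ref.html'), ('!=', 'a-notref.html'), ('==', 'a-ref2.html')]
mismatch_first = sorted(refs)
assert mismatch_first == [('!=', 'a-notref.html'),
                          ('==', 'a-ref.html'),
                          ('==', 'a-ref2.html')]
```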

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/controllers/test_result_writer_unittest.py (101844 => 101845)


--- trunk/Tools/Scripts/webkitpy/layout_tests/controllers/test_result_writer_unittest.py	2011-12-02 21:08:40 UTC (rev 101844)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/controllers/test_result_writer_unittest.py	2011-12-02 21:42:32 UTC (rev 101845)
@@ -46,7 +46,7 @@
         port = ImageDiffTestPort()
         fs = port._filesystem
         test_name = 'failures/unexpected/reftest.html'
-        test_reference_file = fs.join(port.layout_tests_dir(), port.reftest_expected_filename(test_name))
+        test_reference_file = fs.join(port.layout_tests_dir(), 'failures/unexpected/reftest-expected.html')
         driver_output1 = DriverOutput('text1', 'image1', 'imagehash1', 'audio1')
         driver_output2 = DriverOutput('text2', 'image2', 'imagehash2', 'audio2')
         failures = [test_failures.FailureReftestMismatch(test_reference_file)]

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/models/test_input.py (101844 => 101845)


--- trunk/Tools/Scripts/webkitpy/layout_tests/models/test_input.py	2011-12-02 21:08:40 UTC (rev 101844)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/models/test_input.py	2011-12-02 21:42:32 UTC (rev 101845)
@@ -36,7 +36,7 @@
     ref_file = None
     is_mismatch_reftest = None
 
-    def __init__(self, test_name, timeout, ref_file=None, is_mismatch_reftest=False):
+    def __init__(self, test_name, timeout):
         """Holds the input parameters for a test.
         Args:
           test: name of test (not an absolute path!)
@@ -46,9 +46,6 @@
           """
         self.test_name = test_name
         self.timeout = timeout
-        if ref_file:
-            self.ref_file = ref_file
-            self.is_mismatch_reftest = is_mismatch_reftest
 
     def __repr__(self):
         return "TestInput('%s', %d)" % (self.test_name, self.timeout)

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/port/base.py (101844 => 101845)


--- trunk/Tools/Scripts/webkitpy/layout_tests/port/base.py	2011-12-02 21:08:40 UTC (rev 101844)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/port/base.py	2011-12-02 21:42:32 UTC (rev 101845)
@@ -439,28 +439,25 @@
             return None
         reftest_list_file = filesystem.read_text_file(reftest_list_path)
 
-        parsed_list = dict()
+        parsed_list = {}
         for line in reftest_list_file.split('\n'):
             line = re.sub('#.+$', '', line)
             split_line = line.split()
             if len(split_line) < 3:
                 continue
             expectation_type, test_file, ref_file = split_line
-            parsed_list[filesystem.join(test_dirpath, test_file)] = (expectation_type, filesystem.join(test_dirpath, ref_file))
+            parsed_list.setdefault(filesystem.join(test_dirpath, test_file), []).append((expectation_type, filesystem.join(test_dirpath, ref_file)))
         return parsed_list
 
-    def _reference_file_for(self, test_name, expectation):
+    def reference_files(self, test_name):
+        """Return a list of expectation (== or !=) and filename pairs"""
+
         reftest_list = self._get_reftest_list(test_name)
         if not reftest_list:
-            if expectation == '==':
-                return self.expected_filename(test_name, '.html')
-            else:
-                return self.expected_filename(test_name, '-mismatch.html')
+            expected_filenames = [('==', self.expected_filename(test_name, '.html')), ('!=', self.expected_filename(test_name, '-mismatch.html'))]
+            return [(expectation, filename) for expectation, filename in expected_filenames if self._filesystem.exists(filename)]
 
-        filename = self._filesystem.join(self.layout_tests_dir(), test_name)
-        if filename not in reftest_list or reftest_list[filename][0] != expectation:
-            return None
-        return reftest_list[filename][1]
+        return reftest_list.get(self._filesystem.join(self.layout_tests_dir(), test_name), [])
 
     def is_reftest(self, test_name):
         reftest_list = self._get_reftest_list(test_name)
@@ -470,14 +467,6 @@
         filename = self._filesystem.join(self.layout_tests_dir(), test_name)
         return filename in reftest_list
 
-    def reftest_expected_filename(self, test_name):
-        """Return the filename of reference we expect the test matches."""
-        return self._reference_file_for(test_name, '==')
-
-    def reftest_expected_mismatch_filename(self, test_name):
-        """Return the filename of reference we don't expect the test matches."""
-        return self._reference_file_for(test_name, '!=')
-
     def test_to_uri(self, test_name):
         """Convert a test name to a URI."""
         LAYOUTTEST_HTTP_DIR = "http/tests/"
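The `setdefault` change in `_parse_reftest_list` above is what lets one test accumulate several references. A minimal standalone sketch of that parsing behavior (the `parse_reftest_list` name and plain string joins are simplifications of the filesystem-based original):

```python
import re

def parse_reftest_list(text, test_dirpath):
    """Parse reftest.list lines of the form '== test.html ref.html' into a
    dict mapping each test path to a list of (expectation, reference) pairs."""
    parsed = {}
    for line in text.split('\n'):
        line = re.sub('#.+$', '', line)  # strip trailing comments
        split_line = line.split()
        if len(split_line) < 3:
            continue  # skip blank and comment-only lines
        expectation, test_file, ref_file = split_line
        # setdefault accumulates multiple entries for the same test,
        # instead of overwriting the previous reference.
        parsed.setdefault('%s/%s' % (test_dirpath, test_file), []).append(
            (expectation, '%s/%s' % (test_dirpath, ref_file)))
    return parsed
```

With three lines for the same test, all three references survive as a list, matching the expectations in the base_unittest.py hunk below.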

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/port/base_unittest.py (101844 => 101845)


--- trunk/Tools/Scripts/webkitpy/layout_tests/port/base_unittest.py	2011-12-02 21:08:40 UTC (rev 101844)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/port/base_unittest.py	2011-12-02 21:42:32 UTC (rev 101845)
@@ -328,12 +328,14 @@
         "",
         "# some comment",
         "!= test-2.html test-notref.html # more comments",
-        "== test-3.html test-ref.html"])
+        "== test-3.html test-ref.html",
+        "== test-3.html test-ref2.html",
+        "!= test-3.html test-notref.html"])
 
         reftest_list = Port._parse_reftest_list(port.host.filesystem, 'bar')
-        self.assertEqual(reftest_list, {'bar/test.html': ('==', 'bar/test-ref.html'),
-            'bar/test-2.html': ('!=', 'bar/test-notref.html'),
-            'bar/test-3.html': ('==', 'bar/test-ref.html')})
+        self.assertEqual(reftest_list, {'bar/test.html': [('==', 'bar/test-ref.html')],
+            'bar/test-2.html': [('!=', 'bar/test-notref.html')],
+            'bar/test-3.html': [('==', 'bar/test-ref.html'), ('==', 'bar/test-ref2.html'), ('!=', 'bar/test-notref.html')]})
 
 
 class VirtualTest(unittest.TestCase):

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/port/test.py (101844 => 101845)


--- trunk/Tools/Scripts/webkitpy/layout_tests/port/test.py	2011-12-02 21:08:40 UTC (rev 101844)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/port/test.py	2011-12-02 21:42:32 UTC (rev 101845)
@@ -190,6 +190,17 @@
     tests.add('reftests/foo/test.html')
     tests.add('reftests/foo/test-ref.html')
 
+    tests.add('reftests/foo/multiple-match-success.html', actual_checksum='abc', actual_image='abc')
+    tests.add('reftests/foo/multiple-match-failure.html', actual_checksum='abc', actual_image='abc')
+    tests.add('reftests/foo/multiple-mismatch-success.html', actual_checksum='abc', actual_image='abc')
+    tests.add('reftests/foo/multiple-mismatch-failure.html', actual_checksum='abc', actual_image='abc')
+    tests.add('reftests/foo/multiple-both-success.html', actual_checksum='abc', actual_image='abc')
+    tests.add('reftests/foo/multiple-both-failure.html', actual_checksum='abc', actual_image='abc')
+
+    tests.add('reftests/foo/matching-ref.html', actual_checksum='abc', actual_image='abc')
+    tests.add('reftests/foo/mismatching-ref.html', actual_checksum='def', actual_image='def')
+    tests.add('reftests/foo/second-mismatching-ref.html', actual_checksum='ghi', actual_image='ghi')
+
     # The following files shouldn't be treated as reftests
     tests.add_reftest('reftests/foo/unlistedtest.html', 'reftests/foo/unlistedtest-expected.html', same_image=True)
     tests.add('reftests/foo/reference/bar/common.html')
@@ -266,6 +277,21 @@
 
     add_file(files, 'reftests/foo/reftest.list', """
 == test.html test-ref.html
+
+== multiple-match-success.html mismatching-ref.html
+== multiple-match-success.html matching-ref.html
+== multiple-match-failure.html mismatching-ref.html
+== multiple-match-failure.html second-mismatching-ref.html
+!= multiple-mismatch-success.html mismatching-ref.html
+!= multiple-mismatch-success.html second-mismatching-ref.html
+!= multiple-mismatch-failure.html mismatching-ref.html
+!= multiple-mismatch-failure.html matching-ref.html
+== multiple-both-success.html matching-ref.html
+== multiple-both-success.html mismatching-ref.html
+!= multiple-both-success.html second-mismatching-ref.html
+== multiple-both-failure.html matching-ref.html
+!= multiple-both-failure.html second-mismatching-ref.html
+!= multiple-both-failure.html matching-ref.html
 """)
 
     # FIXME: This test was only being ignored because of missing a leading '/'.

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py (101844 => 101845)


--- trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py	2011-12-02 21:08:40 UTC (rev 101844)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py	2011-12-02 21:42:32 UTC (rev 101845)
@@ -185,6 +185,11 @@
     return test_batches
 
 
+# Update this magic number if you add an unexpected test to webkitpy.layout_tests.port.test
+# FIXME: It's nice to have a routine in port/test.py that returns this number.
+unexpected_tests_count = 11
+
+
 class MainTest(unittest.TestCase):
     def test_accelerated_compositing(self):
         # This just tests that we recognize the command line args
@@ -426,10 +431,6 @@
         self._url_opened = None
         res, out, err, user = logging_run(tests_included=True)
 
-        # Update this magic number if you add an unexpected test to webkitpy.layout_tests.port.test
-        # FIXME: It's nice to have a routine in port/test.py that returns this number.
-        unexpected_tests_count = 8
-
         self.assertEqual(res, unexpected_tests_count)
         self.assertFalse(out.empty())
         self.assertFalse(err.empty())
@@ -454,20 +455,6 @@
         self.assertTrue(json_string.find('"num_flaky":0') != -1)
         self.assertTrue(json_string.find('"num_missing":1') != -1)
 
-    def test_missing_and_unexpected_results(self):
-        # Test that we update expectations in place. If the expectation
-        # is missing, update the expected generic location.
-        fs = unit_test_filesystem()
-        res, out, err, _ = logging_run(['--no-show-results', 'reftests/foo/'], tests_included=True, filesystem=fs, record_results=True)
-        file_list = fs.written_files.keys()
-        file_list.remove('/tmp/layout-test-results/tests_run0.txt')
-        self.assertEquals(res, 1)
-        json_string = fs.read_text_file('/tmp/layout-test-results/full_results.json')
-        self.assertTrue(json_string.find('"unlistedtest.html":{"expected":"PASS","is_missing_text":true,"actual":"MISSING","is_missing_image":true}') != -1)
-        self.assertTrue(json_string.find('"num_regressions":1') != -1)
-        self.assertTrue(json_string.find('"num_flaky":0') != -1)
-        self.assertTrue(json_string.find('"num_missing":1') != -1)
-
     def test_missing_and_unexpected_results_with_custom_exit_code(self):
         # Test that we update expectations in place. If the expectation
         # is missing, update the expected generic location.
@@ -735,6 +722,17 @@
         tests_run = get_tests_run(['passes/mismatch.html'], tests_included=True, flatten_batches=True, include_reference_html=True)
         self.assertEquals(['passes/mismatch.html', 'passes/mismatch-expected-mismatch.html'], tests_run)
 
+    def test_reftest_should_not_use_naming_convention_if_not_listed_in_reftestlist(self):
+        fs = unit_test_filesystem()
+        res, out, err, _ = logging_run(['--no-show-results', 'reftests/foo/'], tests_included=True, filesystem=fs, record_results=True)
+        file_list = fs.written_files.keys()
+        file_list.remove('/tmp/layout-test-results/tests_run0.txt')
+        json_string = fs.read_text_file('/tmp/layout-test-results/full_results.json')
+        self.assertTrue(json_string.find('"unlistedtest.html":{"expected":"PASS","is_missing_text":true,"actual":"MISSING","is_missing_image":true}') != -1)
+        self.assertTrue(json_string.find('"num_regressions":4') != -1)
+        self.assertTrue(json_string.find('"num_flaky":0') != -1)
+        self.assertTrue(json_string.find('"num_missing":1') != -1)
+
     def test_additional_platform_directory(self):
         self.assertTrue(passing_run(['--additional-platform-directory', '/tmp/foo']))
         self.assertTrue(passing_run(['--additional-platform-directory', '/tmp/../foo']))
@@ -778,10 +776,6 @@
         fs = unit_test_filesystem()
         res, out, err, user = logging_run(record_results=True, tests_included=True, filesystem=fs)
 
-        # Update this magic number if you add an unexpected test to webkitpy.layout_tests.port.test
-        # FIXME: It's nice to have a routine in port/test.py that returns this number.
-        unexpected_tests_count = 8
-
         self.assertEquals(res, unexpected_tests_count)
         results = self.parse_full_results(fs.files['/tmp/layout-test-results/full_results.json'])
 
@@ -791,6 +785,26 @@
         # Check that we attempted to display the results page in a browser.
         self.assertTrue(user.opened_urls)
 
+    def test_reftest_with_two_notrefs(self):
+        # Test that we update expectations in place. If the expectation
+        # is missing, update the expected generic location.
+        fs = unit_test_filesystem()
+        res, out, err, _ = logging_run(['--no-show-results', 'reftests/foo/'], tests_included=True, filesystem=fs, record_results=True)
+        file_list = fs.written_files.keys()
+        file_list.remove('/tmp/layout-test-results/tests_run0.txt')
+        json_string = fs.read_text_file('/tmp/layout-test-results/full_results.json')
+        json = self.parse_full_results(json_string)
+        self.assertTrue("multiple-match-success.html" not in json["tests"]["reftests"]["foo"])
+        self.assertTrue("multiple-mismatch-success.html" not in json["tests"]["reftests"]["foo"])
+        self.assertTrue("multiple-both-success.html" not in json["tests"]["reftests"]["foo"])
+        self.assertEqual(json["tests"]["reftests"]["foo"]["multiple-match-failure.html"],
+            {"expected": "PASS", "ref_file": "reftests/foo/second-mismatching-ref.html", "actual": "IMAGE", 'is_reftest': True})
+        self.assertEqual(json["tests"]["reftests"]["foo"]["multiple-mismatch-failure.html"],
+            {"expected": "PASS", "ref_file": "reftests/foo/matching-ref.html", "actual": "IMAGE", "is_mismatch_reftest": True})
+        self.assertEqual(json["tests"]["reftests"]["foo"]["multiple-both-failure.html"],
+            {"expected": "PASS", "ref_file": "reftests/foo/matching-ref.html", "actual": "IMAGE", "is_mismatch_reftest": True})
+
+
 class RebaselineTest(unittest.TestCase):
     def assertBaselines(self, file_list, file, extensions, err):
         "assert that the file_list contains the baselines."""
_______________________________________________
webkit-changes mailing list
[email protected]
http://lists.webkit.org/mailman/listinfo.cgi/webkit-changes