Title: [146657] trunk
Revision: 146657
Author: [email protected]
Date: 2013-03-22 14:05:38 -0700 (Fri, 22 Mar 2013)

Log Message

NRWT: Enable pixel tests when retrying tests
https://bugs.webkit.org/show_bug.cgi?id=112898

Reviewed by Dirk Pranke.

Tools: 

* Scripts/webkitpy/layout_tests/controllers/manager.py:
(Manager.run): Call _force_pixel_tests_if_needed before retrying tests, and set pixel_tests back to False
if we forced pixel tests on for the retry.
(Manager._run_tests):
(Manager._clean_up_run): Fixed the capitalization of the log messages.
(Manager._force_pixel_tests_if_needed): Added.
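The flag round-trip described above can be sketched as follows. This is a minimal, hypothetical sketch: `Options`, `force_pixel_tests_if_needed`, and `retry_failures` stand in for the webkitpy objects rather than reproducing them.

```python
# Sketch of the forced-pixel-test retry flow; "Options" is a stand-in
# for webkitpy's options object, not the real class.
class Options(object):
    def __init__(self, pixel_tests):
        self.pixel_tests = pixel_tests

def force_pixel_tests_if_needed(options):
    # Mirrors the idea of Manager._force_pixel_tests_if_needed: flip the
    # flag on, and report that we did, only when pixel tests were off.
    if options.pixel_tests:
        return False
    options.pixel_tests = True
    return True

def retry_failures(options, run_tests):
    enabled_in_retry = force_pixel_tests_if_needed(options)
    run_tests()  # rerun the failing tests with pixel tests now enabled
    if enabled_in_retry:
        options.pixel_tests = False  # restore the caller's original setting
    return enabled_in_retry
```

Note that the flag is reset only when the retry path was the one to enable it, so an explicit pixel-tests run is left untouched.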

* Scripts/webkitpy/layout_tests/models/test_run_results.py:
(_interpret_test_failures): Now that this function may be called multiple times, only set
'image_diff_percent' if it doesn't already exist.

(summarize_results): When the first attempt resulted in a text failure, the second attempt
resulted in image and text failures, and we forced pixel tests to run in the retry,
treat this as a regular regression instead of flakiness.

Also update test_dict with retry_result so that image_diff_percent may be filled in if the
retry run had image diffs.
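The two test_run_results.py changes above amount to a first-writer-wins guard: calling the function again with the retry run's failures fills in image_diff_percent only when the initial run could not (because pixel tests were off in the first pass). A hedged sketch, with a hypothetical `ImageFailure` standing in for test_failures.FailureImageHashMismatch:

```python
# Sketch of the _interpret_test_failures guard described above.
# ImageFailure stands in for test_failures.FailureImageHashMismatch.
class ImageFailure(object):
    def __init__(self, diff_percent):
        self.diff_percent = diff_percent

def interpret_test_failures(test_dict, failures):
    # Keep the value from whichever run set it first; a later call with
    # the retry run's failures must not overwrite an existing value.
    if 'image_diff_percent' not in test_dict:
        for failure in failures:
            if isinstance(failure, ImageFailure):
                test_dict['image_diff_percent'] = failure.diff_percent
    return test_dict
```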

* Scripts/webkitpy/layout_tests/models/test_run_results_unittest.py:
(summarized_results):

* Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py:
(parse_full_results): Moved out of EndToEndTest to be used in test_retrying_force_pixel_tests.
Also removed some dead code.

(RunTest.test_retrying_force_pixel_tests): Added. Assert that we generate pixel results in
retries, and only in retries, when pixel tests are turned off. Also assert that image_diff_percent
is set and pixel_tests_enabled is set to false.
(EndToEndTest.test_reftest_with_two_notrefs):

LayoutTests: 

Link to images and image diff in retries when they're only available in retries.

* fast/harness/resources/results-test.js: Added a test case.
* fast/harness/results-expected.txt:
* fast/harness/results.html:
(imageResultsCell): Extracted from tableRow.
(tableRow): Split the actual result into two tokens (first attempt and retry),
and then check if the image failure was detected in the first attempt. If not,
add hyperlinks to the actual results in the retry run.


Diff

Modified: trunk/LayoutTests/ChangeLog (146656 => 146657)


--- trunk/LayoutTests/ChangeLog	2013-03-22 21:00:21 UTC (rev 146656)
+++ trunk/LayoutTests/ChangeLog	2013-03-22 21:05:38 UTC (rev 146657)
@@ -1,3 +1,20 @@
+2013-03-22  Ryosuke Niwa  <[email protected]>
+
+        NRWT: Enable pixel tests when retrying tests
+        https://bugs.webkit.org/show_bug.cgi?id=112898
+
+        Reviewed by Dirk Pranke.
+
+        Link to images and image diff in retries when they're only available in retries.
+
+        * fast/harness/resources/results-test.js: Added a test case.
+        * fast/harness/results-expected.txt:
+        * fast/harness/results.html:
+        (imageResultsCell): Extracted from tableRow.
+        (tableRow): Split the actual result into two tokens (first attempt and retry),
+        and then check if the image failure was detected in the first attempt. If not,
+        add hyperlinks to the actual results in the retry run.
+
 2013-03-22  Philip Rogers  <[email protected]>
 
         Update svg/custom/marker-orient-auto-expected test expectation.

Modified: trunk/LayoutTests/fast/harness/resources/results-test.js (146656 => 146657)


--- trunk/LayoutTests/fast/harness/resources/results-test.js	2013-03-22 21:00:21 UTC (rev 146656)
+++ trunk/LayoutTests/fast/harness/resources/results-test.js	2013-03-22 21:05:38 UTC (rev 146657)
@@ -772,6 +772,13 @@
         assertTrue(flaggedTestsTextbox.innerText == 'foo/bar1.html\nfoo/bar2.html');
     });
 
+    results = mockResults();
+    results.tests['foo/bar-image.html'] = mockExpectation('PASS', 'TEXT IMAGE+TEXT');
+    results.pixel_tests_enabled = false;
+    runTest(results, function() {
+        assertTrue(document.querySelector('tbody td:nth-child(3) a').getAttribute('href') == 'retries/foo/bar-image-diffs.html');
+    });
+
     document.body.innerHTML = '<pre>' + g_log.join('\n') + '</pre>';
 }
 

Modified: trunk/LayoutTests/fast/harness/results-expected.txt (146656 => 146657)


--- trunk/LayoutTests/fast/harness/results-expected.txt	2013-03-22 21:00:21 UTC (rev 146656)
+++ trunk/LayoutTests/fast/harness/results-expected.txt	2013-03-22 21:05:38 UTC (rev 146657)
@@ -251,3 +251,4 @@
 TEST-45: PASS
 TEST-45: PASS
 TEST-45: PASS
+TEST-46: PASS

Modified: trunk/LayoutTests/fast/harness/results.html (146656 => 146657)


--- trunk/LayoutTests/fast/harness/results.html	2013-03-22 21:00:21 UTC (rev 146656)
+++ trunk/LayoutTests/fast/harness/results.html	2013-03-22 21:05:38 UTC (rev 146657)
@@ -563,6 +563,37 @@
     return html;
 }
 
+function imageResultsCell(testObject, testPrefix, actual) {
+    var row = '';
+
+    if (actual.indexOf('IMAGE') != -1) {
+        globalState().hasImageFailures = true;
+
+        if (testObject.reftest_type && testObject.reftest_type.indexOf('!=') != -1) {
+            row += resultLink(testPrefix, '-expected-mismatch.html', 'ref mismatch html');
+            row += resultLink(testPrefix, '-actual.png', 'actual');
+        } else {
+            if (testObject.reftest_type && testObject.reftest_type.indexOf('==') != -1) {
+                row += resultLink(testPrefix, '-expected.html', 'ref html');
+            }
+            if (globalState().shouldToggleImages) {
+                row += resultLink(testPrefix, '-diffs.html', 'images');
+            } else {
+                row += resultLink(testPrefix, '-expected.png', 'expected');
+                row += resultLink(testPrefix, '-actual.png', 'actual');
+            }
+
+            var diff = testObject.image_diff_percent;
+            row += resultLink(testPrefix, '-diff.png', 'diff (' + diff + '%)');
+        }
+    }
+
+    if (actual.indexOf('MISSING') != -1 && testObject.is_missing_image)
+        row += resultLink(testPrefix, '-actual.png', 'png result');
+
+    return row;
+}
+
 function tableRow(testObject)
 {    
     var row = '<tbody'
@@ -574,66 +605,39 @@
 
     row += '<td>' + testLinkWithExpandButton(testObject.name) + '</td>';
 
-    var test_prefix = stripExtension(testObject.name);
+    var testPrefix = stripExtension(testObject.name);
     row += '<td>';
     
     var actual = testObject.actual;
-    var expected = testObject.expected;
     if (actual.indexOf('TEXT') != -1) {
         globalState().hasTextFailures = true;
-        row += textResultLinks(test_prefix);
+        row += textResultLinks(testPrefix);
     }
     
     if (actual.indexOf('AUDIO') != -1) {
-        row += resultLink(test_prefix, '-expected.wav', 'expected audio');
-        row += resultLink(test_prefix, '-actual.wav', 'actual audio');
+        row += resultLink(testPrefix, '-expected.wav', 'expected audio');
+        row += resultLink(testPrefix, '-actual.wav', 'actual audio');
     }
 
     if (actual.indexOf('MISSING') != -1) {
-        expected = '';
         if (testObject.is_missing_audio)
-            row += resultLink(test_prefix, '-actual.wav', 'audio result');
+            row += resultLink(testPrefix, '-actual.wav', 'audio result');
         if (testObject.is_missing_text)
-            row += resultLink(test_prefix, '-actual.txt', 'result');
+            row += resultLink(testPrefix, '-actual.txt', 'result');
     }
 
-    row += '</td><td>';
+    var actualTokens = actual.split(/\s+/);
+    var cell = imageResultsCell(testObject, testPrefix, actualTokens[0]);
+    if (!cell && actualTokens.length > 1)
+        cell = imageResultsCell(testObject, 'retries/' + testPrefix, actualTokens[1]);
 
-    if (actual.indexOf('IMAGE') != -1) {
-        globalState().hasImageFailures = true;
+    row += '</td><td>' + cell + '</td>';
 
-        if (testObject.reftest_type && testObject.reftest_type.indexOf('!=') != -1) {
-            row += resultLink(test_prefix, '-expected-mismatch.html', 'ref mismatch html');
-            row += resultLink(test_prefix, '-actual.png', 'actual');
-        } else {
-            if (testObject.reftest_type && testObject.reftest_type.indexOf('==') != -1) {
-                row += resultLink(test_prefix, '-expected.html', 'ref html');
-            }
-            if (globalState().shouldToggleImages) {
-                row += resultLink(test_prefix, '-diffs.html', 'images');
-            } else {
-                row += resultLink(test_prefix, '-expected.png', 'expected');
-                row += resultLink(test_prefix, '-actual.png', 'actual');
-            }
-
-            var diff = testObject.image_diff_percent;
-            row += resultLink(test_prefix, '-diff.png', 'diff (' + diff + '%)');
-        }
-    }
-
-    if (actual.indexOf('MISSING') != -1) {
-        expected = '';
-        if (testObject.is_missing_image)
-            row += resultLink(test_prefix, '-actual.png', 'png result');
-    }
-
-    row += '</td>';
-
     if (globalState().results.uses_expectations_file || actual.indexOf(' ') != -1)
         row += '<td>' + actual + '</td>';
 
     if (globalState().results.uses_expectations_file)
-        row += '<td>' + expected + '</td>';
+        row += '<td>' + (actual.indexOf('MISSING') == -1 ? testObject.expected : '') + '</td>';
 
     row += '</tr></tbody>';
     return row;

Modified: trunk/Tools/ChangeLog (146656 => 146657)


--- trunk/Tools/ChangeLog	2013-03-22 21:00:21 UTC (rev 146656)
+++ trunk/Tools/ChangeLog	2013-03-22 21:05:38 UTC (rev 146657)
@@ -1,3 +1,40 @@
+2013-03-22  Ryosuke Niwa  <[email protected]>
+
+        NRWT: Enable pixel tests when retrying tests
+        https://bugs.webkit.org/show_bug.cgi?id=112898
+
+        Reviewed by Dirk Pranke.
+
+        * Scripts/webkitpy/layout_tests/controllers/manager.py:
+        (Manager.run): Call _force_pixel_tests_if_needed before retrying tests, and set pixel_tests back to False
+        if we forced pixel tests on for the retry.
+        (Manager._run_tests):
+        (Manager._clean_up_run): Fixed the capitalization of the log messages.
+        (Manager._force_pixel_tests_if_needed): Added.
+
+        * Scripts/webkitpy/layout_tests/models/test_run_results.py:
+        (_interpret_test_failures): Now that this function may be called multiple times, only set
+        'image_diff_percent' if it doesn't already exist.
+
+        (summarize_results): When the first attempt resulted in a text failure, the second attempt
+        resulted in image and text failures, and we forced pixel tests to run in the retry,
+        treat this as a regular regression instead of flakiness.
+
+        Also update test_dict with retry_result so that image_diff_percent may be filled in if the
+        retry run had image diffs.
+
+        * Scripts/webkitpy/layout_tests/models/test_run_results_unittest.py:
+        (summarized_results):
+
+        * Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py:
+        (parse_full_results): Moved out of EndToEndTest to be used in test_retrying_force_pixel_tests.
+        Also removed some dead code.
+
+        (RunTest.test_retrying_force_pixel_tests): Added. Assert that we generate pixel results in
+        retries, and only in retries, when pixel tests are turned off. Also assert that image_diff_percent
+        is set and pixel_tests_enabled is set to false.
+        (EndToEndTest.test_reftest_with_two_notrefs):
+
 2013-03-22  Tim Horton  <[email protected]>
 
         Make it possible to run layout tests on Retina MBP

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py (146656 => 146657)


--- trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py	2013-03-22 21:00:21 UTC (rev 146656)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py	2013-03-22 21:05:38 UTC (rev 146657)
@@ -192,17 +192,23 @@
             return test_run_results.RunDetails(exit_code=-1)
 
         start_time = time.time()
+        enabled_pixel_tests_in_retry = False
         try:
             initial_results = self._run_tests(tests_to_run, tests_to_skip, self._options.repeat_each, self._options.iterations,
                 int(self._options.child_processes), retrying=False)
 
             tests_to_retry = self._tests_to_retry(initial_results, include_crashes=self._port.should_retry_crashes())
             if self._options.retry_failures and tests_to_retry and not initial_results.interrupted:
+                enabled_pixel_tests_in_retry = self._force_pixel_tests_if_needed()
+
                 _log.info('')
                 _log.info("Retrying %d unexpected failure(s) ..." % len(tests_to_retry))
                 _log.info('')
                 retry_results = self._run_tests(tests_to_retry, tests_to_skip=set(), repeat_each=1, iterations=1,
                     num_workers=1, retrying=True)
+
+                if enabled_pixel_tests_in_retry:
+                    self._options.pixel_tests = False
             else:
                 retry_results = None
         finally:
@@ -218,7 +224,7 @@
             self._look_for_new_crash_logs(retry_results, start_time)
 
         _log.debug("summarizing results")
-        summarized_results = test_run_results.summarize_results(self._port, self._expectations, initial_results, retry_results)
+        summarized_results = test_run_results.summarize_results(self._port, self._expectations, initial_results, retry_results, enabled_pixel_tests_in_retry)
         self._printer.print_results(end_time - start_time, initial_results, summarized_results)
 
         if not self._options.dry_run:
@@ -243,20 +249,29 @@
             for test in tests_to_run:
                 for _ in xrange(repeat_each):
                     test_inputs.append(self._test_input_for_file(test))
-
         return self._runner.run_tests(self._expectations, test_inputs, tests_to_skip, num_workers, needs_http, needs_websockets, retrying)
 
     def _clean_up_run(self):
-        """Restores the system after we're done running tests."""
-        _log.debug("flushing stdout")
+        _log.debug("Flushing stdout")
         sys.stdout.flush()
-        _log.debug("flushing stderr")
+        _log.debug("Flushing stderr")
         sys.stderr.flush()
-        _log.debug("stopping helper")
+        _log.debug("Stopping helper")
         self._port.stop_helper()
-        _log.debug("cleaning up port")
+        _log.debug("Cleaning up port")
         self._port.clean_up_test_run()
 
+    def _force_pixel_tests_if_needed(self):
+        if self._options.pixel_tests:
+            return False
+
+        _log.debug("Restarting helper")
+        self._port.stop_helper()
+        self._options.pixel_tests = True
+        self._port.start_helper()
+
+        return True
+
     def _look_for_new_crash_logs(self, run_results, start_time):
         """Since crash logs can take a long time to be written out if the system is
            under stress do a second pass at the end of the test run.

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results.py (146656 => 146657)


--- trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results.py	2013-03-22 21:00:21 UTC (rev 146656)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results.py	2013-03-22 21:05:38 UTC (rev 146657)
@@ -109,14 +109,15 @@
     if test_failures.FailureMissingImage in failure_types or test_failures.FailureMissingImageHash in failure_types:
         test_dict['is_missing_image'] = True
 
-    for failure in failures:
-        if isinstance(failure, test_failures.FailureImageHashMismatch) or isinstance(failure, test_failures.FailureReftestMismatch):
-            test_dict['image_diff_percent'] = failure.diff_percent
+    if 'image_diff_percent' not in test_dict:
+        for failure in failures:
+            if isinstance(failure, test_failures.FailureImageHashMismatch) or isinstance(failure, test_failures.FailureReftestMismatch):
+                test_dict['image_diff_percent'] = failure.diff_percent
 
     return test_dict
 
 
-def summarize_results(port_obj, expectations, initial_results, retry_results):
+def summarize_results(port_obj, expectations, initial_results, retry_results, enabled_pixel_tests_in_retry):
     """Returns a dictionary containing a summary of the test runs, with the following fields:
         'version': a version indicator
         'fixable': The number of fixable tests (NOW - PASS)
@@ -187,6 +188,11 @@
             elif retry_results:
                 retry_result_type = retry_results.unexpected_results_by_name[test_name].type
                 if result_type != retry_result_type:
-                    actual.append(keywords[retry_result_type])
-                    num_flaky += 1
+                    if enabled_pixel_tests_in_retry and result_type == test_expectations.TEXT and retry_result_type == test_expectations.IMAGE_PLUS_TEXT:
+                        num_regressions += 1
+                    else:
+                        actual.append(keywords[retry_result_type])
+                        num_flaky += 1
                 else:
@@ -199,6 +205,11 @@
 
         test_dict.update(_interpret_test_failures(result.failures))
 
+        if retry_results:
+            retry_result = retry_results.unexpected_results_by_name.get(test_name)
+            if retry_result:
+                test_dict.update(_interpret_test_failures(retry_result.failures))
+
         # Store test hierarchically by directory. e.g.
         # foo/bar/baz.html: test_dict
         # foo/bar/baz1.html: test_dict

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results_unittest.py (146656 => 146657)


--- trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results_unittest.py	2013-03-22 21:00:21 UTC (rev 146656)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results_unittest.py	2013-03-22 21:05:38 UTC (rev 146657)
@@ -84,7 +84,7 @@
     else:
         retry_results = None
 
-    return test_run_results.summarize_results(port, initial_results.expectations, initial_results, retry_results)
+    return test_run_results.summarize_results(port, initial_results.expectations, initial_results, retry_results, enabled_pixel_tests_in_retry=False)
 
 
 class InterpretTestFailuresTest(unittest.TestCase):

Modified: trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py (146656 => 146657)


--- trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py	2013-03-22 21:00:21 UTC (rev 146656)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py	2013-03-22 21:05:38 UTC (rev 146657)
@@ -158,6 +158,12 @@
     return all_results
 
 
+def parse_full_results(full_results_text):
+    json_to_eval = full_results_text.replace("ADD_RESULTS(", "").replace(");", "")
+    compressed_results = json.loads(json_to_eval)
+    return compressed_results
+
+
 class StreamTestingMixin(object):
     def assertContains(self, stream, string):
         self.assertTrue(string in stream.getvalue())
@@ -649,6 +655,21 @@
         self.assertTrue(host.filesystem.exists('/tmp/layout-test-results/failures/flaky/text-actual.txt'))
         self.assertFalse(host.filesystem.exists('retries'))
 
+    def test_retrying_force_pixel_tests(self):
+        host = MockHost()
+        res, err, _ = logging_run(['--no-pixel-tests', 'failures/unexpected/text-image-checksum.html'], tests_included=True, host=host)
+        self.assertEqual(res, 1)
+        self.assertTrue('Retrying' in err.getvalue())
+        self.assertTrue(host.filesystem.exists('/tmp/layout-test-results/failures/unexpected/text-image-checksum-actual.txt'))
+        self.assertFalse(host.filesystem.exists('/tmp/layout-test-results/failures/unexpected/text-image-checksum-actual.png'))
+        self.assertTrue(host.filesystem.exists('/tmp/layout-test-results/retries/failures/unexpected/text-image-checksum-actual.txt'))
+        self.assertTrue(host.filesystem.exists('/tmp/layout-test-results/retries/failures/unexpected/text-image-checksum-actual.png'))
+        json_string = host.filesystem.read_text_file('/tmp/layout-test-results/full_results.json')
+        json = parse_full_results(json_string)
+        self.assertEqual(json["tests"]["failures"]["unexpected"]["text-image-checksum.html"],
+            {"expected": "PASS", "actual": "TEXT IMAGE+TEXT", "image_diff_percent": 1})
+        self.assertFalse(json["pixel_tests_enabled"])
+
     def test_retrying_uses_retries_directory(self):
         host = MockHost()
         res, err, _ = logging_run(['--debug-rwt-logging', 'failures/unexpected/text-image-checksum.html'], tests_included=True, host=host)
@@ -825,25 +846,15 @@
 
 
 class EndToEndTest(unittest.TestCase):
-    def parse_full_results(self, full_results_text):
-        json_to_eval = full_results_text.replace("ADD_RESULTS(", "").replace(");", "")
-        compressed_results = json.loads(json_to_eval)
-        return compressed_results
-
-        # Check that we recorded the test run times and ordering. Note that
-        # pretty much none of the actual values can be guaranteed, but at least
-        # we can test that they're there.
-        stats = json.loads(host.filesystem.read_text_file('/tmp/layout-test-results/stats.json'))
-        self.assertEqual(len(stats['http']['tests']['passes']['image.html']['results']), 5)
-
     def test_reftest_with_two_notrefs(self):
         # Test that we update expectations in place. If the expectation
         # is missing, update the expected generic location.
         host = MockHost()
         res, _, _ = logging_run(['--no-show-results', 'reftests/foo/'], tests_included=True, host=host)
         file_list = host.filesystem.written_files.keys()
+
         json_string = host.filesystem.read_text_file('/tmp/layout-test-results/full_results.json')
-        json = self.parse_full_results(json_string)
+        json = parse_full_results(json_string)
         self.assertTrue("multiple-match-success.html" not in json["tests"]["reftests"]["foo"])
         self.assertTrue("multiple-mismatch-success.html" not in json["tests"]["reftests"]["foo"])
         self.assertTrue("multiple-both-success.html" not in json["tests"]["reftests"]["foo"])
_______________________________________________
webkit-changes mailing list
[email protected]
https://lists.webkit.org/mailman/listinfo/webkit-changes
