On Tue, Jul 5, 2011 at 1:21 PM, Ryosuke Niwa <rn...@webkit.org> wrote:
> On Tue, Jul 5, 2011 at 12:29 PM, Dirk Pranke <dpra...@chromium.org> wrote:
>>
>> The problem with your idea is, I think, what brought this idea up in the
>> first place: if you just track that the test is failing using the
>> test_expectations.txt file, but don't track *how* it is failing (by
>> using something like the -failing.txt idea, or a new -expected.txt
>> file), then you cannot tell when the failing output changes, and you
>> may miss significant regressions.
>
> Even -failing.txt/png won't solve this problem completely if there are
> multiple regressions. Consider the following scenario:
>
> 1. The port has bug R1, so my-test-failing.png is checked in for my-test.html.
> 2. A new regression R2 is introduced, and a new my-test-failing.png is checked in.
> 3. R2 is fixed.
>
> Now what? The only way to know whether R2 was really fixed is by comparing
> the current result with the result checked in at step 1, which means checking
> out the png that was overwritten in step 2 from version-control history.
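To make that last step concrete: verifying the fix for R2 means digging the
step-1 baseline back out of version-control history and diffing the current
output against it by hand. A minimal sketch, assuming a git checkout of the
tree; the paths and revision below are hypothetical, and this is not how the
WebKit test tooling itself works:

    import filecmp
    import subprocess

    BASELINE = "LayoutTests/platform/chromium/my-test-failing.png"  # hypothetical path
    STEP1_REV = "abc1234"  # hypothetical: the last commit before step 2,
                           # found via `git log -- <path to baseline>`

    # Pull the step-1 baseline (overwritten in step 2) out of history.
    old_png = subprocess.check_output(
        ["git", "show", "%s:%s" % (STEP1_REV, BASELINE)])
    with open("step1-baseline.png", "wb") as f:
        f.write(old_png)

    # Matching the step-1 baseline means R2 is really fixed; matching the
    # currently checked-in (step-2) baseline would only mean R1+R2 persist.
    CURRENT = "layout-test-results/my-test-actual.png"  # hypothetical output path
    if filecmp.cmp("step1-baseline.png", CURRENT, shallow=False):
        print("output matches the step-1 baseline: R2 looks fixed")
    else:
        print("output differs from the step-1 baseline: R2 may persist")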
This is true ... you head down the path of tracking a graph of
"fails like X while bug Y is still broken" states, and it's not clear
where the end of that path is.

> However, we can do the same with the existing testing framework, since we can
> associate a test with a bug by adding a line like this:
>
> BUGWK????? my-test.html = PASS

You lost me here ...

-- Dirk