On Tue, Jul 5, 2011 at 12:29 PM, Dirk Pranke <dpra...@chromium.org> wrote:
>
> The problem with your idea is I think what brought this idea up in the
> first place: if you just track that the test is failing using the
> test_expectations.txt file, but don't track *how* it is failing (by
> using something like the -failing.txt idea, or a new -expected.txt
> file), then you cannot tell when the failing output changes, and you
> may miss significant regressions.
>

Even -failing.txt/png won't solve this problem completely if there are
multiple regressions.  Consider the following scenario:

   1. The port has a bug R1, so my-test-failing.png is checked in for
   my-test.html.
   2. A new regression R2 is introduced, and a new my-test-failing.png is
   checked in.
   3. R2 is fixed.

Now what?  The only way to know whether R2 was really fixed is to compare
the current result with the result checked in at step 1, which requires
checking out the version of the png that was overwritten in step 2.

However, we can do the same with the existing testing framework, since we
can associate a test with a bug by adding a line like this:
BUGWK????? my-test.html = PASS
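
For illustration, a fuller (hypothetical) set of entries might look like
this -- the bug numbers and failure types here are placeholders, following
the same test_expectations.txt syntax as above:

// R1: image-only failure on this port, tracked until the bug is fixed.
BUGWK12345 my-test.html = IMAGE
// R2: a second test regressed by the same change.
BUGWK67890 other-test.html = TEXT

Once a bug is verified fixed, its line is simply removed and the test goes
back to being compared against -expected output as usual.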

- Ryosuke
_______________________________________________
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
