On 8/31/06, Jean-Charles VERDIE <[EMAIL PROTECTED]> wrote:
We are considering a fallback solution: drop the goal of fixing the problem
itself, and instead generate "linux"-based -expected.txt files and modify
DumpRenderTree to choose either the OS X- or Linux-based expected files,
depending on the platform we are testing on.

This solution has a major drawback: it would force the community to
maintain two versions of the expected files for every single test.

Before starting to work in this direction, I'd appreciate some feedback on
this solution, and maybe, hopefully, other ideas on how to remove this
roadblock.

It would probably be an ugly hack, but looking at the diff, it looks
like all the differences are in sizes and fall within 1-2 pixels. How
about a fuzz factor (either as a percentage of the original size or
in pixels), accepting failures for sizes originating from text
rendering if they fall within the fuzz factor? The fuzz factor would
best be determined empirically (i.e. by comparing current Linux vs. Mac
differences and choosing a factor that makes them pass).

The thinking is that a major breakage would still be detected, because it
would fall outside the fuzz factor.
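To make the idea concrete, here is a minimal sketch of such a fuzzy
comparison in Python. It is purely hypothetical (not actual DumpRenderTree
code): it assumes the dumps are line-oriented text where sizes and positions
appear as plain numbers, and the `FUZZ_PIXELS` value of 2 is an arbitrary
placeholder for the empirically determined factor.

```python
import re

FUZZ_PIXELS = 2  # hypothetical tolerance; would be tuned empirically

_NUM = re.compile(r'-?\d+(?:\.\d+)?')

def lines_match(expected, actual, fuzz=FUZZ_PIXELS):
    """Compare two dump lines, letting numeric fields (sizes, positions)
    differ by at most `fuzz` pixels while text must match exactly."""
    # The non-numeric skeleton of the line must be identical.
    if _NUM.sub('#', expected) != _NUM.sub('#', actual):
        return False
    exp_nums = _NUM.findall(expected)
    act_nums = _NUM.findall(actual)
    if len(exp_nums) != len(act_nums):
        return False
    return all(abs(float(e) - float(a)) <= fuzz
               for e, a in zip(exp_nums, act_nums))

def dumps_match(expected, actual, fuzz=FUZZ_PIXELS):
    """Compare whole dumps line by line; a structural change (different
    line count) is always a real failure."""
    exp_lines = expected.splitlines()
    act_lines = actual.splitlines()
    if len(exp_lines) != len(act_lines):
        return False
    return all(lines_match(e, a, fuzz)
               for e, a in zip(exp_lines, act_lines))
```

With this scheme a line like `RenderText {#text} at (0,0) size 44x18` would
match `... size 45x19` (1 px off) but not `... size 50x18`, so gross layout
breakage still fails the test.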

-- kjk
_______________________________________________
webkit-dev mailing list
webkit-dev@opendarwin.org
http://www.opendarwin.org/mailman/listinfo/webkit-dev
