On 07/09/14 12:34, Aryeh Gregor wrote:
> On Fri, Sep 5, 2014 at 8:23 PM, James Graham <ja...@hoppipolla.co.uk> wrote:
>> I think Ms2ger has a better answer here, but I believe it obsoletes most
>> of them, except a few that never got submitted to web-platform-tests
>> (the editing tests are in that class, because the spec effort sort of died).
> 
> FWIW, the editing tests are still very useful for regression-testing.
> They often catch unintended behavior changes when changing editor
> code, just because they test quite a lot of code paths.  I think it
> would be very valuable for web-platform-tests to have a section for
> "tests we don't know are even vaguely correct, so don't try to use
> them to improve your conformance, but they're useful for regression
> testing anyway."  That might not help interop, but it will help QoI,
> and it makes sense for browsers to share in that department as well.

Well, it would also make sense to have interop for editing, of course :)
I would certainly be in favour of someone pushing those tests through
review so that they can land in web-platform-tests, but historically we
haven't been very successful at getting review for large submissions
where no one is actively working on the code (e.g. [1], which has a lot
of tests, mostly written by me, for document loading). I don't really
know how to fix that, other than to say "it's OK to land stuff no one
has looked at because we can probably sort it out post hoc", which has
some appeal, but also substantial downsides if no one is making even
basic checks for correct usage of the server or for patterns that are
known to result in unstable tests.

> (This is leaving aside the fact that the editing tests are
> pathologically large and should be chopped up into a lot of smaller
> files.  I have a vague idea to do this someday.  They would also
> benefit from only being run by the Mozilla testing framework on
> commits that actually touch editor/, because it's very unlikely that
> they would be affected by code changes elsewhere that don't fail other
> tests as well.  I think.)

In the long term I'm hopeful that we can end up with a much smarter
testing system that uses a combination of human input and recorded data
to prioritise the tests most likely to break for a given commit. For
example, a push to Try that only changes code in editor/, with default
settings, might first run the editing tests and then, once those passed,
run some additional tests from, say, dom, or whatever else turns out to
be likely to regress for broken patches in the changed code. On inbound
a somewhat larger set of tests would run, and then on m-c we'd do a full
test run.
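To make the idea concrete, here's a rough sketch of what that kind of
prioritisation could look like. Everything here is hypothetical — the
function name, the shape of the recorded data, and the numbers are all
made up purely to illustrate ranking suites by historical breakage:

```python
# Hypothetical sketch: rank test suites by how often they have
# historically failed when files under a given directory changed.
# The data and names are illustrative, not any real Mozilla API.
from collections import defaultdict

# Illustrative recorded data: (changed directory, suite) -> past failure count.
FAILURE_HISTORY = {
    ("editor/", "editing"): 40,
    ("editor/", "dom"): 12,
    ("editor/", "layout"): 3,
    ("dom/", "dom"): 50,
    ("dom/", "editing"): 8,
}

def prioritise_suites(changed_paths, history=FAILURE_HISTORY):
    """Return test suites ordered by how likely they are to break."""
    scores = defaultdict(int)
    for path in changed_paths:
        for (directory, suite), failures in history.items():
            if path.startswith(directory):
                scores[suite] += failures
    return sorted(scores, key=scores.get, reverse=True)

print(prioritise_suites(["editor/libeditor/nsEditor.cpp"]))
# -> ['editing', 'dom', 'layout']
```

A Try push with default settings would then run the suites in that
order, stopping early or expanding the set depending on results.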

Obviously we're a long way from that at the moment, but it's a
reasonable thing to aim for and I think that some of the pieces are
starting to come together.

[1] https://critic.hoppipolla.co.uk/r/282

_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
