They'll sometimes get disabled due to WebKit updates; other times they'll
get disabled for other reasons. For example, we changed the valgrind bots
to fail noisily if individual tests fail, regardless of whether they
actually generate valgrind errors - this meant that previously-silent
worker test failures suddenly started causing redness, leading sheriffs to
disable them.

But, yeah, it'd be nice to have the ui_tests run by the webkit.org bots,
although in the case of flaky tests I'm not sure that'd help (I'm not sure
the gardener would pick up on a 25% failure rate on the FYI bots).

At this point, I'm just trying to figure out what people are *supposed* to
do when disabling tests - should they always log a bug and add a comment?
I'd say yes: if you have time to babysit a CL through the build bot
process, you have time to log a bug and add a comment, even if you don't
know the correct owner.
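For what it's worth, here's a rough sketch of the convention I have in mind
(the fixture/test names and the bug number are made up, and I've stubbed out
gtest's TEST_F so the snippet stands alone): disable the test only on the
affected platform via the usual MAYBE_/DISABLED_ macro dance, with a comment
pointing at the tracking bug and an owner TODO right next to it.

```cpp
// Minimal stand-in for gtest's TEST_F so this sketch compiles on its own.
// The extra level of indirection lets the MAYBE_ macro expand before the
// token paste, just as it does with the real gtest macros.
#define TEST_F_HELPER(fixture, name) void fixture##_##name()
#define TEST_F(fixture, name) TEST_F_HELPER(fixture, name)

// Flaky under valgrind on Linux only, so only disable it there.
// Tracking bug: http://crbug.com/NNNNN (hypothetical bug number).
// TODO(owner): re-enable once the upstream fix lands.
#if defined(OS_LINUX)
#define MAYBE_WorkerFastLayoutTests DISABLED_WorkerFastLayoutTests
#else
#define MAYBE_WorkerFastLayoutTests WorkerFastLayoutTests
#endif

static bool g_test_body_ran = false;

TEST_F(WorkerTest, MAYBE_WorkerFastLayoutTests) {
  // Real test body would go here; this sketch just records that it ran.
  g_test_body_ran = true;
}
```

That way the next person to touch the file knows why the test is off, on
which platforms, and which bug to check before re-enabling it.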

-atw

On Mon, Dec 14, 2009 at 10:28 AM, Darin Fisher <da...@chromium.org> wrote:

> I presume it is frequently the case that these tests get disabled after a
> WebKit update?
>
> Only the "Linux Perf (webkit.org)" bot appears to run the ui_tests.
>  Perhaps that is not sufficient?
>
> -Darin
>
>
>
> On Mon, Dec 14, 2009 at 8:54 AM, Drew Wilson <atwil...@chromium.org>wrote:
>
>> I spent a few hours last week and this weekend trying to untangle the mess
>> that was the worker ui_tests. The problem is that the tests have been
>> sporadically flaky for various reasons, so various sheriffs/good
>> samaritans have disabled them to keep the trees green. Some of the tests
>> were disabled due to failing under valgrind, but now that we have a way to
>> disable tests specifically for valgrind, and some of the worker bugs have
>> been fixed upstream, I figured it was a good time to clean house a bit and
>> re-enable some of the tests.
>>
>> While going through the worker_uitest file, I found it was hard to
>> figure out why a given test was disabled, so it was unclear whether it
>> was safe to re-enable - some of the tests had comments pointing at a
>> tracking bug, but some didn't, and it was a pain to track down the root
>> cause, especially since the specific lines of code had sometimes been
>> touched multiple times (adding new platforms to the disabled list, etc.).
>>
>> Anyhow, what are our best practices for disabling tests? I think ideally
>> we'd always log a tracking bug and add a comment, akin to what we do in the
>> test_expectations file for layout tests. That might be too much of a burden
>> on sheriffs, though, so an alternative is for people who work on various
>> areas (like workers) to track checkins to the associated files and add some
>> documentation after the fact. Or we can just live with the SVN logs, in
>> which case I need to get better at tracking through the SVN/git history of
>> the various files :)
>>
>> -atw
>>
>> --
>> Chromium Developers mailing list: chromium-dev@googlegroups.com
>> View archives, change email options, or unsubscribe:
>> http://groups.google.com/group/chromium-dev
>>
>
