Since I just excluded a lot of tests under valgrind last week, I'd like to
give some background here.
About a week ago, we noticed that we were ignoring failing tests in our
valgrind runs, which means that the test results were not accurate. Even
worse, the unit tests under valgrind were crashing in the middle, thus
skipping a lot of tests. Last Friday, I landed the change to fix the
valgrind tests, and it uncovered a lot of failing tests as well as valgrind
errors. I made a series of changes to make the valgrind tests green with
placeholder bugs, and I am going to file a bug for each group to keep track
of them. I believe this is an improvement because we now know which tests
are failing.

I agree that we shouldn't have a lot of disabled tests or suppressions,
because they're easily forgotten or left untracked. One thing we could do
better is to make the test results more visible, say by publishing stats on
disabled/flaky tests, suppressions, and test coverage. That would help us
understand how good (or bad) the situation is. Another thing is that, since
we now have many of these, it may make sense to have a FixIt day or week to
reduce their number. Just my 2c.
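
A rough sketch of what such stats could look like (a hypothetical script,
not an existing tool): gtest-style tests get disabled by renaming them with
a DISABLED_ prefix (and FLAKY_ marks flaky ones), so a crude census is just
a regex scan over the test sources:

```python
# Hypothetical sketch: count gtest-style tests by status. A test is
# disabled by renaming it with a DISABLED_ prefix, and FLAKY_ marks
# flaky ones, so a crude census is a regex scan over test sources.
import re

# Stand-in for text read from the real *.cc test files.
sample_source = """
TEST_F(WorkerTest, DISABLED_MultipleWorkers) {}
TEST_F(WorkerTest, FLAKY_WorkerFastLayoutTests) {}
TEST_F(WorkerTest, SingleWorker) {}
"""

# Match TEST(...) and TEST_F(...), capturing an optional status prefix.
TEST_RE = re.compile(r'TEST(?:_F)?\(\s*\w+\s*,\s*(DISABLED_|FLAKY_)?\w+')

counts = {"DISABLED_": 0, "FLAKY_": 0, "enabled": 0}
for m in TEST_RE.finditer(sample_source):
    counts[m.group(1) or "enabled"] += 1

print(counts)  # {'DISABLED_': 1, 'FLAKY_': 1, 'enabled': 1}
```

Running something like this over the real test tree (and similarly counting
entries in the valgrind suppression files) would give the numbers to put on
a dashboard.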

- oshima

On Mon, Dec 14, 2009 at 10:24 AM, Scott Hess <sh...@chromium.org> wrote:

> Additionally, keep in mind that obscure bugs may take longer to find
> an owner than the buildbots keep logs.  So a bug like "Disable to
> green the buildbots, <link>" is not sufficient, include enough info so
> that someone can figure out WHY you're disabling things.
>
> Also, last time I was looking through some valgrind suppressions, I
> found that often enough the new suppression NEVER fired.  I know this
> is asking a lot, but if you throw down some new suppressions during
> your tour of duty, it would be great if you checked back a few days or
> a week later and, if they aren't being applied, backed them out (maybe
> with a current sheriff as reviewer in case they pop back up).
> [valgrind prints out the list of suppressions that fired at the end
> of the run.]
>
> -scott
>
>
> On Mon, Dec 14, 2009 at 10:00 AM, John Abd-El-Malek <j...@chromium.org>
> wrote:
> > (this topic has been on my mind a lot, so here's my vent :) )
> > I think we shouldn't allow any test to be disabled without a bug to
> > track it that includes an initially assigned owner.
> > I've seen it happen too often that a test gets disabled to quickly
> > turn the tree green, and it stays disabled for a long time without
> > anyone noticing.  This destroys the whole point of tests, since it's
> > trivial to keep a tree green by disabling each test when it fails.
> > It's also a large burden to expect that people monitor checkins to
> > files they're familiar with and create bugs when they find a
> > disabled test.  It's harder to enforce this, and it's also unclear
> > who should be doing this when multiple people touch an area.
> > Filing a bug and looking at the annotations on viewvc to pick an
> > initial owner shouldn't take more than a minute or two, so I don't
> > think it's a large burden.
> > On Mon, Dec 14, 2009 at 8:54 AM, Drew Wilson <atwil...@chromium.org>
> wrote:
> >>
> >> I spent a few hours last week and this weekend trying to untangle
> >> the mess that was the worker ui_tests.  The problem is that the
> >> tests have been sporadically flaky due to various causes, and so
> >> various sheriffs/good samaritans have disabled them to keep the
> >> trees green.  Some of the tests were disabled due to failing under
> >> valgrind, but now that we have a way to disable tests specifically
> >> for valgrind, and some of the worker bugs have been fixed upstream,
> >> I figured it was a good time to clean house a bit and re-enable
> >> some of the tests.
> >> I found when I was going through the worker_uitest file that it was
> >> hard to figure out why a given test was disabled, so it was unclear
> >> whether it was safe to re-enable them - some of the tests had
> >> comments pointing at a tracking bug, but some of them didn't, and
> >> it was a pain to track down the root cause, especially since the
> >> specific lines of code had sometimes been touched multiple times
> >> (adding new platforms to disable, etc.).
> >> Anyhow, what are our best practices for disabling tests?  I think
> >> ideally we'd always log a tracking bug and add a comment, akin to
> >> what we do in the test_expectations file for layout tests.  This
> >> might be too much of a burden on sheriffs, so an alternative is for
> >> people who are working on various areas (like workers) to track
> >> checkins to the associated files and add some documentation after
> >> the fact.  Or we can just live with the SVN logs, in which case I
> >> need to get better at figuring out how to track through the SVN/git
> >> history of the various files :)
> >>
> >> -atw
> >>
> >> --
> >> Chromium Developers mailing list: chromium-dev@googlegroups.com
> >> View archives, change email options, or unsubscribe:
> >> http://groups.google.com/group/chromium-dev
> >
