Hi Folks,

Here's the process I'd like to see:

For small diffs from a reliably green tree, people should back out
checkins that fail (or quickly check in the relevant fix).  I know I
could do a better job of putting a note in Tinderbox when I'm on top of
a problem like this.

I think it's inevitable, though, that we'll have occasional unforeseen
Tinderbox downtime.  This week the root of the problem was a network
outage, but Tinderbox, Subversion, and functional tests are just complex
systems.  They're bound to fail occasionally in ways that aren't
indicative of bugs in the product.

When we have problems like this, should we stop checking in code?
Probably so, but it's painful.  It's a psychological issue for me.
Storing diffs here and there to be checked in at some unknown point in
the future feels liable to cause significantly more work, because I'll
have to context-switch back into the problem at hand when I finally
check in the change.

> What I don't understand is the extreme displeasure some people express
> even at the mention of backing out bad changes. It is easy to backout,
> and easy to recommit with fixes, and it will make the tree green
> quickly. Why is this bad?

Again, I think the issue is psychological.  Looking back on last week
when I backed out a preview area feature, it really didn't take that
much time to back out my change or check in a fix.

But unfortunately, having to back out a change I'd put a lot of work
into really lowered my morale.  I had to get back into the brain space
of understanding what my patch did before I could check it back in, and
feeling like I'd lost momentum on implementing features for alpha4 just
felt bad.

Part of the problem here is that it's common for functional tests to
break in a way that's really exposing a weakness in the functional
tests, not a bug in the code that was checked in.

I've been delighted several times in the last few months when functional
tests found real issues in code I was writing.  Fixing those failures
was actually a pleasure; I was pleased someone had done a good job of
test coverage in the area in question.  When a functional test exposes a
genuine application bug that didn't exist beforehand, it's doing a
marvelous thing.

Functional test failures that stem from problems in the functional test
framework, however, feel very different.  My brain is focused on a
feature, and the feature is working; but rather than getting the
expected gratification of completing a task, I have to switch contexts
completely to understand the functional test world, and I frequently end
up spending hours or days fixing the functional tests.

In cases like this, I'd really, really prefer to just disable the
relevant tests and file a bug to fix the functional test framework.  I
don't want to couple my momentum and satisfaction to an obligation to
fix the framework.
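
For what it's worth, disabling a misbehaving test doesn't have to mean
deleting it.  Here's a minimal sketch using Python's standard unittest
module (the test name, class name, and bug reference below are
hypothetical, not from our actual suite): an annotated skip keeps the
test visible in the results while pointing at the filed bug.

```python
import unittest

class PreviewAreaFunctionalTest(unittest.TestCase):
    # Hypothetical example: this failure is a weakness in the test
    # framework, not a bug in the checked-in code, so we skip the test
    # and track the framework fix separately rather than blocking
    # feature work on it.
    @unittest.skip("disabled pending functional test framework fix; see filed bug")
    def test_preview_area_updates(self):
        # Would exercise the preview area feature here.
        self.fail("should not run while the framework bug is open")
```

The skip reason shows up in the test runner's output, so the disabled
test stays on everyone's radar instead of silently disappearing.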

Now, I'm not saying we shouldn't fix functional tests, or that we
shouldn't devote more developer attention to improving the framework.  I
think functional tests are a good use of our resources.  I just don't
want to feel beholden to doing that work immediately when it's really
not the feature I've been working on that's at fault.

Sincerely,
Jeffrey
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

Open Source Applications Foundation "chandler-dev" mailing list
http://lists.osafoundation.org/mailman/listinfo/chandler-dev
