On Saturday, January 1, 2005, at 5:06:23 PM, Robert Watkins wrote:

> Ron Jeffries wrote:
>> Use of "many resources" will not make the tests better.
>> 
>> Running them asynchronously is done because they take a long time.
>> The fact that it takes a long time to know if you've done good work
>> is the bug. Fix it.

> I would posit that certain tests, by their very nature, have to take a long
> time. Performance profiling and endurance tests come to mind. Also,
> acceptance tests (which typically use the system in a deployed state, not
> an abstracted one like unit tests) take longer to run, particularly if a
> database is involved. Even if you stub out part of the infrastructure for
> the acceptance tests, you need to have at least some end-to-end tests to
> ensure that everything does hang together.

Yes, well, that's the problem. To me, "posit" means "assume" or
"accept". That means it's OK for certain tests to be long. That
leads to acceptance of the fact that tests take a long time. That
leads to slower feedback.

I don't like slow feedback. Therefore, I do not "posit" that the
nature of things is that tests are slow. I posit that a slow test is
a bad test until proven otherwise.

Not surprisingly, approaching slow tests with this attitude leads to
tests that run faster. Not all of them, of course, because some of
them are innocent even if assumed guilty.

> I would also posit that certain QA practices, such as determining test
> coverage levels, take more time than a developer wants to spend, and don't
> make much sense for a developer to run anyway.

Again, that's one way to be certain that I'll never have a coverage
tool that helps me when I'm coding. I think we can do better.

If I'm doing TDD, working in small increments, I'm perhaps only
interested in incremental coverage of the code I'm working on. A
decent tool could tell me that. Imagine if my IDE highlighted all
the code not yet executed by my tests. That would be very valuable.
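The kernel of such a tool is not magic. Here's a minimal sketch of the idea in Python, using the standard `sys.settrace` hook; the function and names (`lines_executed`, `classify`) are illustrative, not any particular IDE's mechanism:

```python
import sys

def lines_executed(func, *args):
    """Run func under a trace hook and record which line numbers execute."""
    executed = set()

    def tracer(frame, event, arg):
        if event == "line":
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

# Code under test: two branches, so a single test cannot cover both.
def classify(n):
    if n >= 0:
        return "non-negative"
    return "negative"

# A "test" that only exercises the non-negative branch.
base = classify.__code__.co_firstlineno
covered = {ln - base for ln in lines_executed(classify, 5)}
# Offsets into classify: 1 is the if, 2 the first return, 3 the second.
# The negative branch never runs, so an IDE could highlight that line.
```

Diff the executed set against the lines of the file you're editing and you have exactly the incremental, only-what-I'm-working-on view described above.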

> A layered build approach (which CruiseControl supports "out of the
> box") makes a lot of sense in such a scenario. Does this describe
> every team? No.

It would not surprise me to find that every real team has tests that
they don't run all the time. But my reaction would not be "let's
never commit the code until a thousand years of testing has
elapsed." It would be "let's find a way to do enough testing in
minutes, then use the thousand years to find out whether we've made
a mistake."
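One common way to get "enough testing in minutes" is to tag the slow tests and skip them in the per-commit run, leaving them for the asynchronous build. A minimal sketch using Python's unittest; the `RUN_SLOW_TESTS` variable and test names are assumptions for illustration, not a standard:

```python
import os
import unittest

# Illustrative gate: the asynchronous build would set RUN_SLOW_TESTS=1.
RUN_SLOW = os.environ.get("RUN_SLOW_TESTS") == "1"

def slow(test):
    """Mark a test as belonging to the slow, asynchronous suite."""
    return unittest.skipUnless(RUN_SLOW, "slow suite runs in the async build")(test)

class CheckoutTests(unittest.TestCase):
    def test_total_sums_line_items(self):
        # Fast: pure in-memory logic, runs on every commit.
        self.assertEqual(sum([2, 3, 4]), 9)

    @slow
    def test_end_to_end_checkout(self):
        # Slow: would drive the deployed system and a real database here.
        pass

# Per-commit run: with RUN_SLOW_TESTS unset, the slow test is skipped.
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(CheckoutTests).run(result)
```

The per-commit run stays fast; the build server exports the variable and runs everything, giving the "thousand years" its own machine.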

If more than one pair has released code before the thousand-year
tests fail, we really don't know which change caused the problem.
That's not good. If our thoughts have moved on, that's not good either.

Therefore -- in my opinion -- the vector should point in the
direction of getting all the necessary info instantly, not in the
direction of tolerating and accommodating slow feedback.

> Note that reducing build times is still important when using a build
> server. The build status is feedback information; you do want it as soon as
> possible.

Yes, exactly.

Ron Jeffries
www.XProgramming.com
How do I know what I think until I hear what I say? --  E M Forster
