On Sun, 2008-04-20 at 19:28 -0700, James Keenan via RT wrote:
> As to the question of whether people know how to write tests:  I think
> there is empirical evidence that people know how to write tests for the
> configuration steps.  For example, when François earlier submitted
> config/auto/crypto.pm, it came with a very nice set of test files right
> from the start.  In the past I have worked with people around this and I
> pledge to continue to do so in the future.

I disagree.  Yes, there is empirical evidence that SOME people
understand how to do this.  I didn't, and couldn't figure out what to do
in a time frame that wouldn't discourage me from doing the actual work:
coding the configure steps themselves, which I already found unclear and
difficult.

I am strongly in favor of testing, but before we expect new contributors
to create their own config tests, documentation must exist about how to
do so properly.  And we can't lose sight of the fact that the tests are
not the product, and in fact there is only one true test of the
configure steps: do they result in a functioning build of Parrot?  Any
other type of config test in some sense boils down to stupidity checks.
For example, can we quickly detect that a change broke something obvious
without having to run the entire Parrot test suite across all platforms?

My experience so far has been that jumping through all the hurdles
necessary to contribute to Parrot is difficult and time-consuming.  It
has taken me *months* of part time effort to get to the stage that I
could meaningfully contribute, and get my work accepted.  I'm sure that
there are others who will not put in that effort, and I personally think
that's a bad thing.

chromatic often points out that it's hard to get high velocity on a
project in which everyone is working gratis or nearly so.  I agree.  But
that's not the only thing reducing Parrot's velocity.  It's a REALLY
complex beast, and the barrier to entry is relatively high, both in
terms of how much must be understood before work can commence, and in
terms of essentially artificial barriers such as standards, processes,
and required "check off" items like config tests.

Note that I am *NOT* saying that standards and processes are a bad thing
in general -- far from it -- merely that we need to be aware that they
are definitely barriers to new contributors and may slow the
work of existing contributors.  Thus, each new hurdle must be balanced
against the value that comes of it.

chromatic had these things to say earlier:

> I suspect that people aren't adding test files for configuration
> steps because:
> 
> 1) they don't know how
> 2) the existing tests for configuration steps are big blobs of messy,
>    duplicated code

From my personal experience, right on both counts.

> I'm all for getting as much value out of our tests as possible (I
> could write another book about it), but I keep thinking we're heading
> down the wrong path with our configuration tests.  Our configuration
> system ought to be getting easier to understand, to maintain, and
> modify.  Is it?

Not so far as I've seen.  I basically got my new configure steps working
by reverse engineering.  Then, when I hit the error about missing config
tests for the new steps, I looked at the existing config tests, thought "You
have GOT to be kidding me; it will take eons to do this properly, and I
don't even know what I want to test *for*", copied and pasted stubs to
silence the error, and went on with my day.
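To illustrate the kind of stub I mean, here is a hypothetical sketch
(plain Test::More, not Parrot's actual test code): it satisfies a
"test file must exist" requirement without asserting anything real
about the configure step.

```perl
#!/usr/bin/perl
# Hypothetical placeholder config test -- the sort of stub that
# silences a missing-test-file error without verifying anything
# meaningful about the configure step itself.
use strict;
use warnings;
use Test::More tests => 1;

# A real test would exercise the step's actual logic; this merely
# records that the harness found and ran the file.
pass('stub: test file present for new config step');
```

The only thing such a stub proves is that the harness can locate and
execute the file -- which is exactly why it felt like a check-off item
rather than a test.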

After my code was accepted and committed, kid51 began to write tests for
my configure steps, but he ran across the very problem I had decided to
avoid.  Because I did not know in advance the proper way to write
cross-platform config *steps* for OpenGL, I could not write proper
*tests*.  As several of us continued to modify the OpenGL config steps
to work on more platforms, some of kid51's tests were invalidated.  To
me, this was a perfect example of tests getting in the way, rather than
helping.  Effort was wasted attempting to write tests for an amorphous
blob.

As the production release of Parrot nears, having copious tests for
details of the config process will no doubt help us maintain quality
during the final stages.  For now, however, I think the requirement to
have 100% coverage of config tests should be deemphasized somewhat.


-'f

