On Tue, Dec 04, 2007 at 04:05:07PM -0800, Michael G Schwern wrote:

I've written a couple of replies to this thread with similar content,
but not sent them for one reason or another.  Perhaps I can be more
succinct here.

> Then there are folks who embrace the whole test first thing and write out lots
> and lots of tests beforehand.  Maybe you decide not to implement them all
> before shipping.  Rather than delete or comment out those tests, just wrap
> them in TODO blocks.  Then you don't have to do any fiddling with the tests
> before and after release, something which leads to an annoying shear between
> the code the author uses and the code users use.

I believe that everyone is comfortable with this use of TODOs.
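For anyone following along, that use looks like this in Test::More (the test name and the frobnicate() function are made up for illustration):

```perl
use strict;
use warnings;
use Test::More;

ok(1, "existing behaviour still works");

TODO: {
    local $TODO = "frobnicate() not written yet";

    # This failure is reported as "not ok ... # TODO", so the suite
    # still passes; once frobnicate() is implemented the test shows
    # up as an unexpected pass, reminding you to remove the block.
    ok(0, "frobnicate() does the right thing");
}

done_testing();
```

The point being that the test file stays identical before and after the feature lands.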

> There is also the "I don't think feature X works in Y environment" problem.
> For example, say you have something that depends on symlinks.  You could hard
> code in your test to skip if on Windows or some such, but that's often too
> broad.  Maybe they'll add them in a later version, or with a different
> filesystem (it's happened on VMS) or with some fancy 3rd party hack.  It's
> nice to get that information back.

Here is where opinion seems to diverge.  I tend to agree with Fergal
here and say you should check whether symlinks are available and skip
the test if they are not.  But I can see your use case and wouldn't
necessarily want to forbid it.
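For the record, the check-and-skip version I have in mind is the standard eval-around-symlink probe (the test body here is hypothetical):

```perl
use strict;
use warnings;
use Test::More;

# Probe for symlink support at run time rather than hard-coding an
# OS check; symlink() dies on platforms where it is unimplemented.
my $has_symlinks = eval { symlink("", ""); 1 } ? 1 : 0;

SKIP: {
    skip "no symlink support on this platform", 1 unless $has_symlinks;

    # Hypothetical test that actually needs symlinks.
    ok(1, "code under test follows symlinks");
}

done_testing();
```

This picks up symlink support wherever it appears, rather than guessing from $^O.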

    When people use a variable for two different purposes ...
        we tell them to use another variable.

    When people use a subroutine to do two different things ...
        we tell them to split it up.

    When people use a TODO test for two different situations ...
        we can't agree on its behaviour.

So perhaps we need another type of test.  Eric seems to be suggesting we
need another N types of test.  I suspect YAGNI, but what do I know?  I'd
have said YAGNI to this "second" use of TODO too.

-- 
Paul Johnson - [EMAIL PROTECTED]
http://www.pjcj.net
