I am also an extremely strong supporter of unit testing. In fact, unit
testing helps answer precisely the questions Trevor raises.

For example, the first time you use a library you will probably write
some "throw-away" code (maybe in ipython/jupyter) to see how it behaves.
Make that a unit test! You will then have a test of your assumptions. You
might or might not discover a bug in this (or a future) release of the
library, but you will have made explicit what assumptions you rely on. If
a test against the library fails, you have either found a bug in the
library itself or found a bug in your assumptions. Frankly, I could tease
Trevor's attitude and say "you can never know if computers are right, so
why bother writing good code? Just write what you can and hope for the
best".
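
To make this concrete, here is a minimal sketch of what such an
assumption test could look like (numpy is only an illustrative library
here, and the test name and tolerance are my own invention, not anything
from this thread):

    # Sketch: the exploratory "does numpy.linalg.solve recover x from
    # A @ x?" snippet, promoted to a unit test of our assumption.
    import numpy as np

    def test_solve_recovers_known_solution():
        A = np.array([[3.0, 1.0],
                      [1.0, 2.0]])
        x_expected = np.array([1.0, -2.0])
        b = A @ x_expected
        x = np.linalg.solve(A, b)
        # If this ever fails, either the library (or a new release of it)
        # is buggy, or our assumption about its behaviour was wrong.
        np.testing.assert_allclose(x, x_expected, rtol=1e-12)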

Regarding the argument Greg makes in that blog post, to me it applies
more to integration tests than to unit tests, but even if it did apply
fully, it would still make sense to write unit tests, for three reasons:

1) you may not know NOW what the "right" precision for your code is, but
you will likely learn it in the future if you keep using your code, and
you can backport those numbers into the tests. This is the weakest
argument, but it is still valid in my opinion

2) even if you do not know what the "right" precision is and you are
certain that you never will, writing code that is unit-testable is
different from writing code without unit tests in mind. If you are
writing unit tests, you will write smaller functions that take fewer
arguments and that each do a single, specific task, without side effects.
Unless you are really squeezing the very last drop of performance out of
LINPACK (and sometimes I am), your code should look like this, and not
like a "soup". Even if your tests are "useless" because you can't know
the right precision, code written this way will be easier to deal with in
the future, to port to other architectures, to debug, etc. (the sketch
after point 3 below shows this style)

3) Moreover, if you also do TDD, or at least "test first", I firmly
believe that you get even greater benefits. In this development practice,
writing the test is the first thing you do when creating a new function.
That forces you to think about:

a) what the function should do
b) what information you need to pass to it so it can do what it needs to do
c) what you expect it to return, if anything (or how you would expect it
to modify the objects you passed to it)
d) DO NOT WORRY about HOW you will do any of the above while writing the
test; think about that only once the test, i.e. the expected behavior,
"looks good".

Moreover, you will be freer to think about different HOWs (alternative
implementations) when the one you started with gets "out of hand", with
the automatic assurance that the "what" will not change (or will not
change "significantly", if there is a numerical threshold).

Since I started writing code separating the "what" from the "how" in the
way described above, I've become much better at it. I always do it that
way, and always will. What I've found is that people have a lot of
trouble with 2 and 3, and the sooner one starts, the less ingrained the
"wrong way" becomes, so I always teach unit tests together with test
first when I teach programming. Now, of course you first need to get to
functions, but you can introduce those pretty early, e.g. even before
list comprehensions (to stay in Python)

Cheers,
Davide

On Wed, Mar 9, 2016 at 10:34 AM, W. Trevor King <[email protected]> wrote:

> On Wed, Mar 09, 2016 at 01:25:22PM +0100, Peter Steinbach wrote:
> > On 03/09/2016 01:15 PM, Ashwin Trikuta Srinath wrote:
> > > I'm in favour of teaching testing - even if it is in the style of
> > > the regression tests you mentioned. But I'm not so sure about unit
> > > tests, or even if application code built on libraries can/should
> > > be "unit" tested.
> >
> > Well, I would very emotionally start arguing against this. As unit
> > tests are my day-to-day tool that I use for application and for
> > library code and they have rescued my !*&^% multiple times. As most
> > of my tasks involve accelerating applications, I always say no
> > speed-up is of use if the results are wrong.
> >
> > But maybe I don't see your point, can you please elaborate!
>
> I'd guess the issue is “if the application unit test fails, was the
> application code wrong or was the library code wrong?”.  But that's
> not specific to the application ↔ library interface.  If your machine
> code unit test fails, is it a bug in your machine code or in your
> processor?  If your C library unit test fails, is it a bug in your
> library code or in your C compiler?  Trying to draw a hard line
> between “unit testing” and “integration testing” doesn't seem
> particularly useful.
>
> I think the solution is to prefer lower layers that have good test
> coverage and/or wide deployment.  And if they're open source, you can
> usually contribute patches to improve poor testing (although this
> takes time; see [1]).
>
> Cheers,
> Trevor
>
> [1]: http://ivory.idyll.org/blog/2016-containerization-disaster.html
>
> --
> This email may be signed or encrypted with GnuPG (http://www.gnupg.org).
> For more information, see http://en.wikipedia.org/wiki/Pretty_Good_Privacy
>
> _______________________________________________
> Discuss mailing list
> [email protected]
>
> http://lists.software-carpentry.org/mailman/listinfo/discuss_lists.software-carpentry.org
>
