On 26/11/2017 14:23, Chris Angelico wrote:
> On Mon, Nov 27, 2017 at 1:11 AM, bartc <b...@freeuk.com> wrote:
>> The way I write code isn't incrementally top down or bottom up. It's
>> backwards and forwards. Feedback from different parts means the thing
>> develops as a whole. Sometimes parts are split into distinct sections,
>> sometimes different parts are merged.
>>
>> Sometimes you realise you're on the wrong track, and sections have to
>> be redone or a different approach used, which can be done in the
>> earlier stages.
>>
>> If I had to bother with such systematic tests as you suggest, and
>> finish and sign off everything before proceeding further, then
>> nothing would ever get done. (Maybe it's viable if working from an
>> exacting specification that someone else has already worked out.)
> Everyone in the world has the same problem, yet many of us manage to
> write useful tests. I wonder whether you're somehow special in that
> testing fundamentally doesn't work for you, or that you actually don't
> need to write tests. Or maybe tests would still be useful for you too.
> Could go either way.
Testing everything comprehensively just wouldn't be useful for someone
like me, who works on whole applications and whole concepts, not just a
handful of functions with well-defined inputs and outputs. And even
then, such a function might work perfectly but be too slow, or take up
too much space.
Take one example of a small program I've mentioned in the past: a jpeg
decoder. I had to port this into several languages (one of which,
actually, was Python).
It was hard because I didn't know how it was meant to work internally,
only as a whole: the input is a .jpeg file, and the output might be a
.ppm file that ought to look like the one produced by a working
program, or like the original jpeg displayed by a working viewer.
(Which is not possible in all applications.)
How do you do an automatic test? Directly doing a binary compare on the
output doesn't work, because with jpeg the decoded sample values can
legitimately differ by +/- 1. And even if the test detected a mismatch,
then what? I now know there is a problem, but I could figure that out
just by looking at the output!
And actually, after it ostensibly worked, there WAS a minor problem:
some types of images exhibited excessive chroma noise around sharp
transitions.
The problem was traced to two lines that were in the wrong order (in
the original program). I can't see how unit tests could have helped in
any way at all, and writing them would probably have taken much longer.
And THIS was a small, well-defined task which had already been written.
>> Except the actual chip didn't work. As for the printout, the designer
>> took it home and used it as an underlay for a new carpet. A rather
>> expensive underlay.
> So there was something else wrong with the chip. I'm not sure what
> your point is.
The extensive testing was like unit testing, but needed to be even more
thorough because of the commitment involved. It failed to spot a problem.
And actually I had a similar problem with a new car. I took it back to
the dealer, and they plugged the on-board computer into their analyser,
which ran all sorts of tests and reported nothing wrong with it. But
there was, and the problem has persisted for a decade [to do with the
central locking].
I'm saying you can rely too much on these tests, and waste too much time
on them.
Perhaps such testing is a necessity in a large organisation or a large
team, where there is a leader to look at the big picture. It doesn't
work for an individual working alone on one project.
--
bartc