At 01:24 AM 10/28/2005, Jonathan Kohl wrote:
> > You may still want to use begin/rescue to catch the
> > NavigationException, but i don't see any reason to put an
> > assert in the begin/rescue/end blocks.
>That's what we were doing to print out what was going on to a log file. I
>believe assertions throw an exception on failure, so that's how we were
>figuring out that an assertion failed so we could log it to a file.
>
>What is an alternative to begin/rescue/end when we want to log results to a
>file? (I still don't understand why this is such a bad idea to use.)

The other method would be to modify the test harness to do the logging. 
That's a little more work, but you only have to do it once, instead of having 
to instrument every test. It's not necessary when you are first 
learning a tool, but it is necessary (in my opinion) when you want to put a 
test suite into production use.
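
For example, here is a minimal sketch of what harness-level logging might look 
like, assuming the old test/unit library and a small home-grown runner (the 
class name, log format, and example test are hypothetical, not part of Watir 
or test/unit):

  require 'test/unit/assertions'
  require 'watir'

  # Hypothetical runner: the begin/rescue and the logging live in one
  # tested place, so individual tests contain only navigation and asserts.
  class LoggingRunner
    include Test::Unit::Assertions

    def initialize(log_path)
      @log = File.open(log_path, 'a')
    end

    def run(name, &test)
      instance_eval(&test)   # run the block here so assert* is available
      @log.puts "PASS: #{name}"
    rescue Test::Unit::AssertionFailedError => e
      @log.puts "FAIL: #{name} -- #{e.message}"
    rescue StandardError => e
      @log.puts "ERROR: #{name} -- #{e.class}: #{e.message}"
    end
  end

  # Usage (site and text made up):
  runner = LoggingRunner.new('results.log')
  runner.run('home page greeting') do
    ie = Watir::IE.start('http://example.com/')
    assert(ie.contains_text('Welcome'))
  end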

There are two issues with using begin/assert/rescue. The first is simply 
that many people find it confusing. It really is an unnecessarily complex 
form of flow control. Most people find if/then much easier to understand.
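
To make the contrast concrete, here is the same check written both ways (the 
page text and the log method are made up):

  # Flow control by rescuing an assertion failure -- harder to follow:
  begin
    assert(ie.contains_text('Logged in'))
    log 'login check passed'
  rescue Test::Unit::AssertionFailedError
    log 'login check failed'
  end

  # The same decision as a plain if/else:
  if ie.contains_text('Logged in')
    log 'login check passed'
  else
    log 'login check failed'
  end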

Secondly, it is easy to make errors that lead to undetected failures. These 
are tests that actually find a problem but are coded in a way that suppresses 
the notification of the problem. Anyone who is serious about 
automated testing needs to know -- be certain -- that their tests don't 
contain these kinds of errors. Personally, that means that i only use 
begin/rescue in code that i have unit tests for. (There are a few begin/rescue 
blocks in the Watir library, contributed by others, which don't have unit 
tests. I added an item in the tracker that we need to add tests for these.) 
It is too easy to make a mistake, so you really need to have unit tests. I 
never write unit tests for regular tests, so that means that i avoid this 
kind of coding in my tests.
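
Here is a sketch of the kind of mistake i mean (the page details are made up): 
the intent was to rescue a navigation problem, but the bare rescue also 
swallows the assertion failure, so the harness never hears about it:

  begin
    ie.goto('http://example.com/orders')
    assert(ie.contains_text('Order History'))
  rescue
    # Bare rescue: this catches the assertion failure as well as any
    # navigation error. The test finds the problem, then silences it.
  end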

The technical terms for these kinds of errors are false negatives (although 
it is hard not to think of them as false positives) or type II errors. I 
have called these errors silent horrors, because that is easier to remember 
and presents a more apt figure.

False alarms (failures caused by bugs in tests) are, to some degree, 
unavoidable with automated tests. But silent horrors (passes caused by bugs 
in tests) truly impugn the integrity of a test suite. I once found a silent 
horror bug in a test suite. When i fixed it, hundreds of tests that had 
been passing were now failing. In this case, the problem was that there was 
bad data in the tests, and the test framework was just ignoring it. It was 
only when i corrected the framework to fail tests with bad data that we came 
to realize that the scope of our achieved testing was actually much less than 
we had thought.

In fact, i have found the potential for silent horrors in almost every test 
suite that i've ever reviewed. This is why i discourage the use of 
begin/rescue unless it is being used with rigorous unit testing. And even 
then, it should only rescue the specific Exception in question. An open 
rescue clause is another red flag.
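
For example (the link text and log call are made up, and the exception's exact 
namespace depends on your version of Watir), a narrow rescue looks like this:

  begin
    ie.link(:text, 'Reports').click
  rescue Watir::Exception::NavigationException => e
    # Only the expected navigation problem is rescued; an assertion
    # failure or any other error still propagates to the harness.
    log "navigation failed: #{e.message}"
  end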

Bret


_____________________
  Bret Pettichord
  www.pettichord.com
