To me, the economics of the issue come down to the following -
i) what are tests FOR -
to which the answer is - tests are for failing as often (and as early) as possible, in order to show up implementation errors
ii) what do we do when they fail -
to which the answer isn't quite so clear - although it generally involves hunting for the cause that made them fail. Tests are valuable if the cost of hunting down a failure in a test case is considerably less than the cost of hunting down the same failure in real use.

That said, we have to balance our efforts wisely, since errors are likely to occur at any level of the framework... I feel it's best if tests are spread across all levels, starting with unit tests for elementary functionality, then more complex tests for compound functionality, and higher and higher levels of integration tests, ending with integration tests for entire applications.
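To make the layering concrete - this is only an illustrative sketch, assuming the standard jqUnit API (jqUnit.module, jqUnit.test, jqUnit.assertEquals) and an invented "my.app" namespace, not any real code of ours:

    jqUnit.module("Layered tests");

    // Lowest level: a unit test for one elementary utility
    jqUnit.test("my.app.add sums two numbers", function () {
        jqUnit.assertEquals("Simple sum", 5, my.app.add(2, 3));
    });

    // Middle level: compound functionality built from the utilities above
    jqUnit.test("my.app.total sums a list via my.app.add", function () {
        jqUnit.assertEquals("List total", 10, my.app.total([1, 2, 3, 4]));
    });

    // Highest level: an integration test driving a whole component tree
    jqUnit.test("my.app.frontPage renders end to end", function () {
        my.app.frontPage(".container");
        jqUnit.assertEquals("Rendered total", "10", $(".total").text());
    });

The point is that a failure in my.app.add should show up in the first test before we waste any time puzzling over the third.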

What we hope will happen is that on encountering a test failure at any level, we'll be able to rule out failures at any lower level by checking the simpler tests first. When faced with a panel full of randomly failing tests (as I frequently am this week!), the strategy is to home in on fixing the tests which fail at the lowest level first, and then work progressively up to the higher-level test failures - very often most if not all of these will be resolved by fixing the lower-level tests. There should be a constant effort to characterise the underlying reason for failures in higher-level tests - if one of these can be explained in terms of a framework failure for which there is no existing lower-level test, we should write that corresponding lower-level test - although all of this is naturally controlled by whatever time budget we have available overall....

So what do I think this means for this particular dependency?

In terms of point i) - is there any reasonable chance that our tests may fail less often? I think there almost certainly isn't - changes in code whose tests previously passed are overwhelmingly likely to make tests fail, rather than the other way round. Also, the parts of the framework which jqUnit now depends on are extremely simple and already have good tests - they consist only of a few simple functional programming utilities and fluid.registerNamespace. The IoC testing framework is another matter - but I think we already conceded that it would be impossible to have an IoC testing framework that didn't depend on the IoC infrastructure. And under the terms of point ii), again, we have moderately good tests for IoC itself, which so far do NOT themselves depend on IoC - that is, our IoC tests depend on plain jqUnit, which in turn depends only on the simple parts of Fluid.js, which themselves have adequate tests, and so on.
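For concreteness, the dependencies in question are on this order (these are real Infusion APIs - the example arguments here are only illustrative):

    // fluid.registerNamespace creates (or fetches) a global namespace object -
    // this is how jqUnit can house its own globals
    var jqUnit = fluid.registerNamespace("jqUnit");

    // The functional programming utilities are of similar simplicity,
    // e.g. fluid.each, which iterates over an array or object:
    fluid.each(["a", "b"], function (value, index) {
        console.log(index + ": " + value);
    });

Nothing at this level involves IoC, events or options merging, and a regression in any of it would be caught immediately by its own unit tests.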

So my assessment is that the "onion of testing" which we necessarily depend on in case ii) isn't dangerously prejudiced by this new dependency. We can always stand to have better tests, but the area where we most urgently need them isn't here - it's in having more plain IoC tests - and that can wait until the implementation stabilises some more and we have a firm idea of what the IoC system is meant to do in each situation.

On 21/01/2013 09:56, Justin Obara wrote:
        • Will this make our tests more brittle, since changes to the framework could potentially affect it?

Can you elaborate on the kinds of brittleness that you're concerned about? Perhaps some examples? How, specifically, is it a big issue?

I'm not sure I have any specific concrete examples. I probably don't know enough about the inner workings of either the framework or the jqUnit extension to really be able to do that. I can try to make some hypothetical ones; hopefully they may lead to further thoughts and start to tease out some actual potential issues, or prove that the fear is unwarranted.

I suppose the "everything is broken" cases should be pretty obvious to someone making a change to the framework, so those should be less of an issue. I'd be more worried about the subtler ones, where some tests appear to pass or fail merely because of a framework issue. Maybe a change in the framework might affect the jqUnit extension's sequencing of tests, merging of options, instantiation of components, component creation order, or something else. One such hypothetical is sketched below.
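For instance - a purely hypothetical sketch, with invented component and option names, written against Infusion's autoInit component idiom - a test like this silently encodes the framework's options merging policy, so a framework change in merging behaviour would show up as an apparent test regression with no application-level bug:

    fluid.defaults("my.tests.widget", {
        gradeNames: ["fluid.littleComponent", "autoInit"],
        strings: { label: "default" }
    });

    jqUnit.test("Widget picks up the supplied option", function () {
        var that = my.tests.widget({ strings: { label: "custom" } });
        // This assertion depends on how the framework merges user options
        // over defaults - if merging semantics change, the test "breaks"
        jqUnit.assertEquals("User option overrides default", "custom",
            that.options.strings.label);
    });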

The obvious worry is that because debugging is so difficult right now, if something does arise, it will be hard to track down. At the end of the day I just want to make sure that we have thought things through and have our bases covered, so that we can prevent our tests from becoming unreliable. If we can do that, I'm fine with this dependency.

Thanks
Justin
