On Dec 26, 2006, at 5:12 PM, Grant Baillie wrote:
> On 26 Dec, 2006, at 17:01, Mikeal Rogers wrote:
>> First, I want to get rid of the previous pass/fail approach and
>> just use asserts. It's fairly easy to trap asserts, and it has the
>> advantage of being easy to program, easy to catch in the debugger,
>> and we can give Python tracebacks on each test failure.
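A minimal sketch of what trapping asserts in a runner could look like (the function names here are illustrative, not actual Chandler test code):

```python
import traceback

def check_positive(value):
    # A test body that validates state with a plain assert.
    assert value > 0, "expected a positive value, got %r" % (value,)

def run_test(test, *args):
    # Trap the AssertionError and report a Python traceback
    # instead of letting the failure abort the whole run.
    try:
        test(*args)
    except AssertionError:
        traceback.print_exc()
        return False
    return True
```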
> Personally, I'd vote against both asserts (which don't fire if
> you're running optimized, as we'd want for the performance tests)
> and the previous
There probably isn't much benefit in running the tests with Python
optimization turned on: doing so strips the asserts, so all the
benefit of assert-based test code is lost. And if you replace asserts
with some other check that does run under optimized Python, that's
pretty much equivalent to using asserts and running non-optimized
Python.
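For reference, the optimization behavior is easy to demonstrate; this sketch just runs a failing assert in a subprocess, with and without `-O`:

```python
import subprocess
import sys

# A failing assert: it fires under plain Python, but the statement
# is compiled away entirely under `python -O`.
code = "assert False, 'should fire'"

plain = subprocess.call([sys.executable, "-c", code])
optimized = subprocess.call([sys.executable, "-O", "-c", code])

print(plain, optimized)  # plain is non-zero (assert fired), optimized is 0
```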
That said, the rest of our C/C++ code is compiled differently in
debug and release builds (e.g. with different optimization levels),
so there it does make sense to run both, since bugs can crop up in
either configuration.
> pass/fail approach (which is baroque and error-prone). Instead, why
> not have tests inherit from unittest.TestCase, since:
> 1) That has a pretty expressive API for validating state
> (TestCase.failUnless, TestCase.failUnlessEqual, etc.)
> 2) Tests that depend on setting up shared data can use inheritance,
> usually in conjunction with setUp() methods.
> 3) Less wheel reinvention, and less need for developers to learn
> new APIs.
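A minimal sketch of such a TestCase-based test (the classes and data here are made up for illustration; failUnless/failUnlessEqual are the older names for what current unittest calls assertTrue/assertEqual):

```python
import unittest

class SharedDataTest(unittest.TestCase):
    # Base class: subclasses share setup state via inheritance.
    def setUp(self):
        self.items = ["a", "b"]

class ListTest(SharedDataTest):
    def test_length(self):
        # failUnlessEqual in older Pythons; assertEqual today.
        self.assertEqual(len(self.items), 2)

    def test_membership(self):
        # failUnless in older Pythons; assertTrue today.
        self.assertTrue("a" in self.items)
```

Tests written this way can then be collected and run with the stock `python -m unittest` runner, with no custom harness.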
It may not matter too much what APIs we use for testing, since I
expect almost all the testing code will be automatically generated by
the script recorder.
> --Grant
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Open Source Applications Foundation "chandler-dev" mailing list
http://lists.osafoundation.org/mailman/listinfo/chandler-dev