On 26 Dec, 2006, at 17:38, John Anderson wrote:

>> Personally, I'd vote against both asserts (which don't fire if you're running optimized, as we'd want for the performance tests) and the previous
>
> There probably isn't much benefit in running tests with Python optimization turned on; otherwise, all the benefit of the testing code that uses asserts is lost.

If the performance tests are supposed to reflect end users' perception of performance, they should be run with optimization enabled. In general, performance tests aren't meant to test functionality that isn't already covered by other tests (functional, unit).
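
(As a minimal illustration, not anything Chandler-specific: CPython compiles assert statements away entirely under `python -O`, which sets __debug__ to False, so any verification written as an assert silently vanishes from an optimized performance run. Compare `python check_asserts.py` with `python -O check_asserts.py`:)

    # check_asserts.py

    def verify_count(items):
        # Under -O, __debug__ is False and this assert statement is
        # compiled out entirely, so the check never runs.
        assert len(items) == 3, "expected exactly 3 items"

    if __name__ == "__main__":
        verify_count([1, 2])   # raises AssertionError normally; a no-op under -O
        print("__debug__ =", __debug__)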

> That being said, the rest of our C/C++ code is compiled differently in debug and release (e.g., code optimization). So in this case it makes sense to run both release and debug, since bugs can crop up in either case.

Again, performance tests are not so much about discovering bugs.

> And if you replace asserts with some other check that is run with optimized Python, that's pretty much equivalent to using asserts and running non-optimized Python.

It isn't equivalent: the behaviour of asserts outside the test code is different. Running non-optimized, asserts in the application code itself also fire, which changes both behaviour and timing.
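
For what it's worth, here's the kind of explicit check I have in mind; verify() is a hypothetical helper, not an existing API, but unlike an assert it behaves the same with or without -O:

    class VerificationError(Exception):
        """Raised when a test verification fails."""

    def verify(condition, message="verification failed"):
        # An ordinary raise, unlike an assert statement, is never
        # stripped by the optimizer, so this check runs identically
        # under plain and optimized Python.
        if not condition:
            raise VerificationError(message)

    # usage:
    #     verify(result == expected, "unexpected result: %r" % (result,))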

> ...
> It may not matter too much what APIs we use for testing, since I expect almost all the testing code will be automatically generated by the script recorder.

Tests that don't verify data usually aren't very useful. Or am I missing something here?
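
To make that concrete (the app object and its method names below are made up for illustration, not the actual script recorder's API): a recorded script replays events, but it only becomes a real test once a check of the resulting data is added, e.g.:

    def test_create_event(app):
        # Replayed user actions -- the part a recorder captures for free:
        app.click("NewEventButton")
        app.type_text("TitleField", "Staff meeting")
        app.press_key("Return")

        # Verification -- the part that makes the replay an actual test.
        # An explicit raise (rather than an assert statement) so the
        # check also fires under optimized Python.
        event = app.find_item("Staff meeting")
        if event is None:
            raise AssertionError("event 'Staff meeting' was not created")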

--Grant


_______________________________________________

Open Source Applications Foundation "chandler-dev" mailing list
http://lists.osafoundation.org/mailman/listinfo/chandler-dev
