On 23.06.2012 17:38, Aleksandar Makelov wrote:
We want to make sure that the right thing is done with the output from the
RNGs, so we manually supply as an additional argument to a given function
some particular choice for all the variables inside the function that come
from RNGs.

Ah, I see.
I'm not convinced that's the best way to design such a thing. Adding parameters to a function purely for testing purposes will confuse people who aren't into testing. It also contradicts the "keep interfaces as narrow as possible" principle: a narrow interface means fewer things that programmers need to remember, fewer things the caller needs to set up, and fewer things that can be misinterpreted. It also means adding code to the functions, which means adding bugs - bugs that may affect the function when it's running in production. Which rather defeats the purpose of testing in the first place.
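
A common alternative that keeps the interface narrow is to control the RNG from the test itself rather than threading values through the function's signature. Here is a minimal sketch; `noisy_double` and its test are hypothetical names invented for illustration, not anything from SymPy:

```python
import random

def noisy_double(x):
    # Hypothetical production function whose result depends on an
    # internal RNG; note it takes no test-only parameters.
    return 2 * x + random.gauss(0, 1e-9)

def test_noisy_double_reproducible():
    # The test seeds the module-level RNG, making the "random"
    # behaviour reproducible without widening the interface.
    random.seed(42)
    first = noisy_double(3)
    random.seed(42)
    second = noisy_double(3)
    assert first == second
```

The same idea works with an injectable `prng` object where a module-level seed is too coarse, but either way the randomness is a test-setup concern, not a parameter of the function under test.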

> The reason that we use certain precomputed values is that doing
> the test with some randomly generated set of values as an additional
> argument is essentially going to have to repeat the calculations in the
> function itself (which we want to test) - whereas for concrete values we
> know the answer right away. Does that make sense?

Not very much, I fear.
As Stefan said, repeating a calculation in test code isn't a useful unit test, even if you place the unit test in another module. The same goes for doing the calculation by hand - unless those calculations have been done by experts in the field and verified by other experts in the field, of course.

Expanding on Stefan's example: suppose you're testing a matrix-inversion routine.

We agree on the worst approach to testing it: repeat the matrix inversion algorithm in the test and see whether it gives the same result as the code in SymPy. Actually, this kind of test isn't entirely pointless - if the test code remains stable while the SymPy code is later optimized, it can serve a useful purpose as a regression check. On the other hand, you still wouldn't write this kind of test code until you actually do the optimization.
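
To illustrate that regression-check use: keep the old, obviously-correct version around purely as a reference oracle for the optimized one. A minimal sketch with a hypothetical pair of functions (Fibonacci here stands in for any routine you've optimized):

```python
def fib_reference(n):
    # Deliberately naive, obviously-correct version, kept only
    # so tests can check the optimized code against it.
    return n if n < 2 else fib_reference(n - 1) + fib_reference(n - 2)

def fib_optimized(n):
    # The version you actually ship, rewritten for speed.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def test_fib_matches_reference():
    # The reference is slow, so keep the tested range small.
    for n in range(15):
        assert fib_optimized(n) == fib_reference(n)
```

This is the one situation where "repeat the calculation in the test" earns its keep: the duplicated code is frozen, simple, and exists only to catch regressions in the fast path.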

The other approach would be to add an "expected result" parameter, and fail if the result isn't the expected one.
This has two problems:
a) It adds an unwanted dependency on the testing modules, at least if you want to give better diagnostics than just throwing an exception (for example, you may want to test internal workings that throw exceptions which get caught).
b) You're supplying precomputed results, so you'd still need to explain why those results are correct. Somebody has to verify that they are, indeed, correct.

My approach for that would be to test the defining property of the function:
  (matrix_inv(A) * A).is_unit_matrix()
(sorry for the ad-hoc invention of matrix functions)

I.e. you're testing the purpose of the function, not its inner workings.
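
With SymPy's actual Matrix API, such a property test might look like the sketch below (the matrix size, value range, and number of iterations are arbitrary choices, and the seeds make the "random" inputs reproducible):

```python
from sympy import eye, randMatrix

def test_inverse_property():
    # Test the defining property of inversion, A**-1 * A == I,
    # on random integer matrices; singular ones have no inverse,
    # so skip them here.
    for seed in range(10):
        A = randMatrix(3, 3, min=-5, max=5, seed=seed)
        if A.det() == 0:
            continue
        assert A.inv() * A == eye(3)
```

Because SymPy works in exact rational arithmetic, the equality check is exact - no floating-point tolerance is needed.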

Oh, and this kind of testing can uncover more bugs.
For example, the reasoning above overlooks that not all matrices can be inverted. If I'm testing the algorithm, I'll simply miss the case of a singular matrix, because I'm thinking entirely inside the algorithm. If I write my test code with the purpose in mind, I have a better chance of stumbling over the singular case - either because I'm thinking about matrix theory instead of my algorithm, or because some tests mysteriously fail whenever the RNG happens to generate a singular matrix.
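
Once you've stumbled over it, the singular case deserves its own test. A minimal sketch with SymPy (SymPy's matrix-inversion error is a `ValueError` subclass, which is what's caught here):

```python
from sympy import Matrix

def test_singular_matrix_rejected():
    # A singular matrix: the second row is twice the first.
    A = Matrix([[1, 2], [2, 4]])
    assert A.det() == 0
    try:
        A.inv()
    except ValueError:
        pass  # expected: no inverse exists
    else:
        raise AssertionError("inverting a singular matrix should fail")
```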

I hope that's all understandable.
And I hope I'm not missing the point entirely :-)

--
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/sympy?hl=en.
