On Friday, December 3, 2010, Johan S. R. Nielsen
<j.s.r.niel...@mat.dtu.dk> wrote:
>> On the topic of verifying tests, I think internal consistency checks
>> are much better, both pedagogically and for verifiability, than
>> external checks against other (perhaps inaccessible) systems. For
>> example, the statement above that checks a power series against its
>> definition and properties, or (since you brought up the idea of
>> factorial) factorial(10) == prod([1..10]), or taking the derivative to
>> verify an integral. Especially in more advanced math there are so many
>> wonderful connections, both theorems and conjectures, that can be
>> verified with a good test. For example, computing all the BSD
>> invariants of an elliptic curve and verifying that the BSD formula
>> holds is a strong indicator that the invariants were computed
>> correctly via their various algorithms.
>>
>> - Robert
>
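For concreteness, Robert's factorial identity can be checked in plain
Python (standard library only; this is just an illustration of an
internal consistency check, not Sage code):

```python
import math

# Internal consistency check: factorial(10) should equal the
# product 1 * 2 * ... * 10, by the definition of factorial.
fact = math.factorial(10)
prod = math.prod(range(1, 11))
assert fact == prod == 3628800
```

The same pattern scales to the richer examples above: compute a quantity
two independent ways (integral vs. derivative, BSD invariants vs. the BSD
formula) and assert agreement.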
> Also a huge +1 from me. This is something I have been thinking a lot
> about how best to utilise, and I think one could take it a step
> further than doctests. I often write "parameterised tests": tests for
> properties of a function's output given "random" input. For example,
> say I have a library of polynomials over
> fields. Then a useful property to test is for any polynomials a,b to
> satisfy
>   a*b == b*a
> I could write a test to randomly generate 100 different pairs of
> polynomials a,b to check with, over "random" fields. I know that some
> people sometimes write such tests, and it is also suggested in the
> Developer's Guide somewhere.
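A minimal sketch of such a parameterised test in plain Python (not Sage):
polynomials are represented as coefficient lists mod a small prime P, and
commutativity of multiplication is checked on 100 random pairs.

```python
import random

P = 101  # a small prime; coefficients live in the field GF(P)

def rand_poly(max_deg=10):
    """A 'random' polynomial as a list of coefficients mod P."""
    return [random.randrange(P) for _ in range(random.randint(1, max_deg + 1))]

def mul(a, b):
    """Schoolbook polynomial multiplication over GF(P)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

# The property under test: a*b == b*a for 100 random pairs.
for _ in range(100):
    a, b = rand_poly(), rand_poly()
    assert mul(a, b) == mul(b, a)
```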
> I love the Haskell test-suite QuickCheck, which allows one to write
> such tests extremely declaratively and succinctly. Haskell is way cool
> when it comes to types, so it provides an elegant way of specifying
> how to randomly generate your input. Transferring this directly to
> Python or Sage can't be as elegant, but I have been working on a
> small Python script -- basically an extension to unittest -- which
> could make it at least easier to write these kinds of tests. It's not
> done yet and can be improved in many ways, but I use it all the time
> on my code; it's quite reassuring to have written a set of involved
> functions over bivariate polynomials over fields and then check their
> internal consistency with degree-100 polynomials over fields of
> cardinality above 1000 :-D
> My thought is that doctests are nice for educational purposes and
> basic testing, but I prefer to test my code more thoroughly while
> writing it. I don't want to introduce more bureaucracy, so I don't
> suggest that we _require_ such tests, but it would be nice to have a
> standard way of writing them, should an author or reviewer feel like
> it.
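A QuickCheck-style helper along the lines Johan describes could be as
small as a decorator; the names below (`for_all`, `two_ints`) are
hypothetical, purely to sketch the declarative style, and are not from
Johan's script or any existing library:

```python
import functools
import random

def for_all(gen, trials=100):
    """Hypothetical QuickCheck-style decorator: run the property
    on `trials` inputs drawn from the generator function `gen`."""
    def wrap(prop):
        @functools.wraps(prop)
        def run():
            for _ in range(trials):
                args = gen()
                assert prop(*args), "property failed on %r" % (args,)
        return run
    return wrap

def two_ints():
    return (random.randint(-1000, 1000), random.randint(-1000, 1000))

@for_all(two_ints)
def addition_commutes(x, y):
    return x + y == y + x

addition_commutes()
```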

I think nosetests is a superb framework for writing such unit tests,
which really do encourage a completely different kind of testing than
doctests.

> More importantly, if it could be done in a systematic
> way, all such tests could share the random generating functions: for
> example, all functions working over any field would need a "generate a
> random field"-function, and if there was a central place for these in

I wrote such a thing.  See rings.tests or test or rando_ring (I am
sending from a cell phone).



> Sage, the most common structures would quickly be available, making
> parameterised test writing even easier.
>
> - Johan
>
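The shared-registry idea could look something like this rough sketch
(all names here are hypothetical illustrations, not Sage's actual API):

```python
import random

# Hypothetical central registry mapping a structure name to a generator.
GENERATORS = {}

def register(name):
    """Decorator that files a generator under `name` in the registry."""
    def deco(fn):
        GENERATORS[name] = fn
        return fn
    return deco

@register("prime_field")
def random_prime_field():
    # A small random prime p, standing in for GF(p).
    return random.choice([2, 3, 5, 7, 11, 101, 1009])

# Any parameterised test can now pull a "random field" from one place.
p = GENERATORS["prime_field"]()
assert p in {2, 3, 5, 7, 11, 101, 1009}
```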
> --
> To post to this group, send an email to sage-devel@googlegroups.com
> To unsubscribe from this group, send an email to 
> sage-devel+unsubscr...@googlegroups.com
> For more options, visit this group at 
> http://groups.google.com/group/sage-devel
> URL: http://www.sagemath.org
>

-- 
William Stein
Professor of Mathematics
University of Washington
http://wstein.org
