I realize this thread is 4 months old, but let me respond to this one
technical question:
On Thu, Mar 4, 2010 at 2:10 AM, Simon King simon.k...@nuigalway.ie wrote:
Hi!
On Mar 4, 8:24 am, Robert Bradshaw rober...@math.washington.edu
wrote:
I believe there is also some randomized testing that
There are the Wester tests, which we ship and test (the ones we can do
at least)
http://hg.sagemath.org/sage-main/file/8c4f10086e20/sage/calculus/wester.py
I believe there is also some randomized testing that is done in the
category code that takes random elements and verifies they have the
Hi!
On Mar 4, 8:24 am, Robert Bradshaw rober...@math.washington.edu
wrote:
I believe there is also some randomized testing that is done in the
category code that takes random elements and verifies they have the
correct properties (e.g. commutativity, associativity, etc.) that has
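(As far as I recall, Sage's real machinery for this lives in the category framework, via the _test_* methods that TestSuite(...).run() executes. Purely as a standalone illustration of the randomized-testing idea, here is a minimal plain-Python sketch; the function name and setup are invented for this example:)

```python
import random

def spot_check_axioms(elements, op, trials=100):
    """Draw random elements and spot-check that `op` is commutative
    and associative.  Illustration only; Sage's own checks are the
    _test_* methods run by TestSuite(...).run()."""
    for _ in range(trials):
        a, b, c = (random.choice(elements) for _ in range(3))
        assert op(a, b) == op(b, a), f"commutativity fails on {a}, {b}"
        assert op(op(a, b), c) == op(a, op(b, c)), \
            f"associativity fails on {a}, {b}, {c}"
    return True

# Example: addition in the integers modulo 7 passes both checks.
spot_check_axioms(list(range(7)), lambda a, b: (a + b) % 7)
```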
Robert Bradshaw wrote:
As I've mentioned before, internal consistency checks
can be better than comparing against commercial programs, so that way
anyone can run and verify them, and they often illustrate interesting
math (e.g. verification of deep, abstract theorems for specific examples).
Hi David,
Although it is true that not everyone can run tests against commercial
software, I would have thought a significant proportion of Sage users
could. There is already an interface to Mathematica. Many Sage users and
developers work in universities, which often have
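(On the "verification of deep, abstract theorems for specific examples" point above: a test of that flavour needs no external system to check against, because the theorem itself supplies the expected answer. A tiny self-contained sketch in plain Python, not Sage's actual test code, using Fermat's little theorem:)

```python
# Fermat's little theorem: for a prime p and any a with 1 <= a < p,
# a^(p-1) is congruent to 1 modulo p.  An internal consistency test
# can verify this for specific primes without consulting any other
# system.
for p in [2, 3, 5, 7, 11, 13, 101]:
    for a in range(1, p):
        assert pow(a, p - 1, p) == 1, (a, p)
```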
On Mar 4, 2010, at 2:07 AM, Dr. David Kirkby wrote:
Robert Bradshaw wrote:
As I've mentioned before, internal consistency checks can be better
than comparing against commercial programs, so that way anyone can
run and verify them, and they often illustrate interesting math
(e.g.
On 03/04/2010 04:07 AM, Dr. David Kirkby wrote:
Anyway, it seems my view is a minority one here.
I don't think that's necessarily the case (I agree with you that
randomized testing is a good thing). However, I also agree with others
that writing doctests is more important for those that
If this is a call for a vote ;-), let me say that I completely agree with the
point of view that, in an ideal world, tests should be written *before* the
code and by a *different* person (extreme/peer programming).
In an ideal world, tests would be extracted from the theorems of theoretical
papers.
Jason Grout wrote:
On 03/04/2010 04:07 AM, Dr. David Kirkby wrote:
Anyway, it seems my view is a minority one here.
I don't think that's necessarily the case (I agree with you that
randomized testing is a good thing). However, I also agree with others
that writing doctests is more
About test suites, random or not, so maybe slightly off-topic, but I didn't
want to open a new topic for something so close. I just wonder: has anyone
tested Sage against the http://eqworld.ipmnet.ru/ exact-solution database?
It's a basic database of exact solutions for integrals, ODEs, and much
more, but
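(I don't know of an existing harness for eqworld, but the general shape of such a test is easy to sketch. The database entries, integrands, and tolerance below are all invented for illustration; a real harness would parse eqworld's actual tables and compare symbolically rather than numerically:)

```python
import math

# Toy stand-in for an exact-solution database:
# (integrand, a, b, known exact value of the integral on [a, b]).
DATABASE = [
    (lambda x: x ** 2,      0.0, 1.0,     1.0 / 3.0),  # integral of x^2 on [0, 1]
    (lambda x: math.sin(x), 0.0, math.pi, 2.0),        # integral of sin x on [0, pi]
]

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

# Every database entry should agree with the system's own evaluation.
for f, a, b, exact in DATABASE:
    assert abs(simpson(f, a, b) - exact) < 1e-8
```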
On Mar 4, 4:01 am, Dr. David Kirkby david.kir...@onetel.net wrote:
BTW, playing around I found this bug in Mathematica by picking some extreme
cases:

In[3]:= Sin[2^900.23]
Out[3]= 0.938865  (* this agrees with Sage *)

In[4]:= Sin[2^5000.0]
Out[4]= 0.

It seems that for any
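(One plausible reading of that Out[4]= 0., and I am speculating about Mathematica's internals here: at roughly 16 significant digits the spacing between representable numbers near 2^5000 is astronomically larger than the period 2*pi, so significance arithmetic concludes that no digit of the sine is trustworthy, whereas Sage/MPFR evaluate the sine of the exact representable input with enough internal precision. The effect is easy to quantify even at double precision:)

```python
import math

# Near x = 2^60 the gap between adjacent IEEE doubles is 2^(60-52) = 256,
# so each representable value already spans dozens of periods of sin.
x = 2.0 ** 60
gap = math.ulp(x)
print(gap)                   # 256.0
print(gap / (2 * math.pi))   # about 40.7 periods between neighbouring doubles
```

So without extra-precision bookkeeping, any answer in [-1, 1] would be "consistent" with such an input.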
There has been some previous discussion about this on sage-devel, I
can't find exactly the thread I remember but here's a somewhat related
one:
http://groups.google.com/group/sage-devel/browse_thread/thread/b91c51672ae0f475/
Personally I think it makes sense to put the most effort into getting
mhampton wrote:
There has been some previous discussion about this on sage-devel, I
can't find exactly the thread I remember but here's a somewhat related
one:
http://groups.google.com/group/sage-devel/browse_thread/thread/b91c51672ae0f475/
Thank you.
Personally I think it makes sense to