#10792: Upgrade numpy to 1.5.1
----------------------------+-----------------------------------------------
Reporter: jason | Owner: tbd
Type: task | Status: needs_info
Priority: major | Milestone: sage-4.7
Component: packages | Keywords:
Author: | Upstream: N/A
Reviewer: David Kirkby | Merged:
Work_issues: |
----------------------------+-----------------------------------------------
Changes (by drkirkby):
* reviewer: => David Kirkby
Comment:
Replying to [comment:19 evanandel]:
> Replying to [comment:18 drkirkby]:
>
> > This is where it would be helpful if doctests actually documented why the
particular value is correct. I've seen ''sooooo'' many doctests where the
"expected value" is whatever someone got on their computer and is not
substantiated in any way as a comment in the code.
>
> Unfortunately, to my knowledge, this is the only extant tool that
performs this sort of Riemann map. I believe that there are one or two
cases where the analytic map is known, so I can probably add some tests
that check accuracy against that.
So essentially the "test" in its current form just demonstrates that the
result on different machines is approximately the same. It does not
demonstrate that the result on these machines is approximately correct -
the code could be wrong and give results 1000 times what they should be.
I'd feel a lot happier if the doctest used an example where the result is
known, and cited a reference to the result.
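As an illustration of what such a known-result check could look like (this
uses the Cayley transform, a textbook analytic Riemann map, purely as a
stand-in example - it is not taken from the code under review):

```python
def cayley(z):
    """Analytic Riemann map of the upper half-plane onto the unit disk."""
    return (z - 1j) / (z + 1j)

# The boundary of the half-plane (the real axis) must land on the unit
# circle, so |cayley(x)| == 1 for every real x. That is a citable
# mathematical fact, not just "whatever my machine printed".
for x in (-10.0, -1.0, 0.0, 0.5, 3.0):
    w = cayley(complex(x, 0.0))
    assert abs(abs(w) - 1.0) < 1e-12
```

A doctest built this way documents *why* the expected value is correct,
rather than merely recording one machine's output.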
Consider for example the approach I took to testing some finite difference
software I wrote for computing the impedance of electrical transmission
lines of arbitrary cross section.
http://atlc.sourceforge.net/
Although there are a few '''examples''' demonstrating the answer for
strange cross sections, where I have no chance of computing the result
analytically
http://atlc.sourceforge.net/examples.html
for the purposes of ''testing''
http://atlc.sourceforge.net/accuracy.html
the tests were based on cases where analytical results were known. The
results were checked on a variety of CPUs (AMD, Cray, Intel Itanium, Intel
x86, PA-RISC, PPC, Sun SPARC etc) on a variety of operating systems (AIX,
Irix, HP-UX, OS X, Solaris, Tru64, Unicos, UnixWare, plus various Linux
distributions).
> > Also, if the algorithm, or its implementation in Sage, has poor
numerical stability, this should be documented.
>
> As far as I've seen, it's not unstable in the sense of dramatically
losing accuracy, but many of the numerical calculations are sensitive to
slight differences in machine-level implementation. This results in slight
differences in the final error. I should be able to do some error analysis
and see if these deviations are within the bounds of the algorithm.
>
> > Could this be computed with Mathematica or Wolfram|Alpha to arbitrary
precision? Just a thought. If so, that could be documented - we have
permission from Wolfram Research to use Wolfram|Alpha for the purpose of
comparing results and documenting those comparisons.
>
> Not without complete reimplementation, and I know of no reason why their
performance should be better than ours. You can increase the numerical
precision of the computation by increasing N (the number of collocation
points on the boundary.) I can create a couple of comparison tests that
can be run on different machines to see if that decreases the numerical
deviation.
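A hedged sketch of that kind of convergence check (using a trapezoid-rule
integral as a stand-in, since the Riemann-map code itself is not shown in
this thread): compute the same quantity at increasing resolution and
verify the error against a known exact value shrinks.

```python
def estimate(n):
    """Trapezoid-rule stand-in for a discretisation with n points.
    The exact value of the integral of x**2 over [0, 1] is 1/3."""
    h = 1.0 / n
    s = 0.5 * (0.0 ** 2 + 1.0 ** 2)  # endpoint contributions
    for k in range(1, n):
        s += (k * h) ** 2
    return s * h

# Raising n (the analogue of the collocation count N) must drive the
# error toward zero; if it does not, the implementation is suspect.
errors = [abs(estimate(n) - 1.0 / 3.0) for n in (10, 100, 1000)]
assert errors[0] > errors[1] > errors[2]
```

The same pattern applied to the Riemann-map routine, with N doubled a few
times, would show whether the cross-machine deviations sit within the
algorithm's own discretisation error.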
Well, as far as I know NumPy will use machine precision, whereas anything
you do in Mathematica can be done to arbitrary precision. There are many
things in Sage that can only be done on an FPU, so arbitrary-precision
floating-point arithmetic is not supported for them. Mathematica does not
suffer that limitation.
Let's say for example a doctest of Sage's factorial function used 50! as an
example, and gave the result
{{{30414093201713378043612608166064768844377641568960522000000000000}}}.
If every machine it was tried on, using a variety of CPUs (PPC, SPARC,
Intel x86, AMD x86, Intel Itanium processors), gave the same result, that
would not prove Sage is correct. It would only demonstrate the
reproducibility of Sage's factorial function, which is something very
different from a good ''test'' in my opinion. Performing the same
calculation in MATLAB, which can only do the calculation on the floating
point processor, would increase confidence in the result. Performing the
calculation in Mathematica would show a difference in one of the digits,
which would lead one to question which of the two packages is wrong. (I
purposely changed one of the digits, introducing a relative error of 3.28
x 10^-52^, which would not be seen on an FPU.)
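The arithmetic behind that deliberately altered digit can be checked in a
few lines (the {{{quoted}}} value below is the doctored one given above;
Python's exact integer factorial supplies the true value):

```python
from math import factorial

exact = factorial(50)  # exact integer value of 50!
quoted = 30414093201713378043612608166064768844377641568960522000000000000

# One digit was changed (...512... -> ...522...), a difference of 10**13.
assert quoted - exact == 10 ** 13

# Relative error of roughly 3.28e-52 - far below double precision's
# ~1e-16 resolution, so an FPU-only comparison could never detect it.
rel_err = (quoted - exact) / exact
assert 3.2e-52 < rel_err < 3.4e-52
```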
I was reading only recently about when a new Mersenne prime was found.
That was found on an x86 chip, but it was also verified by other software
on an x86 chip, and also by yet more software on a SPARC processor.
It seems to me that many doctests in Sage just show reproducibility, and
don't actually test the algorithm or the implementation. This appears to
be one such ''test''.
PS, the "author" field needs to be filled in.
Dave
--
Ticket URL: <http://trac.sagemath.org/sage_trac/ticket/10792#comment:25>
Sage <http://www.sagemath.org>
Sage: Creating a Viable Open Source Alternative to Magma, Maple, Mathematica,
and MATLAB