On 9/19/10 7:09 AM, KvS wrote:
Alright, many thanks for the clear and extensive answer, Thierry.
The bottom line, then, is that I'll have to live with it.
On a side note, I must admit it surprises me. I'm only an amateur
programmer, and I know little about the subtleties of how CPUs
interact with code, but you would somehow expect that when you add two
arbitrary-precision numbers, say, it should be possible to break them
up into smaller chunks (like 2115 + 3135 = (21+31)*100 + (15+35)) so
that each of the smaller chunks is essentially a double addition that
can be performed at optimal speed. You would pay the overhead of
breaking the numbers into chunks and putting them back together, plus
the cycles needed for the additions, but I'd have guessed that would
be a lot less than a factor of 100.
Well, anyhow, the above is of course layman talk and I'm surely
missing essential points, but that's why I'd have expected
arbitrary-precision speed to be a lot closer to double speed.
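[Editorial note: the chunked scheme described above is essentially how
bignum libraries work. A minimal Python sketch, purely illustrative --
real libraries such as GMP store chunks ("limbs") as machine words and
do the carry propagation in tuned C/assembly, which is where much of
the per-operation overhead lives:]

```python
# Illustrative chunked ("limb"-based) addition, mirroring the example
# 2115 + 3135 = (21+31)*100 + (15+35). BASE = 100 here to match the
# example; real libraries use a machine-word base like 2**64.
BASE = 100

def to_limbs(n):
    """Split a non-negative integer into base-BASE chunks, least significant first."""
    limbs = []
    while n:
        n, r = divmod(n, BASE)
        limbs.append(r)
    return limbs or [0]

def add_limbs(a, b):
    """Add two limb lists with explicit carry propagation."""
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        carry, limb = divmod(s, BASE)
        out.append(limb)
    if carry:
        out.append(carry)
    return out

def from_limbs(limbs):
    """Recombine the chunks into a single integer."""
    return sum(l * BASE**i for i, l in enumerate(limbs))

print(from_limbs(add_limbs(to_limbs(2115), to_limbs(3135))))  # prints 5250
```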
It's more than that. You can see at
http://www.mpfr.org/mpfr-current/mpfr.html#Introduction-to-MPFR that
MPFR guarantees correctly rounded results at any chosen precision
(modulo bugs, of course) in a cross-platform manner.
To see the sort of thing you mention above, you might look at the
quad-double library (this isn't arbitrary precision, though):
Jason
--
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/sage-support
URL: http://www.sagemath.org