On Wednesday, November 20, 2013 11:13:32 AM UTC-8, Felix Breuer wrote:
>
> I don't think NumPy will help, as NumPy works with machine precision 
> throughout, as far as I was able to figure out.
>

I think you can put arbitrary fixed-length types in there, which would 
include multiprecision integers, since those are fixed-size structs holding 
a pointer to their variable-size part. That would only save you one level 
of indirection, though, so it may not be worth the hassle.
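To make that concrete, here is a small sketch of the dtype=object route: the
array then stores a pointer per entry to an ordinary Python int, which is
arbitrary precision, but element-wise arithmetic dispatches through Python, so
most of NumPy's speed advantage is lost.

```python
import numpy as np

# dtype=object makes each entry a pointer to a Python object, so the
# integers can grow arbitrarily large without overflowing.
a = np.array([10**50, 2**200], dtype=object)

# Arithmetic is exact, but each element goes through a Python-level call.
b = a * 2
print(b[0] == 2 * 10**50)  # exact, no wraparound
```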
 

> Regarding Cython, I'll have to find out if there is a way to work with 
> Cython and still use arbitrary precision arithmetic.
>

You can always use the Python types, though doing so limits the performance 
gains somewhat. You could also break open the Sage Integer type and work with 
the GMP interface directly.
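To illustrate the pure-Python route: plain Python ints are already arbitrary
precision, so a GCD reduction done with them can never overflow. This sketch
uses the stdlib math.gcd; Sage's Integer type (GMP-backed) behaves the same
way, just faster.

```python
from math import gcd

# Two entries sharing a huge common factor; Python ints handle the size.
a = 2**200 * 15
b = 2**200 * 35

g = gcd(a, b)  # 2**200 * 5

# Dividing out the GCD shrinks the values exactly, with no overflow risk.
print(a // g, b // g)  # 3 and 7
```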
 

> Question in general: When using machine precision arithmetic (be it via 
> Cython or NumPy), is there a way to tell whether at some point throughout a 
> long computation there were numerical overflows - without paying a huge 
> speed penalty for making this check?
>

C doesn't specify a standard way of doing so, but your specific 
compiler/architecture might provide access to the CPU's integer overflow 
flag.
By default, Cython will raise an OverflowError if you try to put a 
Python integer into a fixed-length type where it doesn't fit. If you're 
just dividing by the GCD, there is no further overflow problem.

-- 
You received this message because you are subscribed to the Google Groups 
"sage-support" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/sage-support.
For more options, visit https://groups.google.com/groups/opt_out.