On Sunday, December 4, 2016 at 12:00:58 AM UTC+1, Pierre wrote:
>
> I tried naive things like setting x and y to be integers of a certain 
> type, and then
>
> sage: %timeit x^y
>
> for example, but I always get ""  The slowest run took 59.81 times longer 
> than the fastest. This could mean that an intermediate result is being 
> cached. ""
>

Yes, the first computation is relevant here (it includes warm-up effects), so 
instead of %timeit I would use %time.
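A plain-Python way around that warning (a sketch using the standard timeit 
module rather than the IPython magic; the helper name is illustrative) is to 
repeat the measurement and keep the best run:

```python
import timeit

# Repeat the measurement and keep the fastest run: the minimum is the
# least affected by the caching/warm-up effects behind the
# "slowest run took N times longer than the fastest" warning.
def best_time(stmt, setup="pass", number=1000, repeat=5):
    times = timeit.repeat(stmt, setup=setup, number=number, repeat=repeat)
    return min(times) / number  # seconds per single execution

t = best_time("x ** y", setup="x, y = 3, 500")
print(f"{t * 1e6:.2f} µs per x ** y")
```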
 

> This makes sense, but I'm not sure what else to try. Individual " %time 
> x^y " statements seem to show no difference between ZZ and numpy.int, for 
> example, which puzzles me (overhead?). Exact same issues when defining the 
> factorial via
>

Both ZZ and numpy use libgmp internally, so once you subtract the other 
overhead the times should be similar.
 

> fac= lambda n : 1 if n == 0 else n*fac(n-1)
>

Note, however, that most of the time for fac is spent in the Python-level 
recursion rather than in the arithmetic.
 
sage: %time _=fac(500)
CPU times: user 397 µs, sys: 0 ns, total: 397 µs
Wall time: 344 µs
sage: %time _=factorial(500)
CPU times: user 14 µs, sys: 6 µs, total: 20 µs
Wall time: 21.9 µs
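
The same gap can be reproduced outside Sage with the standard library alone 
(a sketch; math.factorial stands in here for Sage's factorial, on the 
assumption that both run their loop in compiled code):

```python
import math
import sys
import timeit

sys.setrecursionlimit(3000)  # head-room for the recursive version

# Recursive pure-Python factorial: every step pays Python-level
# function-call overhead on top of the bignum multiplication.
fac = lambda n: 1 if n == 0 else n * fac(n - 1)

# math.factorial runs the whole loop in C, leaving only the bignum
# arithmetic itself.
t_py = min(timeit.repeat(lambda: fac(500), number=100, repeat=5)) / 100
t_c = min(timeit.repeat(lambda: math.factorial(500), number=100, repeat=5)) / 100
print(f"pure Python: {t_py * 1e6:.1f} µs   C loop: {t_c * 1e6:.1f} µs")
```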

Also note that time for converting the internal result to string output can 
be significant as well.
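For instance (a stdlib sketch; note that Python 3.11+ caps int-to-string 
conversion at 4300 digits by default, so the limit has to be lifted first):

```python
import math
import sys
import timeit

# Python 3.11+ caps int -> str conversion at 4300 digits by default;
# lift the cap where the knob exists.
if hasattr(sys, "set_int_max_str_digits"):
    sys.set_int_max_str_digits(1_000_000)

n = math.factorial(5000)  # roughly 16,000 decimal digits

# Converting a large integer to its decimal string is real work and
# can rival the computation whose result it prints.
t_comp = min(timeit.repeat(lambda: math.factorial(5000), number=20, repeat=3)) / 20
t_str = min(timeit.repeat(lambda: str(n), number=20, repeat=3)) / 20
print(f"compute: {t_comp * 1e3:.3f} ms   to-string: {t_str * 1e3:.3f} ms")
```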

> So here is my question: does anybody know of a basic test/piece of code 
> that would illustrate the difference in speed between various types of 
> integers and/or floats?
>

As shown, the differences are negligible once you account for the other 
factors.
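
That said, a minimal per-type micro-benchmark could look like this (a sketch 
with illustrative cases; the point is to keep the operands identical and time 
only the operation itself, so per-type differences are not drowned out by 
setup and conversion costs):

```python
import timeit

# Illustrative cases: a machine-size int, a multi-limb bignum base,
# and a float, all raised to the same small power.
cases = {
    "small int": (3, 5),
    "big int": (3 ** 64, 5),
    "float": (3.0, 5.0),
}
for label, (x, y) in cases.items():
    t = min(timeit.repeat(lambda: x ** y, number=100_000, repeat=5)) / 100_000
    print(f"{label:10s}: {t * 1e9:.0f} ns per x ** y")
```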

-- 
You received this message because you are subscribed to the Google Groups 
"sage-devel" group.
