On Thursday, September 24, 2015 at 8:05:52 PM UTC, Jeffrey Sarnoff wrote:
>
> It could be that integer powers are done with binary shifts in software 
> and the floating point powers are computed in the fpu.
>

I suspect not. [At least in this case, where the numbers being raised 
to a power are not integers. It would not make much sense to force the 
results of cos or sin to be integers.. :) ]


int^int could avoid the FPU.

float^int would, I think (at least for small integer exponents), be slow 
if done with binary shifts (and the rest that is needed) in software, since 
floating-point representation is hard/slow to emulate outside the FPU.
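To be concrete about what float^int without pow can look like: below is a minimal sketch of binary exponentiation ("squaring"), where the exponent's bits are walked with shifts but all the arithmetic is ordinary floating-point multiplication, so nothing has to be emulated in software. The function name `powi` is my own, not anything in Base.

```julia
# Sketch: float^int via binary exponentiation.
# The integer exponent is consumed bit by bit (shifts), while the
# float side only ever does multiplications -- no call to pow.
function powi(x::Float64, n::Int)
    n < 0 && return 1 / powi(x, -n)
    r = 1.0
    while n > 0
        isodd(n) && (r *= x)   # fold in the current exponent bit
        x *= x                 # square the base
        n >>= 1                # shift to the next bit
    end
    return r
end
```

For n up to a machine word this is at most ~2·64 multiplications, and for the small exponents discussed here it is just a handful.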


What could be done (and maybe already is), is to handle:

float^2, float^3, and so on up to some small integer exponent, by lowering 
them to floating-point multiplications. In general, would LLVM take care of 
such optimizations, or would Julia have to do it/help?
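The lowering being suggested is just strength reduction of small literal powers; a minimal sketch of what float^2 and float^3 can become is below. (As it turned out, later Julia versions do this on the Julia side: `x^2` with a literal integer exponent is lowered to `Base.literal_pow`, which inlines to multiplications, rather than relying on LLVM.)

```julia
# What a small literal power can be reduced to:
square(x) = x * x        # float^2 as one multiplication
cube(x)   = x * x * x    # float^3 as two multiplications
```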

I'm not sure how fast pow is in an FPU; it is probably not(?) optimized for 
these simple cases, since it needs to be general, while MULs can commonly 
be issued every cycle in FPUs (possibly with some latency when the result 
is used right away).

Similarly, float^2.0, or any other literal "float" exponent that is actually 
an integer, could be treated by the compiler as an int.
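That last suggestion can be sketched at runtime with an `isinteger` check; a real compiler would do this statically on the literal, but the dispatch is the same idea. The name `pow_literal` is hypothetical, just for illustration.

```julia
# Hypothetical sketch: if a "float" exponent is actually integral,
# take the integer-power path instead of the general pow.
function pow_literal(x::Float64, p::Float64)
    if isinteger(p)
        return x ^ Int(p)   # multiplication-based integer path
    end
    return x ^ p            # general pow for true float exponents
end
```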

-- 
Palli.
