On Jan 4, 2008 8:24 AM, Luc Le Blanc <[EMAIL PROTECTED]> wrote:
> I am trying to optimize repeated math operations. If I have
>
> UInt16  x, y;
>
> and need to divide x by 100. Is it easier on the CPU to write
>
> y = x / 100;
>
> than
>
> y = x * 0.01;
>
> I am assuming the former does an integer division, while the latter does a 
> floating-point multiplication, and that any floating-point operation takes 
> longer than an integer operation. Am I right?

Yes. The first form is a pure integer division. The second converts x to
double, multiplies in floating point, and truncates the result back to an
integer on assignment; on a CPU without hardware floating point, every
FP operation is emulated in software, so the integer form is much faster.

> Lastly, if I write
>
> y = x / 100.0;
>
> will the OS perform a floating-point division because of the 100.0, or will 
> it perform an integer division because both x and y are integers?

The C language defines how operands are converted before an operation.
Because one operand (100.0) is a double, the usual arithmetic conversions
apply: x is converted to double, the division is done in floating point,
and the result is converted back to UInt16 (truncating any fraction) when
it is assigned to y.

-- 
For information on using the ACCESS Developer Forums, or to unsubscribe, please 
see http://www.access-company.com/developers/forums/
