On 2007-03-24, Ralf Hildebrandt <[email protected]> wrote:

>>> r = (uint32_t)u1 * u2;
>> 
>> GCC generates the exact same code (as it should).
>
> O.k, then GCC is smart,

Almost.  But that's nothing to do with my question: for the
above code, why are u1 and u2 converted to 32-bit values before
being passed to the 16x16=>32 multiply routine?
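
For reference, a minimal sketch of the code in question,
assuming u1 and u2 are declared uint16_t (their declarations
weren't quoted above):

    #include <stdint.h>

    uint16_t u1, u2;   /* assumed 16-bit unsigned operands */
    uint32_t r;

    void f(void)
    {
        /* The cast promotes u1 to uint32_t; the usual arithmetic
           conversions then convert u2 to uint32_t to match, so in
           the abstract machine this is a 32x32 multiply. */
        r = (uint32_t)u1 * u2;
    }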

> but don't expect other compilers to show the same behavior.

I expect any decent compiler to recognize that it only needs to
do a 16x16=>32 multiply, but that's got nothing to do with my
question.
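
A sketch of the optimization I'd expect, again assuming uint16_t
operands; mul16x16_32 is a hypothetical name standing in for the
target's 16x16=>32 multiply routine:

    #include <stdint.h>

    /* Hypothetical 16x16=>32 library/hardware multiply. */
    extern uint32_t mul16x16_32(uint16_t a, uint16_t b);

    uint32_t mul(uint16_t a, uint16_t b)
    {
        /* The upper 16 bits of both promoted operands are known
           to be zero, so a compiler may lower this to
           mul16x16_32(a, b) instead of a full 32x32 multiply. */
        return (uint32_t)a * b;
    }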

-- 
Grant Edwards                   grante             Yow!  Is it FUN to be
                                  at               a MIDGET?
                               visi.com            

