N. Coesel wrote:

Indeed.  It can be proven mathematically that when the upper 16
bits of both operands are 0, the 32 bit result is the same for
a 16*16 multiply as it is for a 32*32 multiply. Therefore, the
compiler is free to do a 16*16 multiply, and many compilers
quite correctly do so.


I don't think this is true for signed multiplications. I can't reproduce the
exact why and how right now, but I had some problems with signed
multiplication in FPGA designs in which I had to extend the sign bit to all
unused bits.

You are right - signed and unsigned multiplication are different. Sign extension is important.

For a pure software multiplication, GCC sign-extends the operands of a signed multiplication, so it ends up as a 32*32=>32 bit multiplication.

The MSP430 hardware multiplier is capable of both signed and unsigned multiplication with 16*16=>32 bits. The compiler knows that and chooses the right type. Sign handling is done in hardware (e.g. with Booth encoding or the Baugh-Wooley algorithm).

A stupid compiler that did not know about the features of the MSP430 multiplier would infer a chain of 16 bit hardware multiplications. GCC is smart enough to use a single 16*16=>32 bit hardware multiplication - signed or unsigned as needed.

Ralf
