--- In [email protected], "Pedro Izecksohn" <[EMAIL PROTECTED]> wrote:
>
> OK, I wrote a bad piece of code. Let me try to codify my problem again:
> 
> #include <limits.h>
> #include <stdio.h>
> 
> int main (void) {
>   unsigned short int a;
>   unsigned long long int b, c;
>   a = USHRT_MAX;
>   b = (a*a);
>   c = ((unsigned int)a*(unsigned int)a);
>   printf ("Why %llx != %llx ?\n", b, c);
>   return 0;
> }
> 
> When I execute it I get:
> Why fffffffffffe0001 != fffe0001 ?
> 
> b is wrong.

I see what you are saying - ignoring what C says should happen, and
assuming USHRT_MAX = 0xffff, you have:

b = a * a
  = 0xffff * 0xffff
  = 0xfffe0001

So why do you get 0xfffffffffffe0001?

Good question - my guess is that the integer promotions are at work: in
a*a both operands are promoted to int, so the multiply is done as a
signed int multiply. 0xffff * 0xffff doesn't fit in a 32-bit int, and on
typical implementations the result comes out as a negative (ie. signed)
value, which then gets sign-extended when it is converted to unsigned
long long for the assignment to b.
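
Something like this shows the path the value takes (just a sketch,
assuming a 16-bit unsigned short and a 32-bit int, which is what most
desktop compilers give you):

#include <stdio.h>

int main (void) {
  unsigned short a = 0xffff;
  int prod = a * a;            /* operands promoted to int; the multiply
                                  overflows, and on typical machines wraps
                                  to the negative value -131071 */
  unsigned long long b = prod; /* the negative int is sign-extended here */
  printf ("prod = %d, b = %llx\n", prod, b);
  return 0;
}

On such a setup that should print prod = -131071, b = fffffffffffe0001,
which matches what you are seeing.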

For example, if you have -1 as a signed short, and want to convert it
to a signed int, the result should still be -1. But if you look at
these values in hex, you have 0xffff (short) becoming 0xffffffff
(int), which looks similar to what has happened to b in your program.
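
In code (again just a sketch, assuming a 16-bit short and a 32-bit int):

#include <stdio.h>

int main (void) {
  short s = -1;
  int i = s;  /* the value is preserved: i is still -1 */
  printf ("%x -> %x\n", (unsigned short) s, (unsigned) i);
  return 0;
}

which should print ffff -> ffffffff.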

But hopefully someone can quote the relevant bit of the Standard.

John
