Garst R. Reese wrote:

        uint16_t msize;
        uint32_t maxblocks;


        maxblocks = (csd.bm.C_SIZE + 1) * (1 << (csd.bm.C_SIZE_MULT + 2));
//      maxblocks = 1960*64;
A 16-bit * 16-bit signed multiply: with these values it overflows before the widening to the 32-bit maxblocks ever happens, so the result is undefined.
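One way around it (an untested sketch, keeping the same csd.bm field names) is to cast an operand up so the multiply itself is done in 32 bits:

        /* force 32-bit arithmetic before the multiply happens */
        maxblocks = (uint32_t)(csd.bm.C_SIZE + 1)
                    * (1UL << (csd.bm.C_SIZE_MULT + 2));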

        msize = maxblocks/64;
A 32-bit / 16-bit division of that undefined number; the result is then stored in a 16-bit unsigned integer, dropping any upper bits.

//      printf("%u",(1 << (csd.bm.C_SIZE_MULT + 2)));
The number is in range, so it presumably really is 64.

//      printf("%u",csd.bm.C_SIZE + 1);
The number is in range, so it presumably really is 1960.

        printf("%u",maxblocks/64);
The randomly overflowed 16-bit product, now sitting in the 32-bit integer, divided by 64 gives a 32-bit answer. printf then treats it as a 16-bit value, because %u expects a (16-bit) unsigned int here.

//      printf("%u",msize);
Similar to the previous printf, except that here a genuine 16-bit unsigned value is passed, and %u matches it.
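If the 32-bit maxblocks/64 value itself is what should be printed on a 16-bit-int target, it would have to be passed as a long and formatted accordingly, roughly (untested):

        printf("%lu", (unsigned long)(maxblocks / 64));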

When enabled, the first printf prints 64, the second prints 1960, and the third and fourth both print 65448.

Substituting maxblocks = 1960*64 for the longer expression did not
help.

Am I missing something?
Some incorrect mixed-length operations, I think :-)
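The constant form runs into the same thing: 1960 and 64 are plain (16-bit) ints, so 1960*64 still overflows before it reaches maxblocks. The 65448 looks consistent with that: the wrapped product 0xEA00 sign-extends into maxblocks as 0xFFFFEA00, divided by 64 that is 0x03FFFFA8, and the %u printf shows only the low 16 bits, 0xFFA8 = 65448. A minimal untested sketch, using stand-in constants for the csd.bm fields and assuming 16-bit int with 32-bit long:

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            uint16_t blocks = 1960;   /* stand-in for csd.bm.C_SIZE + 1 */
            uint16_t mult   = 64;     /* stand-in for 1 << (C_SIZE_MULT + 2) */
            uint32_t maxblocks;
            uint16_t msize;

            /* the cast promotes the multiply to 32 bits, so it cannot overflow */
            maxblocks = (uint32_t)blocks * mult;

            msize = maxblocks / 64;   /* 1960 fits comfortably in 16 bits */

            /* 32-bit values need a long argument and %lu on a 16-bit target */
            printf("%lu\n", (unsigned long)maxblocks);   /* expect 125440 */
            printf("%u\n", msize);                       /* expect 1960 */
            return 0;
        }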

Regards
Steve


