Paul Eggert <[EMAIL PROTECTED]> writes:

> Simon Josefsson <[EMAIL PROTECTED]> writes:
>
>> -#define K1 0x5a827999L
>> -#define K2 0x6ed9eba1L
>> +#define K1 0x5a827999
>> +#define K2 0x6ed9eba1
>
> That fixes the bug on hosts where int is 32 bits and long is 64 bits,
> but if I understand the problem you found, then surely the bug remains
> on hosts where int and long are both 64 bits.

Yes, probably.  OTOH, if 'short' on that platform were 32 bits, wouldn't
those constants be 'short', so there wouldn't be a problem?  Then it
seems that this fix would also work:

#define K1 ((uint32_t)0x5a827999)
#define K2 ((uint32_t)0x6ed9eba1)

However, if the platform doesn't have ANY normal integer type that is
32 bits wide, there would be problems.  But can such a platform exist?
uint32_t must exist on it, after all.

> -  return crc32_update_no_xor (crc ^ 0xffffffffL, buf, len) ^ 0xffffffffL;
> +  return crc32_update_no_xor (crc ^ 0xffffffff, buf, len) ^ 0xffffffff;

Looks fine to me, I'm installing it.

> -#define rol(x,n) ( ((x) << (n)) | ((x) >> (32-(n))) )
> +#define rol(x, n) (((x) << (n)) | ((uint32_t) (x) >> (32 - (n))))
...
> -#define rol(x, n) (((x) << (n)) | ((x) >> (32 - (n))))
> +#define rol(x, n) (((x) << (n)) | ((uint32_t) (x) >> (32 - (n))))

I don't understand this.  Why would a left-shifted uint32_t become any
other type?

/Simon
