That is because UV (and I presume UD) works on two's complement integers, so the top bit flips you between a positive and a negative integer. This is why I work the algorithm in 8-bit blocks, so that the top bit of the 32-bit integer is never set and BITAND works as expected.
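For illustration, here is a minimal sketch of that byte-at-a-time approach (the variable names and test values are hypothetical, not from my actual program); BITAND only ever sees operands in the range 0-255, so the 32-bit sign bit can never come into play:

* Sketch: AND two values one byte at a time, so BITAND never
* sees an operand with the 32-bit sign bit set.
A = 3000000005      ;* bigger than 2^31, unsafe for a direct BITAND
B = 2863311530
RESULT = 0
SCALE = 1
FOR I = 1 TO 4      ;* four 8-bit blocks cover a 32-bit value
   RESULT = RESULT + BITAND(MOD(A, 256), MOD(B, 256)) * SCALE
   A = INT(A / 256)
   B = INT(B / 256)
   SCALE = SCALE * 256
NEXT I
CRT "A AND B = ":RESULT

MOD and INT do the shifting and masking, so nothing is ever near bit 31; the same loop extends to more blocks if the operands are wider.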
The problem is that UniData uses signed 64-bit integers everywhere except in these bit functions, which seem to use 32-bit integers. As an example, a test program on UniData:
ININT = 2147483648
CRT "ININT = ":ININT
CRT "BITAND(ININT, 1) = ":BITAND(ININT, 1)
CRT "BITAND(ININT, 2^31 - 1) = ":BITAND(ININT, 2147483647)
CRT "BITAND(ININT, 2^31) = ":BITAND(ININT, 2147483648)
Running it:

:TEST
ININT = 2147483648                    // = 2^31
BITAND(ININT, 1) = 1                  // should be 0
BITAND(ININT, 2^31 - 1) = 2147483647  // should be 0
BITAND(ININT, 2^31) = 2147483647      // should be 2^31
In your program you have BITAND(ININT, 255) to strip out the low byte, which will work if and only if ININT < 2^31.
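If ININT can reach 2^31, a safer way to pull out the low byte (again, just a sketch) is to let MOD do the masking: for non-negative X, MOD(X, 256) gives the same answer as BITAND(X, 255) without going anywhere near the sign bit:

ININT = 2147483648
CRT "low byte = ":MOD(ININT, 256)   ;* prints 0, the right answer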
I probably should report this to IBM support, but IBM "outsourced" our support contract to a third party recently and I am very nervous about calling them up:
http://www.salon.com/tech/feature/2004/02/23/no_support/index_np.html
(Sorry about the ad, but the article is one of the funniest I have seen this year.)
- Robert
