I'm implementing data type U for binary unsigned. (Mucking about in TCP/IP headers tends to instill such a desire fairly quickly.)
What should happen when an output field has type U and the input field has type D, F, or P with a negative sign?

For two's-complement input, I plan to treat the number as unsigned without changing any bits; that is, -1 becomes 2**32-1. This is in line with the C language treatment. OK?

What about float and packed? Should the sign be dropped silently, or should this condition attract an error message?

Thanks, j.

PS: The G3 level set is getting closer (that is, it has occurred for a stage I just added, but not yet across the board). Those who would suffer from such a move, please speak up now or send me mail off the list.
