Martin wrote:
During the above calculation ("or") a sign extension is required, because
the result *must* have a definite sign. Otherwise a subsequent comparison
such as (x or j) > 0 could not yield a result.
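For illustration, a minimal sketch of that ambiguity; the variable names
and values are assumed for the example, and it makes no claim about what
any particular FPC version actually does:

  program OrAmbiguity;
  var
    x: LongInt;   { signed 32 bit }
    j: LongWord;  { unsigned 32 bit }
  begin
    x := -1;        { bit pattern $FFFFFFFF }
    j := $80000000; { high bit set }
    { Whether (x or j) > 0 holds depends on how the "or" result is
      widened: sign-extended (signed view, value -1, comparison False)
      or zero-extended (unsigned view, value 4294967295, comparison
      True). }
    if (x or j) > 0 then
      WriteLn('the "or" result was treated as unsigned (zero-extended)')
    else
      WriteLn('the "or" result was treated as signed (sign-extended)');
  end.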
This is casting a "set of bits" (neither signed nor unsigned - a set is
not a number at all) into a number. Such a cast only needs a definition
of whether it converts to a signed or an unsigned type.
A set of bits should simply stay incompatible with ordinal values, as it
is; then everything is fine. The compiler would flag any mix as an error
instead of silently doing something unwanted.
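As a rough sketch of that idea: Pascal's own set types already behave
this way, so something along these lines (the type name TBits32 is
invented for the example) would give the desired compile-time error on
any mix:

  program BitSetSketch;
  type
    TBits32 = set of 0..31;   { a pure collection of bits, not a number }
  var
    bits: TBits32;
  begin
    bits := [0, 31];
    { "if bits > 0 then ..." is rejected by the compiler, because a set
      is not an ordinal value - the mix has to be made explicit. }
    if 31 in bits then
      WriteLn('bit 31 is set');
  end.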
Unsigned integer types can be represented as subranges of the next
bigger signed type; then all problems with unwanted sign extension go
away. When e.g. a file size exceeds 31 bits, it's not a good idea to
store it in an unsigned 32-bit value, which will be too small again a
few weeks later.
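A small sketch of that representation, assuming a file size as the use
case (the type and variable names are invented for the example):

  program SubrangeSketch;
  type
    { the full unsigned 32-bit range, carried by a bigger signed host type }
    TByteCount32 = 0..High(LongWord);
  var
    smallSize: TByteCount32;
    fileSize: Int64;          { leaves room to grow beyond 32 bits }
  begin
    smallSize := High(LongWord);
    fileSize := smallSize;    { widens without any sign extension question }
    WriteLn(fileSize);
  end.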
The operation could be done with 32 bits:
set_of_bits := s32 or u32;
and if you then do
set_of_bits > int
set_of_bits + int
or anything similar, set_of_bits must be cast to an integer type. It
needs to be defined whether that casts to signed or unsigned.
Right. The intended cast should be inserted by the coder.
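For example, a hedged sketch of what that explicit choice could look
like, assuming the raw bits end up in a 32-bit variable (names invented
for the example):

  program ExplicitCast;
  var
    bits: LongWord;   { raw bit pattern, e.g. the result of s32 or u32 }
    i: LongInt;
  begin
    bits := $FFFFFFFF;
    i := 0;
    if LongInt(bits) > i then   { the coder chose the signed reading: -1 }
      WriteLn('signed reading is greater')
    else
      WriteLn('signed reading is not greater');
    if Int64(bits) > i then     { the coder chose the unsigned reading: 4294967295 }
      WriteLn('unsigned reading is greater');
  end.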
DoDi