That's useful to know - thanks, Florian.  So it's possible to forgo the -1 check if no downsizing occurs.

I suppose it makes sense.  Going by the signed 64-bit equivalents, $FFFFFFFF80000000 div $FFFFFFFFFFFFFFFF = $0000000080000000, which, when typecast to a LongInt, produces a signed overflow that isn't checked without -Co, so the answer is $80000000.  On the flip side, $0000000080000000 div $FFFFFFFFFFFFFFFF = $FFFFFFFF80000000, which, when typecast to a LongInt, is also equal to $80000000, this time without an overflow.  Even though the original operand $0000000080000000 is out of range for a LongInt, it's perfectly fine as an Int64.
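As a quick sanity check of the arithmetic above, here is a throwaway sketch in Python (helper names are my own, not anything in FPC; Python's // is floor division, but both quotients here are exact, so it agrees with Pascal's truncating div):

```python
def to_signed(value, bits):
    """Interpret the low `bits` of a value as a two's-complement signed integer."""
    value &= (1 << bits) - 1
    return value - (1 << bits) if value >= (1 << (bits - 1)) else value

def truncate_to_longint(value):
    """Model typecasting an Int64 result to a 32-bit LongInt (no -Co check)."""
    return to_signed(value, 32)

a = to_signed(0xFFFFFFFF80000000, 64)  # -2147483648 as an Int64
b = to_signed(0xFFFFFFFFFFFFFFFF, 64)  # -1 as an Int64

q1 = a // b                                  # 2147483648 = $0000000080000000
q2 = to_signed(0x0000000080000000, 64) // b  # -2147483648 = $FFFFFFFF80000000

# Both quotients truncate to the same LongInt bit pattern, $80000000;
# only the first one involves a signed overflow on the way down.
print(hex(truncate_to_longint(q1) & 0xFFFFFFFF))  # 0x80000000
print(hex(truncate_to_longint(q2) & 0xFFFFFFFF))  # 0x80000000
```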

If my logic is correct, the -1 check is not needed in the following conditions:

* The division is being upsized.
* There is no change in size and neither operand is unsigned (e.g. Cardinal --> LongInt would require the check).
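To illustrate why the downsized case still needs the guard, here is a sketch (my own Python illustration, not FPC code): the Int64 operand pair (2147483648, -1), which divides harmlessly at 64 bits, becomes (min_int, -1) after narrowing to 32 bits - exactly the pair a hardware signed-division instruction faults on - so the compiled code must special-case divisor = -1 and wrap silently:

```python
def to_signed(value, bits):
    """Interpret the low `bits` of a value as a two's-complement signed integer."""
    value &= (1 << bits) - 1
    return value - (1 << bits) if value >= (1 << (bits - 1)) else value

def trunc_div(x, y):
    """Division truncating toward zero, matching Pascal's div."""
    q = x // y
    if q < 0 and q * y != x:
        q += 1
    return q

def narrowed_longint_div(x64, y64):
    """Model LongInt(x64) div LongInt(y64) with the divisor = -1 guard."""
    x = to_signed(x64, 32)  # narrow each operand to 32 bits first
    y = to_signed(y64, 32)
    if y == -1:
        # Hardware IDIV would fault on min_int div -1;
        # wrap to min_int silently instead, as the -1 check does.
        return to_signed(-x, 32)
    return trunc_div(x, y)

# Safe as a 64-bit division; min_int div -1 once narrowed:
print(narrowed_longint_div(0x0000000080000000, 0xFFFFFFFFFFFFFFFF))  # -2147483648
```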

Currently, as given in my original code example, LongInt(LongInt div LongInt) is given the -1 check (assuming Integer = LongInt), and this just seems like a waste.  Would you approve this change, Florian? (I'll also add a comment explaining the downsizing situation.)

Kit

----

On 19/05/2023 21:55, Florian Klämpfl via fpc-devel wrote:
Am 19.05.23 um 21:14 schrieb J. Gareth Moreton via fpc-devel:
So I need to ask... should the check for a divisor of -1 still be performed?

Yes. This is the result of "down sizing" a division. In the case of

longint(int64 div int64): it can be converted into longint(int64) div longint(int64) only if this check is carried out. longint($80000000 div $ffffffff) must silently result in $80000000 in this case.

The case of doing "min_int div -1", even with unsigned-to-signed typecasting, seems very contrived, and the programmer should expect problems if "min_int" and "-1" appear as the operands.  Is there a specific example where this implicit check is absolutely necessary?  As others have pointed out, silently returning "min_int" as the answer seems more unexpected.  (Granted, this is just the behaviour of an optimisation that converts the nodes equating to "x div -1" into "-x", and Intel's NEG instruction doesn't raise an error if min_int is its input operand, but I can't be sure the same applies to non-Intel processors and their equivalent instructions.)
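For reference, the "x div -1" to "-x" optimisation's silent behaviour can be modelled like this (a Python sketch of wrapping negation, not the actual FPC node transformation; on x86, NEG of min_int sets the overflow flag but does not trap):

```python
def wrapping_neg(x, bits=32):
    """Two's-complement negate with wraparound, as x86 NEG behaves."""
    mask = (1 << bits) - 1
    value = (-x) & mask
    return value - (1 << bits) if value >= (1 << (bits - 1)) else value

INT32_MIN = -0x80000000
print(wrapping_neg(5))          # -5
print(wrapping_neg(INT32_MIN))  # -2147483648: min_int negates to itself
```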

Kit

On 17/05/2023 09:51, J. Gareth Moreton via fpc-devel wrote:
Logically yes, but using 16-bit as an example, min_int is -32,768, and signed 16-bit integers range from -32,768 to 32,767. So -32,768 ÷ -1 = 32,768, which is out of range.  This is where the problem lies.

Internally, negation involves inverting all of the bits and then adding 1 (essentially how subtraction works in two's complement), so min_int, which is 1000000000000000, becomes 0111111111111111 and then, after incrementing, 1000000000000000, which is min_int again.
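The invert-and-increment step above can be checked numerically (a throwaway Python sketch; the helper name is my own):

```python
def negate_16bit(x):
    """Negate a 16-bit two's-complement value: invert all bits, then add 1."""
    mask = 0xFFFF
    inverted = (~x) & mask           # invert the 16 bits
    return (inverted + 1) & mask     # add 1, wrapping at 16 bits

MIN_INT16 = 0x8000                   # 1000000000000000 = -32768
print(f"{negate_16bit(MIN_INT16):016b}")  # 1000000000000000: min_int again
print(hex(negate_16bit(1)))               # 0xffff, the bit pattern of -1
```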

Kit

On 16/05/2023 13:13, Jean SUZINEAU via fpc-devel wrote:
Le 16/05/2023 à 01:47, Stefan Glienke via fpc-devel a écrit :
min_int div -1

"min_int div -1"  should give  "- min_int" ?
_______________________________________________
fpc-devel maillist  -  [email protected]
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-devel
