On Wednesday, 16 July 2014 at 21:26:41 UTC, Gary Willoughby wrote:
This was asked a few years ago and I couldn't find a definitive answer.

http://forum.dlang.org/thread/[email protected]

On Saturday, 5 May 2012 at 04:57:48 UTC, Alex Rønne Petersen wrote:
I don't think the language really makes it clear whether overflows and underflows are well-defined. Do we guarantee that for any integral type T, T.max + 1 == T.min and T.min - 1 == T.max?

What is the current situation regarding integer overflow and underflow?

My understanding:

Every machine D will feasibly support wraps around for native integral operations, signed or unsigned: T.max + 1 == T.min and T.min - 1 == T.max.
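
A minimal illustration (assuming a two's-complement target, which covers everything D realistically runs on):

import std.stdio;

void main()
{
    int  smax = int.max;
    int  smin = int.min;
    uint umax = uint.max;

    // What two's-complement hardware actually does:
    writeln(smax + 1 == int.min);   // true
    writeln(smin - 1 == int.max);   // true
    writeln(umax + 1 == uint.min);  // true (uint.min is 0)
}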

BUT:

We are using C(++) backends, which may assume an undefined result for signed overflow/underflow. Optimisers could cause us pain here.
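
To make that concrete, here is the classic overflow check (function name made up for illustration) that a backend assuming signed overflow never happens is entitled to fold to a constant false:

// If the optimiser may assume x + 1 never overflows, it can rewrite
// this whole function as `return false;`, silently breaking the very
// check it was meant to perform.
bool additionWouldOverflow(int x)
{
    return x + 1 < x;   // only ever true via wrap-around at int.max
}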

BUT:

That latitude is probably not taken advantage of, given the amount of C code in the wild that assumes the expected wrap-around semantics. Also, there may be ways of telling backends that the behaviour is defined, and we may already be using them; I don't know.
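
For what it's worth, GCC at least has such a switch: -fwrapv defines signed overflow of addition, subtraction and multiplication as two's-complement wrap-around, and LLVM IR only treats an add as non-wrapping when it carries the nsw flag. Whether GDC and LDC actually rely on those mechanisms is an assumption on my part; I haven't checked their sources.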
