Walter:

> This is not a small penalty. Adds, multiplies, and subtracts are the bread and
> butter of what the executable code is.

I have used overflow tests in production Delphi code. I am aware of the 
performance difference they cause.


> becomes 2 instructions:
>      ADD EAX,3
>      JC overflow

Modern CPUs have speculative execution. That JC is almost never taken, and the 
CPU keeps executing the instructions past it; since overflows are very rare, 
that speculative work is almost never discarded. The result is that such code 
shows a significant slowdown only on simpler CPUs like Atoms. On good modern 
CPUs the performance loss is not too large (probably less than 50% even for 
the most integer-heavy code I've found).


> I agree that overflow is a pretty rare issue and way down the list. With 64 
> bit 
> targets, it's probably several orders of magnitude even less of an issue.

Every time I compare a signed with an unsigned value I have an overflow risk 
in D. And overflow tests are a subset of more general range tests (see the 
recent thread about bounded/ranged integers).


> Of far more interest are improving abstraction abilities to prevent logic 
> errors.

I am working on this too :-) I have proposed some small extensions of the D 
type system, simple @ annotations. They are all additive changes, so they can 
wait for D3 too.


> It's also possible in D to build a "SafeInt" library type that will check for 
> overflow. These classes exist for C++, but nobody seems to have much interest 
> in 
> them.

People use integer overflow tests in Delphi code; I have seen them used even 
in code written by other people. But such tests are uncommon in the C++ code 
I've seen. Maybe the cause is that using a SafeInt is a pain and not handy. 
How many people use array bound tests in C? Not many, even though surely some 
people would like to. How many people use array bound tests in D? Probably 
everyone, because they're built-in and you just need a switch to disable them.
With C++ vector you even need a different syntax to get bound-checked access:
http://www.cplusplus.com/reference/stl/vector/at/
I have seen C++ code that uses at(), but it's probably much less common than 
the transparent array bound tests of D, which don't require a different syntax.


> Another way to deal with this is to use Don's most excellent std.bigint 
> arbitrary precision integer data type. It can't overflow.

Then I'd like a compiler switch that reliably turns all the integral numbers 
in a program into bigints (and works well with all the int, short, ubyte, 
ulong etc. type annotations too, of course).

Have you tried to use the current bigints as a replacement for all the ints in 
a program? They don't convert automatically to size_t (and there are a few 
other troubles; some time ago I started a thread about this), so every time 
you use them as array indexes you need casts or more. And you can't even print 
them with a writeln. You worry about the performance loss of replacing an "ADD 
EAX,3" with an "ADD EAX,3 / JC overflow", yet here you suggest I replace 
integers with heap-allocated bigints.

Bye,
bearophile
