Andrei:
> > unsafe(overflows) { // code here }
> This approach has a number of issues.
This approach is the one used by Delphi, Ada and C# (C# even lets you mark
a single expression, with checked(...)/unchecked(...)), so it's clearly doable.
>First, addressing transitivity is difficult. If the code in such a scope calls
>a function, either every function has two versions, or chooses one way to go
>about it. Each choice has obvious drawbacks.<
I think that in C#, if you call a function from an overflow-safe zone, the
code of that function has to be overflow-safe too. I think overflow-unsafe
code is allowed to call overflow-safe code, so you don't need two versions
of every function.
>Second, programmers are notoriously bad at choosing which code is affecting
>bottom line performance, yet this feature explicitly puts the burden on the
>coder. So code will be littered with amends, yet still be overall slower. This
>feature has very poor scalability.<
You are looking at it from the wrong point of view. The overflow safety of
the code is not about performance, it's about the level of safety you
accept in a part of the code. It's about trust. If you can't accept (trust)
a piece of code being overflow-unsafe, then you can't accept it, regardless
of the amount of performance you desire. The global switch is about
performance and debugging, but when you annotate a part of the code with
unsafe(overflows) {} you are stating that you accept a lower level of
safety for that part of the code (or maybe in that part you are just
porting code that relies on the modular nature of unsigned numbers).
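As a small example of that second case, here is a sketch in D of code that
deliberately relies on modular unsigned arithmetic (the constants are the
standard 32-bit FNV-1a ones); wrapping it in unsafe(overflows) {} would
just document that the wraparound is intended:

// Simple FNV-1a style hash: the multiplication is *meant*
// to wrap around modulo 2^32.
uint fnv1a(in ubyte[] data) pure nothrow {
    uint h = 2_166_136_261U;
    foreach (b; data) {
        h ^= b;
        h *= 16_777_619U; // intended modular multiplication
    }
    return h;
}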
> Of course they're not the same thing. Commonalities and differences.
I meant that safety is not the same thing as shifting the definition of a
range. Overflow tests are not going to produce isomorphic code.
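To make that concrete, here is a minimal sketch contrasting the two kinds
of addition (it uses the adds helper from the core.checkedint module of
today's druntime; the function names are mine):

import core.checkedint : adds;

// Wrapping addition: a single machine add, overflow is silent.
int plainAdd(int a, int b) pure nothrow @nogc {
    return a + b;
}

// Checked addition: the same add plus a flag test and a branch,
// so the generated code can't be isomorphic to plainAdd's.
int checkedAdd(int a, int b) pure {
    bool overflow = false;
    immutable r = adds(a, b, overflow);
    if (overflow)
        throw new Exception("integer overflow in checkedAdd");
    return r;
}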
> Well they also are a solid way to slow down all code.
The slowdown doesn't touch floating point numbers, some Phobos libraries
that contain trusted code, memory accesses, or disk and net accesses; it
doesn't influence GUI code much; and in practice it's usually acceptable
to me, especially while I am developing/debugging code.
>You are using a different version of safety than D does. D defines very
>precisely safety as memory safety. Your definition is larger, less precise,
>and more ad-hoc.<
D has @safe, which is about memory safety. But there is more to D than just
@safe: the D Zen is about the whole D language, and D has many features
that help make it safer beyond memory safety.
>Probably one good thing to get past is the attitude that in such a discussion
>the other is "wrong".<
In a recent answer I have tried to explain why you can't implement
compile-time overflow errors in library code:
http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=139950
Here I am either right or wrong; it's not a matter of POV.
I have also explained that taking arbitrary D code written by you and
replacing all instances of int and uint with safeInt and safeUint is not
going to happen. It's like array bounds in D code: if you had to add a
safeArray type to spot all array bound overflows, it would be a lost cause,
because people are not going to do it (well, the safeArray case is a bit
better, since I do use a safe array type in C++).
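To show what the library route actually buys you, here is a minimal sketch
of such a safeInt-style type (the SafeInt name and design here are
hypothetical; it uses core.checkedint from today's druntime). Every check
happens at runtime inside the overloaded operators:

import core.checkedint : adds, muls;

// Hypothetical library-level checked integer, for illustration only.
struct SafeInt {
    int value;

    SafeInt opBinary(string op)(in SafeInt rhs) const pure
    if (op == "+" || op == "*") {
        bool overflow = false;
        static if (op == "+")
            immutable r = adds(value, rhs.value, overflow);
        else
            immutable r = muls(value, rhs.value, overflow);
        if (overflow)
            throw new Exception("integer overflow on '" ~ op ~ "'");
        return SafeInt(r);
    }
}

void main() {
    auto a = SafeInt(int.max);
    auto b = SafeInt(1);
    auto c = a + b; // throws at runtime; nothing is caught at compile time
}

The overflow in main is caught only when the program runs; catching it at
compile time is exactly what a library type can't do, which is the point
of the linked post.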
>These are simple matters of which understanding does not require any amount of
>special talent or competence.<
Right, but several people don't have experience using languages with
integral overflow tests, so while their intelligence is plenty to
understand what those tests do, they lack experience of the bugs such tests
avoid and the slowdown they cause in practice in tens of practical
situations. Some practical experience with a feature is quite important
when you want to judge it; I have seen this many times, with dynamic
typing, good tuples, array bound tests, pure functions, and so on, so I
suspect the same happens with integral overflows.
>So the first step is to understand that some may actually value a different
>choice with different consequences than yours because they find the costs
>unacceptable.<
I am here to explain the basis of my values :-) I have tried to use
rational arguments where possible.
>As someone who makes numerous posts and bug reports regarding speed of D code,
>you should definitely have more appreciation for that view.<
I'm not a one-trick pony, I am a bit more flexible than you think :-) In
this case I think that fast but wrong/buggy programs are useless.
Correctness comes first, speed second. I think D's design goals agree with
this view of mine :-)
Bye,
bearophile