On Monday, 25 August 2014 at 11:09:48 UTC, Marc Schütz wrote:
> I believe this proposal is close to the best we can achieve without resorting to data flow analysis.

I agree with bearophile: data flow analysis may well be desirable eventually, so I think it would be a good idea to hold the door open for it.

> I'm unfortunately not familiar with the theoretical foundations of type systems and formal logic, so I'm not sure what exactly

I don't know all that much about linear type systems, but this is an opportunity to learn more for all of us! :-)

> you mean here. It seems to me the problems with @safe lie in the way it was implemented (allow everything _except_ certain things that are known to be bad, instead of the opposite: allow nothing at first, then lift restrictions where it is provably safe); they are not conceptual.

Because real programming languages are hard to reason about, it is difficult to say where things break down. So one usually maps the constructs of the language onto something simpler and more homogeneous that is easier to reason about.

When it comes to @safe, I think the main problem is that D makes decisions based on individual constructs and not on the "final semantics" of the program. If you have a dedicated high-level IR, you can accept any program segment that can be proven memory safe in terms of that IR. The point is to have an IR that is less complicated than the full language, yet retains the information you need to prove memory safety, information that is lost in a low-level IR.
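As a toy sketch of the idea (this is invented for illustration, not D's actual pipeline): imagine a high-level IR whose only operation is a write annotated with the size of the region it targets, information a low-level IR would have erased. A checker then accepts a segment only if every write is provably in bounds, and rejects anything it cannot prove. The `WriteOp` shape and `segment_is_memory_safe` are hypothetical names:

```c
#include <stddef.h>
#include <stdbool.h>

/* One op of a toy high-level IR: a write of `len` bytes at `offset`
 * into a region whose allocated size (`region_size`) is still known. */
typedef struct {
    size_t region_size;
    size_t offset;
    size_t len;
} WriteOp;

/* Accept the segment only if every write is provably in bounds;
 * reject conservatively when safety cannot be shown. */
static bool segment_is_memory_safe(const WriteOp *ops, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (ops[i].offset > ops[i].region_size ||
            ops[i].len > ops[i].region_size - ops[i].offset)
            return false; /* cannot prove safety: reject */
    }
    return true;
}
```

The conservative direction matches the "allow nothing at first" approach: an unprovable segment is rejected even if it happens to be safe at run time.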

memset() is not unsafe per se; it is unsafe with the wrong parameters. So you have to prove that the parameters lie in the safe region. The same goes for freeing memory, borrowed pointers, and so on. You need a correctness proof even if you give up on generality.
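To make the memset() point concrete, here is a minimal sketch of a checked wrapper: the raw call is only performed once the (offset, length) pair has been shown to lie within the buffer's known capacity, i.e. once the parameters are proven to be in the safe region. The wrapper's name and signature are invented for illustration:

```c
#include <stddef.h>
#include <string.h>
#include <stdbool.h>

/* Hypothetical checked wrapper: memset itself is not unsafe, only
 * memset with a (dest, len) pair that steps outside the buffer.
 * Given the true capacity, the unsafe region can be excluded. */
static bool memset_checked(void *buf, size_t capacity,
                           size_t offset, int value, size_t len)
{
    /* Reject parameters outside the provably safe region.
     * The two-step comparison also avoids size_t overflow. */
    if (offset > capacity || len > capacity - offset)
        return false;
    memset((char *)buf + offset, value, len);
    return true;
}
```

A @safe-style checker could accept calls through such a wrapper (or calls where the same bounds facts are established by analysis) while rejecting raw memset.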

You may reject many safe programs, but you at least verify as many simple safe programs as you can.

A bonus of having a high-level IR is that you can more easily combine languages, with fewer interfacing problems. That would be an advantage if you want a DSL to cooperate with D.

> But if you see potential problems, or know a way to avoid them, this is exactly the kind of thing I'd like to see discussed.

The problem is that D is too complex to reason about with any reasonable level of confidence. So you need to reduce the language to a core that you can reason about confidently, and build the other constructs on top of that.

(That way you don't have to reason about the combinatorial explosion of constructs, just the building blocks.)

Ola.
