On Wednesday, 21 November 2018 at 07:47:14 UTC, Jonathan M Davis wrote:

> IMHO, requiring something in the spec like "it must segfault when dereferencing null," as has been suggested before, is probably not a good idea; it is getting too specific (especially considering that some folks have argued that not all architectures segfault like x86 does). Ultimately, the question needs to be discussed with Walter. I did briefly discuss it with him at this last DConf, but I don't recall exactly what he had to say about the LDC optimization stuff. I _think_ he was hoping that there was a way to tell the optimizer simply not to do that kind of optimization, but I don't remember for sure.

The issue is not specific to LDC at all. DMD also performs optimizations that assume that dereferencing [*] null is UB. The example I gave is dead-code elimination of a dead read of a member variable inside a class method, which is only valid if the spec says either that `a.foo()` is UB when `a` is null, or that `this.a` is UB when `this` is null.

[*] I notice you also use "dereference" for an execution machine [**] reading from a memory address, rather than for the language performing a dereference (which does not necessarily imply a read from memory).

[**] Intentionally odd name for the CPU? Yes: we also have D code running as WebAssembly...

-Johan
