On 2/1/14, 1:11 PM, Walter Bright wrote:
On 2/1/2014 12:09 PM, Andrei Alexandrescu wrote:
On 2/1/14, 2:14 AM, Jonathan M Davis wrote:
How is it unsafe? It will segfault and kill your program, not corrupt
memory. It can't even read any memory. It's a bug to dereference a null
pointer or reference, but it's not unsafe, because it can't access _any_
memory, let alone memory that it's not supposed to be accessing, which is
precisely what @safe is all about.

This has been discussed to death a number of times. A field access
obj.field will use addressing with a constant offset. If that offset is
larger than the lowest address allowed to the application, unsafety may
occur.

The amount of low-address memory protected is OS-dependent. 4KB can
virtually always be counted on. For fields placed beyond that limit, a
runtime test must be inserted. There are few enough 4KB objects out there
to make this practically a non-issue. But the checks must be there.

Another way to deal with it is to simply disallow @safe objects that are
larger than 4K (or whatever the size is on the target system).

This seems like an arbitrary limitation, and one that's hard to work around without significant code surgery. I think the true solution is runtime checks for fields beyond the 4KB barrier. They will be few and far between, and performance-conscious coders already know to lay out hot data at the beginning of the object.

Andrei
