http://d.puremagic.com/issues/show_bug.cgi?id=5176



--- Comment #10 from Michel Fortin <michel.for...@michelf.com> 2012-03-20 
07:45:08 EDT ---
(In reply to comment #6)
> One possibility is to allow arbitrary sizes but have the compiler insert 
> checks
> for all field accesses through pointer or reference when the field offset is
> beyond the OS's protected area.

That'd work. It'd introduce uneven performance characteristics, but that could
be the price to pay for safety. (To avoid paying this price we'd need
statically-enforced non-nullable pointers.)

On second thought, it needs a little tweak. Take this example:

struct A {
  /* a lot of fields */
  int f; // offset is 3 KB - 4 bytes
}
struct B {
  A a1; // field offset is 0
  A a2; // field offset is 3 KB
}

int foo(ref A a) {
  return a.f; // offset is 3K-4, considered safe?
}
int bar(B * b) {
  return foo(b.a2); // offset is 3K, considered safe?
}

Here, if you call bar(null), it calls foo() with the address null + 3K. foo()
will in turn add 3K-4 to that pointer, producing an address of 6K-4, beyond the
4K protected range.

What you need to take into account when deciding whether to insert the null
check is not whether the field's offset lies beyond the protected range, but
whether the field's memory area is entirely encompassed by the protected range.
So the check can be elided only when field.offsetof + field.sizeof < 4K (or
whatever the protected range is on a given machine).

Note that you also need to do this same check when taking the address of a
field:

B * baz() {
  return null;
}
void coz() {
  B * b = baz();
  A * a = &b.a2; // b.a2 spans the 3K..6K range, unsafe
  int f = a.f; // address of field is 6K-4
}
