On 04/12/2014 01:06 PM, Michel Fortin wrote:
On 2014-04-12 10:29:50 +0000, "Kagamin" <[email protected]> said:
On Saturday, 12 April 2014 at 03:02:56 UTC, Michel Fortin wrote:
2- after the destructor is run on an object, wipe out the memory
block with zeros. This way if another to-be-destructed object has a
pointer to it, at worst it'll dereference a null pointer. With this
you might get a sporadic crash when it happens, but that's better
than memory corruption.
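
A minimal sketch of what the wipe step could look like, written at user level (the real proposal would live inside the GC; finalizeAndWipe and the use of destroy here are just stand-ins):

import core.stdc.string : memset;

// Run the destructor, then zero the whole memory block so any
// dangling reference into it sees null fields instead of stale
// pointers. A user-level approximation of the proposed GC step.
void finalizeAndWipe(T)(T obj) if (is(T == class))
{
    destroy(obj);  // runs ~this
    memset(cast(void*) obj, 0, __traits(classInstanceSize, T));
}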
Other objects will still have a valid pointer to the zeroed-out block and
will be able to call its methods. They are likely to crash, but that's not
guaranteed; they may just as well corrupt memory. Imagine the class has a
pointer to a 10MB memory block, and the block's size is an enum, so it is
encoded in the function's code (which won't be zeroed). After the clearing,
the method may write through the now-null pointer to any offset within that
10MB range, and offsets past the unmapped page at address zero can silently
hit mapped memory.
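
A rough sketch of that hazard (the class and names are invented for illustration):

class Big
{
    enum size = 10 * 1024 * 1024; // 10MB, baked into the generated code, not zeroed
    ubyte* data;                  // reads as null once the owning block is wiped

    void fill(size_t i)
    {
        // With data wiped to null, this writes to address 0 + i.
        // For i anywhere up to 10MB, the write can land well past the
        // unmapped page at address zero and corrupt mapped memory.
        data[i] = 0xff;
    }
}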
Well, that's a general problem of @safe when dereferencing any
potentially null pointer. I think Walter's solution was to insert a
runtime check if the offset is going to be beyond a certain size. But
there have been discussions on non-nullable pointers since then, and I'm
not sure what Walter thought about them.
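
Written out by hand, the check could look something like this (the 4KB threshold and the names are assumptions, not Walter's actual scheme):

// Small offsets already trap via the unmapped page at address zero,
// so only accesses that can reach past it need an explicit null test.
void store(ubyte* data, size_t i)
{
    enum guardSize = 4096; // assume the first page is unmapped
    if (i >= guardSize && data is null)
        assert(0, "null dereference");
    data[i] = 0xff;
}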
The runtime check would help in this case, but non-nullable pointers would not.
Yes, they would help (e.g. just treat every pointer as potentially null
in a destructor).
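
In code, that rule would mean even a field the type system elsewhere guarantees non-null gets a check inside ~this (Resource and release are placeholders):

class Resource { void release() { /* free something */ } }

class Owner
{
    Resource res; // imagine this declared non-nullable elsewhere

    ~this()
    {
        // Inside a destructor the guarantee is suspended: the GC may
        // already have finalized and wiped res, so test before use.
        if (res !is null)
            res.release();
    }
}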