On 5/31/17 5:30 PM, Ali Çehreli wrote:
On 05/31/2017 02:00 PM, Steven Schveighoffer wrote:
On 5/31/17 3:17 PM, Moritz Maxeiner wrote:

It is not accessing the array out of bounds *leading* to data
corruption that is the issue here, but that in general you have to
assume that the index *being* out of bounds is itself the *result* of
data corruption that has *already occurred*;

To be blunt, no this is completely wrong.

Blunter: Moritz is right. :)

I'll ignore this section of the debate :)


Memory corruption *already having happened* can cause any
number of errors.

True.

The point of bounds checking is to prevent memory corruption in
the first place.

That's just one goal. It also maintains an invariant of arrays: The
index value must be within bounds.

But the program cannot possibly know which variable is an index. So it cannot maintain the invariant until the index is actually used.

At that point, it can throw an Error to say that something isn't right, or it can throw an Exception. D chose Error, and the consequence of that choice is that you have to check before D checks, or else your entire program is killed.
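
For illustration, a minimal sketch of that in practice, assuming an ordinary dynamic array with bounds checks enabled (the variable names are just for illustration):

import std.stdio : writeln;

void main()
{
    int[] arr = [1, 2, 3];
    size_t idx = 5; // pretend this came from external input

    // "Check before D checks": handle the bad index yourself...
    if (idx < arr.length)
        writeln(arr[idx]);
    else
        writeln("bad index, handled without killing the program");

    // ...because without that check, arr[idx] throws a RangeError
    // (an Error, not an Exception), which is not meant to be caught,
    // so the whole program dies.
}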


I could memory corrupt the length of the array also (assuming a
dynamic array), and bounds checking merrily does nothing to
stop further memory corruption.

That's true, but the language provides no tool to check for that. The
fact that program correctness is not achievable in general should not
have any bearing on bounds checking.

My point is simply that assuming corruption is not a good answer. It's a good *excuse* for the current behavior, but it doesn't really satisfy any meaningful requirement.

To borrow from another subthread here, imagine if, when you attempted to open a non-existent file, the OS assumed your program's memory must have been corrupted and killed it instead of returning ENOENT. It could be a "reasonable" assumption -- memory corruption could have made that filename garbage, so you have sniffed out memory corruption and stopped it in its tracks! Well, not really -- you only saw the tracks. Or maybe someone just made a typo?

and if data corruption occurred for
the index, you *cannot* assume that *only* the index has been affected.
The runtime cannot simply assume that the index being out of bounds is
not the result of data corruption that has already occurred, because
that is inherently unsafe, so it *must* terminate ASAP as the default.

The runtime should not assume that crashing the whole program is
necessary when an integer is out of range. Preventing actual corruption,
yes that is good. But an Exception would have done the job just fine.
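
A minimal sketch of that alternative, using std.exception.enforce to throw a recoverable Exception instead of letting the built-in bounds check throw an Error (the function name is just for illustration):

import std.exception : enforce;

int getElement(int[] arr, size_t idx)
{
    // enforce throws an Exception, which callers are allowed to catch,
    // rather than the fatal RangeError the built-in bounds check throws.
    enforce(idx < arr.length, "index out of bounds");
    return arr[idx];
}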

How could an Exception work in this case? Catch it and repeat the same
bug over and over again? What would the program be achieving? (I assume
the exception handler will not arbitrarily decrease index values.)

Just like it works for all other exceptions -- you print a reasonable message to the offending party (in this case, it would probably be a 500 error), and continue executing other things. No memory corruption has occurred, because bounds checking stopped it, so the program is still sane.
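
For instance, a hedged sketch of that scenario (the Request struct, the data array, and the "500" wording are stand-ins for whatever the real server provides):

import std.exception : enforce;
import std.stdio : writeln;

struct Request { size_t index; }

void handleRequest(const int[] data, Request req)
{
    // Throws a recoverable Exception (not an Error) on a bad index.
    enforce(req.index < data.length, "index out of bounds");
    writeln("200 OK: ", data[req.index]);
}

void main()
{
    const int[] data = [10, 20, 30];

    foreach (req; [Request(1), Request(99), Request(2)])
    {
        try
        {
            handleRequest(data, req);
        }
        catch (Exception e)
        {
            // The check above stopped any corruption, so answer this
            // request with a 500 and keep serving the others.
            writeln("500 Internal Server Error: ", e.msg);
        }
    }
}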

-Steve
