On Tuesday, 21 July 2020 at 12:34:14 UTC, Adam D. Ruppe wrote:
With the null `a`, the offset to the static array is just 0 +
whatever and the @safe mechanism can't trace that.
So the arbitrary limit was put in place to make it more likely
that such a situation will hit a protected page and segfault
instead of carrying on. (Most low addresses are not actually
allocated by the OS... there's no reason why they couldn't be,
it just usually isn't done, so that 16 MB limit makes the odds
of something like this actually happening a lot lower.)
I don't recall exactly when this was discussed but it came up
in the earlier days of @safe, I'm pretty sure it worked before
then.
If that's the case, I would consider this 16 MB limit unnecessary.
Most operating systems put a guard page at the very bottom of the
stack (the default stack size is usually 1 MB - 8 MB: 8 MB on
Linux, 1 MB on Windows). Either the array will hit that guard page
during initialization, or something else will hit it during
execution.
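The stack limit the guard page enforces is easy to inspect; for example, on Linux (assuming a typical glibc/bash environment):

```shell
# Print the soft stack-size limit, reported in kB on Linux.
# 8192 (= 8 MB) is the common default; anything on the stack
# larger than this hits the guard region and faults on its own,
# with no help needed from a compiler-imposed size limit.
ulimit -s
```

The same limit can be raised or lowered per process with `ulimit -s <kB>` or programmatically via `setrlimit(RLIMIT_STACK, ...)`.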
Let's say someone puts a 15 MB array on the stack: then we will
get a page fault anyway, and this artificial limit is there for
nothing. And on 64-bit or larger systems, some future operating
system might well support large stack sizes like 256 MB. This is
a little like a 640 kB limit.