Adrian Hey wrote:
John Meacham wrote:
I believe it is because a stack cannot be garbage collected, and must be
traversed as a set of roots on every garbage collection. I don't think
there are any issues with a huge stack per se, but it does not play
nicely with garbage collection, so it may hurt your performance and
memory usage in unforeseen ways.
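(As an illustrative sketch of the trade-off John describes; the function
names here are invented for this example, not taken from the thread. The
first version pushes one stack frame per list element, which is exactly
the kind of stack the GC has to scan; the second keeps the stack flat by
carrying a strict accumulator.)

    {-# LANGUAGE BangPatterns #-}

    -- One stack frame per element: (+) cannot run until the
    -- recursive call returns, so a long list means a deep stack
    -- that the GC must traverse as roots on every collection.
    sumStack :: [Int] -> Int
    sumStack []     = 0
    sumStack (x:xs) = x + sumStack xs

    -- Constant stack: the strict accumulator carries the pending
    -- work as one value instead of a chain of stack frames.
    sumHeap :: [Int] -> Int
    sumHeap = go 0
      where
        go !acc []     = acc
        go !acc (x:xs) = go (acc + x) xs

    main :: IO ()
    main = print (sumHeap [1 .. 1000000])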
I'm still not convinced :-(
I also don't believe it's in anybody's interest to have programs
failing for no good reason. A good reason to fail is if overall
memory demands are getting stupid. Failing because the stack has
grown beyond some arbitrary (and typically small) size seems
bad to me.
I know that to a certain extent this is controllable
using RTS options, but that is no use to me as a library
writer trying to choose between stackGobbler and heapGobbler.
The stuff should "just work" and not be dependent on the right
RTS incantations being used when the final program is run.
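(For concreteness: the limit being discussed is GHC's maximum stack
size, which a user can raise per run with the -K RTS flag; on later
GHCs the binary must also be built with -rtsopts for the flag to be
accepted. An illustrative invocation:)

    $ ghc -rtsopts Main.hs
    $ ./Main +RTS -K64m -RTS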
I'm more than happy to change the defaults, if there's some agreement on what
the defaults should be. The current choice is somewhat historical - we used to
have a bound on both heap size and stack size, but the heap size bound was
removed because we felt that on balance it made life easier for more people, at
the expense of a bit more pain when you write a leaky program.
Also, it used to be the case that the OOM killer could be a bit unpredictable,
killing vital system processes instead of the leaky Haskell program. I'm not
sure if this is still true for current OS incarnations.
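(Illustrative example, not part of the original message: a heap bound
can still be requested per run with the -M RTS flag, so a leaky program
fails with a heap-overflow error from the RTS instead of being left to
the OOM killer:)

    $ ./Main +RTS -M512m -RTS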
Cheers,
Simon