On Fri, Apr 19, 2019 at 3:16 PM Chet Ramey <chet.ra...@case.edu> wrote:
> On 4/19/19 4:21 AM, Ole Tange wrote:
> > Reading https://www.gnu.org/prep/standards/standards.html#Semantics
> >
> > """Avoid arbitrary limits on the length or number of any data
> > structure, including file names, lines, files, and symbols, by
> > allocating all data structures dynamically."""
> >
> > You could argue that Bash being a GNU tool, it should do like Perl:
> > Run out of memory before failing.
>
> You've obviously overlooked the FUNCNEST variable and its effects,
I tried with FUNCNEST:

  $ FUNCNEST=100000000
  $ re() { t=$((t+1)); if [[ $t -gt 8000000 ]]; then echo foo; return; fi; re; }; re
  Warning: Program '/bin/bash' crashed.

So even with FUNCNEST set, it still crashes.

The man page for 4.4.19(1) says: "By default, no limit is imposed on
the number of recursive calls." which I cannot see as being correct.
It may not be limited by FUNCNEST, but it is clearly limited.

> I am curious about this point. Why do you think bash would exceed some
> kind of memory resource limit before it exceeds a stack size limit?

I expect bash to behave similarly to other interpreted languages:
either telling me that I have reached the limit of recursion (like
Zsh/Ksh) or running out of memory (like Perl). I expect this to happen
without setting FUNCNEST.

I do not expect bash to segfault, ever, and I cannot remember ever
seeing a program that segfaulted by design. All the segfaults I have
experienced were due to bugs in the program.

I imagine the segfault happens because we have a pointer outside the
stack size limit. Maybe just give an error and exit ("out of stack
space" or similar - preferably also telling me how to raise this limit
in case this was not a mistake) whenever such a pointer is
encountered, instead of segfaulting? Had I seen that, I would not have
assumed it was a bug.

/Ole
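[Editor's aside, not part of the original mail: a minimal sketch of the
behavior being discussed. With a small FUNCNEST value, bash does abort
runaway recursion with an error message rather than overflowing the C
stack; the exact message wording below is from bash 4.4 and may differ
between versions.]

```shell
#!/bin/bash
# Sketch: a small FUNCNEST makes bash abort deep recursion cleanly.
# Expected behavior (assumed, bash 4.4): an error on stderr roughly like
#   bash: re: maximum function nesting level exceeded (100)
# and the script keeps running after the aborted call.
FUNCNEST=100
t=0

re() {
  t=$((t+1))
  re   # recurse unconditionally; FUNCNEST stops the recursion, not us
}

re 2>/dev/null   # suppress the nesting-level error for this demo
echo "depth reached: $t"   # bounded by FUNCNEST, not by the stack size
```

The point under debate in the thread is that this guard only engages
when FUNCNEST is set to a sane value; with it unset (the default) or
set absurdly high, recursion is instead stopped by the process stack
limit, which manifests as a segfault.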