Peng Yu wrote:
> My point is why bash terminates when it runs an external command that
> requires a large memory. Shouldn't bash return an exit code on behalf
> of the failed command and continue to the next command?
Note that I am saying this without having looked at the bash source
code, so I might be wrong about bash specifically. But it is a general
statement about shells.

> ... runs an external command that requires a large memory

When bash runs an external command it does so in a fork()'d off child
process using exec(). At that point it isn't bash anymore but the
other program. If the process exits after the fork but before the
exec, then it is the child process that exits, not the main bash
process.

If the bash process itself has grown very large then the fork() might
fail on systems that do not use memory overcommit (traditional Unix
kernels, or Linux kernels with overcommit disabled for robustness). If
the fork fails then I am not sure what bash should do in that case. It
could potentially loop and retry a few times, but if all attempts fail
then exiting is probably the only reasonable action.

If bash itself is running out of memory then it is almost certainly
due to inefficient shell scripting, for example like the script I
analyzed in this thread (though that is just one of many
possibilities):

http://lists.gnu.org/archive/html/bug-bash/2011-11/msg00189.html

Bob
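P.S. To be clear about the normal case the original question asks
about: when the failure happens in the child (after the fork), bash
does exactly what was hoped for. The child's exit status lands in $?
and the shell continues to the next command. A quick demonstration:

```shell
#!/bin/bash
# A failing external command does not kill the shell; its exit
# status is recorded in $? and execution continues.
false
echo "status after false: $?"

# A command that cannot be found yields status 127.
/no/such/command 2>/dev/null
echo "status after missing command: $?"

echo "shell is still running"
```

It is only a fork() failure in the parent, before any child exists,
that leaves bash with no command to report a status for.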
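P.P.S. I won't reproduce the linked thread's script here, but as a
hypothetical illustration of the kind of inefficient scripting that
makes a shell's memory footprint balloon: accumulating a large data
set in a shell variable instead of streaming it through a pipeline.
(The file names below are made up for the example.)

```shell
#!/bin/bash
# Memory-hungry pattern: the entire input is held in the shell's
# own address space, growing the bash process with the data size.
data=$(cat /var/log/some-big-file)
lines=$(printf '%s\n' "$data" | wc -l)

# Streaming alternative: the data flows through a pipe and the
# shell process itself stays small regardless of input size.
lines=$(wc -l < /var/log/some-big-file)
```

A large bash process is exactly the situation where a subsequent
fork() can fail on a system without memory overcommit.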