The current developer docs say this:

-------------------
Linux has poor default memory overcommit behavior. Rather than failing if it can not reserve enough memory, it returns success, but later fails when the memory can't be mapped and terminates the application with kill -9. To prevent unpredictable process termination, use:
    sysctl -w vm.overcommit_memory=3
---------------------

This would be true if the kernel being used had the paranoid mode compiled in. That is not the case, AFAICS, for either the stock 2.4 kernels or the latest RH kernels. It is true of 2.4.21 *with* the -ac4 (and possibly earlier -ac*) patch; in fact, Alan's patch apparently allows tuning the amount of overcommitting allowed.

As I read the kernel source I got from RH today (2.4.20-19.9), setting vm.overcommit_memory to any non-zero value will in fact make the kernel freely allow overcommitting of memory, rather than trying, in a rather unsatisfactory way, to avoid it. IOW, with many kernels the advice would make things worse, not better - e.g. the RH source says this in mm/mmap.c:

    if (sysctl_overcommit_memory)
            return 1;

(A toy model of the practical difference is appended after my sig.)

Rather than give bad advice, it might be better to advise users (1) to run Pg on machines that are likely to be stable and not run into OOM situations, and (2) to check with their vendors about proper overcommit handling.

Personally, my advice would be to avoid Linux for mission critical apps until this is fixed, but that's just my opinion, and I'm happily developing on Linux, albeit for something that is not mission critical.

cheers

andrew
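P.S. For anyone who wants to see the difference concretely, here is a tiny userland model of the check. This is NOT kernel code: the function bodies, names, and numbers are invented for illustration, and the -ac behaviour shown is just my reading of the patch (higher values selecting stricter accounting). The only detail taken from the real source is the early return quoted above.

    #include <stdio.h>

    /*
     * Toy model only -- not the kernel source.  The one real detail is the
     * "if (sysctl_overcommit_memory) return 1;" early return from the RH
     * 2.4.20 mm/mmap.c; everything else is invented for illustration.
     */

    /* Stock / RH 2.4 style check: any non-zero setting overcommits freely. */
    static int allows_request_stock(int overcommit_memory,
                                    long requested, long available)
    {
        if (overcommit_memory)          /* 1, 2, 3, ... all hit this */
            return 1;                   /* overcommit freely */
        return requested <= available;  /* crude stand-in for the heuristic */
    }

    /* -ac style check, on my reading: 0 = heuristic, 1 = always allow,
     * 2 and up = strict accounting against a commit limit. */
    static int allows_request_ac(int overcommit_memory, long requested,
                                 long committed, long limit)
    {
        if (overcommit_memory == 0)
            return 1;                           /* heuristic mode, omitted here */
        if (overcommit_memory == 1)
            return 1;                           /* always overcommit */
        return committed + requested <= limit;  /* strict: refuse past the limit */
    }

    int main(void)
    {
        long requested = 900, available = 500, committed = 400, limit = 1000;

        printf("stock kernel, overcommit_memory=3: %s\n",
               allows_request_stock(3, requested, available)
                   ? "allowed" : "refused");
        printf("-ac kernel,   overcommit_memory=3: %s\n",
               allows_request_ac(3, requested, committed, limit)
                   ? "allowed" : "refused");
        return 0;
    }

With these made-up numbers the stock-style check reports "allowed" (the non-zero setting short-circuits everything), while the -ac-style check reports "refused" - which is the behaviour the docs are actually promising.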