Greg Stark <[EMAIL PROTECTED]> writes:
> I was just trying to clarify the situation since someone made some comment
> about it having to do with memory being swapped out and then finding nowhere
> to swap in when needed. That's not exactly what's happening.
No. I believe the case that is actually hard for the kernel to predict
comes from copy-on-write. When a process forks, you could potentially need
twice its current memory image, but in practice you probably never will,
since many of the shared pages will never be written by either process.
A non-overcommitting kernel must nonetheless assume that worst case, and
hence fail the fork() if it doesn't have enough swap space to cover both
processes. If the kernel does overcommit, the crunch comes when one
process eventually touches a shared page. If there is no swap space
available at that instant, kill -9 is the only recourse, because there is
no way in the Unix API to fail a write to valid memory.

The reason for having a lot more swap space than you really need is just
to cover the potential demand from copy-on-write of pages that are
currently shared. But given the price of disk these days, it's pretty
cheap insurance.

			regards, tom lane