One might allocate at least 3.2GB of swap for a 4GB machine, but many of our machines run with no swap, and we're probably not alone. And 200 processes is not a lot: would you really want over 32GB of swap allocated for a 4GB machine running 2,000 processes?
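A back-of-the-envelope check of those figures, as a sketch: they are consistent with reserving a 16MB maximum stack per process (an assumed figure implied by the numbers above, not a measured one), using round decimal units.

```python
MB = 10**6  # the figures above use round decimal units
GB = 10**9

max_stack = 16 * MB  # assumed per-process maximum stack reservation

for nproc in (200, 2000):
    reserved = nproc * max_stack
    print(f"{nproc} processes -> {reserved / GB:.1f} GB of swap reserved")
# 200 processes -> 3.2 GB of swap reserved
# 2000 processes -> 32.0 GB of swap reserved
```

So the reserved swap scales linearly with process count regardless of how much stack is actually touched, which is the crux of the objection.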
Programs can use a surprising amount of stack space; a recent notable example is venti/copy when copying from a nightly fossil dump score. I think we want to be generous about maximum stack sizes, and I don't think that estimates of VM usage would be an improvement. If we can't get it exactly right, there will always be complaints.
