Peter Humphrey wrote:
On Friday 22 July 2011 19:13:35 Grant wrote:

Wouldn't a sufficiently large swap (100GB for example) completely prevent
out-of-memory conditions and the oom-killer?
Of course, on any system with more than a few dozen MB of RAM, but I can't
imagine any combination of running programs whose size could add up to even
a tenth of that, with or without library sharing (somebody will be along
with an example in a moment). For instance I'm running four instances of
BOINC projects here, one on each core, with oodles of space to spare and no
swapping. Mind you, I do have 16GB RAM :-)

Having said that I ought to go and shrink my swap partitions, but my disks
are only half-allocated already, so I don't see the point.



This sounds like me. I have 16GB here too. I have 1GB of swap . . . because it has always worked for me. I should have made it 300MB, though. The only reason I want swap is to stave off a crash long enough to maybe do something about it.

This is just my opinion: unless you are strapped for memory (say the mobo can't hold that much), you really only need a few hundred MB. All you need is enough to prevent a crash and to let you know when you are running short. If you have a mobo that maxes out at 1GB or something, then you may want enough swap to make up for the shortage of RAM, realizing of course that it is going to slow things down, most likely a lot.

If my main rig starts using swap a lot, I'm going to be very curious. I even used 8GB to put portage's work directory on tmpfs. I still didn't use any swap. By the way, that doesn't seem to make the compiles any faster. o_O
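For anyone wanting to try the same thing, a tmpfs mount for portage's work directory looks roughly like this fstab entry (the path is the Gentoo default PORTAGE_TMPDIR; the 8G size and the uid/gid are just what I'd pick, adjust to your RAM):

```
# /etc/fstab entry: put portage's build directory on an 8 GB tmpfs
# size, uid, gid, and mode here are example values, not requirements
tmpfs   /var/tmp/portage   tmpfs   size=8G,uid=portage,gid=portage,mode=775,noatime   0 0
```

Then `mount /var/tmp/portage` as root, or reboot. Builds that are bigger than the tmpfs (some of the monsters like libreoffice) will fail, so either bump the size for those or build them on disk.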

One other thing: don't forget you can adjust swappiness to control how tight things get before the kernel starts using swap. A setting of 100 will use swap in a hurry and put about anything in it. A setting of 10 or 20 means it will mostly avoid swap until it is nearly out of RAM and can't reclaim any.
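Checking or changing it is just a sysctl; the knob name is standard Linux, and the value 10 below is only an example:

```shell
# Read the current swappiness (the kernel default is usually 60)
cat /proc/sys/vm/swappiness

# Lower it until the next reboot (needs root):
#   sysctl vm.swappiness=10
# Make it stick across reboots by adding this line to /etc/sysctl.conf:
#   vm.swappiness = 10
```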

Man it's hot here.

Dale

:-)  :-)
