On Friday, June 27, 2003, at 01:17 PM, Jord Tanner wrote:

On Fri, 2003-06-27 at 12:09, Patrick Hatcher wrote:

I have a 6GB RAM box.  I've set my shmmax to 3072000000.  The database
starts up fine without any issues.  As soon as a query is run
or an FTP process to the server is done, the used memory shoots up and
appears never to be released.

In my experience Linux likes to allocate almost all available RAM. I've never had any trouble with that. I'm looking at the memory meter on my RH9 development workstation and it is at 95%. Performance is good, so I just trust that the kernel knows what it is doing.

Mem:  6711564K av, 6517776K used, 193788K free, 0K shrd, 25168K buff
Swap: 2044056K av, 0K used, 2044056K free, 6257620K cached
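To make that concrete, here's a small Python sketch using the numbers quoted above (the assignment of the trailing fields to "buff" and "cached" is my reading of the standard top(1) layout): it backs out how much of the "used" figure is actually reclaimable cache.

```python
# Values in KB, taken from the Mem/Swap lines quoted above.
mem_total = 6711564   # Mem: ... av
mem_used  = 6517776   # Mem: ... used (includes buffer/page cache!)
mem_free  = 193788    # Mem: ... free
buffers   = 25168     # buff (assumed field)
cached    = 6257620   # cached (assumed field)

# The kernel can reclaim buffers and cache on demand, so the memory
# effectively available to applications is free + buff + cached.
effectively_available = mem_free + buffers + cached
truly_used = mem_total - effectively_available

print(f"effectively available: {effectively_available} KB")
print(f"truly used by processes: {truly_used} KB")
```

On these numbers, only about 235MB of the 6.5GB "used" is actually held by processes; the rest is cache the kernel will hand back under pressure.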

I've heard anecdotally that Linux has troubles if the swap space is less
than the RAM size. I note that you have 6G of RAM, but only 2G of swap.

I've heard that too, but it doesn't make much sense to me. If your machine ever gets to the point of _needing_ 2GB of swap, something has gone horribly wrong (or you simply need more RAM), and the box will crawl until the kernel kills off whichever process exhausts the swap. It seems to me you should only have that much swap if you can't afford more RAM, or you've tapped out the machine's capacity and your application genuinely needs that much memory.
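A quick way to tell whether a box is actually under memory pressure, rather than just showing a high "used" figure, is to look at swap consumption directly. A sketch (Linux-specific; field names as they appear in /proc/meminfo):

```shell
# Report swap usage from /proc/meminfo (values are in KB).
awk '/^SwapTotal:/ {total=$2}
     /^SwapFree:/  {free=$2}
     END {printf "swap used: %d KB of %d KB\n", total - free, total}' /proc/meminfo
```

If swap used stays near zero, as in the Swap line quoted earlier, the kernel isn't short of memory no matter what the "used" column says.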

