Richard Yen [EMAIL PROTECTED] writes:
Here is a snippet of my log output (I can give more if necessary):
Sep 5 18:38:57 tii-db2.oaktown.iparadigms.com Out of Memory: Kill
process 11696 (postgres) score 1181671 and children.
Richard Yen [EMAIL PROTECTED] writes:
My understanding is that if any one postgres process's memory usage, plus the
shared memory, exceeds the kernel limit of 4GB, then the kernel will kill the
process off. Is this true? If so, would postgres have some prevention
mechanism that would
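For what it's worth, a backend's footprint can be watched directly under /proc; a minimal sketch (using the shell's own PID as a stand-in for a real postgres backend PID):

```shell
# Show virtual size and resident set of a process from /proc.
# $$ is this shell's own PID; substitute the PID of a postgres backend.
pid=$$
grep -E 'Vm(Size|RSS)' /proc/$pid/status
```

VmSize is what counts against the address-space limit being discussed here; shared memory mapped into the backend is included in it.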
I've recently run into problems with my kernel complaining that I ran
out of memory, thus killing off postgres and bringing my app to a
grinding halt.
I'm on a 32-bit architecture with 16GB of RAM, under Gentoo Linux.
Naturally, I have to set my shmmax to 2GB because the kernel can't
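As an aside, the SysV shared memory ceiling can be inspected (and, as root, raised) through sysctl; a sketch, assuming the usual Linux interface where kernel.shmmax is in bytes:

```shell
# Current SysV shared memory ceiling, in bytes:
cat /proc/sys/kernel/shmmax
# As root, a 2GB ceiling would be set with:
#   sysctl -w kernel.shmmax=2147483648
```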
On Thu, Sep 06, 2007 at 09:06:53AM -0700, Richard Yen wrote:
My understanding is that if any one postgres process's memory usage,
plus the shared memory, exceeds the kernel limit of 4GB,
On a 32-bit system the per-process memory limit is a lot lower than 4G: the 4GB address space is shared with the kernel, so user space typically gets only about 3GB.
If you want to use 16G
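The arithmetic behind that point can be checked from a shell; a minimal sketch:

```shell
# Word size of the userland: a 32-bit process can address at most 4GB,
# and the kernel typically reserves 1GB of that, leaving ~3GB usable.
getconf LONG_BIT
# Any additional per-process virtual memory limit (kbytes, or "unlimited"):
ulimit -v
```

So with 16GB of physical RAM, no single 32-bit backend can map more than a fraction of it, however shmmax is set.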
* Gregory Stark:
You might also tweak /proc/sys/vm/overcommit_memory, but I don't remember what
the values are; you can search to find them.
2 is the interesting value; it turns off overcommit.
However, if you're tight on memory, this will only increase your
problems because the system fails
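For reference, the overcommit mode is readable without root; a minimal check, assuming Linux's /proc/sys interface (0 = heuristic overcommit, 1 = always overcommit, 2 = never overcommit):

```shell
# Print the current policy; with mode 2 an oversized allocation fails up
# front with ENOMEM instead of succeeding and later drawing the OOM killer.
cat /proc/sys/vm/overcommit_memory
# As root, overcommit would be disabled with:
#   sysctl -w vm.overcommit_memory=2
```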