"Richard Yen" <[EMAIL PROTECTED]> writes:

> My understanding is that if any one postgres process's memory usage,  plus the
> shared memory, exceeds the kernel limit of 4GB, then the  kernel will kill the
> process off.  Is this true?  If so, would  postgres have some prevention
> mechanism that would keep a particular  process from getting too big?  (Maybe
> I'm being too idealistic, or I  just simply don't understand how postgres 
> works
> under the hood)

I don't think you have an individual process going over 4G. 

I think what you have is 600 processes which in aggregate are using more
memory than you have available. Do you really need 600 processes, by the way?

You could try lowering work_mem, though your value actually seems fairly
reasonable. Perhaps your kernel isn't actually able to use all 16GB? What does
cat /proc/meminfo say? And what does it say while this is happening?
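A quick sketch of what to look at (the field names here are the standard ones
Linux exposes in /proc/meminfo):

```shell
# Show how much RAM and swap the kernel actually sees, and how much is free.
# If MemTotal is well under 16GB, the kernel isn't using all your memory
# (e.g. a non-PAE 32-bit kernel). Run this again while the problem occurs
# to see how low MemFree/SwapFree get.
grep -E 'MemTotal|MemFree|SwapTotal|SwapFree' /proc/meminfo
```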

You might also tweak /proc/sys/vm/overcommit_memory, but I don't remember
offhand what the values mean; a quick search will turn them up.
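For reference (per the Linux kernel's overcommit-accounting documentation, not
anything PostgreSQL-specific), the recognized values are 0 = heuristic
overcommit (the default), 1 = always overcommit, and 2 = don't overcommit
beyond swap plus a configurable fraction of RAM. A minimal sketch:

```shell
# Inspect the current overcommit policy: 0 = heuristic (default),
# 1 = always overcommit, 2 = strict accounting (no overcommit).
cat /proc/sys/vm/overcommit_memory

# To change it you'd run, as root (commented out here):
#   sysctl -w vm.overcommit_memory=2
# Mode 2 makes allocations fail up front with ENOMEM instead of letting
# the OOM killer pick off processes later.
```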

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com

---------------------------(end of broadcast)---------------------------
TIP 6: explain analyze is your friend