I've tried every work_mem value from 1GB all the way up to 40GB, with no effect
on the error.  I'd like to think of this as a server-process memory issue (not
the server's shared buffers) or a client-process memory issue, primarily
because there was no other load whatsoever when we reproduced the error.
Unfortunately, the error doesn't say what kind of memory ran out.
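
For reference, this is roughly how the per-session work_mem testing looked in
psql; the query and table names below are placeholders, not our actual schema:

    SET work_mem = '1GB';   -- stepped up through '40GB' with no change in behavior
    SHOW work_mem;          -- confirm the session-level override took effect
    -- then re-run the failing aggregate query; this one is only illustrative:
    SELECT max(col1), max(col2), max(col3) FROM some_large_table;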

--- Original Message ---

From: "bricklen" <[email protected]>
Sent: November 18, 2013 7:25 PM
To: "Brian Wong" <[email protected]>
Cc: [email protected]
Subject: Re: [GENERAL] ERROR: out of memory DETAIL: Failed on request of size ???

On Mon, Nov 18, 2013 at 12:40 PM, Brian Wong <[email protected]> wrote:

> We'd like to seek out your expertise on postgresql regarding this error
> that we're getting in an analytical database.
>
> Some specs:
> proc: Intel Xeon X5650 @ 2.67GHz, dual 6-core processors, hyperthreading on.
> memory: 48GB
> OS: Oracle Enterprise Linux 6.3
> postgresql version: 9.1.9
> shared_buffers: 18GB
>
> After doing a lot of googling, I've tried setting FETCH_COUNT in psql
> and/or setting work_mem.  I'm just not able to work around this issue
> unless I take out all but one of the MAX() functions.
>
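
(For reference, "setting FETCH_COUNT" refers to the psql variable that makes
psql fetch result sets through a cursor in batches instead of buffering all
rows client-side; a minimal sketch, with an example batch size and a
placeholder query:)

    \set FETCH_COUNT 1000
    -- subsequent SELECTs now retrieve 1000 rows at a time via a cursor,
    -- keeping psql's memory use roughly constant:
    SELECT * FROM some_large_table;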

What is your work_mem set to?
Did testing show that setting shared_buffers to 18GB was effective? That seems
2 to 3 times larger than what you probably want.
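
(For reference, shared_buffers can be inspected from any session, but on 9.1 it
can only be changed in postgresql.conf and needs a server restart to take
effect; the value below is purely illustrative, not a recommendation:)

    SHOW shared_buffers;    -- reports the current value, 18GB in this case
    -- in postgresql.conf, followed by a restart of the postmaster:
    --   shared_buffers = 8GB    -- illustrative value; tune for the workload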
