On 19/11/2013 02:30, Brian Wong wrote:
I've tried work_mem values from 1GB all the way up to 40GB, with no
effect on the error. I'd like to think of this as a problem with the
server backend's process memory (not the server's shared buffers) or
with client process memory, primarily because there was no other load
whatsoever when we reproduced the error. Unfortunately, the error
doesn't say what kind of memory ran out.
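For what it's worth, a minimal sketch of confirming that the larger
work_mem actually reaches the session running the failing query (the
values here are just for illustration); a value changed only in
postgresql.conf needs a reload before it shows up:

SHOW work_mem;           -- what this session is actually using
SET work_mem = '1GB';    -- per-session override, just for the test
SHOW work_mem;           -- should now report 1GB

Also keep in mind that work_mem is a per-sort/per-hash limit rather than
a per-query cap, and the server log normally prints a memory context
dump right above this kind of "out of memory ... Failed on request of
size" error, which shows which context actually ran out.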
--- Original Message ---
From: "bricklen" <brick...@gmail.com>
Sent: November 18, 2013 7:25 PM
To: "Brian Wong" <bwon...@hotmail.com>
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] ERROR: out of memory DETAIL: Failed on request of size ???
On Mon, Nov 18, 2013 at 12:40 PM, Brian Wong <bwon...@hotmail.com> wrote:
We'd like to seek your expertise on PostgreSQL regarding this
error that we're getting in an analytical database.
Some specs:
proc: dual Intel Xeon X5650 @ 2.67GHz, 6 cores each, hyperthreading on.
memory: 48GB
OS: Oracle Enterprise Linux 6.3
postgresql version: 9.1.9
shared_buffers: 18GB
After doing a lot of googling, I've tried setting FETCH_COUNT in
psql and/or setting work_mem. I'm just not able to work around
this issue unless I take out all of the MAX() functions but one.
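In case it is useful, a sketch of the kind of thing that was tried; the
table and column names below are placeholders, not the real schema:

\set FETCH_COUNT 10000
SET work_mem = '1GB';
SELECT max(col_a), max(col_b), max(col_c)
FROM big_fact_table;   -- the real query has many more MAX() columns

(FETCH_COUNT only batches rows on the client side, so it mostly helps
queries that return large result sets; an aggregate-only query like this
returns a single row anyway.)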
Excuse me (or just ignore me) if it is a stupid question, but have you
configured sysctl.conf accordingly?
For instance, to use larger memory settings, I had to configure my EL as
follows:
# Controls the maximum shared memory segment size, in bytes
kernel.shmmax = 68719476736
# Controls the total amount of shared memory, in pages
kernel.shmall = 4294967296
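These take effect after running sysctl -p (or a reboot). One caveat:
shmmax/shmall only limit the System V shared memory segment PostgreSQL
allocates at startup for shared_buffers, so when they are too low the
server usually refuses to start rather than failing mid-query.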
Regards,
Edson
What is your work_mem set to?
Did testing show that shared_buffers set to 18GB was effective? That
seems to be about 2 to 3 times higher than what you probably want.
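If you get a chance, a quick sanity check from the failing session would
help (a sketch; the value in the comment is only the usual rule of
thumb, not something validated on your workload):

SHOW shared_buffers;   -- reported as 18GB above
SHOW work_mem;
-- a more typical starting point on a 48GB box would be much lower,
-- e.g. shared_buffers = 8GB in postgresql.conf, then tune from there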