Hi,
On Wednesday 04 March 2009 02:37:42 Scott Marlowe wrote:
If some oddball query really needs a lot of work_mem, and benchmarks show
that a larger work_mem helps, consider raising the work_mem setting for that
one query alone to something under 1G (way under 1G).
- Scott Marlowe scott.marl...@gmail.com wrote:
Oh my lord, that is a foot-gun waiting to go off. Assuming 2k
connections, if somehow a fair number of them went active with big
sorts, you'd be able to exhaust all physical memory with about 8 to
16 connections. Lower work_mem now.
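As a back-of-the-envelope check of that risk, the two settings involved can be read in one query (a sketch; `current_setting()` is standard, and the "multiply them" interpretation is the assumption that each active connection could run at least one work_mem-sized sort):

```sql
-- Worst-case sort memory is roughly max_connections × work_mem
-- (per sort operation; a single complex query can use several).
SELECT current_setting('work_mem')        AS work_mem,
       current_setting('max_connections') AS max_connections;
```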
Running PG 8.1.11 on AIX 5.3.
Trying to track down the cause of long-running commits on one of our DB
servers.
I can rule checkpoints out (I've set log_min_messages to debug2, and the commits
are not happening during checkpoints).
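For reference, a minimal sketch of the logging setup described above (a postgresql.conf fragment; on 8.1 the checkpoint start/stop messages show up at this debug level):

```
# postgresql.conf
log_min_messages = debug2   # makes checkpoint activity visible in the server log
```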
We have this problem over a period of ~35 minutes per day.
You may have decreased performance in your batch jobs with the lower work_mem
setting.
Additionally, the fact that you haven't had swap-storm issues so far means that
although there is a certain risk of an issue, it's probably a lot lower than what
has been talked about here so far.
On Wed, Mar 4, 2009 at 11:18 AM, Scott Carey sc...@richrelevance.com wrote:
You may have decreased performance in your batch jobs with the lower
work_mem setting.
That would be why I recommended benchmarking queries that need more
memory and setting work_mem for those queries alone.
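A sketch of that per-query approach (the value and the query are illustrative assumptions, not figures from this thread — benchmark your own workload):

```sql
-- Raise work_mem for one transaction only, then let it revert.
-- On 8.1 the setting is an integer in kB; string units like '256MB'
-- are accepted from 8.2 onward.
BEGIN;
SET LOCAL work_mem = '262144';  -- assumed value: 256 MB, for illustration
SELECT ...;                     -- the memory-hungry query goes here
COMMIT;                         -- SET LOCAL reverts at transaction end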
On Tue, Mar 3, 2009 at 5:34 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Comparisons on
numerics aren't terribly fast though (in either release). I wonder
whether you could change the key columns to int or bigint.
I changed the affected columns from numeric to integer, and I was unable to
get any
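A hedged sketch of the column-type change Tom suggested (the table and column names below are taken from the query shown later in the thread, but whether these are the right columns to convert is an assumption):

```sql
-- numeric -> integer/bigint is an assignment cast, so no USING clause
-- is needed; pick integer or bigint based on the range of the key values.
ALTER TABLE t8 ALTER COLUMN payment_id TYPE integer;
ALTER TABLE t8 ALTER COLUMN amount_id  TYPE bigint;
```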
Query and first part of the table descriptions
Query:
explain analyze
select distinct on (t8.id)
    t8.id,
    t8.payment_id,
    t8.amount_id,
    t8.active,
2nd part of table descriptions
        Table "public.company"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer | not null