We are having terrible performance issues with a production instance of
PostgreSQL version 7.4.5, but are struggling with which parameters in
the postgresql.conf to change.
Our database server is an Apple G5 (2 x 2GHz CPU, 2GB RAM). The
operating system is Mac OS X 10.3.
The database seems fine to start with, but as the load increases it
reaches a threshold where the number of non-idle queries in
pg_stat_activity grows sharply and we get something like a motorway
tailback, with up to perhaps 140 queries awaiting processing. At the
same time the virtual memory usage (reported by the OS) grows heavily
too (sometimes up to 50GB). The CPUs do not seem to be working overly
hard, nor do the disks, and the memory monitor reports about 600MB of
inactive memory. Once in this situation, the database never catches up
with itself, and the only way to put it back on an even keel is to stop
the application and restart the database.
The system memory settings are:
We have raised the process and open-file limits to "unlimited" for the
user running PostgreSQL (which in practice caps out at 2048 processes
and 12288 open files).
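As a sanity check, those limits can be confirmed from the shell account
that actually runs the postmaster (a minimal sketch; the numbers in the
comments are the caps from our setup, not universal defaults):

```shell
# Show per-process resource limits for the current user.
ulimit -u    # max user processes (2048 in our case)
ulimit -n    # max open file descriptors (12288 in our case)
```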
Non-default postgresql.conf parameters are:
tcpip_socket = true
max_connections = 500
unix_socket_directory = '/Local/PostgreSQL'
shared_buffers = 8192 # min 16, at least max_connections*2, 8KB each
sort_mem = 2048 # min 64, size in KB
wal_buffers = 32 # min 4, 8KB each
effective_cache_size = 100000 # typically 8KB each
random_page_cost = 2 # units are one sequential page fetch cost
log_min_error_statement = info # Values in order of increasing severity:
log_duration = true
log_pid = true
log_statement = true
log_timestamp = true
stats_command_string = true
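To put the memory-related settings above in perspective, here is a rough
back-of-envelope calculation (my own arithmetic, not from any PostgreSQL
documentation): shared_buffers and effective_cache_size are counted in
8KB pages, while sort_mem is allocated per sort operation, so many
concurrent sorts (potentially several per query) can multiply it.

```shell
# Rough sizing arithmetic for the settings above (integer MB).
echo "shared_buffers:       $(( 8192 * 8 / 1024 )) MB"    # 8192 pages x 8KB = 64 MB
echo "sort_mem, worst case: $(( 500 * 2048 / 1024 )) MB"  # 500 conns x 2048KB = 1000 MB
echo "effective_cache_size: $(( 100000 * 8 / 1024 )) MB"  # planner hint only, no allocation
```

Note the sort_mem worst case assumes all 500 allowed connections each
run one sort at once; with our 40-connection cap it is smaller, but a
single complex query can still use sort_mem several times over.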
although on the last restart I changed the following (since the current
config clearly isn't working):
shared_buffers = 16384 # min 16, at least max_connections*2, 8KB each
effective_cache_size = 10000 # typically 8KB each
We don't know whether these have helped yet - but we should get a good
idea around 10am tomorrow morning.
We currently have the application limited to a maximum of 40 concurrent
connections to the database.
Our application produces a fairly varied mix of queries, some quite
complex, and plenty of them. We seem to average about 400,000 queries
per hour. At first I thought it might be one or two inefficient queries
blocking the CPUs, but the CPUs don't seem to be very stretched. My
guess is that we have our PostgreSQL memory settings wrong; however,
there is lots of conflicting advice about what to set (from 1000 to
100000 shared buffers).
Does this heavy use of VM and the query tailback indicate which memory
settings are wrong? Presumably if there are 140 queries in
pg_stat_activity then PostgreSQL will be trying to service all these
queries at once? I also presume that if VM usage is high then we are
paging a vast amount to disk, but I am not sure why.
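For scale, the load figure quoted earlier works out as follows (simple
arithmetic on the numbers already given, assuming the 40-connection cap
is saturated):

```shell
# 400,000 queries/hour expressed per second and per connection.
echo "queries/sec:            $(( 400000 / 3600 ))"       # ~111
echo "queries/sec/connection: $(( 400000 / 3600 / 40 ))"  # ~2-3 with 40 connections
```

So each connection averages only a few queries per second, which makes
the 140-query pile-up look more like blocking than raw CPU saturation.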
Has anyone seen this behaviour before, and can anyone point me in the
right direction?