Marc Mamin wrote:
Postgres configuration for 64 CPUs, 128 GB RAM...
there are probably not that many installations out there that large -
comments below
Hello,
We have the opportunity to benchmark our application on a large server. I
have to prepare the Postgres configuration and I'd appreciate some
comments on it as I am not experienced with servers of such a scale.
Moreover the configuration should be fail-proof as I won't be able to
attend the tests.
Our application (Java + Perl) and Postgres will run on the same server,
but the application activity is low while Postgres processes large
transactions.
There is a large gap between our current production server (Linux, 4GB
RAM, 4 CPUs) and the benchmark server; one of the targets of this
benchmark is to verify the scalability of our application.
[...]
Postgres version: 8.2.1
upgrade to 8.2.4
File system:
http://en.wikipedia.org/wiki/ZFS
way more important is what kind of disk-IO subsystem you have attached ...
Planned configuration:
--------------------------------
# we don't expect more than 150 parallel connections,
# but I suspect a leak in our application that leaves some idle
# connections open
max_connections=2000
ssl = off
#maximum allowed
shared_buffers= 262143
this is probably on the lower side for a 128GB box
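As a rough sketch only - assuming the 262143 ceiling comes from the
kernel shared memory limit (SHMMAX) rather than from Postgres itself,
raising that limit (via sysctl on Linux) would allow a more typical value
for a box this size, e.g.:

# assumes kernel.shmmax has been raised accordingly
shared_buffers = 1048576        # 8 GB in 8kB pages - a starting guess, not a tested value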
# on our current best production server with 4GB RAM (not dedicated to
# Postgres), work_mem is set to 600 MB
# this limitation is probably the bottleneck for our application, as the
# files in pgsql_tmp grow up to 15 GB
# during large aggregations (we have a locking mechanism to avoid
# parallel processing of such transactions)
work_mem = 31457280 # (30 GB)
this is simply ridiculous - work_mem is PER SORT - so if your query
requires 8 sorts it will feel free to use 8x30GB, and that in turn gets
multiplied by the number of concurrent connections.
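A safer pattern is to keep the global work_mem modest and raise it only
in the session that runs the large aggregation; a minimal sketch (the
1GB figure is a guess, not a tested value):

SET work_mem = '1GB';    -- applies per sort/hash operation, this session only
-- run the large aggregation here
RESET work_mem;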
# index creation time is also an issue for us; the index build also
# blocks other large processes.
# our largest table so far is 13 GB + 11 GB indexes
maintenance_work_mem = 31457280 # (30 GB)
this is ridiculous too - testing has shown that there is not much point
in going beyond 1GB or so
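The same per-session approach works for index builds; the table and
index names below are made up for illustration:

SET maintenance_work_mem = '1GB';
CREATE INDEX big_table_some_col_idx ON big_table (some_col);
RESET maintenance_work_mem;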
# more than the max number of tables +indexes expected during the benchmark
max_fsm_relations = 100000
max_fsm_pages = 1800000
this is probably way too low for a database the size of yours - watch the
output of VACUUM VERBOSE on a database-wide vacuum for some stats on that.
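For reference, the numbers to compare against max_fsm_pages /
max_fsm_relations show up in the final INFO lines of a database-wide:

VACUUM VERBOSE;
-- the last lines report how many page slots and relations the free space
-- map is tracking; if they approach the configured limits, raise
-- max_fsm_pages / max_fsm_relations (exact wording of the message varies by version)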
# don't know if I should modify this.
# seems to be sufficient on our production servers
max_stack_depth = 2MB
# vacuum will be done by hand between test sessions
autovacuum = off
# required to analyse the benchmark
log_min_duration_statement = 1000
max_prepared_transactions = 100
# seems to be required to drop schemas/roles containing a large number
# of objects
max_locks_per_transaction = 128
# I use the default for the bgwriter as I couldn't find recommendations
# on those
#bgwriter_delay = 200ms             # 10-10000ms between rounds
#bgwriter_lru_percent = 1.0         # 0-100% of LRU buffers scanned/round
#bgwriter_lru_maxpages = 5          # 0-1000 buffers max written/round
#bgwriter_all_percent = 0.333       # 0-100% of all buffers scanned/round
#bgwriter_all_maxpages = 5          # 0-1000 buffers max written/round
#WAL
fsync = on
#use default
#wal_sync_method
# we are using 32 on our production system
wal_buffers=64
values up to 512 or so have been reported to help on systems with very
high concurrency
what is missing here is your settings for:
effective_cache_size
and
random_page_cost
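As a hedged sketch only (the numbers are assumptions derived from the
128GB figure, not measurements - effective_cache_size should reflect
whatever actually remains for the OS cache next to the application):

effective_cache_size = 8388608    # ~64 GB in 8kB pages, if about half the RAM ends up as OS cache
random_page_cost = 2.0            # below the 4.0 default if the I/O subsystem caches well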
Stefan