Hello all 

On a dedicated server with 16 cores and 16GB of RAM running PostgreSQL 8.2.5 we 
have a database with basically these kinds of transactions: 
- short transactions with a couple of updates and inserts that run all day 
long; 
- batch data loads with hundreds of inserts that run several times a day; 
- one delete of thousands of rows after each batch; 
- selects when users need reports; low concurrency here. 

Today max_connections is ~2500, and the application is a cluster of JBoss 
servers with a connection pool slightly smaller than that total. 
work_mem = 1GB 
maintenance_work_mem = 1GB 
shared_buffers = 4GB 
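
Note that work_mem does not have to be one-size-fits-all: it can be overridden 
per session, so a low global value need not hurt the occasional report query. 
A sketch (the 256MB figure is purely illustrative, not from the post): 

```sql
-- Hypothetical: keep the global work_mem small, then raise it only in
-- the sessions that actually run the big report queries.
SET work_mem = '256MB';  -- affects this session only
-- ... run the report query here ...
RESET work_mem;          -- back to the global default
```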

autovacuum takes a long time on the largest tables (3 large tables out of 
50), causing some connections to wait for it to finish before they can start 
transactions again. 
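
If those waits come from I/O contention rather than locks, one option is 
cost-based autovacuum throttling, which makes each vacuum pass slower but 
gentler on concurrent transactions. A postgresql.conf sketch with 
illustrative values only (these settings exist in 8.2; tune to taste): 

```
# Sketch: throttle autovacuum I/O so it interferes less with the
# short OLTP transactions. Values below are assumptions, not advice
# taken from the post.
autovacuum_vacuum_cost_delay = 20   # ms to sleep once the cost limit is hit
autovacuum_vacuum_cost_limit = 200  # cost accumulated before sleeping
```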

I see a few processes (connections) using 10-20% of total system memory and 
the others using no more than 1%. 

What I want to ask is: is it better to keep work_mem as high as it is today, 
or is it a safer bet to reduce it, for example to 1 or 2MB, so I can keep the 
distribution of memory more balanced among all the connections? 
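
To put numbers on my worry: work_mem is allocated per sort/hash operation per 
backend, so with the figures above only a handful of concurrent 1GB sorts 
would already fill the machine. A back-of-the-envelope check (all figures 
taken from this post): 

```python
# Back-of-the-envelope memory check using the figures from this post.
ram_gb = 16
shared_buffers_gb = 4
work_mem_gb = 1
max_connections = 2500

# RAM left for backends and the OS cache after shared_buffers:
available_gb = ram_gb - shared_buffers_gb

# How many concurrent 1GB sorts fit before the box starts swapping
# (work_mem is per sort/hash node, and a query can use several):
concurrent_big_sorts = available_gb // work_mem_gb
print(concurrent_big_sorts)  # ~12 -- tiny compared to 2500 connections
```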

Thanks! 

Flavio Henrique A. Gurgel 
