On Aug 19, 2013, at 9:55 AM, Dzmitry <dzmitry.nikit...@gmail.com> wrote:

> No, I am not using pgbouncer; I am using pgpool.
> 
> In total I have 440 connections to Postgres. (I have a Rails application
> running on several servers; each application opens 60 connections to the DB
> and keeps them forever, until it is killed. I also have some machines that
> do background processing, and those keep connections too.)
> 
> The part that does a lot of writes (updating jobs from an XML feed every
> night) has 40 threads and keeps 40 connections.

That's extreme, and probably counter-productive.

How many cores do you have on those Rails servers? Probably not 64, right? Not
32? 16? 12? 8, even? Assuming <64, what advantage do you expect from 60
connections? The same comment applies to the 40 connections doing the update
jobs: more connections than cores is unlikely to help anything, and more than
2x cores is almost guaranteed to be worse than fewer.
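
For reference, you can see how many of those connections are actually doing
anything by grouping pg_stat_activity by state (assuming 9.2 or later, where
it has a state column):

    -- Counts sessions by state; with 440 connections, expect most
    -- of them to show up as 'idle'.
    SELECT state, count(*)
    FROM pg_stat_activity
    GROUP BY state
    ORDER BY count(*) DESC;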

Postgres connections are of the heavy-weight variety: process per connection,
not thread per connection, not thread-per-core event-driven. In particular, I'd
worry about work_mem in your configuration. You either have to set it really
low and live with sorts and the like spilling to disk too quickly, or set it to
a decent size and risk that too many queries at once will trigger the OOM
killer.
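
To put rough numbers on it (illustrative values, not a recommendation, and
note that work_mem can be allocated more than once per query, once per sort
or hash node):

    # postgresql.conf -- illustrative setting only
    work_mem = 64MB

    # Worst case with 440 connections each running a single sort:
    #   440 x 64MB = ~27.5GB for sorts alone, on top of shared_buffers
    # and the OS cache, and a query with several sort/hash nodes can use
    # work_mem several times over. With 440 connections you're pushed
    # toward something tiny like work_mem = 4MB just to stay safe.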

Given your configuration, I wouldn't even start with pgbouncer for connection
pooling. I'd first just cut the number of connections everywhere in half, or
even to a quarter, and see what effect that has. Then, as a second step, I'd
look at where connection pooling might be used effectively.
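
On the Rails side, the per-process connection cap lives in
config/database.yml; a minimal sketch, with hypothetical names and values:

    # config/database.yml -- 'pool' caps connections per Rails process,
    # so the total is (number of processes) x pool.
    production:
      adapter: postgresql
      database: myapp_production   # hypothetical database name
      pool: 15                     # cut from 60; tune from here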

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice