"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:

> Note that with this no-think-time setup, each client can be about 75% CPU
> busy from what I observed. Running it, I found that client scaling saturates
> at about 60 now (compared to 500 from the original test). The peak throughput
> was at about 50 users (using synchronous_commit=off).

So to get the maximum throughput on the benchmark with think times, it seems
you want to aggregate the clients about 10:1 (on the order of 500 simulated
users multiplexed onto the 50-60 connections where throughput peaks) with a
connection pooler or some middleware layer.
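
The thread doesn't name a particular pooler, but as one hypothetical way to
get that roughly 10:1 aggregation, a minimal pgbouncer sketch might look like
this (the database name, ports, and pool sizes are illustrative assumptions,
not settings taken from the benchmark):

  ; minimal pgbouncer.ini sketch -- pgbouncer is just one possible pooler
  [databases]
  ; route the benchmark database to the local PostgreSQL server
  bench = host=127.0.0.1 port=5432 dbname=bench

  [pgbouncer]
  listen_addr = 127.0.0.1
  listen_port = 6432
  ; transaction pooling hands the server connection back at commit/rollback
  pool_mode = transaction
  ; accept all ~500 simulated users on the pooler side...
  max_client_conn = 500
  ; ...but multiplex them onto ~50 real backend connections (about 10:1)
  default_pool_size = 50

The benchmark clients would then connect to port 6432 instead of 5432, and the
server itself only ever sees the smaller connection count.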

It's still interesting to find the choke points for large numbers of
connections, though not because they are limiting your benchmark results
(that would be better addressed by using fewer connections) but simply for
the sake of knowing where problems loom on the horizon.
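
On PostgreSQL versions much newer than this thread, one quick way to see what
a pile of connections is actually waiting on is the wait_event columns in
pg_stat_activity (they did not exist when this was written); a rough sketch:

  -- count active backends by what they are currently waiting on
  SELECT wait_event_type, wait_event, count(*) AS backends
  FROM pg_stat_activity
  WHERE state <> 'idle'
  GROUP BY wait_event_type, wait_event
  ORDER BY backends DESC;

Lock or LWLock waits bubbling to the top of that list are usually the first
hint of the kind of choke point being discussed here.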

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's PostGIS support!

