Folks,

I've run into this a number of times with various PostgreSQL users, so we 
tested it at Sun.  What seems to be happening is that at some specific number 
of connections, average throughput drops about 30% and response time quadruples 
or worse.  The threshold seems to vary per machine; I've seen it variously at 
95, 1050, 1700, or 2800 connections.  Tinkering with postgresql.conf parameters 
doesn't seem to affect this threshold.

As an example of this behavior:

Users   Txn/User  Resp. Time (s)
50      105.38  0.01
100     113.05  0.01
150     114.05  0.01
200     113.51  0.01
250     113.38  0.01
300     112.14  0.01
350     112.26  0.01
400     111.43  0.01
450     110.72  0.01
500     110.44  0.01
550     109.36  0.01
600     107.01  0.02
650     105.71  0.02
700     106.95  0.02
750     107.69  0.02
800     106.78  0.02
850     108.59  0.02
900     106.03  0.02
950     106.13  0.02
1000    64.58   0.15
1050    52.32   0.23
1100    49.79   0.25

Tinkering with shared_buffers has had no effect on this thresholding (the above 
was with 3 GB to 6 GB of shared_buffers).  Any ideas on where we should look 
for the source of the bottleneck?
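For reference, the shape of the test is the usual one: N concurrent sessions each 
running transactions for a fixed interval, reporting transactions per user and 
mean response time (the two columns above).  A minimal sketch of such a harness 
in Python, with a dummy callable standing in for the actual database transaction 
(the real test runs against PostgreSQL, of course; the function and parameter 
names here are illustrative, not our actual tooling):

```python
import threading
import time

def run_load_test(n_users, duration_s, txn):
    """Drive `txn` from n_users concurrent threads for duration_s seconds.

    Returns (txns_per_user, avg_response_s) -- the two columns in the
    table above.  `txn` stands in for one database transaction.
    """
    counts = [0] * n_users          # per-worker completed-transaction counts
    latencies = []                  # all observed per-transaction latencies
    lock = threading.Lock()
    stop = time.monotonic() + duration_s

    def worker(i):
        local_lat = []
        while time.monotonic() < stop:
            t0 = time.monotonic()
            txn()                   # one "transaction"
            local_lat.append(time.monotonic() - t0)
            counts[i] += 1
        with lock:
            latencies.extend(local_lat)

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    txns_per_user = sum(counts) / n_users
    avg_resp = sum(latencies) / len(latencies) if latencies else 0.0
    return txns_per_user, avg_resp

if __name__ == "__main__":
    # Trivial stand-in transaction: a 1 ms sleep.
    tpu, resp = run_load_test(8, 0.5, lambda: time.sleep(0.001))
    print("%.1f txn/user, %.3f s avg response" % (tpu, resp))
```

Sweeping n_users upward and watching for the knee in txns_per_user is exactly 
how the table above was produced, just with real connections and real queries.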

-- 
Josh Berkus
PostgreSQL @ Sun
San Francisco
