On 29 June 2016 at 18:47, Sachin Kotwal <kotsac...@gmail.com> wrote:
> I am testing pgbench with more than 100 connections.
> I also set max_connections in postgresql.conf to more than 100.
> Initially pgbench scales up to nearly 150 connections, but it later comes
> down to 100 connections and stays there.
> Is this a limitation of pgbench? Or a bug? Or am I doing it the wrong way?
What makes you think this is a pgbench limitation?
It sounds like you're benchmarking the client and server on the same
system. Couldn't this be a limitation of the backend PostgreSQL server?
It also sounds like your method of counting concurrent connections is
probably flawed. You're not allowing for setup and teardown time; if you
want more than 100 connections genuinely running at very high rates of
connection and disconnection, you'll probably need to raise max_connections
a bit to allow for the connections that are starting up or tearing down at
any given moment.
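If you want to know how many backends actually exist at a given instant,
ask the server rather than trusting a client-side count. A quick check from
psql while pgbench is running (assuming your user can read
pg_stat_activity) might look like:

```sql
-- Count live backends by state. Connections still being set up or torn
-- down show up here even though a client-side count tends to miss them.
SELECT state, count(*)
  FROM pg_stat_activity
 GROUP BY state;

-- Compare the total against the configured ceiling:
SHOW max_connections;
```

If the total regularly brushes up against max_connections, new connection
attempts will fail or queue, which would explain a plateau around that
number.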
Really, though, why would you want to do this? I can measure my car's speed
falling off a cliff, but that's not a very interesting benchmark for a car.
I can't imagine any sane use of the database this way, with incredibly
rapid setup and teardown of lots of connections. Look into connection
pooling, either client side or in a proxy like pgbouncer.
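With pgbouncer in transaction-pooling mode, clients can connect and
disconnect cheaply while the server keeps a small, stable set of backends.
A minimal sketch of a pgbouncer.ini for this (database name, port, and pool
sizes here are illustrative assumptions, not values from this thread):

```ini
; Minimal pgbouncer.ini sketch -- names and sizes are illustrative.
[databases]
pgbench = host=127.0.0.1 port=5432 dbname=pgbench

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000   ; many cheap client connections...
default_pool_size = 20   ; ...but only a handful of real server backends
```

Then point pgbench at port 6432 instead of 5432, so all that rapid
connect/disconnect churn hits the pooler rather than the postmaster.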
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services