Jeroen T. Vermeulen wrote:
> On Thu, July 12, 2007 23:03, Fei Liu wrote:
>
> Hello Liu,
>
>   
>> My implementation is now using a 120 pool size (server is configured to
>> allow 128 concurrent connections), my bottleneck now is the database. My
>> connection pool is quickly exhausted in my test. What's a good strategy
>> to improve the performance? I am thinking of queueing the database
>> requests and only executing them when a certain number (say 50 SQL
>> statements) is reached?
>>     
>
> I think 120 is probably still too many.  One of the points of having a
> connection pool is that you can create fewer connections and reuse them. 
> Many web applications get by on <20, as far as I know.  Don't create this
> many unless your testing shows that (1) you need it and (2) it really
> helps.
>
> What you can do is make sure your pooled connections are released
> regularly, so hopefully you don't need so many.  You could allocate them
> on a per-transaction basis: lock pool, grab connection, unlock pool, open
> transaction, do work, commit, destroy transaction, lock pool, release
> connection, unlock pool.  "Grabbing" or "releasing" a connection could be
> as simple as taking it out of a list or setting an "I own it" flag
> somewhere.
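>
> In code, that per-transaction flow might look roughly like the sketch
> below.  This is only an illustration: it assumes libpqxx's
> pqxx::connection and pqxx::work, while the ConnectionPool class, its
> size, and the sample UPDATE statement are made up for the example.
>
>   // Minimal connection-pool sketch (illustrative only).
>   #include <memory>
>   #include <mutex>
>   #include <stdexcept>
>   #include <string>
>   #include <vector>
>   #include <pqxx/pqxx>
>
>   class ConnectionPool {
>   public:
>     ConnectionPool(const std::string &conninfo, std::size_t size) {
>       for (std::size_t i = 0; i < size; ++i)
>         idle_.push_back(std::make_unique<pqxx::connection>(conninfo));
>     }
>
>     // Lock pool, grab a connection, unlock pool (the lock is released
>     // when 'guard' goes out of scope).
>     std::unique_ptr<pqxx::connection> grab() {
>       std::lock_guard<std::mutex> guard(mtx_);
>       if (idle_.empty()) throw std::runtime_error("pool exhausted");
>       std::unique_ptr<pqxx::connection> c = std::move(idle_.back());
>       idle_.pop_back();
>       return c;
>     }
>
>     // Lock pool, put the connection back, unlock pool.
>     void release(std::unique_ptr<pqxx::connection> c) {
>       std::lock_guard<std::mutex> guard(mtx_);
>       idle_.push_back(std::move(c));
>     }
>
>   private:
>     std::mutex mtx_;
>     std::vector<std::unique_ptr<pqxx::connection>> idle_;
>   };
>
>   // Hold a connection only for the duration of one transaction.
>   void run_one_transaction(ConnectionPool &pool) {
>     std::unique_ptr<pqxx::connection> conn = pool.grab();  // grab
>     try {
>       pqxx::work txn(*conn);                      // open transaction
>       txn.exec("UPDATE counters SET n = n + 1");  // do work (example query)
>       txn.commit();                               // commit
>     } catch (...) {
>       pool.release(std::move(conn));              // release even on error
>       throw;
>     }
>     pool.release(std::move(conn));                // release connection
>   }
>
> That way each connection goes back to the pool as soon as its
> transaction commits, so a much smaller pool can serve many requests.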
>
>> In my case the postgres server simply stops responding after a while and
>> the allocated connection is never released from the client... But CPU load
>> is never really very high... Maybe there is something else wrong.
>>     
>
> That's beginning to sound more like a postgres issue; people at
> [EMAIL PROTECTED] may know more about it.
>
>
> Jeroen
>

Thanks for the comments; I'll do some digging and hopefully have
something to report back.

Fei
_______________________________________________
Libpqxx-general mailing list
Libpqxx-general@gborg.postgresql.org
http://gborg.postgresql.org/mailman/listinfo/libpqxx-general
