Jeroen T. Vermeulen wrote:
> On Wed, July 18, 2007 01:56, Fei Liu wrote:
>
>
>> It appears my multi-threaded application (100 connections every 5 seconds)
>> is stalled when working with the postgresql database server. I have limited
>> the number of connections in my connection pool to postgresql to 20. At the
>> beginning, connections are allocated and released from the connection pool as
>> postgres serves data requests. The pool can recover from exhaustion. But
>> very quickly (after about 400 client requests), it seems the postgres server
>> stops serving and connections to the postgres server are no longer released,
>> resulting in resource exhaustion for clients.
>>
>
> It looks like either your threads get "stuck" somehow, or your code leaks
> connections.
>
> In this case you can check for leaks by allowing fewer threads than you
> have connections. If you still run out of connections in your pool, the
> problem is that some threads complete their work without releasing their
> connections to the pool. If the program simply stops, then the problem is
> that the threads get stuck.
>
> In the latter case, try to figure out where they get stuck. One way of
> doing that is by using a debugger; another is to add some instrumentation
> code to each thread that opens a file and writes log information to it.
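
(Something along these lines is roughly what that suggestion amounts to, I think; in the wrapper further down I simply write to std::cerr instead. The thread id here is just whatever id the thread library hands out:)

#include <fstream>
#include <sstream>
#include <string>

// Sketch of per-thread instrumentation: each worker appends its progress
// to its own file, so a stuck thread's last line shows where it stopped.
void log_step(unsigned long thread_id, const std::string &msg)
{
    std::ostringstream name;
    name << "thread-" << thread_id << ".log";
    std::ofstream out(name.str().c_str(), std::ios::app);
    out << msg << std::endl;   // endl flushes, so the line survives a hang
}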
>
>
>
>> I am using a similar model to limit thread creation: I have another
>> thread pool that limits the number of concurrent threads to about 30. I have
>> enclosed my wrapper interface to libpqxx. Maybe I did something wrong
>> with my wrapper? My client code uses strictly the transactor interface.
>> What's the implementation strategy of pqxx::transactor<> ?
>>
>
> That really shouldn't make a difference, since it doesn't affect the
> connection. The implementation for transactor execution (the code for
> pqxx::connection_base::perform(), which is an inline template function) is
> in include/pqxx/transactor.hxx.
>
> As you can see there, it starts a transaction, executes your transactor,
> and tries to commit. Apart from transparent re-connection when needed and
> allowed, this shouldn't really affect the lifetime of the connection.
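
If I read transactor.hxx right, the flow is roughly the following (a simplified sketch of my understanding, not the actual library source; the retry count and the abort/doubt hooks in the real code are more involved):

#include <pqxx/pqxx>
#include <exception>

// Simplified sketch of what connection_base::perform() does, as I read
// transactor.hxx -- not the actual libpqxx source.
template<typename TRANSACTOR>
void sketch_perform(pqxx::connection_base &C, const TRANSACTOR &T, int attempts = 3)
{
    while (attempts-- > 0)
    {
        TRANSACTOR T2(T);                          // fresh copy for every attempt
        try
        {
            typename TRANSACTOR::argument_type W(C, T2.Name());
            T2(W);                                 // run the transactor body
            W.commit();                            // then try to commit
            return;                                // success: done
        }
        catch (const std::exception &)
        {
            if (attempts <= 0) throw;              // out of retries: rethrow
            // otherwise fall through and retry on a (re)opened connection
        }
    }
}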
>
>
> Jeroen
>
>
>
It appears that service threads are stuck in perform... Could this
potentially be a libpqxx bug? Following is the perform code that generated
the log below. Is there any multi-threaded libpqxx example or test code I
can try? I have extensively tested my resource pool implementation; it is
well behaved...
Many thanks for your time,
template <typename pgdb_op>
void pqdb::perform(const pgdb_op & op)
{
    unsigned int id = 0;
    try
    {
        pqxx::connection & c = pool.alloc(id);
        std::cerr << "before perform" << id << std::endl;
        c.perform(op);
        std::cerr << "after perform" << id << std::endl;
        id = pool.release(id);
    }
    catch (pqxx::integrity_constraint_violation & e)
    {
        // ignore duplicate key violation
    }
    catch (const pqxx::sql_error & e)
    {
        // If we're interested in the text of a failed query, we can write
        // separate exception handling code for this type of exception
        std::cerr << "SQL error: " << e.what() << std::endl
                  << "Query was: '" << e.query() << "'" << std::endl;
    }
    catch (const std::exception & e)
    {
        // All exceptions thrown by libpqxx are derived from std::exception
        std::cerr << "Exception: " << e.what() << std::endl;
    }
    catch (...)
    {
        // This is really unexpected (see above)
        std::cerr << "Unhandled exception" << std::endl;
        if (id)
            pool.release(id);
    }
}
allocating 18
before perform18
releasing 14
after perform17
releasing 17
releasing 12
allocating 12
before perform12
allocating 17
before perform17
allocating 14
before perform14
allocating 19
before perform19
allocating 20
before perform20
allocating 21
before perform21
after perform17
allocating 22
before perform22
after perform19
allocating 23
before perform23
after perform21
allocating 24
before perform24
after perform23
releasing 17
allocating 17
before perform17
allocating 25
before perform25
releasing 19
after perform24
releasing 24
releasing 21
releasing 23
allocating 23 <---------- start
before perform23
allocating 21
before perform21
allocating 24
before perform24
allocating 19
before perform19
allocating 26
before perform26
allocating 27
before perform27
allocating 28
before perform28
allocating 29
before perform29
allocating 30
before perform30
resource pool exhausted, waiting for release ...
resource pool exhausted, waiting for release ...
Stall ...
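
For completeness, the client side goes through transactors shaped like the one below (the class name and the SQL here are made up; the real ones differ only in the queries they run):

#include <pqxx/pqxx>

// Made-up example of a client-side transactor passed to pqdb::perform().
class record_visit : public pqxx::transactor<>
{
public:
    record_visit() : pqxx::transactor<>("record_visit") {}

    void operator()(argument_type &T)
    {
        T.exec("INSERT INTO visits (ts) VALUES (now())");
    }
};

// ...somewhere in a service thread, db being the pqdb wrapper shown above:
//     db.perform(record_visit());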