So I have more questions:
Did you run your test with the JndiDataSourceFactory on the same multi-CPU system? On Tomcat with the default data source (BasicDataSource) settings?
Can you send me your complete Tomcat/Torque data source configs?
To help debugging please download and use the following file: http://cvs.apache.org/~dirkv/builds/KeyedCPDSConnectionFactory.java
I synchronized all methods and added some debug to stderr.
--- Dirk
Todd Carmichael wrote:
I did perform this test. Perhaps the original email was not clear in referencing the test using JndiDataSourceFactory, where no leaks occurred.
ToddC
-----Original Message-----
From: Juozas Baliuka [mailto:[EMAIL PROTECTED]]
Sent: Monday, October 27, 2003 9:53 AM
To: Jakarta Commons Developers List
Subject: Re: [dbcp] Trying to track/find a connection leak in DBCP 1.1
Try running the app without the pool first to find the broken component. Most connection leak problems are in applications, so it is not easy to trust this kind of test.
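One way to find the broken component without ripping out the pool is to wrap each borrowed Connection and record where it was borrowed; anything never closed then shows up with the stack trace of the leaking call site. A rough sketch (ConnectionTracker and every name in it are made up for illustration; this is not DBCP or Torque API):

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Wrap every Connection the app borrows and remember the stack trace of the
// borrow. Whatever is still in the map at shutdown (or after a test run) is a
// leak, and its stored trace points at the code that never called close().
public class ConnectionTracker {
    private static final Map<Connection, Throwable> OPEN = new ConcurrentHashMap<>();

    public static Connection track(final Connection real) {
        Connection proxy = (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class },
                (p, method, args) -> {
                    if ("close".equals(method.getName())) {
                        OPEN.remove(p); // closed properly: no longer a suspect
                    }
                    return method.invoke(real, args);
                });
        OPEN.put(proxy, new Throwable("connection borrowed here"));
        return proxy;
    }

    public static int openCount() {
        return OPEN.size();
    }

    public static void dumpLeaks() {
        for (Throwable borrowedAt : OPEN.values()) {
            borrowedAt.printStackTrace(); // the leaking call site
        }
    }
}
```

Wiring this in at whatever point the application obtains its connections would show quickly whether the leak is in application code or below it.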
----- Original Message -----
From: "Todd Carmichael" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, October 27, 2003 7:32 PM
Subject: [dbcp] Trying to track/find a connection leak in DBCP 1.1
Our scalability tests against our web application (350 concurrent users, using Tomcat 4.1.24 with JDK 1.4.2) are failing due to a continual leak of database connections. The test failed in that the number of database connections (reported by the DB server: MS SQL Server) grew to over a thousand during an hour-plus test (using the Microsoft SQL JDBC driver, though we have tried the i-net driver with similar results). We know that it is not a problem in the application, because we have performed identical tests configured instead to use the JndiDataSourceFactory, which passes the connection pooling chores to the appserver. Both the SunOne appserver and the WebLogic appserver maintained a valid number of connections (< 75). We are using Torque 3.1 with DBCP 1.1 (using org.apache.commons.dbcp.datasources.SharedPoolDataSource) and Pool 1.1.
Our settings in torque.properties (which are passed down to the SharedPoolDataSource) are:
torque.dsfactory.default.pool.maxActive=100
torque.dsfactory.default.pool.testOnBorrow=0
torque.dsfactory.default.pool.testOnReturn=0
# No setting for maxIdle; it defaults to 8. Not optimal, but the code should still work.
torque.dsfactory.default.pool.timeBetweenEvictionRunsMillis=60000
torque.dsfactory.default.pool.numTestsPerEvictionRun=-1
torque.dsfactory.default.pool.minEvictableIdleTimeMillis=1000
torque.dsfactory.default.pool.testWhileIdle=false
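For what it is worth, a maxIdle of 8 under 350 concurrent users means the pool is constantly destroying connections returned above the idle cap. If destroy ever fails to physically close the underlying connection, the server-side count grows without bound while getNumActive()/getNumIdle() stay small, which matches the symptom. A toy simulation of that arithmetic (illustrative numbers only, not DBCP code):

```java
// Toy model of a pool with maxIdle=8. Returns beyond maxIdle "destroy" the
// pooled object; if destroy does not actually close the physical connection
// (the suspected bug), the DB server's count keeps growing even though the
// pool reports only a handful of active/idle objects.
public class MaxIdleChurn {
    static final int MAX_IDLE = 8;

    /** Returns the physical connection count the DB server would see. */
    static int simulate(int cycles, int concurrent, boolean destroyCloses) {
        int physicalOpen = 0; // server-side count
        int idle = 0;         // the pool's getNumIdle()
        for (int c = 0; c < cycles; c++) {
            for (int i = 0; i < concurrent; i++) {       // borrow phase
                if (idle > 0) idle--; else physicalOpen++;
            }
            for (int i = 0; i < concurrent; i++) {       // return phase
                if (idle < MAX_IDLE) idle++;
                else if (destroyCloses) physicalOpen--;  // properly closed
                // else: destroyed object never closed -> leaks on the server
            }
        }
        return physicalOpen;
    }

    public static void main(String[] args) {
        System.out.println("buggy destroy:  " + simulate(30, 40, false));
        System.out.println("proper destroy: " + simulate(30, 40, true));
    }
}
```

With a broken destroy, 30 cycles of 40 concurrent borrows leave nearly a thousand physical connections open; with a proper destroy the count settles at maxIdle.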
I have begun some debugging of this problem. I have added some debug output in TorqueInstance.getConnection to track the size of the pool:
iCountConnections++;
if ((iCountConnections % 100) == 0)
{
    SharedPoolDataSource spds = (SharedPoolDataSource) dsf.getDataSource();
    log.warn("Current Num Active Connections=" + spds.getNumActive());
    log.warn("Current Num Idle Connections=" + spds.getNumIdle());
    log.warn("Max Active =" + spds.getMaxActive());
    log.warn("Max Idle =" + spds.getMaxIdle());
    log.warn("Max Wait =" + spds.getMaxWait());
}
Here is a sample of the output when I run our scalability tests. At this point the system is under heavy load and the DB server is reporting over 100 connections:
[WARN] TorqueInstance - -Current Num Active Connections=3
[WARN] TorqueInstance - -Current Num Idle Connections=5
[WARN] TorqueInstance - -Max Active =100
[WARN] TorqueInstance - -Max Idle =8
[WARN] TorqueInstance - -Max Wait =500
[WARN] TorqueInstance - -Current Num Active Connections=5
[WARN] TorqueInstance - -Current Num Idle Connections=3
[WARN] TorqueInstance - -Max Active =100
[WARN] TorqueInstance - -Max Idle =8
[WARN] TorqueInstance - -Max Wait =500
[WARN] TorqueInstance - -Current Num Active Connections=19
[WARN] TorqueInstance - -Current Num Idle Connections=0
[WARN] TorqueInstance - -Max Active =100
[WARN] TorqueInstance - -Max Idle =8
[WARN] TorqueInstance - -Max Wait =500
[WARN] TorqueInstance - -Current Num Active Connections=32
[WARN] TorqueInstance - -Current Num Idle Connections=0
[WARN] TorqueInstance - -Max Active =100
[WARN] TorqueInstance - -Max Idle =8
[WARN] TorqueInstance - -Max Wait =500
[WARN] TorqueInstance - -Current Num Active Connections=39
[WARN] TorqueInstance - -Current Num Idle Connections=1
[WARN] TorqueInstance - -Max Active =100
[WARN] TorqueInstance - -Max Idle =8
[WARN] TorqueInstance - -Max Wait =500
[WARN] TorqueInstance - -Current Num Active Connections=40
[WARN] TorqueInstance - -Current Num Idle Connections=0
[WARN] TorqueInstance - -Max Active =100
[WARN] TorqueInstance - -Max Idle =8
[WARN] TorqueInstance - -Max Wait =500
[WARN] TorqueInstance - -Current Num Active Connections=39
[WARN] TorqueInstance - -Current Num Idle Connections=2
[WARN] TorqueInstance - -Max Active =100
[WARN] TorqueInstance - -Max Idle =8
[WARN] TorqueInstance - -Max Wait =500
As you can see, the number of connections reported by the pool is not even close to the number of connections reported by the database.
ToddC
Also, we are getting an exception during the test:
java.sql.SQLException: Attempted to use Connection after closed() was called.
    at org.apache.commons.dbcp.cpdsadapter.ConnectionImpl.assertOpen(ConnectionImpl.java:140)
    at org.apache.commons.dbcp.cpdsadapter.ConnectionImpl.getMetaData(ConnectionImpl.java:253)
    at org.apache.torque.util.Transaction.rollback(Transaction.java:193)
    at org.apache.torque.util.Transaction.safeRollback(Transaction.java:232)
    at com.concur.om.base.BaseCtReportEntry.save(Unknown Source)
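The trace shows the rollback path touching the connection (getMetaData) after the pool has already invalidated it. Whatever the root cause, a defensive rollback that checks the connection state first would at least avoid the secondary exception. A sketch of the pattern (SafeRollback is a hypothetical helper, not a Torque patch):

```java
import java.sql.Connection;
import java.sql.SQLException;

// Defensive rollback: guard on isClosed() so a rollback attempted against an
// already-invalidated connection does not throw a secondary SQLException
// that masks the original failure.
public final class SafeRollback {
    private SafeRollback() {}

    /** Returns true if a rollback was actually issued. */
    public static boolean rollbackQuietly(Connection con) {
        if (con == null) return false;
        try {
            if (!con.isClosed() && !con.getAutoCommit()) {
                con.rollback();
                return true;
            }
        } catch (SQLException e) {
            // Log and swallow: a failed rollback on a dead connection
            // should not hide the exception that got us here.
            System.err.println("rollback skipped: " + e.getMessage());
        }
        return false;
    }
}
```

This does not fix the leak, but it would keep a dead connection from turning one failure into two.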
After this exception occurs, I receive no more output as shown above. The system appears deadlocked. I can send a stack dump (Ctrl-Break on Wintel machines). The stack dump is pretty interesting, as it shows where the contention is within the dbcp and pool system, but I don't know if it will help debug this leak problem (e.g. only one thread can create a db connection at a time because the connection to the db is created during makeObject). Let me know if anyone is interested in seeing the dump.
This problem is critical for us (I had advocated moving to the new Torque and DBCP, and am now in a tough spot with this problem). I will continue to debug what I can, and I can apply patches fairly easily and run them.
Thanks for any timely help,
Todd Carmichael
