NumActive is off-by-one at instantiation and causes premature exhaustion
-----------------------------------------------------------------------
Key: DBCP-345
URL: https://issues.apache.org/jira/browse/DBCP-345
Project: Commons Dbcp
Issue Type: Bug
Affects Versions: 1.4
Reporter: Kevin Ross
Attachments: DebugBasicDataSource.java, DebugConnectionPool.java
Scenario: we have some code that we had thought was potentially leaking
connections. In our unitTest/integrationTest environment, we know we can *lock
down connections to a total of 2* and a full run should pass. We had no such
luck with a {{maxActive}} of 2.
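To make the scenario concrete, the lock-down looks roughly like this (a sketch only: the driver, URL and credentials are placeholders, not our actual configuration, and the abandoned-connection settings are an assumption based on the {{AbandonedObjectPool}} showing up in the log excerpt further down):
{code}
import org.apache.commons.dbcp.BasicDataSource;

/** Sketch of the locked-down test pool; driver, URL and credentials are placeholders. */
public class TestPoolSketch {
    public static BasicDataSource build() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.h2.Driver");   // placeholder driver
        ds.setUrl("jdbc:h2:mem:test");            // placeholder URL
        ds.setUsername("sa");
        ds.setPassword("");
        ds.setMaxActive(2);              // the whole run has to fit into 2 connections
        ds.setMaxWait(5000);             // fail fast instead of hanging when exhausted
        ds.setRemoveAbandoned(true);     // assumption: abandoned tracking, per the AbandonedObjectPool in the log
        ds.setRemoveAbandonedTimeout(60);
        ds.setLogAbandoned(true);
        return ds;
    }
}
{code}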
We created/attached a {{DebugBasicDataSource}} which initializes a
{{DebugConnectionPool}} for logging purposes and delegates into the DBCP
hierarchy. BTW - consistent use of accessors would have made this a cleaner
affair ;)
{code}
// num active starts at one! Here is the original unmodified log message:
// BORROWING: from abandonedobjectp...@10f0f6ac (1 of 2) 0 idle: threadStats[ ]: all-time uniques{ (empty) }
// SEE! no borrows ever, and the first pre-borrow already has a count of 1!
{code}
Before borrowing the first connection - {{numActive}} is 1!
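The sanity check we expected to hold is roughly the following (a sketch assuming the hypothetical {{TestPoolSketch}} helper above, not our actual test code; the off-by-one we actually saw was reported by the pool's own borrow logging, as in the excerpt):
{code}
import java.sql.Connection;
import org.apache.commons.dbcp.BasicDataSource;

// Sketch only: print the counters around the very first borrow.
// With maxActive=2 we expected 0 active before the first getConnection(),
// yet the pool's borrow log already reported a count of 1.
public class FirstBorrowCheck {
    public static void main(String[] args) throws Exception {
        BasicDataSource ds = TestPoolSketch.build();

        System.out.println("before first borrow: numActive=" + ds.getNumActive());
        Connection c = ds.getConnection();
        System.out.println("after first borrow:  numActive=" + ds.getNumActive());
        c.close();   // return the connection to the pool
        System.out.println("after close:         numActive=" + ds.getNumActive());
    }
}
{code}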
The gorier details are below; I hope they help someone else!
Constraining the pool was the best way to uncover the leakage.
Thinking it was our error, we went after our code to find the problem. We had
such a hard time understanding who was using connections, in which Spring
context. The confusion stemmed from the fact that our unitTests run against
REST resources deployed as Jersey components in a Grizzly container. Were
they using the same connection pool or not? Was the unitTest setup side
exhausting more connections, or was it leaking on the REST service side?
Answers:
1. Our unitTests executing Jersey with an in-VM Grizzly container do indeed
utilize the same pool (and the same Spring context).
2. Our unitTest (side) was not using more than one connection for data setup,
and it returned the connection for reuse.
3. Our REST service side was only using one connection, but it runs in a
threaded Grizzly container and we have ActiveMQ running as well. Practically,
one server connection could handle everything, but the REST service and the
ActiveMQ listener could potentially claim 2.
Note: the attached DebugBasicDataSource was quite useful for determining which
threads were claiming which connections in a leak situation. Certainly do not
configure it on the production side, but it might be nice to see something like
it offered up on the DBCP site somewhere to help developers find or confirm
their misconfiguration or bad code.
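For anyone who wants the flavor without downloading the attachment, a stripped-down sketch of the idea might look like the following (this is not the attached class, just an illustration; the real {{DebugBasicDataSource}} also keeps per-thread statistics):
{code}
import java.sql.Connection;
import java.sql.SQLException;
import org.apache.commons.dbcp.BasicDataSource;

/**
 * Stripped-down sketch of the idea behind the attached DebugBasicDataSource:
 * log which thread is borrowing and what the pool counters say at that moment.
 * Debugging aid only - never configure something like this in production.
 */
public class LoggingBasicDataSource extends BasicDataSource {

    @Override
    public Connection getConnection() throws SQLException {
        System.out.println("BORROWING: thread=" + Thread.currentThread().getName()
                + " active=" + getNumActive() + "/" + getMaxActive()
                + " idle=" + getNumIdle());
        return super.getConnection();
    }
}
{code}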