[ https://issues.apache.org/jira/browse/DBCP-345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886867#comment-13886867 ]

Mark Thomas commented on DBCP-345:
----------------------------------

One other thing I thought of to help diagnose this is a debugger. If you put a 
break point on every place where numActive is incremented and then run your 
test, you'll find out what is triggering the increment. If attaching a debugger 
isn't an option, you can modify the code to create a new exception every time 
numActive is incremented and then print that exception's stack trace to the 
console. It isn't the prettiest debugging technique, but it will tell you what 
is triggering the behaviour you see.
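
A rough sketch of that second approach, done as a pool subclass rather than a 
source modification (the class name here is made up, and you would need to wire 
it in wherever your DataSource builds its pool, e.g. by overriding 
BasicDataSource#createConnectionPool):

{code}
import org.apache.commons.pool.PoolableObjectFactory;
import org.apache.commons.pool.impl.GenericObjectPool;

// Sketch only: print the call site every time an object is borrowed,
// i.e. every time the pool's numActive goes up.
public class TracingObjectPool extends GenericObjectPool {

    public TracingObjectPool(PoolableObjectFactory factory) {
        super(factory);
    }

    public Object borrowObject() throws Exception {
        Object obj = super.borrowObject(); // numActive is incremented in here
        // Throwaway exception, created only to capture and print the current call stack.
        new Exception("numActive is now " + getNumActive()).printStackTrace();
        return obj;
    }
}
{code}

The same two lines can just as easily be dropped straight into the pool's own 
borrowObject() if modifying the source is simpler for you.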

> NumActive is off-by-one at instantiation and causes premature exhaustion
> ------------------------------------------------------------------------
>
>                 Key: DBCP-345
>                 URL: https://issues.apache.org/jira/browse/DBCP-345
>             Project: Commons Dbcp
>          Issue Type: Bug
>    Affects Versions: 1.4
>            Reporter: Kevin Ross
>         Attachments: AssertNumActiveDataSource.java, 
> DebugBasicDataSource.java, DebugConnectionPool.java, dbcp-345.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Scenario: we have some code that we had thought was potentially leaking 
> connections.  In our unitTest/integrationTest environment, we know we can 
> *lock down connections to a total of 2* and a full run should pass.  We had 
> no such luck with a {{maxActive}} of 2.    
> We created/attached a {{DebugBasicDataSource}} which initializes a  
> {{DebugConnectionPool}} for logging purposes and delegates into the DBCP 
> hierarchy.  BTW - consistent use of accessors would have made this a cleaner 
> affair ;) 
> {code}
> // num active starts at one! Here is the original unmodified log message:
> //   BORROWING:  from AbandonedObjectPool@10f0f6ac (1 of 2) 0 idle: threadStats[ ]: all-time uniques{ (empty)  }
> // SEE! no borrows ever, and the first pre-borrow already has a count of 1!
> {code}
> Before borrowing the first connection - {{numActive}} is 1!  
> The gorier details are below; I hope they help someone else!
> Constraining the pool was the best way to uncover the leakage.  
> Thinking it was our error, we went after our code to find the problem.  We 
> had such a hard time understanding who was using connections, in which Spring 
> context.  The confusion stemmed from the fact that our unitTests run against 
> REST resources deployed as Jersey components in a Grizzly container.  Were 
> they using the same connection pool or not?  Was the unitTest setup side 
> exhausting more connections, or was it leaking on the REST service side?
> Answers: 
> 1.  Our unitTests executing Jersey with in-VM Grizzly container do indeed 
> utilize the same pool (and same Spring context).
> 2.  Our unitTest (side) was not using more than one connection for data 
> setup, and it returned the connection for reuse.
> 3.  Our REST service side was only using one connection, but it runs in a 
> threaded Grizzly container and we have ActiveMQ running as well.  Practically, 
> one server connection could handle everything, but the REST service and the 
> ActiveMQ listener could potentially claim 2.
> Note, the attached DebugBasicDataSource was quite useful to determine which 
> threads were claiming which connections in a leak situation.  Certainly do 
> not configure it on the production side, but it might be nice to see 
> something like this offered up on the DBCP site somewhere to help developers 
> find or confirm their misconfiguration or bad code.
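
For anyone trying to reproduce this without the attachments, a minimal sketch 
along these lines prints what the pool reports around the first borrow (the H2 
driver and in-memory URL are placeholders, not from the report, and the attached 
AssertNumActiveDataSource is not reproduced here):

{code}
import java.sql.Connection;

import org.apache.commons.dbcp.BasicDataSource;

// Minimal sketch: configure a plain BasicDataSource with maxActive=2 and
// print numActive before and after the first borrow.
public class NumActiveCheck {
    public static void main(String[] args) throws Exception {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.h2.Driver");   // placeholder driver
        ds.setUrl("jdbc:h2:mem:dbcp345");         // placeholder in-memory URL
        ds.setMaxActive(2);

        // Expected: 0 active before any borrow; the report sees 1 with its debug wrappers.
        System.out.println("before first borrow: numActive=" + ds.getNumActive());

        Connection c = ds.getConnection();
        System.out.println("after first borrow:  numActive=" + ds.getNumActive());

        c.close();
        System.out.println("after close:         numActive=" + ds.getNumActive());
    }
}
{code}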


