[ https://issues.apache.org/jira/browse/DBCP-345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12919500#action_12919500 ]

Phil Steitz commented on DBCP-345:
----------------------------------

Thanks for including your pool config, which provides some information that may 
be helpful - you have initialSize set to 1.  That means when the datasource is 
created, one connection will be created and placed in the pool. So before the 
first getConnection request, there will be one idle connection in the pool. 
This should not affect numActive, but it will make numIdle = 1.

To determine if this is actually a DBCP (or pool) bug, we need a unit test that 
borrows connections directly from DBCP and observes numActive.  I don't think 
this is a DBCP bug, since I cannot reproduce the behavior and the following 
unit test succeeds when added to TestBasicDataSource (whose setup creates the 
datasource):

{code}
    public void testInitialSize2() throws Exception {
        // Default configuration: initialSize is 0, so no connections
        // exist before the first getConnection() call.
        assertEquals(0, ds.getInitialSize());
        assertEquals(0, ds.getNumIdle());
        assertEquals(0, ds.getNumActive());
    }
{code}

There are other unit tests that further verify correct behavior of the pool 
counters.  Apologies if I have misread the attached code, but I can't see 
where it instantiates a datasource and checks numActive directly before 
borrowing a connection.  That is what the test code above does, and the result 
is as expected.  Your code appears to add numActive and numIdle, and that sum 
*will* equal 1 when the pool is initialized with initialSize = 1.  If you can 
produce a unit test or other code that demonstrates that numActive is not 
correctly reporting the number of connections checked out to clients, then we 
can investigate; otherwise we need to close this issue as invalid.
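To make the counter semantics concrete, here is a minimal stand-alone sketch 
(plain Java only - SimplePool and its methods are a hypothetical illustration, 
not DBCP or Commons Pool API) of how initialSize interacts with the two 
counters: pre-created connections raise numIdle, while numActive only counts 
connections actually checked out by clients.

{code}
import java.util.ArrayDeque;
import java.util.Deque;

public class PoolCounterDemo {
    /** Hypothetical pool modeling the counter semantics discussed above. */
    static class SimplePool {
        private final Deque<Object> idle = new ArrayDeque<Object>();
        private int active = 0;

        SimplePool(int initialSize) {
            // initialSize pre-fills the idle queue; nothing is "active" yet.
            for (int i = 0; i < initialSize; i++) {
                idle.push(new Object());
            }
        }

        Object borrow() {
            // Reuse an idle connection if one exists, else create one.
            Object conn = idle.isEmpty() ? new Object() : idle.pop();
            active++;
            return conn;
        }

        void giveBack(Object conn) {
            active--;
            idle.push(conn);
        }

        int getNumActive() { return active; }
        int getNumIdle()   { return idle.size(); }
    }

    public static void main(String[] args) {
        SimplePool pool = new SimplePool(1); // initialSize = 1
        // Before the first borrow: one idle connection, zero active.
        System.out.println(pool.getNumActive()); // 0
        System.out.println(pool.getNumIdle());   // 1
        Object conn = pool.borrow();
        System.out.println(pool.getNumActive()); // 1
        pool.giveBack(conn);
        System.out.println(pool.getNumActive()); // 0
    }
}
{code}

Note that numActive + numIdle equals 1 before the first borrow, which is 
exactly the "1 of 2" reading a sum of the two counters would produce.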



> NumActive is off-by-one at instantiation and causes premature exhaustion
> ------------------------------------------------------------------------
>
>                 Key: DBCP-345
>                 URL: https://issues.apache.org/jira/browse/DBCP-345
>             Project: Commons Dbcp
>          Issue Type: Bug
>    Affects Versions: 1.4
>            Reporter: Kevin Ross
>         Attachments: DebugBasicDataSource.java, DebugConnectionPool.java
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Scenario: we have some code that we had thought was potentially leaking 
> connections.  In our unitTest/integrationTest environment, we know we can 
> *lock down connections to a total of 2* and a full run should pass.  We had 
> no such luck with a {{maxActive}} of 2.    
> We created/attached a {{DebugBasicDataSource}} which initializes a  
> {{DebugConnectionPool}} for logging purposes and delegates into the DBCP 
> hierarchy.  BTW - consistent use of accessors would have made this a cleaner 
> affair ;) 
> {code}
>         // num active starts at one! Here is the original unmodified log message:
>         //   BORROWING:  from abandonedobjectp...@10f0f6ac (1 of 2) 0 idle: threadStats[ ]: all-time uniques{ (empty) }
>         // SEE! no borrows ever, and the first pre-borrow already has a count of 1!
> {code}
> Before borrowing the first connection, {{numActive}} is 1!  
> The gorier details are below; I hope they help someone else!
> Constraining the pool was the best way to uncover the leakage.  
> Thinking it was our error, we went after our code to find the problem.  We 
> had such a hard time understanding who was using connections, in which Spring 
> context.  The confusion stemmed from the fact that our unitTests run against 
> REST resources deployed as Jersey components in a Grizzly container.  Were 
> they using the same connection pool or not?  Was the unitTest setup side 
> exhausting more connections, or was it leaking on the REST service side?
> Answers: 
> 1.  Our unitTests executing Jersey with in-VM Grizzly container do indeed 
> utilize the same pool (and same Spring context).
> 2.  Our unitTest (side) was not using more than one connection for data 
> setup, and it returned the connection for reuse.
> 3.  Our REST service side was only using one connection, but was a Grizzly 
> threaded container and we have ActiveMQ running as well.  Practically, one 
> server connection could handle everything, but the REST service and ActiveMQ 
> listener could potentially claim 2.
> Note, the attached DebugBasicDataSource was quite useful to determine which 
> threads were claiming which connections in a leak situation.  Certainly do 
> not configure it on the production side, but it might be nice to see 
> something like this offered up on the DBCP site somewhere to help developers 
> find or confirm their misconfiguration or bad code.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
