[nightly build] dbcp failed.
Failed build logs: http://vmbuild.apache.org/~commons/nightly/logs//20070724/dbcp.log - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [DBCP] Remove SQLNestedException
On 7/23/07, Dain Sundstrom [EMAIL PROTECTED] wrote: DBCP-143 talks about a problem with propagation of SQLNestedException to clients, and the comment suggests a conversion to normal Java nested exceptions when we switch to Java 1.4. Since we made the leap, I did a bit of refactoring to remove this exception class. Basically I replace: new SQLNestedException(msg, e); with: (SQLException) new SQLException(msg).initCause(e); I attached this as a patch to 143 as I'm not 100% sure we want to go this direction. So, should we drop SQLNestedException? This is tempting, but it breaks backward compatibility, so we should probably deprecate in 1.3 and remove in the next major release. I guess the deprecation warning / release notes should just tell people to remove legacy casts in client code, since we never advertise this exception. Phil
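The replacement pattern from the DBCP-143 patch can be sketched in isolation. Since Java 1.4, Throwable.initCause gives plain SQLException the same nesting ability as the custom SQLNestedException; the cast is needed because initCause returns Throwable. The class and method names below are illustrative, not DBCP code:

```java
import java.sql.SQLException;

public class NestedSqlExample {
    // Wrap a cause in a plain SQLException, as suggested in DBCP-143,
    // instead of using the custom SQLNestedException class. initCause
    // returns Throwable, hence the cast back to SQLException.
    static SQLException wrap(String msg, Throwable cause) {
        return (SQLException) new SQLException(msg).initCause(cause);
    }

    public static void main(String[] args) {
        Throwable root = new IllegalStateException("underlying driver failure");
        SQLException ex = wrap("Cannot load JDBC driver", root);
        System.out.println(ex.getMessage());       // prints "Cannot load JDBC driver"
        System.out.println(ex.getCause() == root); // prints "true"
    }
}
```

Client code that previously caught SQLNestedException keeps working if it catches SQLException instead; only explicit casts to the old class would break.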
Re: Moderator volunteer?
On 7/24/07, Henri Yandell [EMAIL PROTECTED] wrote: Thanks Phil :) NP. Impressive response time by Brett. Almost as impressive as the spammers. Now if I can just determine which commons component will help this poor soul get the money he is owed ;-) Phil
Re: [nightly build] dbcp failed.
Should be fixed now. The nightlies run the m1 build and that needs the transitive dependencies to be specified. We could change the nightly to m2, but I think it is a good idea to keep the m1 build working for now, and the nightly can alert us if we break it. Phil
Re: [DBCP] Remove SQLNestedException
On 7/24/07, Rahul Akolkar [EMAIL PROTECTED] wrote: On 7/24/07, Dain Sundstrom [EMAIL PROTECTED] wrote: On Jul 24, 2007, at 7:56 AM, Phil Steitz wrote: On 7/23/07, Dain Sundstrom [EMAIL PROTECTED] wrote: <snip/> So, should we drop SQLNestedException? This is tempting, but it breaks backward compatibility, so we should probably deprecate in 1.3 and remove in the next major release. I guess the deprecation warning / release notes should just tell people to remove legacy casts in client code, since we never advertise this exception. Sounds good. I marked the class as deprecated, moved DBCP-143 to 1.4, and added a note to the change log. <snap/> He means v2.0 AFAICT. Details [1]. -Rahul [1] http://jakarta.apache.org/commons/releases/versioning.html Yes, that's what I meant, following our rules. In this case, that is a little extreme, however, since the only breakage that I can think of is old 1.3 code that includes explicit casts in catch blocks, or direct usage or extension of the since-1.4-obsolete exception class itself. Is this really worth waiting for 2.0? Am I missing something here? Phil
Re: [DBCP] DBCP-44 Deadlock
On 7/23/07, Dain Sundstrom [EMAIL PROTECTED] wrote: On Jul 20, 2007, at 5:26 PM, Phil Steitz wrote: On 7/20/07, Dain Sundstrom [EMAIL PROTECTED] wrote: On Jul 20, 2007, at 11:26 AM, Dain Sundstrom wrote: On Jul 19, 2007, at 11:19 PM, Phil Steitz wrote: I would love to have a fix for DBCP-44; but that could wait on pool 1.4 if necessary (and I personally see no way to fix it just within dbcp. It would be great if I was wrong on that). I think the makeObject method is over synchronized. Actually, the class doesn't look like it's synchronized properly at all. I'll take a shot at fixing this. I attached a patch that fixes the synchronization in PoolableConnectionFactory, but the deadlock still persists. The problem is GenericObjectPool.borrowObject() is synchronized, so when it needs to makeObject that method is called while the synchronized block is held. I think this would take major surgery to make GenericObjectPool not perform this way. That's what I feared. Thanks for looking in any case. Should I commit the patch that removes the excessive synchronization from PoolableConnectionFactory? It won't fix this problem but may alleviate some other ones. +1 to committing the patch. Phil
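The structure Dain describes can be sketched without the real pool code. This is a deliberately minimal illustration, not GenericObjectPool itself: because borrowObject() is synchronized, the slow factory call runs while the pool's monitor is held, and every other pool operation (including returnObject() from another thread) stalls until creation finishes:

```java
import java.util.LinkedList;

// Minimal sketch (not the real GenericObjectPool code) of the liveness
// problem in DBCP-44: object creation happens inside the pool's lock.
public class LockHeldPool {
    private final LinkedList idle = new LinkedList();

    public synchronized Object borrowObject() throws InterruptedException {
        if (!idle.isEmpty()) {
            return idle.removeFirst();
        }
        return makeObject(); // slow creation happens with the monitor held
    }

    public synchronized void returnObject(Object obj) {
        idle.addFirst(obj); // stalls while another thread sits in makeObject()
    }

    protected Object makeObject() throws InterruptedException {
        Thread.sleep(50); // stands in for opening a physical JDBC connection
        return new Object();
    }
}
```

If makeObject() ever needs to call back into the pool (directly or through a shared lock), the blocked returnObject() turns from a stall into a true deadlock, which is why the fix needs surgery in pool rather than in the factory.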
Re: Moderator volunteer?
I will do this. Phil On 7/23/07, Henri Yandell [EMAIL PROTECTED] wrote: Is there anyone who could volunteer to moderate the list? I'm looking to share the load and get off of commons-dev moderating :) Hen
[jira] Commented: (POOL-97) EVICTION_TIMER is never cancelled.
[ https://issues.apache.org/jira/browse/POOL-97?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12514497 ] Phil Steitz commented on POOL-97: - A more conservative solution would be to revert to the pool 1.2 setup where the Evictor is a thread. This would also break backward compatibility at the package private level, but would revert to a well-tested solution with essentially the same scaling / performance challenges as the per instance Timer in the patch. See POOL-56 or 1.2 sources for changes to revert. EVICTION_TIMER is never cancelled. -- Key: POOL-97 URL: https://issues.apache.org/jira/browse/POOL-97 Project: Commons Pool Issue Type: Bug Affects Versions: 1.3 Reporter: Devendra Patil Fix For: 2.0 Attachments: timer.patch The static EVICTION_TIMER (java.util.Timer) used in GenericObjectPool is never cancelled (even after closing the pool). The GenericObjectPool.close() method just cancels the _evictor (TimerTask). I agree this behaviour is ideal if EVICTION_TIMER is to be used across multiple pools. But in my case, the resources (i.e. jars) are dynamically deployed and undeployed on remote grid-servers. If the EVICTION_TIMER thread doesn't stop, the grid-servers fail to undeploy (i.e. delete) the jars. The grid-server doesn't restart during resource deployment/undeployment, so setting EVICTION_TIMER to daemon doesn't help me. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
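The per-instance alternative under discussion can be sketched as follows. The field and class names are illustrative, not the actual GenericObjectPool members: each pool owns its own Timer, so close() can cancel it and the timer thread dies with the pool, instead of a shared static EVICTION_TIMER whose thread outlives every pool:

```java
import java.util.Timer;
import java.util.TimerTask;

// Sketch of a per-pool eviction timer that close() can actually stop.
// Names are illustrative, not the real GenericObjectPool fields.
public class EvictingPool {
    private final Timer evictionTimer = new Timer(true); // daemon, one per pool
    private int evictionRuns = 0;

    public EvictingPool(long evictIntervalMillis) {
        evictionTimer.schedule(new TimerTask() {
            public void run() {
                evict();
            }
        }, evictIntervalMillis, evictIntervalMillis);
    }

    synchronized void evict() {
        evictionRuns++; // would test and close idle objects here
    }

    public void close() {
        evictionTimer.cancel(); // stops this pool's timer thread for good
    }
}
```

The trade-off the comment mentions is real: one Timer per pool means one thread per pool, which matters for dbcp statement pooling where many pools can exist at once.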
[jira] Resolved: (DBCP-11) [dbcp] stmt.getConnection() != Connection used to create the statement
[ https://issues.apache.org/jira/browse/DBCP-11?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz resolved DBCP-11. - Resolution: Fixed Patch applied. Thanks. [dbcp] stmt.getConnection() != Connection used to create the statement -- Key: DBCP-11 URL: https://issues.apache.org/jira/browse/DBCP-11 Project: Commons Dbcp Issue Type: Bug Affects Versions: 1.2 Environment: Operating System: other Platform: All Reporter: Alexander Rupsch Fix For: 1.3 Attachments: back-pointers.patch Hi, I'm not an expert in implementing connection pools or jdbc itself. But shouldn't the following code work?

Connection con = pool.getConnection();
PreparedStatement ps = con.prepareStatement();
con.equals(ps.getConnection()); // returns false!

Ok, I don't need it to be equal, but the following also does not work:

ps.getConnection().close();
con.isClosed(); // is false!!!

That means, if I have a Statement and want to close its connection, I have to remember the connection. Is that the requested behavior? Because of this my pool is running over. The java.sql API says that Statement.getConnection() has to be the connection which created the statement.
Re: svn commit: r557176 - in /jakarta/commons/proper/dbcp/trunk: src/java/org/apache/commons/dbcp/ src/java/org/apache/commons/dbcp/cpdsadapter/ src/test/org/apache/commons/dbcp/ src/test/org/apache/c
On 7/19/07, Phil Steitz [EMAIL PROTECTED] wrote: On 7/19/07, Dain Sundstrom [EMAIL PROTECTED] wrote: I think passivate() is called automatically when the connection is put back in the pool (due to the _conn.close() call). I think there are tests that check that the statements were closed when the connection is closed. OK, I will look at the tests and verify. The removed passivate is on the DelegatingConnection itself. The statement constructors add the created DelegatingStatements to the AbandonedTrace of the DelegatingConnection, and its passivate walks the statements and closes them. _con.close() is on the delegate. You are probably right that the only resources that really matter get cleaned up in any case, and if the tests show that, then this is no problem. This is OK. PoolableConnectionFactory.passivateObject invokes passivate on the DelegatingConnection. Phil Anyway, I don't think it is a big deal to call passivate twice. It used to cause a SQLException because the delegating statements would throw an exception on the second close. -dain On Jul 19, 2007, at 10:33 PM, Phil Steitz wrote: Sorry I missed this in initial review. I am not sure we want to remove the passivate() below, since that closes statements traced by this connection. Am I missing something here? Phil

jakarta/commons/proper/dbcp/trunk/src/java/org/apache/commons/dbcp/DelegatingConnection.java Tue Jul 17 23:46:16 2007
@@ -208,10 +208,17 @@
      * Closes the underlying connection, and close
      * any Statements that were not explicitly closed.
      */
-    public void close() throws SQLException
-    {
-        passivate();
-        _conn.close();
+    public void close() throws SQLException {
+        // close can be called multiple times, but PoolableConnection improperly
+        // throws an exception when a connection is closed twice, so before calling
+        // close we check that we aren't already closed
+        if (!isClosed()) {
+            try {
+                _conn.close();
+            } finally {
+                _closed = true;
+            }
+        }
     }
Re: [DBCP] close issues
On 7/21/07, Dain Sundstrom [EMAIL PROTECTED] wrote: On Jul 20, 2007, at 10:15 PM, Phil Steitz wrote: On 7/20/07, Dain Sundstrom [EMAIL PROTECTED] wrote: On Jul 20, 2007, at 11:26 AM, Dain Sundstrom wrote: I think this will require a patch to pooling (documented in DBCP-221). What are the plans for pooling? This is a tiny change so we could do a pool 1.3.1 or 1.4 release. Alternatively, we could wait until DBCP 1.4 (and the next pool release) to address this issue. I am fine waiting for DBCP 1.4, since unless we are talking about different things, this really amounts to a significant change to both dbcp and pool. If what we want is to *always* track open connections and have the lingering close apply to the active (i.e. checked out) as well as idle connections, we need to follow through on what looks like it was the original plan of moving AbandonedObjectPool to pool and use this _all the time_ in place of GenericObjectPool, which is really just an idle object pool (it maintains no references to borrowed objects). I think there are two features here also. The first is a lingering close where we close the data source along with all idle connections. Then as the checked out connections are returned to the pool, we destroy them instead of putting them in a closed pool. The second feature is a force close which, as you pointed out, requires tracking of active connections. After looking at the pooling code, I think that will take a lot of work to implement with the current code. Agreed. Let's focus on getting dbcp 1.3 out with the current (incomplete) lifecycle semantics supported by pool 1.3 and postpone major surgery for now. We should open a pool JIRA at some point, though, summarizing the need for full lifecycle support. In any case, we need to get a pool release out ASAP since pool 1.3 introduced some bugs that are causing problems (see for example POOL-97) since dbcp started using this version. Synchronization was increased in pool 1.3 as well.
The hang here is lack of volunteer time and difficulty getting into the codebase. I have only recently started working on the pool code base. The compositepool package includes an alternative impl that we have been thinking about as a pool 2.0. The plan that I proposed a while back (http://www.mail-archive.com/commons-dev@jakarta.apache.org/msg94027.html) was to push out a pool 1.3.1 patch release fixing POOL-97 (when reviewing the patch there, remember that dbcp statement pooling can create quite a few pools) and other bugs fixed since 1.3, and have DBCP 1.3 depend on that, both fully backward compatible with current versions. I still think we should do that. I can handle the RM duty for both of these and close a couple more of the pool bugs, but what we need to speed things up is more eyeballs validating and testing and contributing - and applying - patches. I'll try to review the patch. If we do a 1.3.1, I think we should change GOP and GKOP to destroy objects returned to the pool after the pool is closed. Otherwise you end up with stuck objects in a closed pool. It's not quite that bad now, but the returning orphans do not get closed on return. What happens now is that the GOP throws IllegalStateException when you try to return an object (or perform any other operation) on a closed pool. We could include a patch in pool 1.3.1 to passivate and destroy a returning orphan before throwing the IllegalStateException, taking a baby step toward better lifecycle management. Since the pool does not hold references to these orphans once it's closed, I am not sure how big a problem this is in general; though certainly for dbcp, the underlying physical connections do not get closed right away in this case. Phil
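The "baby step" Phil describes can be sketched in miniature. This is illustrative, not GenericObjectPool code: when an object comes back to an already-closed pool, destroy it before throwing IllegalStateException, so orphaned physical connections get closed instead of leaking:

```java
import java.util.LinkedList;

// Sketch (not the real GOP/GKOP code) of destroying returning orphans
// on a closed pool before signalling the caller as before.
public class ClosablePool {
    private final LinkedList idle = new LinkedList();
    private boolean closed = false;
    private int destroyed = 0;

    public synchronized void close() {
        closed = true;
        while (!idle.isEmpty()) {
            destroyObject(idle.removeFirst()); // clear out the idle objects
        }
    }

    public synchronized void returnObject(Object obj) {
        if (closed) {
            destroyObject(obj); // passivate/destroy the orphan first...
            throw new IllegalStateException("Pool not open"); // ...then signal as before
        }
        idle.addLast(obj);
    }

    protected void destroyObject(Object obj) {
        destroyed++; // e.g. close the underlying physical connection
    }

    public synchronized int getDestroyedCount() {
        return destroyed;
    }
}
```

Because the exception is still thrown, existing callers see exactly the old behavior; the only change is that the orphan no longer escapes cleanup.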
[jira] Resolved: (POOL-94) GenericObjectPool allows checking in of previously checked in objects
[ https://issues.apache.org/jira/browse/POOL-94?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz resolved POOL-94. - Resolution: Won't Fix Javadoc has been updated to include a warning. GenericObjectPool allows checking in of previously checked in objects - Key: POOL-94 URL: https://issues.apache.org/jira/browse/POOL-94 Project: Commons Pool Issue Type: New Feature Affects Versions: 1.3 Environment: JDK 1.4.2, web application running under Tomcat 5.0.25 Reporter: Tim McCollough Priority: Minor I am using GenericObjectPool to store a pool of socket connections. While debugging the application I noticed that the result of getNumActive() was becoming more and more negative, while the getNumIdle() count was ever increasing. Further debug showed that my application was returning the same connection more than once, and the GenericObjectPool implementation accepted the return silently, decremented the active count, and incremented the idle count. I don't object to GenericObjectPool allowing multiple returns on the same object, but the bookkeeping problem will lead to bad things happening in the pool management code. I am investigating what it would take to fix GenericObjectPool, but since I am inexperienced in these commons projects I don't know what I should do from here.
[jira] Resolved: (DBCP-232) maxWait = 0 waits indefinitely too, not only maxWait = -1
[ https://issues.apache.org/jira/browse/DBCP-232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz resolved DBCP-232. -- Resolution: Fixed Javadoc fix committed in r558394. Thanks for reporting this. maxWait = 0 waits indefinitely too, not only maxWait = -1 -- Key: DBCP-232 URL: https://issues.apache.org/jira/browse/DBCP-232 Project: Commons Dbcp Issue Type: Improvement Environment: all Reporter: Peter Welkenbach Priority: Critical the documentation describes the maxWait property as: The maximum number of milliseconds that the pool will wait (when there are no available connections) for a connection to be returned before throwing an exception, or -1 to wait indefinitely. this seems to be wrong. Compared to the source code it should be: for a connection to be returned before throwing an exception, or -1 or 0 to wait indefinitely. in the source code of class GenericObjectPool the comparison is maxWait <= 0
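The corrected semantics boil down to a single comparison, which can be stated as a tiny sketch (illustrative names, not the actual GenericObjectPool code):

```java
// Sketch of the behavior DBCP-232 documents: in GenericObjectPool, any
// maxWait <= 0 (not just -1) means "wait indefinitely"; only a positive
// value bounds the wait in milliseconds.
public class MaxWaitSemantics {
    static boolean waitsIndefinitely(long maxWait) {
        return maxWait <= 0;
    }

    public static void main(String[] args) {
        System.out.println(waitsIndefinitely(-1L));   // prints "true"
        System.out.println(waitsIndefinitely(0L));    // prints "true"
        System.out.println(waitsIndefinitely(5000L)); // prints "false"
    }
}
```

So a configuration that sets maxWait to 0 hoping for a fail-fast borrow actually gets an unbounded wait; the fix here was to the javadoc, not the comparison.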
[jira] Commented: (MATH-167) ConvergenceException in normal CDF
[ https://issues.apache.org/jira/browse/MATH-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12514453 ] Phil Steitz commented on MATH-167: -- Fixed for normal distribution in r558450. Leaving open because we should look at other distributions before closing. ConvergenceException in normal CDF -- Key: MATH-167 URL: https://issues.apache.org/jira/browse/MATH-167 Project: Commons Math Issue Type: Bug Reporter: Mikko Kauppila Priority: Minor Fix For: 1.2 NormalDistributionImpl::cumulativeProbability(double x) throws ConvergenceException if x deviates too much from the mean. For example, when x=+/-100, mean=0, sd=1. Of course the value of the CDF is hard to evaluate in these cases, but effectively it should be either zero or one.
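The kind of tail guard the report calls for can be sketched as follows. The 40-sigma cutoff and the crude logistic stand-in for the real CDF evaluation are illustrative assumptions, not the actual commons-math fix:

```java
// Sketch of a tail guard for MATH-167: when x is far enough from the mean,
// skip the series evaluation (which may fail to converge) and return 0 or 1
// directly. Cutoff and approximation are illustrative only.
public class NormalCdfGuard {
    static double cumulativeProbability(double x, double mean, double sd) {
        double z = (x - mean) / sd;
        if (z < -40.0) {
            return 0.0; // effectively zero this far into the lower tail
        }
        if (z > 40.0) {
            return 1.0; // effectively one this far into the upper tail
        }
        return approximateCdf(z); // stand-in for the real evaluation
    }

    private static double approximateCdf(double z) {
        // logistic approximation to the standard normal CDF, crude but runnable
        return 1.0 / (1.0 + Math.exp(-1.702 * z));
    }
}
```

With a guard like this, the x=+/-100, mean=0, sd=1 case from the report returns 0 or 1 instead of throwing ConvergenceException.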
[jira] Updated: (MATH-167) ConvergenceException in normal CDF
[ https://issues.apache.org/jira/browse/MATH-167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz updated MATH-167: - Fix Version/s: 1.2 ConvergenceException in normal CDF -- Key: MATH-167 URL: https://issues.apache.org/jira/browse/MATH-167 Project: Commons Math Issue Type: Bug Reporter: Mikko Kauppila Priority: Minor Fix For: 1.2 NormalDistributionImpl::cumulativeProbability(double x) throws ConvergenceException if x deviates too much from the mean. For example, when x=+/-100, mean=0, sd=1. Of course the value of the CDF is hard to evaluate in these cases, but effectively it should be either zero or one.
Re: [DBCP] Release 1.3 soon?
On 7/19/07, Dain Sundstrom [EMAIL PROTECTED] wrote: Are there any DBCP-1.3 release plans? Based on the JIRAs I think we are close to being ready to release. Are there any items that are planned but don't have JIRAs? There are two things that I would like to at least talk about that relate to various JIRAs and comments on the list: 1) Intended current and future contract of close() on a connection pool, in particular the contract of BasicDataSource.close. The javadoc says Close and release all connections that are currently stored in the connection pool associated with our data source. Some users interpret this - incorrectly - to mean that close will close *active* as well as idle connections. Even when AbandonedConfig is used (in which case the pool holds references to connections that have been checked out), close only closes idle connections (since the pool is really an idle object pool). So the answer to the question in, e.g. DBCP-221, is sorry, no way to do that. Javadoc should be improved in any case. 2) Immutable-once-initialized status of BasicDataSource. I am inclined to close DBCP-221 as WONTFIX, but in this case we should rip out the remnants of what must have seemed like a good idea at the time to support restart. This is sort of related to 1), because if we are going to attempt to allow BasicDataSource to be mutable once it has been initialized, I don't see any way to do that consistently without closing or appropriately modifying connections that have been checked out. Since we don't do that now, we can't really support this. My vote would be to keep BasicDataSource immutable-once-initialized. Here are some open JIRAs I think we can close:

Fixed:
DBCP-194 BasicDataSource.setLogWriter should not do createDataSource
DBCP-102 setReadOnly setAutoCommit called too many times
DBCP-97 setAutoCommit(true) when returning connection to the pool
DBCP-212 PoolingDataSource closes physical connections

+1 and thanks for verifying 97.

Invalid:
DBCP-209 Is DataSource.getConnection(user, pass) working the way it is supposed to? User should be using either SharedPoolDataSource or the PerUserPoolDataSource.
DBCP-53 commons dbcp does not supports Firebird DB Torque bug or misconfiguration by user.

+1

Won't fix:
DBCP-115 allow to set >= 6 parameters to do non-global SSL Request for mysql specific feature
DBCP-152 add a socketFactory attribute to BasicDataSource (to allow SSL thread-safe) Request for mysql specific feature

+1 I would love to have a fix for DBCP-44; but that could wait on pool 1.4 if necessary (and I personally see no way to fix it just within dbcp. It would be great if I was wrong on that). We should also address DBCP-4 (by using jdk logging, since we have bumped to the 1.4 level). I think it would be good to start adding some simple instrumentation in 1.3 that we could add to in subsequent releases. Having things like physical connection opens / closes, pool high water marks, waits, etc., loggable would make debugging and performance tuning much easier. I will finish reviewing recent patches tomorrow and come up with a straw man release plan this weekend if no one beats me to it. Thanks for all of your help on this, Dain. Phil
Re: svn commit: r557176 - in /jakarta/commons/proper/dbcp/trunk: src/java/org/apache/commons/dbcp/ src/java/org/apache/commons/dbcp/cpdsadapter/ src/test/org/apache/commons/dbcp/ src/test/org/apache/c
On 7/19/07, Dain Sundstrom [EMAIL PROTECTED] wrote: I think passivate() is called automatically when the connection is put back in the pool (due to the _conn.close() call). I think there are tests that check that the statements were closed when the connection is closed. OK, I will look at the tests and verify. The removed passivate is on the DelegatingConnection itself. The statement constructors add the created DelegatingStatements to the AbandonedTrace of the DelegatingConnection, and its passivate walks the statements and closes them. _con.close() is on the delegate. You are probably right that the only resources that really matter get cleaned up in any case, and if the tests show that, then this is no problem. Anyway, I don't think it is a big deal to call passivate twice. It used to cause a SQLException because the delegating statements would throw an exception on the second close. -dain On Jul 19, 2007, at 10:33 PM, Phil Steitz wrote: Sorry I missed this in initial review. I am not sure we want to remove the passivate() below, since that closes statements traced by this connection. Am I missing something here? Phil

jakarta/commons/proper/dbcp/trunk/src/java/org/apache/commons/dbcp/DelegatingConnection.java Tue Jul 17 23:46:16 2007
@@ -208,10 +208,17 @@
      * Closes the underlying connection, and close
      * any Statements that were not explicitly closed.
      */
-    public void close() throws SQLException
-    {
-        passivate();
-        _conn.close();
+    public void close() throws SQLException {
+        // close can be called multiple times, but PoolableConnection improperly
+        // throws an exception when a connection is closed twice, so before calling
+        // close we check that we aren't already closed
+        if (!isClosed()) {
+            try {
+                _conn.close();
+            } finally {
+                _closed = true;
+            }
+        }
     }
Re: [DBCP] DBCP-44 Deadlock
On 7/20/07, Dain Sundstrom [EMAIL PROTECTED] wrote: On Jul 20, 2007, at 11:26 AM, Dain Sundstrom wrote: On Jul 19, 2007, at 11:19 PM, Phil Steitz wrote: I would love to have a fix for DBCP-44; but that could wait on pool 1.4 if necessary (and I personally see no way to fix it just within dbcp. It would be great if I was wrong on that). I think the makeObject method is over synchronized. Actually, the class doesn't look like it's synchronized properly at all. I'll take a shot at fixing this. I attached a patch that fixes the synchronization in PoolableConnectionFactory, but the deadlock still persists. The problem is GenericObjectPool.borrowObject() is synchronized, so when it needs to makeObject that method is called while the synchronized block is held. I think this would take major surgery to make GenericObjectPool not perform this way. That's what I feared. Thanks for looking in any case. I think the way to solve this is to write a new pool implementation that is much more async. This is easier with the Java5 concurrent packages, but still quite tricky. Yes, and at least for dbcp 1.3, I would prefer not to hop all the way to a 1.5-required JDK level. I'll attempt to put together one in a few days. Regardless, I don't think this is something we should target for this release. Before writing another one, have a look at the compositepool package in pool head. -dain
Re: [DBCP] close issues
On 7/20/07, Dain Sundstrom [EMAIL PROTECTED] wrote: On Jul 20, 2007, at 11:26 AM, Dain Sundstrom wrote: On Jul 19, 2007, at 11:19 PM, Phil Steitz wrote: On 7/19/07, Dain Sundstrom [EMAIL PROTECTED] wrote: Are there any DBCP-1.3 release plans? Based on the JIRAs I think we are close to being ready to release. Are there any items that are planned but don't have JIRAs? There are two things that I would like to at least talk about that relate to various JIRAs and comments on the list: 1) Intended current and future contract of close() on a connection pool, in particular the contract of BasicDataSource.close. The javadoc says Close and release all connections that are currently stored in the connection pool associated with our data source. Some users interpret this - incorrectly - to mean that close will close *active* as well as idle connections. Even when AbandonedConfig is used (in which case the pool holds references to connections that have been checked out), close only closes idle connections (since the pool is really an idle object pool). So the answer to the question in, e.g. DBCP-221, is sorry, no way to do that. Javadoc should be improved in any case. 2) Immutable-once-initialized status of BasicDataSource. I am inclined to close DBCP-221 as WONTFIX, but in this case we should rip out the remnants of what must have seemed like a good idea at the time to support restart. This is sort of related to 1), because if we are going to attempt to allow BasicDataSource to be mutable once it has been initialized, I don't see any way to do that consistently without closing or appropriately modifying connections that have been checked out. Since we don't do that now, we can't really support this. My vote would be to keep BasicDataSource immutable-once-initialized. I think these are basically the same issue. I agree with the comments in DBCP-221, which seems to want a lingering close.
This is in line with how I expect close to work (having not read any of the pooling code yet). I think the root of this problem is we don't have clear start/stop life-cycle methods. Currently, we are using the first getConnection() for start and close() for stop, which I think are good choices. Maybe we could keep those choices, and introduce an explicit start(), stop() and stop(long maxWait). This way we can support the close-lingering and close-immediately options people seem to be asking for. Once we have this functionality, it should be easy to add restart(), which would do a lingering close() of the existing inner datasource and create/start a new one. I'm not sure this is something that can be done without changes to pool, but I'll take a look at it today. I think this will require a patch to pooling (documented in DBCP-221). What are the plans for pooling? This is a tiny change so we could do a pool 1.3.1 or 1.4 release. Alternatively, we could wait until DBCP 1.4 (and the next pool release) to address this issue. I am fine waiting for DBCP 1.4, since unless we are talking about different things, this really amounts to a significant change to both dbcp and pool. If what we want is to *always* track open connections and have the lingering close apply to the active (i.e. checked out) as well as idle connections, we need to follow through on what looks like it was the original plan of moving AbandonedObjectPool to pool and use this _all the time_ in place of GenericObjectPool, which is really just an idle object pool (it maintains no references to borrowed objects). In any case, we need to get a pool release out ASAP since pool 1.3 introduced some bugs that are causing problems (see for example POOL-97) since dbcp started using this version. Synchronization was increased in pool 1.3 as well. The hang here is lack of volunteer time and difficulty getting into the codebase. I have only recently started working on the pool code base.
The compositepool package includes an alternative impl that we have been thinking about as a pool 2.0. The plan that I proposed a while back (http://www.mail-archive.com/commons-dev@jakarta.apache.org/msg94027.html) was to push out a pool 1.3.1 patch release fixing POOL-97 (when reviewing the patch there, remember that dbcp statement pooling can create quite a few pools) and other bugs fixed since 1.3, and have DBCP 1.3 depend on that, both fully backward compatible with current versions. I still think we should do that. I can handle the RM duty for both of these and close a couple more of the pool bugs, but what we need to speed things up is more eyeballs validating and testing and contributing - and applying - patches. Phil
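The explicit life-cycle Dain proposes can be written down as an interface sketch. These names are hypothetical, no such interface exists in DBCP, but they capture the distinction between an immediate stop and a lingering close:

```java
// Hypothetical life-cycle interface sketching the proposal in this thread;
// not an actual DBCP or pool API.
public interface PoolLifecycle {
    void start();            // today this is implicit in the first getConnection()

    void stop();             // immediate close: idle and active connections

    void stop(long maxWait); // lingering close: destroy connections as they are
                             // returned, forcing the rest after maxWait millis

    void restart();          // stop() the inner data source, then start() a new one
}
```

Keeping the first getConnection() as an implicit start() preserves backward compatibility, while the explicit methods give callers the close-lingering and close-immediately options the thread discusses.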
New Commons Committer: Dain Sundstrom
Please join us in welcoming Dain Sundstrom as a new Commons committer. Dain is an Apache committer active on multiple ASF projects who has been contributing patches to [dbcp] faster than we can commit them :) We are happy to have him among us as a Commons committer. Welcome, Dain!
[jira] Resolved: (DBCP-209) Is DataSource.getConnection(user, pass) working the way it is supposed to?
[ https://issues.apache.org/jira/browse/DBCP-209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz resolved DBCP-209. -- Resolution: Invalid I agree with Dain. For BasicDataSource, the username and password are pool properties. Is DataSource.getConnection(user, pass) working the way it is suppose to? - Key: DBCP-209 URL: https://issues.apache.org/jira/browse/DBCP-209 Project: Commons Dbcp Issue Type: Bug Affects Versions: 1.2.1 Reporter: Michael Remijan Fix For: 1.3 In Tomcat's server.xml, I create a DataSource resource using the FACTORY org.apache.commons.dbcp.BasicDataSourceFactory and I also provide a URL and a DRIVERCLASSNAME. However I do not provide USERNAME or PASSWORD because I want to use DataSource.getConnection(user, pass) in my application. When I call DataSource.getConnection(user, pass) I get the following exception, java.sql.SQLException: invalid arguments in call, which was unexpected. I dug into the source code for BasicDataSource and I found what I think is the source of the problem. First, the method getConnection(user, pass) call the createDataSource() method. The createDataSource() method creates a Properties object and tries to put the username and password into the properties object. However, because the server.xml file does contain a username or password, this Properties object (named connectionProperties in the code) is empty. The createDataSource() the proceeds to call the validateConnectionFactory() method. This method then tries to get a Connection object!! This attempt fails because the Properties object has no username or password in it hence the Oracle driver complains about being passed invalid arguments. My question is why is the code working this way? 
Why do the createDataSource() and validateConnectionFactory() methods assume the username and password have been set in server.xml and then attempt to return a Connection object with the username and password passed to the getConnection(user, pass) method? It would seem to me the createDataSource() and validateConnectionFactory() methods should be aware of the username and password passed to getConnection(user, pass) if that method is used. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
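The resolution above rests on a design constraint worth spelling out: with BasicDataSource, all pooled connections share one set of credentials fixed at pool creation, so per-call getConnection(user, pass) cannot be honored (PerUserPoolDataSource exists for that case, keying a pool per user). A minimal self-contained sketch of the constraint — all class names here are illustrative, not the actual DBCP code:

```java
// Hypothetical sketch: why per-call credentials clash with a single shared pool.
// PooledConn and SingleCredentialPool are illustrative, not DBCP classes.
import java.util.ArrayDeque;
import java.util.Deque;

class PooledConn {
    final String user;              // credentials are fixed when the connection is created
    PooledConn(String user) { this.user = user; }
}

class SingleCredentialPool {
    private final String poolUser;  // pool-level property, as in BasicDataSource
    private final Deque<PooledConn> idle = new ArrayDeque<PooledConn>();

    SingleCredentialPool(String poolUser) { this.poolUser = poolUser; }

    PooledConn getConnection() {
        PooledConn c = idle.poll();
        return (c != null) ? c : new PooledConn(poolUser);
    }

    // Per-call credentials cannot be honored: a recycled connection was
    // opened with the pool's credentials, not the caller's.
    PooledConn getConnection(String user, String password) {
        if (!poolUser.equals(user)) {
            throw new UnsupportedOperationException(
                "pool is bound to user '" + poolUser + "'");
        }
        return getConnection();
    }

    void returnConnection(PooledConn c) { idle.push(c); }
}
```

PerUserPoolDataSource sidesteps this by maintaining a separate pool per (user, password) pair rather than one shared pool.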
Re: svn commit: r557176 - in /jakarta/commons/proper/dbcp/trunk: src/java/org/apache/commons/dbcp/ src/java/org/apache/commons/dbcp/cpdsadapter/ src/test/org/apache/commons/dbcp/ src/test/org/apache/c
Sorry I missed this in initial review. I am not sure we want to remove the passivate() below, since that closes statements traced by this connection. Am I missing something here? Phil

jakarta/commons/proper/dbcp/trunk/src/java/org/apache/commons/dbcp/DelegatingConnection.java Tue Jul 17 23:46:16 2007
@@ -208,10 +208,17 @@
 /**
  * Closes the underlying connection, and close
  * any Statements that were not explicitly closed.
  */
-public void close() throws SQLException
-{
-    passivate();
-    _conn.close();
+public void close() throws SQLException {
+    // close can be called multiple times, but PoolableConnection improperly
+    // throws an exception when a connection is closed twice, so before calling
+    // close we check that we aren't already closed
+    if (!isClosed()) {
+        try {
+            _conn.close();
+        } finally {
+            _closed = true;
+        }
+    }
 }
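The JDBC contract behind this thread (close() on an already closed connection is a no-op, and the underlying resource is released exactly once) can be captured in a small self-contained sketch. The class below is illustrative, not the DBCP code; it mirrors the guarded-close pattern discussed above with a counter standing in for _conn.close():

```java
// Illustrative sketch of an idempotent close() per the JDBC contract:
// the first close() releases the resource, later calls are no-ops.
class GuardedConnection {
    private boolean closed = false;
    private int underlyingCloses = 0;   // stands in for _conn.close()

    boolean isClosed() { return closed; }

    void close() {
        // close() may be called multiple times; only forward the first call,
        // and mark ourselves closed even if the underlying close were to fail.
        if (!isClosed()) {
            try {
                underlyingCloses++;
            } finally {
                closed = true;
            }
        }
    }

    int underlyingCloseCount() { return underlyingCloses; }
}
```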
Re: [DBCP] Remove primitive default values?
On 7/19/07, Dain Sundstrom [EMAIL PROTECTED] wrote: The PerUserDataSource and SharedPoolDataSource use primitives for the read only, transaction isolation and auto commit default values, so there is no way to see if the value was set in the configuration. This means there is no way to allow the driver defaults to pass through like in the PoolingDataSource. In the future, should all of these default values be non-primitive so we do not set them unless explicitly set in the configuration? +1 Phil -dain
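The suggestion amounts to using wrapper types (Boolean/Integer) instead of primitives, so that null can mean "not configured, let the driver default stand". A hedged sketch of that pattern — field and class names are hypothetical, not the actual DBCP members:

```java
// Illustrative sketch: nullable wrapper fields distinguish "unset" from a
// configured value, so JDBC driver defaults can pass through untouched.
class ConnectionDefaults {
    private Boolean defaultReadOnly;              // null => not configured
    private Boolean defaultAutoCommit;            // null => not configured
    private Integer defaultTransactionIsolation;  // null => not configured

    void setDefaultReadOnly(boolean readOnly) { this.defaultReadOnly = Boolean.valueOf(readOnly); }
    void setDefaultAutoCommit(boolean autoCommit) { this.defaultAutoCommit = Boolean.valueOf(autoCommit); }

    // Apply only the explicitly configured defaults; anything left null
    // keeps whatever the driver chose.
    void applyTo(FakeConnection conn) {
        if (defaultReadOnly != null) conn.setReadOnly(defaultReadOnly.booleanValue());
        if (defaultAutoCommit != null) conn.setAutoCommit(defaultAutoCommit.booleanValue());
        if (defaultTransactionIsolation != null) conn.setTransactionIsolation(defaultTransactionIsolation.intValue());
    }
}

// Minimal stand-in for a driver connection with its own defaults.
class FakeConnection {
    boolean readOnly = false;   // driver default
    boolean autoCommit = true;  // driver default
    int isolation = 2;          // driver default (e.g. READ_COMMITTED)

    void setReadOnly(boolean v) { readOnly = v; }
    void setAutoCommit(boolean v) { autoCommit = v; }
    void setTransactionIsolation(int v) { isolation = v; }
}
```

With primitive fields there is no "unset" state, so the pool is forced to stamp some value onto every connection; the nullable form is what lets driver defaults survive.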
[jira] Resolved: (DBCP-233) Allow connection, statement, and result set to be closed multiple times
[ https://issues.apache.org/jira/browse/DBCP-233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz resolved DBCP-233. -- Resolution: Fixed Patch applied. Many thanks. Allow connection, statement, and result set to be closed multiple times --- Key: DBCP-233 URL: https://issues.apache.org/jira/browse/DBCP-233 Project: Commons Dbcp Issue Type: Improvement Reporter: Dain Sundstrom Fix For: 1.3 Attachments: CloseTwice.patch This patch allows Connection, Statement, PreparedStatement, CallableStatement and ResultSet to be closed multiple times. The first time close is called the resource is closed and any subsequent calls have no effect. This behavior is required as per the JavaDocs for these classes. The patch adds tests for closing all types multiple times and updates any tests that incorrectly assert that a resource cannot be closed more than once. This patch fixes DBCP-134 and DBCP-3
[jira] Resolved: (DBCP-134) [dbcp] DelegatingConnection.close() throws exception
[ https://issues.apache.org/jira/browse/DBCP-134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz resolved DBCP-134. -- Resolution: Fixed Fixed in r 557176. [dbcp] DelegatingConnection.close() throws exception Key: DBCP-134 URL: https://issues.apache.org/jira/browse/DBCP-134 Project: Commons Dbcp Issue Type: Bug Affects Versions: Nightly Builds Environment: Operating System: Mac OS X 10.3 Platform: Macintosh Reporter: Hilmar Lapp Priority: Critical Fix For: 1.3 Closing connections that were obtained from PoolingDataSource and wrapped with a DelegatingConnection throws a 'java.sql.SQLException: Already closed' when calling close() on them in order to return the connection to the underlying pool. The reason is code in DelegatingConnection.passivate(), the motivation for which I don't completely understand. At any rate, here is what happens. DelegatingConnection.close() calls passivate() before actually closing the delegate:

/**
 * Closes the underlying connection, and close
 * any Statements that were not explicitly closed.
 */
public void close() throws SQLException {
    passivate();
    _conn.close();
}

DelegatingConnection.passivate() in turn cleans up statements and, if the delegate is a DelegatingConnection too, calls passivate() on the delegate. Finally, the instance variable _closed is set to true:

protected void passivate() throws SQLException {
    try {
        // ... some statement clean up work, then:
        if(_conn instanceof DelegatingConnection) {
            ((DelegatingConnection)_conn).passivate();
        }
    } finally {
        _closed = true;
    }
}

When this finishes and the delegate is indeed itself a delegating connection, close() will call _conn.close(). If DelegatingConnection were final this would even work, but it is not (on purpose). A notable derived class is PoolableConnection, which overrides close() and throws an exception if it is called when isClosed() returns true. isClosed() returns true if the _closed instance variable is true. BUMMER.
The problem surfaces as soon as one tries to wrap the connection returned by PoolingDataSource with another DelegatingConnection, which happens to be what I do. I noticed this when I upgraded from 1.1 to 1.2.1, and it's still there in the nightly snapshot. There are several design decisions that I think deserve a critical look:
- Why does passivate() set a variable that effectively decides whether a connection is considered closed or not? Shouldn't only close() be doing that?
- Why does DelegatingConnection even bother to clean up statements when those statements by definition must have come from the delegate (or its delegate and so forth) and so are the responsibility of the delegate (creator) to clean up?
- By propagating passivate() to the delegate when the delegate is itself a DelegatingConnection, DelegatingConnection is making assumptions that may be (and in fact are) easily violated if someone sub-classes DelegatingConnection and the delegate is now a subclass with possibly altered behavior. Why does it not suffice to expect that calling close() on the delegate will give the delegate enough chance to clean itself up, regardless of the implementing class of the delegate?
I'd be thrilled if this can be fixed quickly, and fixing any of the problems pinpointed above will fix the issue. Or so I think.
[jira] Resolved: (DBCP-3) [dbcp] PoolableConnection.close() won't allow multiple close
[ https://issues.apache.org/jira/browse/DBCP-3?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz resolved DBCP-3. Resolution: Fixed Fixed in r 557176. [dbcp] PoolableConnection.close() won't allow multiple close Key: DBCP-3 URL: https://issues.apache.org/jira/browse/DBCP-3 Project: Commons Dbcp Issue Type: Bug Affects Versions: Nightly Builds Environment: Operating System: All Platform: All Reporter: Adam Jenkins Fix For: 1.3 Sun's javadoc for java.sql.Connection.close() specifies that calling close on an already closed Connection is a no-op. However, PoolableConnection.close() (v 1.10) throws a SQLException if close() is called on a closed Connection. PoolableConnection.close() should just return if the Connection is already closed. To demonstrate the bug, just obtain an open PoolableConnection and call close() on it twice; the second call will produce a SQLException. According to Sun's spec, the second close() should just be a no-op. The current behaviour is preferable to the old behaviour where it returned the Connection to the pool twice, but it's still not according to the spec. Here's a patch:

*** PoolableConnection.java.orig 2003-09-15 16:07:53.0 -0400
--- PoolableConnection.java 2003-09-15 16:08:11.0 -0400
***************
*** 108,114 ****
   */
  public synchronized void close() throws SQLException {
      if(isClosed()) {
!         throw new SQLException("Already closed.");
      } else {
          try {
              _pool.returnObject(this);
--- 108,114 ----
   */
  public synchronized void close() throws SQLException {
      if(isClosed()) {
!         return;
      } else {
          try {
              _pool.returnObject(this);
[jira] Resolved: (DBCP-5) [dbcp] PoolGuardConnectionWrapper violates close() contract
[ https://issues.apache.org/jira/browse/DBCP-5?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz resolved DBCP-5. Resolution: Fixed Fixed in r 557176. [dbcp] PoolGuardConnectionWrapper violates close() contract --- Key: DBCP-5 URL: https://issues.apache.org/jira/browse/DBCP-5 Project: Commons Dbcp Issue Type: Bug Environment: Operating System: All Platform: All Reporter: Derek Park Fix For: 1.3 org.apache.commons.dbcp.PoolingDatasource.PoolGuardConnectionWrapper.close() violates the Connection.close() contract specified in the Java 1.5 API. The current API specifies that calling close() on an already-closed connection is a no-op. (Blame Sun for the bug. The API didn't used to say that.) PoolGuardConnectionWrapper.close() first calls checkOpen() which throws an exception if close() has already been called. Clearly that's not a no-op. The simplest fix is to change the first line in the close() method from this: checkOpen(); to this: if (this.delegate == null) return; As of today (2006-03-22) this bug is in the latest SVN source (and has been in previous versions as well). DelegatingConnection and PoolingConnection don't seem (from a quick glance) to have this problem. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Resolved: (DBCP-23) [dbcp] SQLException When PoolablePreparedStatement Already Closed
[ https://issues.apache.org/jira/browse/DBCP-23?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz resolved DBCP-23. - Resolution: Fixed Fixed in r 557176. [dbcp] SQLException When PoolablePreparedStatement Already Closed - Key: DBCP-23 URL: https://issues.apache.org/jira/browse/DBCP-23 Project: Commons Dbcp Issue Type: Bug Affects Versions: 1.2 Environment: Operating System: All Platform: All Reporter: JZ Fix For: 1.3 Attachments: issue32441.patch, patch.txt When closing an already closed org.apache.commons.dbcp.PoolablePreparedStatement, a SQLException is thrown when the isClosed() method returns true. This seems to violate the contract of java.sql.Statement (super interface of the implemented PreparedStatement), whose javadoc reads: Calling the method close on a Statement object that is already closed has no effect. A workaround exists -- whenever closing a statement, also null it out. Then, before closing, check that it's non-null.
Re: [dbcp][pool] Performance / load tests
On 7/16/07, Henri Yandell [EMAIL PROTECTED] wrote: On 7/15/07, Phil Steitz [EMAIL PROTECTED] wrote: I have cleaned up some of my performance / load test code for [dbcp] and [pool] and would like to commit it somewhere so others can use and improve it. There is some common load generation code that should be factored out and I don't want to clutter the component codebases, so I am hesitant to commit to either pool or dbcp trunk. Any objections to my starting a [performance] sandbox component and seeding it with [dbcp] and [pool] performance tests? Any better ideas on where to put this code? +1 on putting it in as a component in sandbox. Committed the dbcp stuff. Pool to follow. Code is rough, but works. Ant from top level kicks off a run based on config in config.xml. If the database config in config.xml and jdbc driver location in build.properties are correct, the first run will create and populate a table called test_table in the database. I have tested the core code with mysql, postgres, oracle, hsqldb and sybase; but the currently packaged version only with mysql and postgres. The Digester config and overall property management is ugly and should be cleaned up. The run parameters are a little cryptic - see nextDelay javadoc in ClientThread for how this works. Phil - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [DBCP] Back pointers
On 7/17/07, Dain Sundstrom [EMAIL PROTECTED] wrote: I'm working on a fix for the back pointers bugs DBCP-11 and DBCP-217 where the Statement.getConnection() and ResultSet.getStatement() return the wrong objects. The fix is pretty simple; we just need to make sure we wrap Statements and ResultSets returned from DelegatingConnection with the matching delegating type. +1 Anyway, I have the fix mostly complete with a bunch of test cases, but there is one problem... The PerUserPoolDataSource and SharedPoolDataSource classes return the ConnectionImpl class directly. This class is a wrapper around the real connection so we need to wrap returned Statements, which is easy enough. The problem is these datasources use the CPDSConnectionFactory which does not call passivate on the delegating connection when the connection is returned to the pool, so the Statements owned by the DelegatingConnection aren't closed. To make matters worse, CPDSConnectionFactory can't call passivate anyway because it is in a different package and the method is protected :( At this point I'm not sure what to do. I could fix the problem for all DataSources except for these two, and in the future we could rework these two to subclass PoolingDataSource. Alternatively, we could move CPDSConnectionFactory to the same package as DelegatingConnection or make it a subclass of some ConnectionFactory with access to the passivate method. I really do think these datasources should be brought in line with the main abstractions used by the other classes, but I don't think that is something for this release (maybe for 2.0?). I think we should leave this alone for now and consider refactoring for 2.0, but there is a semantic difference that we need to keep in mind. InstanceKeyDataSource (parent of PerUser and SharedPool) sources connections from a ConnectionPoolDataSource. These datasources return connection *handles* (PooledConnection impls), which are not the same as DelegatingConnections.
The cpdsadapter package is just there for older jdbc drivers that do not provide ConnectionPoolDataSource implementations. See the javadoc for InstanceKeyDataSource and also the implementation of makeObject there. The key difference in the contract is that InstanceKeyDataSource implements ConnectionEventListener, so when used with a driver that correctly supports ConnectionPoolDataSource, the connection handles handed out to users notify the pool (actually the factory in dbcp) when they are closed by the user. See connectionClosed in CPDSConnectionFactory. Phil - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
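The back-pointer fix Dain describes (a wrapped Statement's getConnection() must return the wrapper, not the underlying connection) can be sketched with simplified stand-in interfaces. The real java.sql types have far larger surfaces, so MiniConnection/MiniStatement below are illustrative only, not the DBCP or JDBC interfaces:

```java
// Simplified sketch of the back-pointer fix (DBCP-11 / DBCP-217): wrap the
// Statement returned by the delegate so its getConnection() back pointer
// stays inside the wrapper layer. MiniConnection/MiniStatement are stand-ins.
interface MiniStatement { MiniConnection getConnection(); }
interface MiniConnection { MiniStatement createStatement(); }

class RawConnection implements MiniConnection {
    public MiniStatement createStatement() {
        final MiniConnection self = this;
        // A "real" statement points back at the raw connection that made it.
        return new MiniStatement() {
            public MiniConnection getConnection() { return self; }
        };
    }
}

class WrappingConnection implements MiniConnection {
    private final MiniConnection delegate;
    WrappingConnection(MiniConnection delegate) { this.delegate = delegate; }

    public MiniStatement createStatement() {
        delegate.createStatement(); // real work happens in the delegate
        final MiniConnection wrapper = this;
        // The bug was returning the delegate's statement directly, whose
        // getConnection() leaks the raw connection; wrapping fixes that.
        return new MiniStatement() {
            public MiniConnection getConnection() { return wrapper; }
        };
    }
}
```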
[dbcp][pool] Performance / load tests
I have cleaned up some of my performance / load test code for [dbcp] and [pool] and would like to commit it somewhere so others can use and improve it. There is some common load generation code that should be factored out and I don't want to clutter the component codebases, so I am hesitant to commit to either pool or dbcp trunk. Any objections to my starting a [performance] sandbox component and seeding it with [dbcp] and [pool] performance tests? Any better ideas on where to put this code? Phil
Re: svn commit: r555980 - in /jakarta/commons/proper/dbcp/trunk: src/java/org/apache/commons/dbcp/ src/java/org/apache/commons/dbcp/managed/ xdocs/
On 7/13/07, Julien Aymé [EMAIL PROTECTED] wrote: It seems good; just a little misspelling problem with the protected method createConectionFactory(): Wouldn't it be better if it were spelled createConnectionFactory()? (With 2 n's in connection :-) Good catch. Thanks! Phil
[jira] Resolved: (DBCP-230) [DBCP] BasicManagedDataSource
[ https://issues.apache.org/jira/browse/DBCP-230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz resolved DBCP-230. -- Resolution: Fixed Patch applied. Thanks! [DBCP] BasicManagedDataSource - Key: DBCP-230 URL: https://issues.apache.org/jira/browse/DBCP-230 Project: Commons Dbcp Issue Type: New Feature Reporter: Dain Sundstrom Fix For: 1.3 Attachments: BasicManagedDataSource.patch This patch creates an extension to the BasicDataSource which creates ManagedConnection. This class allows easy usage of the ManagedDataSource from environments that configure via JavaBeans properties. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: Commons Math, Commons Nightlies, vmbuild
On 7/12/07, Brett Porter [EMAIL PROTECTED] wrote: On 13/07/07, Phil Steitz [EMAIL PROTECTED] wrote: Where are we now in terms of capabilities for Continuum builds? In particular, is it possible to a) deploy the tarballs to people.apache.org and b) deploy the snapshot jars produced by the nightlies to the apache snapshot repo? Does this work for both m1 and m2? b) is yes, a) should be yes, if your m1/m2/ant script can do it for you I say go for it then. Let's test it once it's up, then decide whether to use that, or move the script over. What would be the best set of projects to pilot with (the ones that cover all cases)? Let's start with [math] and [dbcp], which now have working m2 builds and then, say, [pool], which has not yet moved to m2. I can help with this. Let me know when the Continuum is set up on the new image. I think Ant may be using something like our nightly script on vmbuild. Might be good to give them a heads up on moving vmbuild. Phil
Re: Commons Math, Commons Nightlies, vmbuild
On 7/11/07, Brett Porter [EMAIL PROTECTED] wrote: Hi, This is particularly for Phil and those on commons-math, but if anyone else is interested in getting set up just holler. commons-math currently has a build set up on vmbuild.apache.org. It's been down for a little bit, but is now back up. vmbuild is scheduled to be moved to a faster machine, and I intend to install a more recent build of Continuum (that supports grouping projects and is generally faster, more manageable and more stable). I'm able to help get it set up as effectively as possible. There are a lot of failing builds on the machine right now (probably unused by the corresponding projects), so I'm cleaning house before the move. Please let me know if: [ ] you would like the project set up on the new machine with a clean slate [ ] you would like the project (and its build history if possible) moved over [ ] you are no longer interested in using vmbuild for CI/nightlies/whatever In addition, the commons nightlies scripts will need to be moved along with the VM, but since we will probably start with a clean slate I might need a hand with that. Where are we now in terms of capabilities for Continuum builds? In particular, is it possible to a) deploy the tarballs to people.apache.org and b) deploy the snapshot jars produced by the nightlies to the apache snapshot repo? Does this work for both m1 and m2? If the answers to both of these are yes, I think we should retire the nightly build script and get all of the component builds set up in Continuum. If no, then we should move the commons-nightly script when the box moves. That will be easy and I can help with it. Phil
[jira] Reopened: (MATH-160) Chi-Square Test for Comparing two binned Data Sets
[ https://issues.apache.org/jira/browse/MATH-160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz reopened MATH-160: -- Good catch, Luc. I thought clirr was set up to fail the build when this happens. In any case, this needs to be fixed somehow. Probably best to use a separate interface. Chi-Square Test for Comparing two binned Data Sets -- Key: MATH-160 URL: https://issues.apache.org/jira/browse/MATH-160 Project: Commons Math Issue Type: New Feature Reporter: Matthias Hummel Priority: Minor Fix For: 1.2 Attachments: commons-math.patch Current Chi-Square test implementation only supports standard Chi-Square testing with respect to known distribution. We needed testing for comparison of two sample data sets where the distribution can be unknown. For this case the Chi-Square test has to be computed in a different way so that both error contributions (one for each sample data set) are taken into account. See Press et. al, Numerical Recipes, Second Edition, formula 14.3.2. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [proposal] No VOTE needed to elect ASF committers to commons WAS: Re: request karma to commons validator/i18n
On 7/8/07, Jochen Wiedmann [EMAIL PROTECTED] wrote: I second Rahul's opinion. I wouldn't want to build high barriers, but I'd prefer to have them. Apart from that, I have never found it a problem to join the Commons. In particular, considering my own invitation which came basically with the first patch. Apart from that, the more important part, as far as I am concerned, is the policy of whether and when I may commit to subprojects. I am sure that most of us would consider it rude behaviour if I'd simply start committing to a subproject without a silent or explicit confirmation or even invitation by the subproject's members. If that policy applies to subprojects, then I can't see why anyone should be considered a member of commons without being a member of at least one subproject. One thing that we should remain clear on is that there are *no* subprojects in commons. There never have been and we should not go back down the Jakarta path of subprojects. We have components and people working on them. We have a polite way of getting involved in working on components, discussing things, and collectively providing oversight. I don't think this is broken or needs to be formalized any further. Sometimes I worry about whether we have adequate code-level oversight over all of the components, but I see the answer to that to be growing the community and *active* committer base. That's why I want to make it easy for people to get involved - both ASF committers and volunteers who are not yet ASF committers. That does not mean making committers of everyone instantly and I do not advocate that. We can set whatever bar we want for becoming a commons committer. I expressed my view that we should allow ASF committers to join Commons on demand, but it's clear that just being an ASF committer is not enough for some of us, so let's close this discussion and use private@ (when it is set up) to discuss and vote on new committers.
Phil
Re: [VOTE] Release CLI 1.1 (3rd RC)
+1 Phil On 7/4/07, Henri Yandell [EMAIL PROTECTED] wrote: I've updated the release notes to match the website page: http://people.apache.org/~bayard/commons-cli/1.0-rc3/ with the site in: http://people.apache.org/~bayard/commons-cli/1.0-rc3/site/ One quirk to note. The site is from trunk while the release is from the 1.0.x branch. [ ] +1, before 6 years since 1.0 arrives [ ] -1, we can make 6 years --- The only changes to svn are Rahul's NOTICE fix for our TLP change and my updating the RELEASE-NOTES.txt with the latest information. So I plan to consider any existing +1s for the RC2 as applying to this (ie: Don't revote unless you want to). Hen
[vfs] Re: issue
If this relates to an open issue against [vfs], you should add your comments to the JIRA ticket for the issue. If you have looked at the open issues and this looks like a new problem, please open a new ticket. To find or create issues against [vfs], follow this link: http://jakarta.apache.org/commons/vfs/issue-tracking.html Also, when posting to this list, you should prepend the component name to the subject line as I have done above. Thanks! Phil - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Commented: (MATH-167) ConvergenceException in normal CDF
[ https://issues.apache.org/jira/browse/MATH-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12510905 ] Phil Steitz commented on MATH-167: -- Thanks for reporting this. I see three alternatives to address - appreciate comments. 1) Determine the tail resolution possible with the current impl (hopefully not different on different JDKs, platforms) and top code, checking arguments and returning 0 or 1, resp., if the argument is too far in SD units from the mean. To find the cut points, empirically determine where convergence starts to fail. Document the cut points in the javadoc for the Impl. 2) Catch ConvergenceException and return 0 or 1, resp., if the argument is far from the mean; rethrow otherwise (though this should never happen). 3) Resolve as WONTFIX and leave it to the client to catch and handle ConvergenceException, examining the argument. Document the algorithm more fully and warn that ConvergenceException will be thrown if a tail probability cannot be accurately estimated or distinguished from 0. My first thought was 2) and I guess I still favor that, since 3) is inconvenient for users and 1) may not be stable unless the cut points are conservative. Note that this same problem may apply to tail probabilities of other continuous distributions and we should check and address all of these before resolving this issue. ConvergenceException in normal CDF -- Key: MATH-167 URL: https://issues.apache.org/jira/browse/MATH-167 Project: Commons Math Issue Type: Bug Reporter: Mikko Kauppila Priority: Minor NormalDistributionImpl::cumulativeProbability(double x) throws ConvergenceException if x deviates too much from the mean. For example, when x=+/-100, mean=0, sd=1. Of course the value of the CDF is hard to evaluate in these cases, but effectively it should be either zero or one. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
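Option 2 above (trap the hard-to-evaluate tail and return 0 or 1 for extreme arguments) can be sketched as follows. This is not the commons-math implementation: the erf approximation (Abramowitz & Stegun 7.1.26) and the 40-standard-deviation cutoff are illustrative choices standing in for the real series evaluation and empirically chosen cut points.

```java
// Sketch of option 2: guard the normal CDF for arguments far from the mean,
// returning 0.0 or 1.0 instead of failing to converge. The erf approximation
// and the cutoff of 40 SDs are illustrative, not the commons-math code.
class NormalCdf {
    private static final double CUTOFF_SDS = 40.0;

    static double cumulativeProbability(double x, double mean, double sd) {
        double z = (x - mean) / sd;
        if (z < -CUTOFF_SDS) return 0.0;   // tail indistinguishable from 0
        if (z > CUTOFF_SDS) return 1.0;    // tail indistinguishable from 1
        return 0.5 * (1.0 + erf(z / Math.sqrt(2.0)));
    }

    // Abramowitz & Stegun formula 7.1.26, absolute error < 1.5e-7.
    private static double erf(double x) {
        double sign = (x < 0) ? -1.0 : 1.0;
        x = Math.abs(x);
        double t = 1.0 / (1.0 + 0.3275911 * x);
        double poly = ((((1.061405429 * t - 1.453152027) * t
                + 1.421413741) * t - 0.284496736) * t + 0.254829592) * t;
        return sign * (1.0 - poly * Math.exp(-x * x));
    }
}
```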
Re: [DBCP] Managed Connection support
On 7/6/07, Dain Sundstrom [EMAIL PROTECTED] wrote: On Jul 5, 2007, at 7:13 AM, Phil Steitz wrote: Thanks, Dain. I applied the patch. I also patched the m1 and ant builds to work. The Ant build now fails with JDK 1.3, but unless someone screams loudly soon, we have moved the minimum jdk level for dbcp 1.3 to JDK 1.4 (per earlier discussion), so this is not an issue. Sweet! That was very fast. snip/ For now the code is contained in the org.apache.commons.dbcp.managed package, but I would suspect we may want to spread it out amongst the existing packages instead of creating a feature specific package. I'm also not sure what additional interfaces people may want, such as equivalents of BasicDataSource or BasicDataSourceFactory. I am ambivalent on the merging into existing packages, but we should talk about this. We can figure that out as we get close to a release. If the thing isn't fully tested by then we could just mark the whole package as experimental. That's what I was thinking, so good to leave as is for now. The code has tests and javadoc, but it needs real world testing and some real docs. I'm going to try hooking it into OpenEJB and running its massive test suite with a couple of open source DBs. Anyways, I hope you all like it and accept the patch. I'm around to help with changes or whatever. I also ran into a few bugs while working on this that are already reported in JIRA (like the close bugs) and am willing to help with those also. That would be greatly appreciated. We really need [dbcp] and [pool] volunteers. Given that you are an ASF committer, all you have to do is ask to get commons karma and you are certainly welcome to do that :) Excellent, I'd definitely like access, so I can fix any bugs in the code directly. Looks like there's a little bureaucracy to go through here, but pls keep the patches coming for now. In [dbcp] 1.3, we can fix the close semantics and other things that involve semantic changes. All suggestions and patches are welcome.
I'll take a look at it when I get back in town next week. Thanks in advance. You will find that some of these bugs relate to [pool] and fixes need to either be mindful of current pool impl - most importantly the fact that the core of pool is an idle object linked list and there is no guard to protect against the same object appearing multiple times in the list, which will happen if it is returned twice, resulting in serious badness for dbcp - or we need to do something with pool to keep the changes safe. There is an alternative pool impl in the compositepool package in pool head, but the current plan is to push out one more patch release of pool and have dbcp 1.3 continue to use the GenericObjectPool. See roadmap discussion here: http://www.mail-archive.com/commons-dev@jakarta.apache.org/msg94027.html Comments welcome! Since POOL-97 is causing app server issues, it would be great to get some feedback on the proposed fix there and then bundle up a pool 1.3.1 patch release. Phil - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: svn commit: r553747 - in /jakarta/commons/proper/dbcp/trunk: pom.xml src/site/ src/site/site.xml
On 7/6/07, Niall Pemberton [EMAIL PROTECTED] wrote: Phil, Dennis released Version 3 of the parent pom a while ago - do you not want to use that? Doh! Thx Phil Niall On 7/6/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: Author: psteitz Date: Thu Jul 5 23:04:45 2007 New Revision: 553747 URL: http://svn.apache.org/viewvc?view=rev&rev=553747 Log: Updates / fixes to get site generation working on Maven 2 - Added site.xml based on navigation.xml POM fixes: - Updated version to 1.3-SNAPSHOT - Added reporting section - Updated commons parent version Added: jakarta/commons/proper/dbcp/trunk/src/site/ jakarta/commons/proper/dbcp/trunk/src/site/site.xml Modified: jakarta/commons/proper/dbcp/trunk/pom.xml
URL: http://svn.apache.org/viewvc/jakarta/commons/proper/dbcp/trunk/pom.xml?view=diff&rev=553747&r1=553746&r2=553747
--- jakarta/commons/proper/dbcp/trunk/pom.xml (original)
+++ jakarta/commons/proper/dbcp/trunk/pom.xml Thu Jul 5 23:04:45 2007
@@ -22,12 +22,12 @@
   <parent>
     <groupId>org.apache.commons</groupId>
     <artifactId>commons-parent</artifactId>
-    <version>1</version>
+    <version>2</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
Re: request karma to commons validator/i18n
Can someone with karma karma pls set Paul up? It would be great to get i18n promoted to proper and released. Any other volunteers to help with this? Phil On 7/5/07, Paul Benedict [EMAIL PROTECTED] wrote: I would like to commit to commons validator and commons i18n to enhance them for Struts. For validator, I want to add and finish some issues in the current snapshot, and, respectively, port some good i18n code from other Apache projects. Can I get karma for this? Thanks, Paul - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED] - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[proposal] No VOTE needed to elect ASF committers to commons WAS: Re: request karma to commons validator/i18n
On 7/6/07, Henri Yandell [EMAIL PROTECTED] wrote: On 7/6/07, Niall Pemberton [EMAIL PROTECTED] wrote: On 7/6/07, Paul Benedict [EMAIL PROTECTED] wrote: I would like to commit to commons validator and commons i18n to enhance them for Struts. For validator, I want to add and finish some issues in the current snapshot, and, respectively, port some good i18n code from other Apache projects. Can I get karma for this? Although Commons has a liberal policy on giving Karma to ASF committers a better (more ASF like) first step IMO would have been to start talking about what you want to do first - a good recent example of that is Dain: http://tinyurl.com/yrmgpf Even though I'm already a committer I still regularly create Jira tickets and post patches (for code changes) to components that I don't have much history on rather than diving straight in. I'm hoping you'll do the same, 'coz I'm going to be unhappy if I start seeing Validator commits with no prior discussion. Ack - Martin just pointed out that it's Sandbox karma on request, not all of Commons. I'll adjust - ie: Paul will have commit for i18n, but we'll have to vote to give him commit to validator. Sorry, I thought we had changed that policy, so I guess I would like to propose that we change it now. It does not really make sense to me to distinguish the sandbox and I think we should make it as easy as possible for existing ASF committers to contribute to commons. So my proposal is that any ASF committer who wishes to become a commons committer just needs to make that request here on the commons-dev mailing list and they will granted karma for both commons proper and commons sandbox. Expectation is of course that ASF committers joining the commons will behave (http://wiki.apache.org/jakarta-commons/JakartaCommonsEtiquette). As an alternative, we could discuss requests for karma from ASF committers on private@ (good sign that we have not even created that yet :) but I don't personally see the need to do that. 
In any case, I don't think we should revert to the old practice of public committer votes (whether or not the individual is a current ASF committer). Phil - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Resolved: (DBCP-228) [dbcp] Managed Connection support
[ https://issues.apache.org/jira/browse/DBCP-228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz resolved DBCP-228. -- Resolution: Fixed Patch applied. Thanks! [dbcp] Managed Connection support - Key: DBCP-228 URL: https://issues.apache.org/jira/browse/DBCP-228 Project: Commons Dbcp Issue Type: New Feature Reporter: Dain Sundstrom Attachments: ManagedConnection.patch This patch adds support for pooling of ManagedConnections. A managed connection is responsible for managing a database connection in a transactional environment (typically called Container Managed). A managed connection opperates like any other connection when no gloabal transaction (a.k.a. XA transaction or JTA Transaction) is in progress. When a global transaction is active a single physical connection to the database is used by all ManagedConnections accessed in the scope of the transaction. Connection sharing means that all data access during a transaction has a consistent view of the database. When the global transaction is committed or rolled back the enlisted connections are committed or rolled back. This patch supports full XADataSources and non-XA data sources using local transaction semantics. non-XA data sources commit and rollback as part of the transaction but are not recoverable in the case of an error because they do not implement the two-phase commit protocol. The patch includes test cases and javadoc comments. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [DBCP] Managed Connection support
I just posted a patch JIRA which adds support for container managed connections to DBCP. In an environment where you have an accessible transaction manger such as Tomcat (when installed), Geronimo or OpenEJB, this patch allows adds support for pooling managed connections to an XA or non-XA data base. There are many libraries that use DBCP for non-managed environments, but when additional resources such as a JMS provider are added to the mix, they have to replace DBCP with something else. If you search google for XA and DBCP, you will loads of painful direction on how to replace DBCP with something else. I personally want to use DBCP in Tomcat with ActiveMQ. Thanks, Dain. I applied the patch. I also patched the m1 and ant builds to work. The Ant now fails with JDK 1.3, but unless someone screams loudly soon, we have moved the minimum jdk level for dbcp 1.3 to JDK 1.4 (per earlier discussion), so this is not an issue. snip/ For now the code is contained in the org.apache.commons.dbcp.managed package, but I would suspect we may want to spread it out amongst the existing packages instead of creating a feature specific package. I'm also not sure what additional interfaces people may want such as equivalents of the BasicDataSource or BasicDataSourceFactory. I am ambivalent on the merging into existing packages, but we should talk about this. The code has tests and has javadoc, but it needs real world testing and some real docs. I'm going try hooking it into OpenEJB and running it's massive test suite with a couple of opensource DBs. Anyways, I hope you all like it and accept the patch. I'm around to help with changes or whatever. I also ran into a few bugs while working on this that are already reported in JIRA (like the close bugs) and am willing to help with those also. That would be greatly appreciated. We really need [dbcp] and [pool] volunteers. 
Given that you are an ASF committer, all you have to do is ask to get commons karma and you are certainly welcome to do that :) In [dbcp] 1.3, we can fix the close semantics and other things that involve semantic changes. All suggestions and patches are welcome. Phil - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[nightly build] dbcp failed.
Failed build logs: http://vmbuild.apache.org/~commons/nightly/logs//20070704/dbcp.log - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [VOTE] 4th attempt: Release commons-io 1.3.2
+1 Phil On 6/26/07, Jochen Wiedmann [EMAIL PROTECTED] wrote: Hi, I have prepared a further release candidate, with the following changes: - Deprecation tags have been removed from the FileCleaner. (In the 1.3 branch only, not in the trunk.) The discussion has clearly shown, that opinions vary on this topic, nevertheless I feel forced to make that change against my personal opinion. IMO, releasing a 1.4 release with as little changes as that would be the greater evil. - The extracted source distribution is now using the -src suffix. - The .md5 and .sha1 files meet the commons standard and have the format checksum *filename Please cast your vote. Thanks, Jochen [ ] +1 [ ] =0 [ ] -1 -- Besides, manipulating elections is under penalty of law, resulting in a preventative effect against manipulating elections. The german government justifying the use of electronic voting machines and obviously believing that we don't need a police, because all illegal actions are forbidden. http://dip.bundestag.de/btd/16/051/1605194.pdf - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED] - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[nightly build] proxy failed.
Failed build logs: http://vmbuild.apache.org/~commons/nightly/logs//20070629/proxy.log - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [RESULT] [VOTE] Release commons-sandbox-parent 1
+1 - need to do this anyway. On 6/29/07, Dennis Lundberg [EMAIL PROTECTED] wrote: Hmm, it seems that I spoke too soon. We need a place in subversion to put the tagged release. Since the pom is currently in sandbox-trunks there simply is no tags directory to put the release in. I propose that we move the sandbox parent out of sandbox-trunks and into a commons-sandbox-parent directory in commons proper. That would make it a sibling to commons-parent. The only thing left in sandbox-trunks then would be the site for the sandbox, which consists of one (1) page. That could easily be moved down into a sandbox-site directory, that would be a sibling to the sandbox components. Thoughts? Dennis Lundberg wrote: The results are in: +1 Dennis Lundberg Torsten Curdt Niall Pemberton -0 Rahul Akolkar I will proceed with the release. Dennis Lundberg wrote: Hi, It is time to release version 1 of the commons-sandbox-parent. The latest changes includes updating the parent to commons-parent-3 and locking down the versions for plugins. Note that I have changed the artifactId to commons-sandbox-parent, to have a consistent naming scheme (compare it to commons-parent). This will be the first release and is important because it enables reproducible builds and site generation for the sandbox components. This vote is for revision 550041, which will have its version number changed to 1 when the release is done. A SNAPSHOT has been deployed to the Apache snapshot repo if you want to take it for a spin. [ ] +1 [ ] =0 [ ] -1 -- Dennis Lundberg - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED] - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[nightly build] lang failed.
Failed build logs: http://vmbuild.apache.org/~commons/nightly/logs//20070628/lang.log - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: infrastructure work for TLP move
On 6/27/07, Henri Yandell [EMAIL PROTECTED] wrote: Personally, my vote would be to say: commons=committers You mean all apache committers? I agree that we should continue the tradition of granting commons karma to any committer who asks for it. I don't know much about how this works in svn or what the risk / downside of auto-granting commons karma to new committers would be, but I agree with the principle that any ASF committer should be welcome here. There are also a lot of current commons committers missing from the list above. My preference would be to have them remain commons committers if that is not too hard to do. In any case, any current commons committer who wants karma should get it. Phil - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [VOTE] Release commons-sandbox-parent 1
On 6/23/07, Rahul Akolkar [EMAIL PROTECTED] wrote: On 6/23/07, Phil Steitz [EMAIL PROTECTED] wrote: On 6/23/07, Wendy Smoak [EMAIL PROTECTED] wrote: On 6/23/07, Dennis Lundberg [EMAIL PROTECTED] wrote: This will be the first release and is important because it enables reproducible builds and site generation for the sandbox components. Since you have a policy against releasing sandbox components, why not just deploy snapshots of the sandbox parent pom, and advance to the next snapshot version (without a release) when there is a change? I guess the issue there is that you then have to add the snapshot repo explicitly just to *build* a sandbox component or generate its web site. We want to force people to do that if they *depend* on sandbox jars, but just building a sandbox component should not require it IMO. snip/ And it doesn't require that to build either -- install the parent. I suspect anyone who has used m2 even minimally is aware of the bootstrapping problems with development builds and how to solve them. For everyone else (and I'm sure we will get questions), the sandbox components should have 'install parent pom' as step 0 of their 'building' pages. I guess that's what I see as the problem. IMO we should strive to make our components as easy to build as possible and this should apply to the sandbox as well as proper. Having to separately download, build and install the parent (correct me if I am wrong here, though if I am it sort of illustrates my point ;-) is a needless PITA for those trying to build a sandbox m2 component from source. Maven is supposed to make building easier and admittedly sometimes it does not. This is a case where needless futzing to get a build to work could be avoided by just publishing the parent so a straight build from the checked out sandbox component source can work. It is a maven pet peeve of mine that in some cases special local incantations have to be performed to get a build to work. 
I like to do everything possible to eliminate that. The site is also an issue. For better or for worse, site extensibility is tied to pom inheritance (again, correct me if I am wrong), so having a stable and consistent sandbox site build depends on having the sandbox parent POM available. Again, local build-and-install can workaround this, but why force people to do that and why give up consistency (whatever random svn grab is used will determine what shows up)? I guess I also agree with Dennis that I fail to see the negatives. Regarding the recurring busy work this is a do-ocracy and Dennis is stepping up to do this release. I will also help out as needed to maintain this POM. Phil -Rahul - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED] - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Resolved: (MATH-160) Chi-Square Test for Comparing two binned Data Sets
[ https://issues.apache.org/jira/browse/MATH-160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz resolved MATH-160. -- Resolution: Fixed Applied a modified version of the patch, along with test cases, verified against DATAPLOT Modifications: * Changed input array data type to long[]. This is consistent with other ChiSquare tests and with the specification of the test (i.e., it is not clear what floats as arguments would mean) * Added weighting as specified in the NIST reference provided to adjust for possibly different bin sums for the two samples. Chi-Square Test for Comparing two binned Data Sets -- Key: MATH-160 URL: https://issues.apache.org/jira/browse/MATH-160 Project: Commons Math Issue Type: New Feature Reporter: Matthias Hummel Priority: Minor Fix For: 1.2 Attachments: commons-math.patch Current Chi-Square test implementation only supports standard Chi-Square testing with respect to known distribution. We needed testing for comparison of two sample data sets where the distribution can be unknown. For this case the Chi-Square test has to be computed in a different way so that both error contributions (one for each sample data set) are taken into account. See Press et. al, Numerical Recipes, Second Edition, formula 14.3.2. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [VOTE] Release commons-sandbox-parent 1
On 6/23/07, Wendy Smoak [EMAIL PROTECTED] wrote: On 6/23/07, Dennis Lundberg [EMAIL PROTECTED] wrote: This will be the first release and is important because it enables reproducible builds and site generation for the sandbox components. Since you have a policy against releasing sandbox components, why not just deploy snapshots of the sandbox parent pom, and advance to the next snapshot version (without a release) when there is a change? I guess the issue there is that you then have to add the snapshot repo explicitly just to *build* a sandbox component or generate its web site. We want to force people to do that if they *depend* on sandbox jars, but just building a sandbox component should not require it IMO. As I said above, I see the sandbox parent pom as a commons release, not a sandbox release, since it is part of the infrastructure of commons. What Dennis wants to release is not a snapshot, but a stable release of this part of commons infrastructure, just like the commons-parent pom. Phil -- Wendy - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED] - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [VOTE] Release commons-sandbox-parent 1
On 6/23/07, Rahul Akolkar [EMAIL PROTECTED] wrote: On 6/23/07, Dennis Lundberg [EMAIL PROTECTED] wrote: Hi, It is time to release version 1 of the commons-sandbox-parent. The latest changes includes updating the parent to commons-parent-3 and locking down the versions for plugins. Note that I have changed the artifactId to commons-sandbox-parent, to have a consistent naming scheme (compare it to commons-parent). This will be the first release and is important because it enables reproducible builds and site generation for the sandbox components. snip/ I haven't yet understood why we need to release anything from the sandbox at all. Sure, reproducibility is a good thing, but I doubt the builds are radically irreproducible without this release; and more importantly, I believe if people are interested in the sandbox components and their reproducibility, they should help get a release out instead. I think you have a good point there, Rahul, but I would see this as a commons release, not a commons-sandbox release and I personally see the benefit (consistent builds, easier to get a sandbox component to build when jumping in) as outweighing the negatives (increasing likelihood people depend on sandbox components, making the sandbox more comfortable), especially given that we are *not* releasing any sandbox jars. I have a couple of comments on the pom itself before adding my +1, though. Sorry if I missed this before, but I don't see why we should include the reports that are added to the sandbox POM. The ones in the parent are the only ones that I see as *always* needed. I have thought about suggesting that we drop the RAT report from there. At different stages of development, different reports make sense and I personally prefer to maintain the list per component, other than things like javadoc that you are never going to want to turn off. Another minor comment is that it might be better to move the pom and site into a sandbox-parent in svn. 
This obviously has nothing to do with the release vote. Phil - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Updated: (POOL-97) EVICTION_TIMER is never cancelled.
[ https://issues.apache.org/jira/browse/POOL-97?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz updated POOL-97: Attachment: timer.patch EVICTION_TIMER is never cancelled. -- Key: POOL-97 URL: https://issues.apache.org/jira/browse/POOL-97 Project: Commons Pool Issue Type: Bug Affects Versions: 1.3 Reporter: Devendra Patil Fix For: 2.0 Attachments: timer.patch The static EVICTION_TIMER (java.util.Timer) used in GenericObjectPool is never cancelled (even after closing the pool). The GenericObjectPool.close() method just cancels the _evictor (TimerTask). I agree this behaviour is ideal if EVICTION_TIMER is to be used across multiple pools. But, In my case, the resources (i.e. jars) are dynamically deployed and undeployed on remote grid-servers. If EVICTION_TIMER thread doesn't stop, the grid-servers fails to undeploy (i.e. delete) the jars. The grid-server doesn't restart during resource deployment/undeployment, so, setting EVICTION_TIMER to daemon doesn't help me. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Updated: (POOL-97) EVICTION_TIMER is never cancelled.
[ https://issues.apache.org/jira/browse/POOL-97?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz updated POOL-97: Attachment: timer.patch Attaching a patch (timer.patch) that makes the eviction timer a (lazy initialized) instance variable in GOP, GKOP and cancels it in close. Questions: 1. Will this resolve the issue fully? 2. Other than additional overhead in multi-pool settings, are there other negatives to this? How bad is the overhead issue? 3. Does the fix introduce any other problems? EVICTION_TIMER is never cancelled. -- Key: POOL-97 URL: https://issues.apache.org/jira/browse/POOL-97 Project: Commons Pool Issue Type: Bug Affects Versions: 1.3 Reporter: Devendra Patil Fix For: 2.0 Attachments: timer.patch The static EVICTION_TIMER (java.util.Timer) used in GenericObjectPool is never cancelled (even after closing the pool). The GenericObjectPool.close() method just cancels the _evictor (TimerTask). I agree this behaviour is ideal if EVICTION_TIMER is to be used across multiple pools. But, In my case, the resources (i.e. jars) are dynamically deployed and undeployed on remote grid-servers. If EVICTION_TIMER thread doesn't stop, the grid-servers fails to undeploy (i.e. delete) the jars. The grid-server doesn't restart during resource deployment/undeployment, so, setting EVICTION_TIMER to daemon doesn't help me. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Updated: (POOL-97) EVICTION_TIMER is never cancelled.
[ https://issues.apache.org/jira/browse/POOL-97?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz updated POOL-97: Attachment: (was: timer.patch) EVICTION_TIMER is never cancelled. -- Key: POOL-97 URL: https://issues.apache.org/jira/browse/POOL-97 Project: Commons Pool Issue Type: Bug Affects Versions: 1.3 Reporter: Devendra Patil Fix For: 2.0 The static EVICTION_TIMER (java.util.Timer) used in GenericObjectPool is never cancelled (even after closing the pool). The GenericObjectPool.close() method just cancels the _evictor (TimerTask). I agree this behaviour is ideal if EVICTION_TIMER is to be used across multiple pools. But, In my case, the resources (i.e. jars) are dynamically deployed and undeployed on remote grid-servers. If EVICTION_TIMER thread doesn't stop, the grid-servers fails to undeploy (i.e. delete) the jars. The grid-server doesn't restart during resource deployment/undeployment, so, setting EVICTION_TIMER to daemon doesn't help me. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Updated: (POOL-97) EVICTION_TIMER is never cancelled.
[ https://issues.apache.org/jira/browse/POOL-97?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz updated POOL-97: Attachment: (was: timer.patch) EVICTION_TIMER is never cancelled. -- Key: POOL-97 URL: https://issues.apache.org/jira/browse/POOL-97 Project: Commons Pool Issue Type: Bug Affects Versions: 1.3 Reporter: Devendra Patil Fix For: 2.0 The static EVICTION_TIMER (java.util.Timer) used in GenericObjectPool is never cancelled (even after closing the pool). The GenericObjectPool.close() method just cancels the _evictor (TimerTask). I agree this behaviour is ideal if EVICTION_TIMER is to be used across multiple pools. But, In my case, the resources (i.e. jars) are dynamically deployed and undeployed on remote grid-servers. If EVICTION_TIMER thread doesn't stop, the grid-servers fails to undeploy (i.e. delete) the jars. The grid-server doesn't restart during resource deployment/undeployment, so, setting EVICTION_TIMER to daemon doesn't help me. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Updated: (POOL-97) EVICTION_TIMER is never cancelled.
[ https://issues.apache.org/jira/browse/POOL-97?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz updated POOL-97: Attachment: timer.patch EVICTION_TIMER is never cancelled. -- Key: POOL-97 URL: https://issues.apache.org/jira/browse/POOL-97 Project: Commons Pool Issue Type: Bug Affects Versions: 1.3 Reporter: Devendra Patil Fix For: 2.0 Attachments: timer.patch The static EVICTION_TIMER (java.util.Timer) used in GenericObjectPool is never cancelled (even after closing the pool). The GenericObjectPool.close() method just cancels the _evictor (TimerTask). I agree this behaviour is ideal if EVICTION_TIMER is to be used across multiple pools. But, In my case, the resources (i.e. jars) are dynamically deployed and undeployed on remote grid-servers. If EVICTION_TIMER thread doesn't stop, the grid-servers fails to undeploy (i.e. delete) the jars. The grid-server doesn't restart during resource deployment/undeployment, so, setting EVICTION_TIMER to daemon doesn't help me. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Updated: (POOL-102) Thread waiting forever for borrowObject() cannot be interrupted
[ https://issues.apache.org/jira/browse/POOL-102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz updated POOL-102: - Fix Version/s: 2.0 Affects Version/s: 1.1 1.2 Thread waiting forever for borrowObject() cannot be interrupted --- Key: POOL-102 URL: https://issues.apache.org/jira/browse/POOL-102 Project: Commons Pool Issue Type: Bug Affects Versions: 1.1, 1.2, 1.3 Reporter: John Sumsion Priority: Minor Fix For: 2.0 In the following GenericObjectPool snippet inside borrowObject(), InterruptedException is caught and ignored. case WHEN_EXHAUSTED_BLOCK: try { if(_maxWait = 0) { wait(); } else { wait(_maxWait); } } catch(InterruptedException e) { // ignored } There are two problems here: 1) a thread waiting forever to get an object out of the pool will NEVER terminate, even if interrupted 2) even if you put a throw e in, it will still be wrong because the thread's interrupted status is not preserved This will cause cancellation problems for threads that are inside borrowObject() that want to terminate early ONLY if they are interrupted. For example, if a borrow-and-wait-forever was running on an pooled executor thread in Java 1.5 and the executor service tried to cancel a task and that task had early-termination logic in it that checked interrupted status to terminate early, the task would never be cancelled. For us, this is minor because we are on a Tomcat request thread that has to wait for this resource to continue, but for others that have pools of stuff that are being used by time-bound tasks, it's inconvenient to write code that waits for what ends up being arbitrary time periods for a wait. It would be easier to just say wait forever and allow interruption by someone else who is watching me. 
Suggestion: make the code read like this: case WHEN_EXHAUSTED_BLOCK: try { if(_maxWait = 0) { wait(); } else { wait(_maxWait); } } catch(InterruptedException e) { Thread.currentThread().interrupt(); throw e; } -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Resolved: (POOL-102) Thread waiting forever for borrowObject() cannot be interrupted
[ https://issues.apache.org/jira/browse/POOL-102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz resolved POOL-102. -- Resolution: Fixed Patch applied. Thanks! Thread waiting forever for borrowObject() cannot be interrupted --- Key: POOL-102 URL: https://issues.apache.org/jira/browse/POOL-102 Project: Commons Pool Issue Type: Bug Affects Versions: 1.1, 1.2, 1.3 Reporter: John Sumsion Priority: Minor Fix For: 2.0 In the following GenericObjectPool snippet inside borrowObject(), InterruptedException is caught and ignored. case WHEN_EXHAUSTED_BLOCK: try { if(_maxWait = 0) { wait(); } else { wait(_maxWait); } } catch(InterruptedException e) { // ignored } There are two problems here: 1) a thread waiting forever to get an object out of the pool will NEVER terminate, even if interrupted 2) even if you put a throw e in, it will still be wrong because the thread's interrupted status is not preserved This will cause cancellation problems for threads that are inside borrowObject() that want to terminate early ONLY if they are interrupted. For example, if a borrow-and-wait-forever was running on an pooled executor thread in Java 1.5 and the executor service tried to cancel a task and that task had early-termination logic in it that checked interrupted status to terminate early, the task would never be cancelled. For us, this is minor because we are on a Tomcat request thread that has to wait for this resource to continue, but for others that have pools of stuff that are being used by time-bound tasks, it's inconvenient to write code that waits for what ends up being arbitrary time periods for a wait. It would be easier to just say wait forever and allow interruption by someone else who is watching me. 
Suggestion: make the code read like this: case WHEN_EXHAUSTED_BLOCK: try { if(_maxWait = 0) { wait(); } else { wait(_maxWait); } } catch(InterruptedException e) { Thread.currentThread().interrupt(); throw e; } -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Updated: (POOL-97) EVICTION_TIMER is never cancelled.
[ https://issues.apache.org/jira/browse/POOL-97?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz updated POOL-97: Fix Version/s: 2.0 EVICTION_TIMER is never cancelled. -- Key: POOL-97 URL: https://issues.apache.org/jira/browse/POOL-97 Project: Commons Pool Issue Type: Bug Affects Versions: 1.3 Reporter: Devendra Patil Fix For: 2.0 The static EVICTION_TIMER (java.util.Timer) used in GenericObjectPool is never cancelled (even after closing the pool). The GenericObjectPool.close() method just cancels the _evictor (TimerTask). I agree this behaviour is ideal if EVICTION_TIMER is to be used across multiple pools. But, In my case, the resources (i.e. jars) are dynamically deployed and undeployed on remote grid-servers. If EVICTION_TIMER thread doesn't stop, the grid-servers fails to undeploy (i.e. delete) the jars. The grid-server doesn't restart during resource deployment/undeployment, so, setting EVICTION_TIMER to daemon doesn't help me. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Resolved: (POOL-95) GenericObjectPool constructor with GenericObjectPool.Config ignores softMinEvictableIdleTimeMillis
[ https://issues.apache.org/jira/browse/POOL-95?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz resolved POOL-95. - Resolution: Fixed Fix Version/s: 2.0 Fix has been applied and will be included in 2.0 GenericObjectPool constructor with GenericObjectPool.Config ignores softMinEvictableIdleTimeMillis -- Key: POOL-95 URL: https://issues.apache.org/jira/browse/POOL-95 Project: Commons Pool Issue Type: Bug Affects Versions: 1.3, 2.0 Reporter: Christoph Grothaus Priority: Minor Fix For: 2.0 Attachments: GenericObjectPool.patch The GenericObjectPool(PoolableObjectFactory factory, GenericObjectPool.Config config) constructor ignores the setting of softMinEvictableIdleTimeMillis that is made in config. Reason: The abovementioned constructor calls the wrong constructor GenericObjectPool(PoolableObjectFactory factory, int maxActive, byte whenExhaustedAction, long maxWait, int maxIdle, int minIdle, boolean testOnBorrow, boolean testOnReturn, long timeBetweenEvictionRunsMillis, int numTestsPerEvictionRun, long minEvictableIdleTimeMillis, boolean testWhileIdle) instead of the correct one GenericObjectPool(PoolableObjectFactory factory, int maxActive, byte whenExhaustedAction, long maxWait, int maxIdle, int minIdle, boolean testOnBorrow, boolean testOnReturn, long timeBetweenEvictionRunsMillis, int numTestsPerEvictionRun, long minEvictableIdleTimeMillis, boolean testWhileIdle, long softMinEvictableIdleTimeMillis) -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
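The bug pattern here is generic constructor-delegation: a convenience constructor that forwards to a narrower telescoping constructor silently drops a field. A miniature with hypothetical names (not the real GenericObjectPool signatures):

```java
// Miniature of the POOL-95 pattern: the config-taking constructor must
// delegate to the WIDEST telescoping constructor, or a config field
// (here softMinEvictableIdleTimeMillis) silently falls back to a default.
class MiniConfig {
    long minEvictableIdleTimeMillis = 1800000L;
    long softMinEvictableIdleTimeMillis = -1L;
}

class MiniPool {
    final long minEvictableIdleTimeMillis;
    final long softMinEvictableIdleTimeMillis;

    MiniPool(long minEvictable) {
        this(minEvictable, -1L); // the narrower constructor applies the default
    }

    MiniPool(long minEvictable, long softMinEvictable) {
        this.minEvictableIdleTimeMillis = minEvictable;
        this.softMinEvictableIdleTimeMillis = softMinEvictable;
    }

    // The fix: forward every config field, not just most of them.
    MiniPool(MiniConfig c) {
        this(c.minEvictableIdleTimeMillis, c.softMinEvictableIdleTimeMillis);
    }
}
```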
Re: [dbcp] [pool] Roadmap ideas
On 6/21/07, Henri Yandell [EMAIL PROTECTED] wrote: On 6/20/07, Phil Steitz [EMAIL PROTECTED] wrote: Next releases: [dbcp] - 1.3 close as many of the 1.3-marked bugs as possible without the new pool impl and add instrumentation using JDK logging, therefore increasing required JDK level to 1.4. +1. Instrumentation is strongly needed. Agreed. I have started this and will commit what I have to trunks. Resolution of some issues involving close behavior may have to be deferred to rework of pool-dbcp connection (move to CompositePools). Continue dependency on [pool]'s GOP in this release. More aggressive bug fixing, performance improvement - more testing, public beta required. Need to talk about a strategy for that. It'd be very nice to get a test suite, separate from the unit tests, that we can point at an undefined database and churn through. It could do performance testing as well as veracity testing. I have some of this and will commit it in crude state to start. [pool] - push out a 1.3.1 including fixes applied since 1.3 and if possible, fixes for POOL-97, POOL-93, with dbcp 1.3 depending on this. The idea here is the 1.3.x branches of [pool] and [dbcp] continue to support existing clients with full backward compatibility at JDK 1.4 level, providing bug fixes but no new functionality or APIs. My general thoughts on [pool] are 'whatever [dbcp] needs it to do'. After looking at this some more and the fixes that have been committed to the pool 2.0 trunk, I am now seeing Sandy's original plan as better - get a pool 2.0 out directly instead of trying to push out a 1.3.1. Some of the fixes already committed would force 1.4 at least and the new stuff in compositepool should be released. So unless I hear screams, I will continue down the pool 2.0 path, getting a release out ASAP that will work with dbcp 1.2.x. Sorry to change my mind on this. 
2.0's: (Work could begin now on branches, concurrently with 1.x releases above) [dbcp]: 2.0 move to CompositePool backing and add JDBC 4 support, increasing JDK level to 1.5 Now agree with Sandy's original plan to leave this at 1.4 and move to 1.5 in 3.0. If 1.x-incompatible changes are necessary (not obvious at this point that they are), rename affected packages dbcp2. [pool]: 2.0 release compositepool package, resolve open pool bugs. JDK level upped to 1.5. Investigate use of JDK concurrency package to improve performance and/or resolve some open pool issues. I'd have to find time to help out with the minor releases before I probably have huge ideas here. All help appreciated. Thanks! Phil - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Commented: (POOL-98) Make GenericObjectPool better extensible
[ https://issues.apache.org/jira/browse/POOL-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12506784 ] Phil Steitz commented on POOL-98: - OK, I understand the use case, but with the 1.3 pool impl, I don't think what you want can be done. Fortunately or unfortunately, GenericObjectPool is actually an idle object pool + accounting - i.e., the object references that it maintains persistently in its _pool linked list are only the *idle* instances waiting to be checked out. It does not maintain references to the checked out instances (only counts of these), so there is no way to walk the pool of all idle plus active objects. To support this, you need something like the (now deprecated) AbandonedObjectPool that was added to [dbcp] to keep track of objects checked out of the pool. The compositepool package (at this writing not yet released) also includes support for tracking objects borrowed from the pool. The reason that I don't like allowing subclasses to depend on _pool is that it then becomes part of the public API, so can't be changed or deleted later. Make GenericObjectPool better extensible Key: POOL-98 URL: https://issues.apache.org/jira/browse/POOL-98 Project: Commons Pool Issue Type: Improvement Affects Versions: 1.3 Reporter: Henning Schmiedehausen Priority: Minor The current implementation of GenericObjectPool encapsulates the _pool object and there is no way to get it directly, which makes some things like JMX pool monitoring a bit awkward. Would it be possible to either make _pool protected or add a method protected Collection getInternalPool() { return _pool; } or something like that to the GenericObjectPool implementation (and probably others, but that is the one that bites me most... :-) ) This would make extending the GenericObjectPool much easier. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. 
- To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
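Phil's point (GenericObjectPool only holds references to idle instances) means walking all pooled objects requires borrowed-object tracking layered on top. A hypothetical decorator sketch, not any released pool API, written against a modern JDK for brevity:

```java
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;
import java.util.function.Supplier;

// Hypothetical decorator showing AbandonedObjectPool-style tracking:
// keep identity references to checked-out instances so that both the
// active objects and a count are available, unlike GenericObjectPool,
// which only keeps the count.
class TrackingPool<T> {
    private final Supplier<T> factory;
    private final Set<T> active =
            Collections.newSetFromMap(new IdentityHashMap<T, Boolean>());

    TrackingPool(Supplier<T> factory) { this.factory = factory; }

    synchronized T borrow() {
        T obj = factory.get(); // stand-in for the underlying pool's borrow
        active.add(obj);
        return obj;
    }

    synchronized void giveBack(T obj) {
        active.remove(obj); // would also return obj to the underlying pool
    }

    synchronized int numActive() { return active.size(); }
}
```

An identity set (rather than equals-based) is deliberate: two distinct pooled objects may compare equal, but the pool must track each physical instance.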
Re: [VOTE] Commons Modeler 2.0.1 RC1
+1 on release notes, release contents, rat, META-INF, maven build, but have to -1 on bits-to-mirrors, until I can verify the hashes: ERROR: Checksum stored in commons-modeler-2.0.1-javadoc.jar.md5 does not match commons-modeler-2.0.1-javadoc.jar Correct hash: a5381f035c7076bffe8df548768449e2 commons-modeler-2.0.1-javadoc.jar File contents: 5f2e5e2aadc3445cbc6a7d754bf6cc49 ERROR: Checksum stored in commons-modeler-2.0.1-sources.jar.md5 does not match commons-modeler-2.0.1-sources.jar Correct hash: fa1531a9178e625d77f5209a59056bf2 commons-modeler-2.0.1-sources.jar File contents: 5f2e5e2aadc3445cbc6a7d754bf6cc49 The sigs (including for these files) and other hashes check fine for me. I don't know what is going on here, since I get a similar failure when I execute maven dist on the unpacked sources and then verify the hashes of the generated files - the same two files give different hashes when run through either md5sum or openssl md5 on Linux Fedora Core 4. I use verify_sigs.sh from /committers/tools/releases to verify the sigs and hashes; but I also checked both md5sum and openssl md5 from the command line. Both gave the Correct hash values above. Notice that the two hashes in the .md5 files are the *same* for the -sources and -javadoc jars, which have different contents. To make sure this was not a download problem on my end, I downloaded twice and then just looked at the .md5 files directly at http://people.apache.org/~niallp/modeler-2.0.1-rc1/. You can see the stored hashes are the same. So unless I am going crazy here, m1 is doing something funny. Very odd, I just did maven dist again, watched it create the hashes with the right names and put the same hash values into the two .md5. Bizarre... I did not verify the Ant build - too painful - but assume you have checked that ;-) Phil - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [VOTE] Commons Modeler 2.0.1 RC1
I am not 100% sure that this is correct Jelly ant lib syntax, but I think I now see the problem. In maven.xml, you have

<!-- create checksum for sources jar -->
<ant:checksum file="${maven.dist.dir}/${maven.final.name}-sources.jar" property="jar.md5"/>
<ant:echo message="${jar.md5} *${maven.final.name}-sources.jar" file="${maven.dist.dir}/${maven.final.name}-sources.jar.md5"/>
<!-- create checksum for javadoc jar -->
<ant:checksum file="${maven.dist.dir}/${maven.final.name}-javadoc.jar" property="jar.md5"/>
<ant:echo message="${jar.md5} *${maven.final.name}-javadoc.jar" file="${maven.dist.dir}/${maven.final.name}-javadoc.jar.md5"/>

The jar.md5 property is somehow being overloaded. The following works4me:

<!-- create checksum for sources jar -->
<ant:checksum file="${maven.dist.dir}/${maven.final.name}-sources.jar" property="sources.jar.md5"/>
<ant:echo message="${sources.jar.md5} *${maven.final.name}-sources.jar" file="${maven.dist.dir}/${maven.final.name}-sources.jar.md5"/>
<!-- create checksum for javadoc jar -->
<ant:checksum file="${maven.dist.dir}/${maven.final.name}-javadoc.jar" property="javadoc.jar.md5"/>
<ant:echo message="${javadoc.jar.md5} *${maven.final.name}-javadoc.jar" file="${maven.dist.dir}/${maven.final.name}-javadoc.jar.md5"/>

Phil - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
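A likely explanation for the "overloading" (my assumption, based on Ant's property semantics rather than anything in these build logs): Ant properties are write-once, so the second checksum task writing into jar.md5 is a no-op and both echo tasks emit the first value, which is why the two .md5 files ended up identical. Write-once semantics in miniature:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of Ant-style write-once properties: the first value set for a
// key wins and later sets are silently ignored - which is exactly how
// two different jars can end up echoed with the same hash.
class WriteOnceProps {
    private final Map<String, String> props = new HashMap<String, String>();

    void set(String key, String value) {
        if (!props.containsKey(key)) { // first write wins; later writes are no-ops
            props.put(key, value);
        }
    }

    String get(String key) { return props.get(key); }
}
```

Under this model, renaming the properties to sources.jar.md5 and javadoc.jar.md5 (as in the fix above) sidesteps the collision entirely.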
Re: [VOTE] Invite Rahul Akolkar to join the Apache Commons PMC
+1 to no escape! Phil On 6/21/07, Niall Pemberton [EMAIL PROTECTED] wrote: Rahul Akolkar is an existing Jakarta PMC member and Commons Committer. He was against the proposal for Commons to become a TLP. Since that vote passed and the Apache Board has now passed the resolution for Commons to become a TLP I would like to invite Rahul to join the new Apache Commons PMC. It would be a tragedy IMO if we lose valuable members of the Commons Community just because they were originally against the TLP proposal. [ ] +1, don't let Rahul get away - lets try to get him to join the new PMC [ ] -1, no leave him out Niall P.S. Anyone else in the same situation, then I suggest we do the same - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED] - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [VOTE] Invite Simon Kitching to join the Apache Commons PMC
+1! Phil On 6/21/07, Niall Pemberton [EMAIL PROTECTED] wrote: Simon Kitching is an existing Jakarta PMC member and Commons Committer. He was against the proposal for Commons to become a TLP. Since that vote passed and the Apache Board has now passed the resolution for Commons to become a TLP I would like to invite Simon to join the new Apache Commons PMC. It would be a tragedy IMO if we lose valuable members of the Commons Community just because they were originally against the TLP proposal. [ ] +1, don't let Simon get away - lets try to get him to join the new PMC [ ] -1, no leave him out Niall P.S. Anyone else in the same situation, then I suggest we do the same - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED] - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Reopened: (POOL-86) GenericKeyedObjectPool retaining too many idle objects
[ https://issues.apache.org/jira/browse/POOL-86?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz reopened POOL-86: - Assignee: Phil Steitz (was: Sandy McArthur) I agree with Rob that a better solution should be provided - either make GenericObjectPool's LRU/MRU behavior configurable or provide an alternative MRU-based impl. Robust alternatives are available in the compositepool package, so one resolution is to close this on release of pool 2.0, but I would like to consider addressing this in the 1.x branch as well. GenericKeyedObjectPool retaining too many idle objects -- Key: POOL-86 URL: https://issues.apache.org/jira/browse/POOL-86 Project: Commons Pool Issue Type: Bug Affects Versions: 1.3 Reporter: Mike Martin Assignee: Phil Steitz Fix For: 2.0 Attachments: pool-86.patch, pool-86.withtest.patch There are two somewhat related problems in GenericKeyedObjectPool that cause many more idle objects to be retained than should be, for much longer than they should be. Firstly, borrowObject() is returning the LRU object rather than the MRU object. That minimizes rather than maximizes object reuse and tends to refresh all the idle objects, preventing them from becoming evictable. The idle LinkedList is being maintained with: borrowObject: list.removeFirst() returnObject: list.addLast() These should either both be ...First() or both ...Last() so the list maintains a newer-to-older, or vice-versa, ordering. The code in evict() works from the end of the list which indicates newer-to-older might have been originally intended. Secondly, evict() itself has a couple of problems, both of which only show up when many keys are in play: 1. Once it processes a key it doesn't advance to the next key. 2. _evictLastIndex is not working as documented (Position in the _pool where the _evictor last stopped). Instead it's the position where the last scan started, and becomes the position at which it attempts to start scanning *in the next pool*. 
That just causes objects eligible for eviction to sometimes be skipped entirely. Here's a patch fixing both problems: GenericKeyedObjectPool.java 990c990 pool.addLast(new ObjectTimestampPair(obj)); --- pool.addFirst(new ObjectTimestampPair(obj)); 1094,1102c1094,1095 } // if we don't have a keyed object pool iterator if (objIter == null) { final LinkedList list = (LinkedList)_poolMap.get(key); if (_evictLastIndex < 0 || _evictLastIndex > list.size()) { _evictLastIndex = list.size(); } objIter = list.listIterator(_evictLastIndex); --- LinkedList list = (LinkedList)_poolMap.get(key); objIter = list.listIterator(list.size()); 1154,1155c1147 _evictLastIndex = -1; objIter = null; --- key = null; 1547,1551d1538 /** * Position in the _pool where the _evictor last stopped. */ private int _evictLastIndex = -1; I have a local unit test for this but it depends on some other code I can't donate. It works like this: 1. Fill the pool with _maxTotal objects using many different keys. 2. Select X as a small number, e.g. 2. 3. Compute: maxEvictionRunsNeeded = (maxTotal - X) / numTestsPerEvictionRun + 2; maxEvictionTime = minEvictableIdleTimeMillis + maxEvictionRunsNeeded * timeBetweenEvictionRunsMillis; 4. Enter a loop: a. Borrow X objects. b. Exit if _totalIdle <= 0 c. Return the X objects. Fail if loop doesn't exit within maxEvictionTime. Mike -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
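Mike's ordering invariant in miniature (illustrative only, not the patched GenericKeyedObjectPool): borrow and return must operate on the same end of the idle list, keeping it newest-to-oldest, so the most recently used object is reused and older entries age out and become evictable.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// MRU idle list: removeFirst/addFirst pair on the SAME end, so the list
// stays newest-to-oldest and eviction can scan stale entries at the tail.
class IdleList<T> {
    private final Deque<T> idle = new ArrayDeque<T>();

    T borrow()    { return idle.removeFirst(); } // reuse the newest (MRU)
    void ret(T t) { idle.addFirst(t); }          // returns go to the same end
    T oldest()    { return idle.peekLast(); }    // eviction scans the old end
}
```

The reported bug was the mixed pair (removeFirst with addLast), which turns this into LRU reuse and keeps refreshing every idle object.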
Re: [VOTE] Commons Modeler 2.0.1 RC2
+1 - looks good, including hashes now. Phil - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[pool] 2.0 release WAS: [jira] Commented: (POOL-98) Make GenericObjectPool better extensible
Leaving the ticket because this is really general discussion. snip. ok, I can understand this. Any time frame on compositepool / what is needed to get the code release-ready? As with all things apache, that depends on available volunteer cycles. I posted a straw man roadmap for both [pool] and [dbcp] earlier this week (http://marc.info/?l=jakarta-commons-devm=118237533115289w=4). Comments are welcome. There are some bugs open against the GenericObjectPool impl that I think we should fix in a pool 1.3.1 or 1.4 that could in theory be worked (and maintained) concurrently with pool 2.0. I am willing to do the grunt RM work if others will step up to contribute and most of all review and test. The compositepool impl is in good shape but needs review and testing. There are also some things that could be added or improved if we up the JDK requirement to 1.5 so we can use the concurrent package. Another reason to patch 1.3 is to address things like POOL-97, POOL-93, POOL-102, POOL-86 for direct users of GenericObjectPool and to provide a drop-in fix for tomcat and others who use pool with dbcp while we work on moving dbcp to use pool 2.0. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Commented: (DBCP-216) Improvement of error recovery in KeyedCPDSConnectionFactory
[ https://issues.apache.org/jira/browse/DBCP-216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12506620 ] Phil Steitz commented on DBCP-216: -- Patch looks correct and appropriate to me, subject to the comments below. Similar changes should probably be made to CPDSConnectionFactory. Regarding 2., with the current backing pool (o.a.c.pool.GenericKeyedObjectPool), the destroy - invalidate change (2.) will only work for *active* (i.e., checked out) connections, and in this case it is necessary - not to remove the connection from the pool, but to maintain the integrity of the active count. Unfortunately, the contract of the pool's invalidate method only applies to objects that have been borrowed from the pool, so if the (exception-generating) connectionClosed event originates from an idle connection, this will not work. This should not happen in general, though, since these events should come from handles of checked out connections. Test cases illustrating the problem in this ticket should be added before committing the patch. Improvement of error recovery in KeyedCPDSConnectionFactory --- Key: DBCP-216 URL: https://issues.apache.org/jira/browse/DBCP-216 Project: Commons Dbcp Issue Type: Improvement Affects Versions: 1.2.2 Environment: Windows XP, Java 1.5.0_06-b05, Sybase ASE 12.5.4, jConnect 6.0.5 EBF 13862, Commons Pool 1.3 Reporter: Marcos Sanz Fix For: 1.3 Attachments: KeyedCPDSConnectionFactory.java.diff Attached you'll find a patch that improves the recovery of the class in different error situations. 1. The addition of removeConnectionEventListener() in destroyObject() ensures that the listener is removed, which is not guaranteed to have happened upon call of destroyObject(). We are for sure not interested any more in events, since we are about to destroy. 2. The same addition is made to connectionClosed(). Additionally, we have substituted there the call to destroyObject() with a call to pool.invalidateObject(). 
This is necessary because otherwise the object is destroyed but not removed from the pool. 3. The same substitution is made in connectionErrorOccurred(), otherwise the object might remain in the pool. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Updated: (DBCP-97) setAutoCommit(true) when returning connection to the pool
[ https://issues.apache.org/jira/browse/DBCP-97?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz updated DBCP-97: Fix Version/s: 1.3 setAutoCommit(true) when returning connection to the pool - Key: DBCP-97 URL: https://issues.apache.org/jira/browse/DBCP-97 Project: Commons Dbcp Issue Type: Bug Affects Versions: Nightly Builds Environment: Operating System: All Platform: All Reporter: Dirk Verbeeck Fix For: 1.3 From the Struts user list: [OT] RE: Stackoverflow after DB inactivity (MySQL reconnect problem) http://www.mail-archive.com/[EMAIL PROTECTED]/msg70196.html Giving a hint to the database driver that you don't need long running transactions makes sense. setAutoCommit(true) should be added to PoolableConnectionFactory.passivateObject -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Updated: (DBCP-225) getConnection / borrowObject fails with NullPointerException
[ https://issues.apache.org/jira/browse/DBCP-225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz updated DBCP-225: - Fix Version/s: 1.3 getConnection / borrowObject fails with NullPointerException Key: DBCP-225 URL: https://issues.apache.org/jira/browse/DBCP-225 Project: Commons Dbcp Issue Type: Bug Affects Versions: 1.2.1, 1.2.2 Environment: Solaris 10, Oracle 10g RAC Reporter: Alexei Samonov Fix For: 1.3 We use dbcp PoolingDataSource in Solaris/Oracle 10g RAC environment and our getConnection calls fail sporadically with the following stack trace (1.2.1) Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool exhausted at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:103) at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:540) ... more Caused by: java.util.NoSuchElementException: Could not create a validated object, cause: null at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:806) at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:95) ... 24 more This is definitely not a pool exhausted situation, it is just being reported as pool exhausted. Since NoSuchElementException that you use does not allow cause, only Exception message (null) is being printed. With some debugging I was able to recover the root exception: java.lang.NullPointerException at org.apache.commons.dbcp.DelegatingConnection.setAutoCommit(DelegatingConnection.java:268) at org.apache.commons.dbcp.PoolableConnectionFactory.activateObject(PoolableConnectionFactory.java:368) at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:786) at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:95) at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:540) ... 
Looks like it is trying to borrow/validate a DelegatingConnection whose delegate is null. Hoping to resolve the issue, we upgraded to 1.2.2 but it did not help. Here is an exception stack trace from 1.2.2: org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Could not create a validated object, cause: null at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:104) at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:880) ... more Caused by: java.util.NoSuchElementException: Could not create a validated object, cause: null at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:871) at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:96) ... 28 more We use the following dbcp properties: autoCommit=false readOnly=false maxActive=200 maxIdle=20 minIdle=10 minEvictableIdleIime=30 maxWait=200 accessToUnderlyingConnectionAllowed=true validationQuery=SELECT 1 FROM DUAL ConnectionCachingEnabled=true FastConnectionFailoverEnabled=true I could not find the existing reported dbcp/object pool bug but I see similar reports on the web, for example http://forum.java.sun.com/thread.jspa?threadID=713200&messageID=4124915 -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Commented: (DBCP-212) PoolingDataSource closes physical connections
[ https://issues.apache.org/jira/browse/DBCP-212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12506646 ] Phil Steitz commented on DBCP-212: -- PoolableConnectionFactory.makeObject() has been synchronized since the initial commit. Synchronization of validateObject was removed in r132110 in response to BZ 25096. Can anyone else see a reason that this synchronization is necessary? PoolingDataSource closes physical connections - Key: DBCP-212 URL: https://issues.apache.org/jira/browse/DBCP-212 Project: Commons Dbcp Issue Type: Bug Affects Versions: 1.2.2 Environment: Windows XP, Java 1.5.0_06-b05, Sybase ASE 12.5.4, jConnect 6.0.5 EBF 13862, Commons Pool 1.3 Reporter: Marcos Sanz Fix For: 1.3 Attachments: DBCPtester.java, DBCPtester.java, output.txt By executing the attached program and monitoring the process id of the physical connections at the database server, it is possible to demonstrate that the connections are being actually physically closed and reopened by the application at a very high rate. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Commented: (POOL-98) Make GenericObjectPool better extensible
[ https://issues.apache.org/jira/browse/POOL-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12506659 ] Phil Steitz commented on POOL-98: - I don't think we want to expose the _pool, since this is actually a linked list of idle objects, so exposing it is not that useful and would really tie GOP to the current internal implementation if we allowed subclasses to depend on it. Other than the current counters numActive, numIdle, and the pool properties, what would be most useful for management? Make GenericObjectPool better extensible Key: POOL-98 URL: https://issues.apache.org/jira/browse/POOL-98 Project: Commons Pool Issue Type: Improvement Affects Versions: 1.3 Reporter: Henning Schmiedehausen Priority: Minor The current implementation of GenericObjectPool encapsulates the _pool object and there is no way to get it directly, which makes some things like JMX pool monitoring a bit awkward. Would it be possible to either make _pool protected or add a method protected Collection getInternalPool() { return _pool; } or something like that to the GenericObjectPool implementation (and probably others, but that is the one that bites me most... :-) ) This would make extending the GenericObjectPool much easier. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[dbcp] [pool] Roadmap ideas
There are still quite a few bugs open against [dbcp] and [pool]. We also have an unreleased, improved pool impl in the compositepool package. I would like to toss out some ideas for discussion about where to go with these two components and gauge interest in helping out. Recent releases: [dbcp] - The 1.2.2 release was a conservative bug fix release - minimal change while addressing some bugs and upgrading to [pool] 1.3. [pool] - 1.3 less conservative vis a vis 1.2, several bug fixes, including some added synchronization to fix thread safety issues. Next releases: [dbcp] - 1.3 close as many of the 1.3-marked bugs as possible without the new pool impl and add instrumentation using JDK logging, therefore increasing required JDK level to 1.4. Resolution of some issues involving close behavior may have to be deferred to rework of pool-dbcp connection (move to CompositePools). Continue dependency on [pool]'s GOP in this release. More aggressive bug fixing, performance improvement - more testing, public beta required. Need to talk about a strategy for that. [pool] - push out a 1.3.1 including fixes applied since 1.3 and if possible, fixes for POOL-97, POOL-93, with dbcp 1.3 depending on this. The idea here is the 1.3.x branches of [pool] and [dbcp] continue to support existing clients with full backward compatibility at JDK 1.4 level, providing bug fixes but no new functionality or APIs. 2.0's: (Work could begin now on branches, concurrently with 1.x releases above) [dbcp]: 2.0 move to CompositePool backing and add JDBC 4 support, increasing JDK level to 1.5 and removing currently deprecated classes. If 1.x-incompatible changes are necessary (not obvious at this point that they are), rename affected packages dbcp2. [pool]: 2.0 release compositepool package, resolve open pool bugs. JDK level upped to 1.5. Investigate use of JDK concurrency package to improve performance and/or resolve some open pool issues. Comments, suggestions, *volunteers*? Thanks! 
Phil - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Resolved: (MATH-166) Special functions not very accurate
[ https://issues.apache.org/jira/browse/MATH-166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz resolved MATH-166. -- Resolution: Fixed Patch applied. Thanks Special functions not very accurate --- Key: MATH-166 URL: https://issues.apache.org/jira/browse/MATH-166 Project: Commons Math Issue Type: Bug Affects Versions: 1.1 Reporter: Lukas Theussl Priority: Minor Fix For: 1.2 Attachments: MATH-166.patch The Gamma and Beta functions return values in double precision but the default epsilon is set to 10e-9. I think that the default should be set to the highest possible accuracy, as this is what I'd expect to be returned by a double precision routine. Note that the erf function already uses a call to Gamma.regularizedGammaP with an epsilon of 1.0e-15. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
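The accuracy point generalizes to any series evaluated to a convergence threshold: the epsilon caps attainable accuracy regardless of the double-precision return type. An illustrative Taylor series for e (my own example, not the commons-math Gamma/Beta internals):

```java
// Sum 1/n! until the next term falls below eps. A loose eps (1e-9) stops
// the sum visibly short of e; a tight eps (1e-15) reaches roughly full
// double precision - the same effect the ticket describes.
class SeriesEps {
    static double expOne(double eps) {
        double sum = 1.0;
        double term = 1.0;
        for (int n = 1; term > eps; n++) {
            term /= n;   // term is now 1/n!
            sum += term;
        }
        return sum;
    }
}
```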
[jira] Resolved: (MATH-158) Arbitrary log
[ https://issues.apache.org/jira/browse/MATH-158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz resolved MATH-158. -- Resolution: Fixed Modified version of patch applied. Changes: made arguments doubles and used conventional names for base, argument. Also documented behavior for 0, negative arguments. Thanks! Arbitrary log - Key: MATH-158 URL: https://issues.apache.org/jira/browse/MATH-158 Project: Commons Math Issue Type: New Feature Reporter: Hasan Diwan Priority: Minor Fix For: 1.2 Attachments: commons-math.pat Patch adds the change-of-base property for a logarithm and a test to make sure it works. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
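The change-of-base identity behind the patch is log_base(x) = ln x / ln base. A standalone version (hypothetical class and signature, not the exact committed method):

```java
// Arbitrary-base logarithm via change of base. Per the applied changes,
// both arguments are doubles; zero and negative arguments follow
// Math.log (log of 0 is -Infinity, log of a negative number is NaN).
class Logs {
    static double log(double base, double x) {
        return Math.log(x) / Math.log(base);
    }
}
```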
Re: [RESULT] 3rd attempt: Release commons-io 1.3.2
On 6/19/07, Dion Gillard [EMAIL PROTECTED] wrote: I believe you're right. http://jakarta.apache.org/site/proposal.html#decisions/items/plan says ...Majority approval is required before the public release can be made. Yes, that is the policy, but I have never seen us move forward with a release with an unresolved -1 in commons. Could be this has happened, but not in the last 4 or so years. It is up to the RM, but with a -1 from a major contributor to the code base, I would personally not push out the release. FWIW, as mentioned on other threads, I agree with Stephen on the version number issue. Phil - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Reopened: (DBCP-102) [dbcp] setReadOnly setAutoCommit called too many times
[ https://issues.apache.org/jira/browse/DBCP-102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz reopened DBCP-102: -- [dbcp] setReadOnly setAutoCommit called too many times Key: DBCP-102 URL: https://issues.apache.org/jira/browse/DBCP-102 Project: Commons Dbcp Issue Type: Bug Affects Versions: 1.2 Environment: Operating System: other Platform: Sun Reporter: AC Fix For: 1.2.2 In order to gain some processor time for my application that uses Hibernate, I looked with optimizeIt at where it spends time. It seems that for a request on the database (Oracle 9) around 25% (!!?) is spent on getting the connection from the DBCP pool, and this is not only the first time! The methods that provoke this loss of time are connection.setReadOnly and connection.setAutoCommit, called inside the method PoolableConnectionFactory.activateObject. Looking at the stack, these calls translate to communication with the Oracle server. The obvious thing to do is to check if the read only and autocommit flags are already set to the expected values. (Of course, Oracle could've done this too, but I hope you'll have a faster response :) ) Thank you very much for your help. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Updated: (DBCP-102) [dbcp] setReadOnly setAutoCommit called too many times
[ https://issues.apache.org/jira/browse/DBCP-102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz updated DBCP-102: - Fix Version/s: (was: 1.2.2) 1.3 Setting fix level to 1.3, as the fix represents a behavior change for passivateObject vs. the 1.2 base. The fix in 1.2.2 retained (broken?) 1.2.1 behavior. [dbcp] setReadOnly setAutoCommit called too many times Key: DBCP-102 URL: https://issues.apache.org/jira/browse/DBCP-102 Project: Commons Dbcp Issue Type: Bug Affects Versions: 1.2 Environment: Operating System: other Platform: Sun Reporter: AC Fix For: 1.3 In order to gain some processor time for my application that uses Hibernate, I looked with OptimizeIt at where it spends time. It seems that for a request on the database (Oracle 9) around 25% (!!?) is spent on getting the connection from the DBCP pool, and this not only the first time! The methods that provoke this loss of time are connection.setReadOnly and connection.setAutoCommit, called inside the method PoolableConnectionFactory.activateObject. Looking at the stack, these calls translate to communication with the Oracle server. The obvious thing to do is to check if the read-only and autocommit flags are already set to the expected values. (Of course, Oracle could've done this too, but I hope you'll have a faster response :) ) Thank you very much for your help. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
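[Editor's note] The check AC proposes is straightforward to sketch. The class and method names below are hypothetical, not DBCP source (the real fix would live in PoolableConnectionFactory.activateObject); the point is simply to consult the driver-reported state before issuing a setter that may cost a server round-trip:

```java
import java.sql.Connection;
import java.sql.SQLException;

// Hypothetical sketch of the guard proposed in DBCP-102 -- not DBCP code.
public final class ConnectionActivator {

    private ConnectionActivator() {
    }

    public static void activate(Connection conn,
                                boolean defaultReadOnly,
                                boolean defaultAutoCommit) throws SQLException {
        // setReadOnly/setAutoCommit can translate into communication with the
        // server (as observed with the Oracle driver), so skip them when the
        // connection already reports the desired state.
        if (conn.isReadOnly() != defaultReadOnly) {
            conn.setReadOnly(defaultReadOnly);
        }
        if (conn.getAutoCommit() != defaultAutoCommit) {
            conn.setAutoCommit(defaultAutoCommit);
        }
    }
}
```

With a driver whose connection already reports the desired flags, activation then costs two cheap getter calls instead of two server round-trips.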
[jira] Reopened: (DBCP-97) setAutoCommit(true) when returning connection to the pool
[ https://issues.apache.org/jira/browse/DBCP-97?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz reopened DBCP-97: - Need to verify this fix is correct and necessary. setAutoCommit(true) when returning connection to the pool - Key: DBCP-97 URL: https://issues.apache.org/jira/browse/DBCP-97 Project: Commons Dbcp Issue Type: Bug Affects Versions: Nightly Builds Environment: Operating System: All Platform: All Reporter: Dirk Verbeeck From the Struts user list: [OT] RE: Stackoverflow after DB inactivity (MySQL reconnect problem) http://www.mail-archive.com/[EMAIL PROTECTED]/msg70196.html Giving a hint to the database driver that you don't need long running transactions makes sense. setAutoCommit(true) should be added to PoolableConnectionFactory.passivateObject -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
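[Editor's note] The passivation behavior under discussion can be sketched as follows (hypothetical names, not DBCP source; the real code is PoolableConnectionFactory.passivateObject): roll back any open transaction and restore autoCommit when a connection returns to the pool, giving the driver the hint that no long-running transaction is pending.

```java
import java.sql.Connection;
import java.sql.SQLException;

// Hypothetical sketch of the passivation step discussed in DBCP-97.
public final class ConnectionPassivator {

    private ConnectionPassivator() {
    }

    public static void passivate(Connection conn) throws SQLException {
        if (!conn.getAutoCommit() && !conn.isReadOnly()) {
            // Discard any uncommitted work left by the previous borrower.
            conn.rollback();
        }
        conn.clearWarnings();
        if (!conn.getAutoCommit()) {
            // The hint DBCP-97 asks for: no long-running transaction pending.
            conn.setAutoCommit(true);
        }
    }
}
```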
[nightly build] dbcp failed.
Failed build logs: http://vmbuild.apache.org/~commons/nightly/logs//20070616/dbcp.log - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [vote] releasing jci RC4 as 1.0
+1 Builds happily, sigs and hashes are good and everything else looks good to me. Thanks for rolling the source distro. Phil On 6/13/07, Torsten Curdt [EMAIL PROTECTED] wrote: As (more or less) requested I've also created a source and binary distributions http://people.apache.org/builds/jakarta-commons/jci/1.0-RC4/dists/ The website is live http://jakarta.apache.org/commons/jci/ And maven artifacts are here http://people.apache.org/builds/jakarta-commons/jci/1.0-RC4/org/ apache/commons/commons-jci/1.0/ http://people.apache.org/builds/jakarta-commons/jci/1.0-RC4/org/ apache/commons/commons-jci-core/1.0/ http://people.apache.org/builds/jakarta-commons/jci/1.0-RC4/org/ apache/commons/commons-jci-fam/1.0/ http://people.apache.org/builds/jakarta-commons/jci/1.0-RC4/org/ apache/commons/commons-jci-eclipse/1.0/ http://people.apache.org/builds/jakarta-commons/jci/1.0-RC4/org/ apache/commons/commons-jci-groovy/1.0/ http://people.apache.org/builds/jakarta-commons/jci/1.0-RC4/org/ apache/commons/commons-jci-janino/1.0/ http://people.apache.org/builds/jakarta-commons/jci/1.0-RC4/org/ apache/commons/commons-jci-javac/1.0/ http://people.apache.org/builds/jakarta-commons/jci/1.0-RC4/org/ apache/commons/commons-jci-rhino/1.0/ http://people.apache.org/builds/jakarta-commons/jci/1.0-RC4/org/ apache/commons/commons-jci-examples/1.0/ So please cast your votes to release commons jci RC4 as 1.0 cheers -- Torsten - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED] - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[all] Versioning and compatibility WAS Re: [VOTE] Release CLI 1.1
Sorry, but I have to agree with Niall on this - needs to be 2.0 with the incompatible changes - and I would like very much for us to agree once and for all on some versioning/deprecation rules. We seem to have lost the old versioning guidelines (unless I am senile, we used to have these on the commons or Jakarta pages somewhere). Apart from 0), which is a conservative but worth-considering-carefully PoV, the rules below are more or less what we have been adhering to over the last several years in commons, and I would like to propose that we standardize on them. If all are OK, I will gin up a versioning page. A very good one, which is pretty much a C version of what I am proposing below, is http://apr.apache.org/versioning.html.

0) Never break backward source or binary compatibility - i.e., when you need to, change the package name.

1) Rule 0 is going to have to fail sometimes for practical reasons. When it does, fall back to:
   a) no source, binary, or semantic incompatibilities or deprecations introduced in an x.y.z release
   b) no source or binary incompatibilities in an x.y release, but deprecations and semantic changes allowed
   c) no removals without prior deprecations, but these and other backward-incompatible changes allowed in x.0 releases.

This means that the [cli] release with the current changes would need to be 2.0. Phil - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
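[Editor's note] A concrete illustration of rules b) and c) above (the class here is made up, not from any commons component): an x.y release keeps the old entry point working but deprecates it, and only the next x.0 release may delete it.

```java
// Made-up example of the deprecate-in-x.y, remove-in-x.0 convention.
public class Tokenizer {

    /** Replacement API, introduced in a minor (x.y) release. */
    public String[] split(String s, char sep) {
        return s.split(java.util.regex.Pattern.quote(String.valueOf(sep)), -1);
    }

    /**
     * Legacy API: still source- and binary-compatible, but flagged so
     * clients can migrate before the next major (x.0) release removes it.
     *
     * @deprecated use {@link #split(String, char)}; to be removed in 2.0
     */
    @Deprecated
    public String[] tokenize(String s) {
        return split(s, ',');
    }
}
```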
[nightly build] email logging finder pipeline failed.
Failed build logs: http://vmbuild.apache.org/~commons/nightly/logs//20070613/email.log http://vmbuild.apache.org/~commons/nightly/logs//20070613/logging.log http://vmbuild.apache.org/~commons/nightly/logs//20070613/finder.log http://vmbuild.apache.org/~commons/nightly/logs//20070613/pipeline.log - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [nightly build] email logging finder pipeline failed.
Ignore this. All builds actually succeeded. An error in the script was made worse last night :-(. Should be fixed now. Phil - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[nightly build] beanutils logging failed.
Failed build logs: http://vmbuild.apache.org/~commons/nightly/logs//20070612/beanutils.log http://vmbuild.apache.org/~commons/nightly/logs//20070612/logging.log - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [nightly build] beanutils logging failed.
On 6/12/07, Dennis Lundberg [EMAIL PROTECTED] wrote: Dennis Lundberg wrote: Phil Steitz wrote: Failed build logs: http://vmbuild.apache.org/~commons/nightly/logs//20070612/beanutils.log http://vmbuild.apache.org/~commons/nightly/logs//20070612/logging.log Now I'm confused. From what I can tell by the log for logging, all the builds were successful. Might be something wrong with the nightly script. Will have a look. Found it. On line 280 we do this: if [ `ls target/commons-$component*.jar` ] # build succeeded With commons logging there is more than one such jar file, which makes for an unpredictable response from ls. We should probably do something along these lines instead: if [ -f target/commons-$component*.jar ] # build succeeded I'll let one of the unix gurus figure out the exact details, since I don't have an environment set up to try this properly. Ugh. IIRC there is a comment above that line saying that that ugly hack should be replaced by examining the return code from m1, m2 ;-) If Hen does not beat me to it, I will fix this. Thanks for chasing it down. Phil - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
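[Editor's note] One possible shape for that fix, sketched here rather than taken from the nightly script (only the target/commons-$component*.jar layout comes from the thread): count the matching jars instead of interpolating ls output into [ ], which misbehaves for both zero and multiple matches. As Phil notes, examining the maven return code would be better still.

```shell
#!/bin/sh
# Sketch of a safer "did the build produce a jar?" test for the nightly
# script. Counting matches works whether the glob hits zero, one, or many
# jars, unlike interpolating ls output into [ ] or using [ -f glob ].
component=logging
jar_count=$(ls target/commons-$component*.jar 2>/dev/null | wc -l)
if [ "$jar_count" -gt 0 ]; then
    echo "build succeeded"
else
    echo "build failed"
fi
```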
[nightly build] logging failed.
Failed build logs: http://vmbuild.apache.org/~commons/nightly/logs//20070611/logging.log - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [nightly build] logging failed.
On 6/11/07, Ben Speakmon [EMAIL PROTECTED] wrote: Phil actually had to fix a bug in commons-nightly before the maven 2 file would be honored, so after I asked about getting email working, logging started working too. On 6/11/07, Dennis Lundberg [EMAIL PROTECTED] wrote: I'm unsure why, but it seems that Commons Logging has been built using Maven 1 up until June 9. even though it was listed in the nightly_proper_maven2_list.txt file. When Ben added Email to the list of Maven 2 builds, it triggered Logging to be built with Maven 2 as well. Hopefully my patch to the nightly build script will fix the nightly M2 build of Logging. That is correct. The problem was that the script was svn-upping the wrong directory (ever since it was moved from commons-build to commons-nightly several months back) so it had not picked up changes to the nightly list files until I fixed it a couple of days ago. It will pick up changes now. Phil - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[jira] Created: (DBCP-226) DBCP should support configuration of queryTimeout for validation queries
DBCP should support configuration of queryTimeout for validation queries Key: DBCP-226 URL: https://issues.apache.org/jira/browse/DBCP-226 Project: Commons Dbcp Issue Type: Improvement Affects Versions: 1.2.2 Reporter: Phil Steitz Priority: Minor Fix For: 1.3 Validation queries used when testOnBorrow and/or testOnReturn are set do not currently support queryTimeouts. This should be configurable. See http://www.mail-archive.com/commons-dev@jakarta.apache.org/msg92457.html for background on this request made on the commons-dev mailing list. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
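[Editor's note] A possible shape for the improvement (class name and signature are hypothetical, not DBCP's API): apply the configured timeout to the validation statement via the standard JDBC Statement.setQueryTimeout, so a hung server fails the health check instead of blocking the borrowing thread indefinitely.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical sketch of DBCP-226 -- not DBCP's actual validation code.
public final class ConnectionValidator {

    private ConnectionValidator() {
    }

    public static boolean isValid(Connection conn, String validationQuery,
                                  int queryTimeoutSeconds) {
        try (Statement stmt = conn.createStatement()) {
            if (queryTimeoutSeconds > 0) {
                // Standard JDBC: the driver aborts the query once this many
                // seconds elapse, raising SQLException on the caller.
                stmt.setQueryTimeout(queryTimeoutSeconds);
            }
            stmt.execute(validationQuery);
            return true;
        } catch (SQLException e) {
            // Any failure -- including a timeout -- marks the connection bad.
            return false;
        }
    }
}
```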
[nightly build] logging failed.
Failed build logs: http://vmbuild.apache.org/~commons/nightly/logs//20070610/logging.log - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[all] What is a release? WAS: [vote] releasing jci RC3 as 1.0 ...maybe this time?
On 6/10/07, Torsten Curdt [EMAIL PROTECTED] wrote: So you are saying +1 for the assembly release ...but I don't get the "who needs what" part. What I meant was that I did not see it as a big user inconvenience to bundle all of the jars into a single release, since they are individually small. So, yes, I am +1 on putting together an assembly and making it available, along with the KEYS file, on the commons download page. I just fear no one will use it ...but anyway. Good question. Two groups that I can think of. First would be anyone who repackages and/or builds larger distributions from source including this component. I think some Linux distros and other repackagers use some of our components built from source. IIRC this has come up in the past. I know some other OSS projects also build from our released sources. For example, Tomcat repackages parts of [pool], [dbcp] and some others, building the repackaged classes from our released source distributions. In theory, they could do the same from svn tags, but there is some loss of control in this case (see below). A second user group for the full distros is IT shops where full buildable source and binaries are a requirement for OSS introduction. Forcing them to assemble their own source distros without hashes or sigs associated with the sources would not be a welcome prospect. See above ...I think subversion is our source distribution. I don't really see a point in providing a classic source distribution. But maybe that's too much change for now ;) Yes, too much for me at least. In theory, voting on a tag and pointing users there to get sources still could be viewed as a release, but that is a big change from current practice and inconvenient for users who prefer to build from release sources. It is a big change ...but who says that changes are bad? ;) Not me - change is good and it is good to talk about these things. 
This one is a little dicey though because a) I don't like the idea of not having signed sources as part of the distro and b) if not looked at the right way it sort of seems like we are moving away from open source and in violation of ASF release policy (from http://www.apache.org/dev/release.html): The Apache Software Foundation produces open source software. All releases are in the form of the source materials needed to make changes to the software being released. In some cases, binary/bytecode packages are also produced as a convenience to users that might not have the appropriate tools to build a compiled version of the source. In all such cases, the binary/bytecode package must have the same version number as the source release and may only add binary/bytecode files that are the result of compiling that version of the source code release. But seriously: be realistic. Those people building the releases from source will have subversion on their machine. And what can be simpler than a one-liner to check out the sources? Even downloading it from an apache mirror is more work. I think we should always distribute the source with our releases. I think the only problem that I am seeing is that tags are not immutable in svn. So in theory even a tag is not good enough but a release is really a revision number. I guess what we are really talking about here is what is a release? True. Especially with maven2 as the build system. I tried to raise that a couple of times already. We need to come up with proper release instructions for maven2 based projects. This is for sure. There we agree :-) - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: svn commit: r545184 - /jakarta/commons/proper/math/trunk/src/java/org/apache/commons/math/distribution/PoissonDistributionImpl.java
On 6/7/07, Martin van den Bemt [EMAIL PROTECTED] wrote: If the intention is to have a NullPointerException when null is passed, declare it and throw it specifically (that way you are in control of the exception). I think every undocumented NullPointerException is a bug. I would prefer throwing IllegalArgumentException to NPE in this case, since there is no null dereference. I would also consider making this a no-op (i.e., leave the NormalDistribution alone if null is passed to this method). I am fine leaving it as is, however. Phil - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
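[Editor's note] The argument-checking style Phil prefers takes only a few lines (sketch only; the class name mimics but is not PoissonDistributionImpl): fail fast with a documented IllegalArgumentException rather than letting an undocumented NullPointerException escape from a later dereference.

```java
// Sketch of the argument-checking style discussed above; not commons-math code.
public class PoissonDistributionSketch {

    private Object normalApproximation = new Object(); // stand-in for NormalDistribution

    /**
     * @throws IllegalArgumentException if {@code normal} is null -- a
     *         documented, deliberate failure instead of a latent NPE.
     */
    public void setNormalApproximation(Object normal) {
        if (normal == null) {
            throw new IllegalArgumentException(
                    "normal approximation distribution must not be null");
        }
        this.normalApproximation = normal;
    }
}
```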
Re: svn commit: r545192 - in /jakarta/commons/proper/math/trunk/src: java/org/apache/commons/math/distribution/ test/org/apache/commons/math/distribution/
public abstract class DistributionFactory {
@@ -59,16 +59,7 @@
      * @return a new factory.
      */
     public static DistributionFactory newInstance() {
-        DistributionFactory factory = null;
-        try {
-            DiscoverClass dc = new DiscoverClass();
-            factory = (DistributionFactory) dc.newInstance(
-                DistributionFactory.class,
-                "org.apache.commons.math.distribution.DistributionFactoryImpl");
-        } catch (Throwable t) {
-            return new DistributionFactoryImpl();
-        }
-        return factory;
+        return new DistributionFactoryImpl();
     }
 /**

This will break anyone who is actually using the commons-discovery method to provide a custom factory - i.e., upgrading to 1.2 will result in the custom config being ignored. If done everywhere else, the above change also allows us to eliminate the compile-time dependency on [discovery] and [logging]. The alternative is to leave the old code alone inside the now-deprecated class and remove it altogether in 2.0. I guess I am OK with this, since I doubt there are many users actually depending on this and we can doc the changed behavior. Other opinions? Phil - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]