+1

martin

Daniel Rall wrote:

Scott Eade <[EMAIL PROTECTED]> writes:



Shouldn't the TORQUE_3_0_BRANCH have had a TORQUE_3_0_1
tag added with the release of 3.0.1?

I guess there is little point if there is not likely to
be a 3.0.2 (i.e. efforts are now focused on 3.1).



What do people think about getting this deadlock fix into a 3.0.2? I ask because I have seen this bug prove fatal to an entire web application.


dlr 2003/06/19 17:41:18


Modified: src/java/org/apache/torque/manager AbstractBaseManager.java
MethodResultCache.java
Log:
Corrected deadly multi-CPU thread deadlock problem discovered by Ed
Korthof <[EMAIL PROTECTED]> and John McNally <[EMAIL PROTECTED]>. The
problem was due to emulation of synchronization using an int counter
(to improve performance by avoiding Java "synchronized" keyword).
Post-increment and decrement operators compile to three op codes (with
Sun's JDK 1.3.1 for Linux), unsafe on a multi-CPU box.
* src/java/org/apache/torque/manager/AbstractBaseManager.java
lockCache, inGet, cacheGet(), removeInstanceImpl(),
putInstanceImpl(): Removed use of lockCache and inGet instance
fields, replaced by consistent use of Java's "synchronized" keyword
(on the current instance, "this").
getMethodResultCache(), addCacheListenerImpl(), createSubsetList(),
readObject(): Added JavaDoc.
* src/java/org/apache/torque/manager/MethodResultCache.java
lockCache, getImpl(), putImpl(), get(): Removed use of the lockCache
instance field, replaced by consistent use of Java's "synchronized"
keyword (on the current instance, "this").
remove(): Added error messages to several method overloads.
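The replacement pattern the log describes, i.e. dropping the hand-rolled lock counters and synchronizing on the current instance, can be sketched roughly as below. This is a simplified illustration, not the actual Torque code; the class name and map layout are invented.

```java
// Sketch of the fix described in the commit log: guard cache access
// with Java's built-in "synchronized" keyword instead of emulating a
// lock with int counters. Names here are illustrative only.
import java.util.HashMap;
import java.util.Map;

class SketchCache {
    private final Map<String, Object> cache = new HashMap<>();

    // synchronized on the current instance ("this"), as the log states
    public synchronized Object cacheGet(String key) {
        return cache.get(key);
    }

    public synchronized void putInstanceImpl(String key, Object value) {
        cache.put(key, value);
    }

    public synchronized Object removeInstanceImpl(String key) {
        return cache.remove(key);
    }
}
```

The point of synchronizing every accessor on the same monitor is that the JVM guarantees mutual exclusion and memory visibility, which no combination of plain int flag fields can provide on a multi-CPU machine.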


Ed Korthof <[EMAIL PROTECTED]> supplied some test code:

Subject: test code demonstrating the lack of atomicity in increment/decrement
Date: Wed, 18 Jun 2003 19:16:53 -0700

This took a while to fail on a single-CPU box ... but it fails pretty
quickly on a multi-CPU box.

thanks --

Ed
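The attached test code itself is not reproduced in the archive. A minimal sketch of the same idea, that `++` on a plain int field is a non-atomic read-modify-write and can lose updates under concurrent access, might look like this (invented names, not Ed's actual test):

```java
// Sketch: several threads hammer an unsynchronized int counter.
// Because "count++" compiles to a read, an add, and a write,
// concurrent increments can interleave and lose updates,
// especially on a multi-CPU machine.
public class IncrementRace {
    static int count; // deliberately unsynchronized

    public static int run(int threads, int perThread) throws InterruptedException {
        count = 0;
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    count++; // not atomic: read, increment, write back
                }
            });
        }
        for (Thread t : ts) t.start();
        for (Thread t : ts) t.join();
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        int result = run(4, 100_000);
        // On a multi-CPU box this is usually less than 400000;
        // lost interleavings can only drop increments, never add them.
        System.out.println(result);
    }
}
```

A single thread always yields the exact total; with multiple threads the result can fall short, which is exactly the failure mode the commit removed by switching to `synchronized`.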







