[jira] Created: (OPENJPA-146) Entity enhancement fails while using EmbeddedId on a MappedSuperclass
Entity enhancement fails while using EmbeddedId on a MappedSuperclass
- Key: OPENJPA-146 URL: https://issues.apache.org/jira/browse/OPENJPA-146 Project: OpenJPA Issue Type: Bug Components: kernel Environment: openjpa 0.9.6 Reporter: Gokhan Ergul

Both buildtime and runtime class enhancement fail with the following error:

... 1339 TRACE [main] openjpa.Enhance - Enhancing type "class test.B".
Exception in thread "main" <0|false|0.9.6-incubating> org.apache.openjpa.util.GeneralException: null
    at org.apache.openjpa.enhance.PCEnhancer.run(PCEnhancer.java:350)
    at org.apache.openjpa.enhance.PCEnhancer.run(PCEnhancer.java:3711)
    at org.apache.openjpa.enhance.PCEnhancer.run(PCEnhancer.java:3661)
    at org.apache.openjpa.enhance.PCEnhancer.main(PCEnhancer.java:3633)
Caused by: java.lang.NullPointerException
    at org.apache.openjpa.enhance.PCEnhancer.enhanceObjectId(PCEnhancer.java:2745)
    at org.apache.openjpa.enhance.PCEnhancer.run(PCEnhancer.java:338)
    ... 3 more

Test code as follows:

test/A.java:
--
package test;

import javax.persistence.*;
import java.io.Serializable;

@MappedSuperclass
abstract public class A {

    @Embeddable
    public static class A_PK implements Serializable {
        @Basic protected int id1;
        @Basic protected String id2;

        public boolean equals(Object other) { return false; }
        public int hashCode() { return 0; }
    }

    @EmbeddedId
    protected A_PK pk;

    @Basic
    protected String val;
}
--

test/B.java:
--
package test;

import javax.persistence.Entity;

@Entity
public class B extends A {
}
--

META-INF/persistence.xml:
--
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
    version="1.0">
  <persistence-unit name="TestService" transaction-type="RESOURCE_LOCAL">
    <class>test.A$A_PK</class>
    <class>test.A</class>
    <class>test.B</class>
    <properties>
      <property name="openjpa.Log" value="DefaultLevel=TRACE"/>
      <property name="openjpa.ConnectionUserName" value="test"/>
      <property name="openjpa.ConnectionPassword" value="test"/>
      <property name="openjpa.ConnectionURL" value="jdbc:mysql://localhost:3306/oam?useServerPrepStmts=false"/>
      <property name="openjpa.ConnectionDriverName" value="com.mysql.jdbc.Driver"/>
    </properties>
  </persistence-unit>
</persistence>
--

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Updated: (OPENJPA-146) Entity enhancement fails while using EmbeddedId on a MappedSuperclass
[ https://issues.apache.org/jira/browse/OPENJPA-146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gokhan Ergul updated OPENJPA-146:
---------------------------------
    Attachment: test-case.zip

Test case and trace output attached.
[jira] Updated: (OPENJPA-144) JDBCConfigurationImpl does not support JNDI lookup for non-jta-data-source.
[ https://issues.apache.org/jira/browse/OPENJPA-144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brad L Vandermoon updated OPENJPA-144:
--------------------------------------
    Environment: WebSphere 6.1, DB2 v8.1 and sequences  (was: WebSphere 6.1)

The above solution is offered as an analysis-level solution only; testing is in progress.

JDBCConfigurationImpl does not support JNDI lookup for non-jta-data-source.
- Key: OPENJPA-144 URL: https://issues.apache.org/jira/browse/OPENJPA-144 Project: OpenJPA Issue Type: Bug Components: jdbc Environment: WebSphere 6.1, DB2 v8.1 and sequences Reporter: Brad L Vandermoon

A non-jta-data-source is required for DB2 sequences; however, org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl does not support a JNDI lookup for this data source from the openjpa.ConnectionFactory2Name property as documented (refer to sections 5.12 and 4.2.1 of the OpenJPA manual). It seems like the existing implementation for the jta-data-source should also be applied to the non-jta-data-source, i.e.:

// ADD createConnectionFactory2()
private DecoratingDataSource createConnectionFactory2() {
    DataSource ds = (DataSource) connectionFactory2.get();
    if (ds != null)
        return setupConnectionFactory(ds, true);
    ds = (DataSource) super.getConnectionFactory2(); // JNDI lookup
    if (ds == null)
        ds = DataSourceFactory.newDataSource(this, true);
    return setupConnectionFactory(ds, true);
}

// MODIFY this method
public Object getConnectionFactory2() {
    // override to configure data source
    if (dataSource2 == null) {
        DecoratingDataSource ds = createConnectionFactory2();
        dataSource2 = DataSourceFactory.installDBDictionary(
            getDBDictionaryInstance(), ds, this, true);
    }
    return dataSource2;
}
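For reference, the configuration shape under discussion can be sketched as a persistence unit that names the non-JTA data source either via the standard element or via the openjpa.ConnectionFactory2Name property mentioned in the report. The JNDI names below are placeholders, not taken from the report:

```xml
<persistence-unit name="ExampleUnit" transaction-type="JTA">
  <jta-data-source>java:comp/env/jdbc/AppDS</jta-data-source>
  <!-- Needed for DB2 sequences: connections outside the JTA transaction -->
  <non-jta-data-source>java:comp/env/jdbc/AppDSNoTx</non-jta-data-source>
  <properties>
    <!-- OpenJPA-native spelling of the same non-JTA source -->
    <property name="openjpa.ConnectionFactory2Name"
              value="java:comp/env/jdbc/AppDSNoTx"/>
  </properties>
</persistence-unit>
```

The bug is that the JNDI name reaches JDBCConfigurationImpl but is never used to resolve dataSource2.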
[jira] Commented: (OPENJPA-144) JDBCConfigurationImpl does not support JNDI lookup for non-jta-data-source.
[ https://issues.apache.org/jira/browse/OPENJPA-144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12473085 ]

Brad L Vandermoon commented on OPENJPA-144:
-------------------------------------------

When we specify only a jta-data-source, we get:

4|false|0.9.7-incubating-SNAPSHOT org.apache.openjpa.util.InvalidStateException: Unable to execute suspend on a WebSphere managed transaction. WebSphere does not support direct manipulation of managed transactions.

When we try to use JNDI to specify the non-jta-data-source, we get the same error; JDBCConfigurationImpl doesn't appear to use this information to populate dataSource2. When we pass the properties to have OpenJPA create the connection, it works fine. Here are the properties we pass:

<entry key="openjpa.Connection2DriverName" value="com.ibm.db2.jcc.DB2Driver"/>
<entry key="openjpa.Connection2UserName" value="Test"/>
<entry key="openjpa.Connection2Password" value="password"/>
<entry key="openjpa.Connection2URL" value="jdbc:db2://localhost:3700/RQSTSET"/>
<entry key="openjpa.jdbc.Schema" value="PROPCAS"/>

In this latter case, JDBCConfigurationImpl creates a SimpleDriverDataSource for dataSource2 and uses it to run the sequence.
[jira] Commented: (OPENJPA-146) Entity enhancement fails while using EmbeddedId on a MappedSuperclass
[ https://issues.apache.org/jira/browse/OPENJPA-146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12473122 ]

Abe White commented on OPENJPA-146:
-----------------------------------

We don't enhance the oid class anymore, so this bug is probably fixed, or will at least manifest itself in a different way, in the latest code.
[jira] Updated: (OPENJPA-141) More performance improvements (in response to changes for OPENJPA-138)
[ https://issues.apache.org/jira/browse/OPENJPA-141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kevin Sutter updated OPENJPA-141:
---------------------------------
    Attachment: openjpa-141.txt

On second thought, I have attached the proposed patch to this issue. Please review and provide feedback in a timely manner; I would like to commit these changes as soon as possible. Thank you.

The one thing that I decided to do is to go with SOFT references for the keys instead of WEAK. This way, in case the GC does start to run more often, it will only clean up these SOFT references if memory constraints exist. Of course, all of the OpenJPA tests run just fine and the performance numbers are looking better.

Thanks for your help,
Kevin

More performance improvements (in response to changes for OPENJPA-138)
- Key: OPENJPA-141 URL: https://issues.apache.org/jira/browse/OPENJPA-141 Project: OpenJPA Issue Type: Sub-task Components: jpa Reporter: Kevin Sutter Assigned To: Kevin Sutter Attachments: openjpa-141.txt

Abe's response to my committed changes for OPENJPA-138. I will be working with Abe and my performance team to work through these issues...

==

--- incubator/openjpa/trunk/openjpa-kernel/src/main/java/org/apache/openjpa/ee/JNDIManagedRuntime.java (original)
+++ incubator/openjpa/trunk/openjpa-kernel/src/main/java/org/apache/openjpa/ee/JNDIManagedRuntime.java Sun Feb 11 18:33:05 2007
@@ -29,6 +29,7 @@
     implements ManagedRuntime {

     private String _tmLoc = "java:/TransactionManager";
+    private static TransactionManager _tm;

Whoa, I didn't think you meant caching the TM statically. That has to be backed out. You can cache it in an instance variable, but not statically. Nothing should prevent someone from having two different BrokerFactories accessing two different TMs at two different JNDI locations.

BrokerImpl:

+    /**
+     * Cache from/to assignments to avoid Class.isAssignableFrom overhead.
+     * @param from the target Class
+     * @param to the Class to test
+     * @return true if the to class could be assigned to the from class
+     */
+    private boolean isAssignable(Class from, Class to) {
+        boolean isAssignable;
+        ConcurrentReferenceHashMap assignableTo =
+            (ConcurrentReferenceHashMap) _assignableTypes.get(from);
+
+        if (assignableTo != null) { // to cache exists...
+            isAssignable = (assignableTo.get(to) != null);
+            if (!isAssignable) { // not in the map yet...
+                isAssignable = from.isAssignableFrom(to);
+                if (isAssignable) {
+                    assignableTo.put(to, new Object());
+                }
+            }
+        } else { // no to cache yet...
+            isAssignable = from.isAssignableFrom(to);
+            if (isAssignable) {
+                assignableTo = new ConcurrentReferenceHashMap(
+                    ReferenceMap.HARD, ReferenceMap.WEAK);
+                _assignableTypes.put(from, assignableTo);
+                assignableTo.put(to, new Object());
+            }
+        }
+        return isAssignable;
+    }

This code could be simplified a lot. Also, I don't understand what you're trying to do from a memory management perspective. For the _assignableTypes member you've got the Class keys using hard refs and the Map values using weak refs. No outside code references the Map values, so all entries should be eligible for GC pretty much immediately. The way reference hash maps work prevents them from expunging stale entries except on mutators, but still... every time a new entry is added, all the old entries should be getting GC'd and removed. Same for the individual Map values, which again map a hard class ref to an unreferenced object value with a weak ref. Basically the whole map-of-maps system should never contain more than one entry total after a GC run and a mutation.

I'd really like to see you run your tests under a different JVM, because it seems to me like (a) this shouldn't be necessary in the first place, and (b) if this is working, it's again only because of some JVM particulars or GC timing particulars or testing particulars (I've seen profilers skew results in random ways like this) or even a bug in ConcurrentReferenceHashMap. The same goes for all the repeat logic in FetchConfigurationImpl. And if we keep this code or some variant of it, I strongly suggest moving it to a common place like ImplHelper.

+    /**
+     * Generate the hashcode for this Id. Cache the type's generated hashcode
+     * so that it doesn't have to be generated each time.
+     */
    public int hashCode() { if
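Abe's suggestion to simplify the cache and centralize it in a helper could look something like the sketch below: a single two-level map that memoizes Class.isAssignableFrom results. The class name and the use of plain ConcurrentHashMap are illustrative assumptions, not the committed OpenJPA patch (which uses OpenJPA's reference-based maps so redeployed classes can be GC'd):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical simplification of the isAssignable caching idea discussed
// above: memoize Class.isAssignableFrom in a from-class -> (to-class ->
// answer) map. A production version would use weak/soft reference maps so
// entries for redeployed classes do not pin their ClassLoaders.
public class AssignabilityCache {

    private static final Map<Class<?>, Map<Class<?>, Boolean>> CACHE =
        new ConcurrentHashMap<>();

    public static boolean isAssignable(Class<?> from, Class<?> to) {
        return CACHE
            .computeIfAbsent(from, k -> new ConcurrentHashMap<>())
            .computeIfAbsent(to, k -> from.isAssignableFrom(to));
    }
}
```

Unlike the patch above, this also caches negative answers, which avoids re-running the reflective check for repeated misses at the cost of retaining those entries.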
[jira] Commented: (OPENJPA-141) More performance improvements (in response to changes for OPENJPA-138)
[ https://issues.apache.org/jira/browse/OPENJPA-141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12473172 ]

Abe White commented on OPENJPA-141:
-----------------------------------

1. Why not keep a single assignable types map in ImplHelper?

2. I thought we had decided on the assignable types map having hard keys and soft values. Using soft keys and hard values is odd to say the least. First, as I mentioned in a previous note, using soft Class keys is pointless. Once a Class is eligible for GC there's no point in keeping it in cache, so weak is better. Second, using hard values means that other than adapting to class redeploys, this is basically a hard cache, because the only time entries are removed is when a Class disappears, and that only happens on redeploy. It's not necessarily bad to make this a hard cache, but it should be discussed.

3. Why keep dedicated isAssignable methods in BrokerImpl and FetchConfigurationImpl if all they do is delegate to ImplHelper? Why not call ImplHelper directly?

4. Why are you using a static JNDI-location-to-TM cache in JNDIManagedRuntime rather than just caching the TM in an instance variable? The only time that would help performance is if you're creating a bunch of BrokerFactories all using the same TM location. Most applications will only use a single BrokerFactory. If your benchmark is constantly creating BrokerFactories, I'd question the validity of the benchmark.

5. Even if ImplHelper.isAssignable retains its map parameter (and per #1 above I question why it should), it should just be a Map; I don't see why you'd have the method require a ConcurrentMap.

6. #2 above applies also to the Class-based hash map in OpenJPAId.
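The instance-level TM caching recommended in point 4 can be sketched as below. Class and method names are illustrative stand-ins, not OpenJPA's actual JNDIManagedRuntime code, and the JNDI lookup is mocked out:

```java
// Sketch of caching the TransactionManager per runtime *instance* rather
// than statically, so two BrokerFactories configured with different JNDI
// locations resolve and keep independent TMs.
public class InstanceCachedRuntime {
    private final String tmLocation; // e.g. "java:/TransactionManager"
    private Object tm;               // cached lookup result

    public InstanceCachedRuntime(String tmLocation) {
        this.tmLocation = tmLocation;
    }

    // The (possibly expensive) lookup runs once per instance, not once
    // per process.
    public synchronized Object getTransactionManager() {
        if (tm == null)
            tm = lookup(tmLocation);
        return tm;
    }

    // Stand-in for new InitialContext().lookup(loc) in real code.
    protected Object lookup(String loc) {
        return new Object();
    }
}
```

With a static cache, the second instance would silently reuse the first instance's TM even though its JNDI location differs, which is exactly the failure mode described above.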
AttributeOverride to secondary table of entity using mappedsuperclass not honored
I have the following scenario mapping an entity to tables:
- a mapped superclass that has a field
- a subclass with a pk and a field
- trying to map all the fields (except the pk (id)) to a secondary table (SEC_TABLE2MSC)
- use @Column in the subclass to override (name) to the secondary table
- use @AttributeOverride to override the field (street) in the mapped superclass to the secondary table

===
@MappedSuperclass
public abstract class AnnMSCMultiTable implements IMultiTableEntity {
    // @Column(table = "SEC_TABLE2MSC")
    private String street;

    public String getStreet() {
        return street;
    }

    public void setStreet(String street) {
        this.street = street;
    }
}
===
@Entity
@SecondaryTable(name = "SEC_TABLE2MSC",
    pkJoinColumns = @PrimaryKeyJoinColumn(name = "id"))
@AttributeOverrides({
    @AttributeOverride(name = "street",
        column = @Column(name = "street", table = "SEC_TABLE2MSC"))
})
public class AnnMSCMultiTableEnt extends AnnMSCMultiTable {
    @Id
    private int id;

    @Column(name = "name2", table = "SEC_TABLE2MSC")
    private String name;
}
===

From examining the JPA spec, there is nothing specific in @Column or @AttributeOverride saying this should not be allowed, so I believe this is a valid scenario. Using the MappingTool, the attribute override does not map the street field to SEC_TABLE2MSC as I would expect:

CREATE TABLE AnnMSCMultiTableEnt (id INTEGER NOT NULL, street VARCHAR(254), PRIMARY KEY (id));
CREATE TABLE SEC_TABLE2MSC (id INTEGER, name2 VARCHAR(254));
CREATE INDEX I_SC_TMSC_ID ON SEC_TABLE2MSC (id);

I experimented with this a little, and the only way I can map the street field to SEC_TABLE2MSC is to add @Column against the street attribute in the superclass (the commented @Column in the example).

The expected SQL is:

CREATE TABLE AnnMSCMultiTableEnt (id INTEGER NOT NULL, PRIMARY KEY (id));
CREATE TABLE SEC_TABLE2MSC (id INTEGER, street VARCHAR(254), name2 VARCHAR(254));
CREATE INDEX I_SC_TMSC_ID ON SEC_TABLE2MSC (id);

I tried to create the tables manually using the expected layout, but the runtime still uses the incorrect table structure, so I suspect the MappingTool and the runtime are using the same mapping strategy.

Questions:
1) Can someone confirm this is a valid scenario?
2) In the AnnotationPersistenceMappingParser.setupColumn() method, where the @AttributeOverride annotation is processed, it does not seem to extract the table name from the annotation. Can someone give me direction on whether this is the place to start looking for a solution? And what/where else should I be looking?
3) Maybe someone has a better idea how to correct this problem!

Thanks.
Albert Lee.
[jira] Updated: (OPENJPA-144) JDBCConfigurationImpl does not support JNDI lookup for non-jta-data-source.
[ https://issues.apache.org/jira/browse/OPENJPA-144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brad L Vandermoon updated OPENJPA-144:
--------------------------------------
    Attachment: Only-JTASpecified.txt
                Both-JTAandNonJTASpecified.txt

Requested stack traces provided above. I also verified that JDBCConfigurationImpl.connectionFactory2Name contained my non-jta-data-source reference (i.e. java:comp/env/jdbc/RequestSetNoTran) when producing Both-JTAandNonJTASpecified.txt.
[jira] Commented: (OPENJPA-144) JDBCConfigurationImpl does not support JNDI lookup for non-jta-data-source.
[ https://issues.apache.org/jira/browse/OPENJPA-144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12473208 ]

Patrick Linskey commented on OPENJPA-144:
-----------------------------------------

It looks like there are two issues here. First, we seem to be incorrectly attempting to suspend the transaction when a non-jta data source is specified. Second, our WAS integration does not allow suspending the JTA transaction. I will create a new issue for this second problem.
[jira] Updated: (OPENJPA-148) Parsing exception while using an exploded archive
[ https://issues.apache.org/jira/browse/OPENJPA-148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Florent BENOIT updated OPENJPA-148:
-----------------------------------
    Attachment: steps.txt

The steps to reproduce this error.

Parsing exception while using an exploded archive
- Key: OPENJPA-148 URL: https://issues.apache.org/jira/browse/OPENJPA-148 Project: OpenJPA Issue Type: Bug Components: jpa Environment: Sun JDK 5.0 / EasyBeans / OpenJPA snapshot 0.9.7 Reporter: Florent BENOIT Attachments: stacktrace-error.txt, steps.txt

This happens when using OpenJPA as the persistence provider for the EasyBeans ObjectWeb container. The error occurs with an exploded archive. "Exploded" means that there is a directory, named entitybean.jar, with a META-INF folder containing a persistence.xml file, plus other directories/files for the classes. It seems that when using this mode, OpenJPA is trying to parse the directory input stream (which fails). This exception does not occur if a jar file is used instead of the exploded layout, but the exploded layout is the default mode for EasyBeans. Note also that this exception doesn't occur when using Hibernate Entity Manager or Oracle TopLink Essentials as the persistence provider. I will attach to this issue a stack trace that fails (with the exploded directory) and, at the end, one with a jar file (which works), and the 4 steps used to reproduce this problem.
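The layout distinction the report describes can be sketched as below. This is a hypothetical illustration, not OpenJPA's actual parsing code: the point is that a provider must branch on directory vs. jar file before opening an archive stream, because a directory cannot be read as a zip/jar InputStream.

```java
import java.io.File;

// Hypothetical check for an "exploded" persistence unit: a plain directory
// (possibly named something like entitybean.jar) that contains
// META-INF/persistence.xml, as opposed to a packed jar file.
public class UnitLayout {

    public static boolean isExploded(File root) {
        return root.isDirectory()
            && new File(new File(root, "META-INF"), "persistence.xml").isFile();
    }

    public static boolean isPackedJar(File root) {
        return root.isFile() && root.getName().endsWith(".jar");
    }
}
```

Note that the ".jar" suffix alone is not decisive, since an exploded directory may carry that name too; the directory check must come first.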
[jira] Commented: (OPENJPA-141) More performance improvements (in response to changes for OPENJPA-138)
[ https://issues.apache.org/jira/browse/OPENJPA-141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12473228 ] Kevin Sutter commented on OPENJPA-141: -- Good thing I posted the patch... ;-) 1. Why not keep a single assignable types map in ImplHelper? I actually thought about doing that, but then I had concerns about the size of the Cache. So, I decided to isolate the changes (and the caches) to the problems at hand. If you don't see a problem with this, I can change to a single assignable types map. 2. I thought we had decided on the assignable types map having hard keys and soft values. Using soft keys and hard values is odd to say the least. First, as I mentioned in a previous note, using soft Class keys is pointless. Once a Class is eligible for GC there's no point in keeping it in cache, so weak is better. Second, using hard values means that other than adapting to class redeploys, this is basically a hard cache, because the only time entries are removed is when a Class disappears, and that only happens on redeploy. It's not necessarily bad to make this a hard cache, but it should be discussed. Hmmm.. If I understand what you are saying, it really doesn't matter whether we use hard, weak, or soft keys, since the resulting cache will be hard no matter what -- since we're using Class objects in the cache. And, using weak keys actually sounds better due to the class redeploy scenario. I can understand your hesitance with soft keys, but from your arguments, it sounds like we should go with weak keys and hard values. 3. Why keep dedicated isAssignable methods in BrokerImpl and FetchConfigurationImpl if all they do is delegate to ImplHelper? Why not call ImplHelper directly? Certainly could. Cleans up the code a bit. 4. Why are you using a static JNDI location - TM cache in JNDITransactionManager rather than just caching the TM in an instance variable? 
The only time that would help performance is if you're creating a bunch of BrokerFactories all using the same TM location. Most applications will only use a single BrokerFactory. If your benchmark is constantly creating BrokerFactories, I'd question the validity of the benchmark. It's probably six of one, half a dozen of the other. I can make the change to use an instance variable. The benchmark is a set of primitives based on the SpecJApp application using the SunOne Application Server. The profiling data from this set of tests indicate that caching of the JNDI lookup is beneficial. Maybe this change only helps with this particular Application Server, but I'm trying to isolate our OpenJPA implementation from the RI (SunOne). 5. Even if ImplHelper.isAssignable retains its map parameter (and per #1 above I question why it should), it should just be a Map; I don't see why you'd have the method require a ConcurrentMap. I did it this way to be thread safe. If I only used a Map parameter, then the caller would have to ensure that any updates to the Cache are thread safe. Instead of putting that artificial requirement on the caller, why not just use the ConcurrentMap type to ensure the safety? Of course, this point is moot if you are okay with going with a single Cache. 6. #2 above applies also to the Class-based hash map in OpenJPAId. Yep, all the same issue. More performance improvements (in response to changes for OPENJPA-138) -- Key: OPENJPA-141 URL: https://issues.apache.org/jira/browse/OPENJPA-141 Project: OpenJPA Issue Type: Sub-task Components: jpa Reporter: Kevin Sutter Assigned To: Kevin Sutter Attachments: openjpa-141.txt Abe's response to my committed changes for OPENJPA-138. I will be working with Abe and my performance team to work through these issues... 
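Points 1 and 2 above — one shared assignable-types cache with weak Class keys and hard values — can be sketched with plain JDK collections. This is a hypothetical illustration, not the actual ImplHelper code; all class and method names here are invented. WeakHashMap supplies the weak-key behavior, so entries for undeployed classes become collectable:

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical single assignability cache: weak Class keys (entries for
// redeployed classes can be collected) and hard values.
public class AssignableCache {

    // Weak keys: when a Class is unloaded, its whole entry goes away.
    private static final Map<Class<?>, Map<Class<?>, Boolean>> CACHE =
        Collections.synchronizedMap(
            new WeakHashMap<Class<?>, Map<Class<?>, Boolean>>());

    public static boolean isAssignable(Class<?> from, Class<?> to) {
        Map<Class<?>, Boolean> assignableTo = CACHE.get(from);
        if (assignableTo == null) {
            // benign race: two threads may both create the inner map;
            // the loser's map is discarded, costing only a recomputation
            assignableTo = new ConcurrentHashMap<Class<?>, Boolean>();
            CACHE.put(from, assignableTo);
        }
        Boolean cached = assignableTo.get(to);
        if (cached == null) {
            cached = from.isAssignableFrom(to); // compute once, then reuse
            assignableTo.put(to, cached);
        }
        return cached;
    }
}
```

One caveat of this shape, echoing Abe's later comments: the inner maps hold hard references to the `to` classes, so it is only the `from` side that benefits from the weak keys.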
==
--- incubator/openjpa/trunk/openjpa-kernel/src/main/java/org/apache/openjpa/ee/JNDIManagedRuntime.java (original)
+++ incubator/openjpa/trunk/openjpa-kernel/src/main/java/org/apache/openjpa/ee/JNDIManagedRuntime.java Sun Feb 11 18:33:05 2007
@@ -29,6 +29,7 @@
     implements ManagedRuntime {
     private String _tmLoc = "java:/TransactionManager";
+    private static TransactionManager _tm;

Whoa, I didn't think you meant caching the TM statically. That has to be backed out. You can cache it in an instance variable, but not statically. Nothing should prevent someone having two different BrokerFactories accessing two different TMs at two different JNDI locations.

BrokerImpl:
+ * Cache from/to assignments to avoid Class.isAssignableFrom overhead
+ * @param from the target Class
+ * @param to the Class to test
+ * @return true if the to class could be assigned to
[jira] Commented: (OPENJPA-141) More performance improvements (in response to changes for OPENJPA-138)
[ https://issues.apache.org/jira/browse/OPENJPA-141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12473234 ] Abe White commented on OPENJPA-141: --- Craig, good catch. I didn't even look at the actual assignable method code... I was saving that for when we had settled on a caching strategy. 1. Why not keep a single assignable types map in ImplHelper? I actually thought about doing that, but then I had concerns about the size of the Cache. How are two static maps going to end up being smaller overall than one combined static map? Hmmm.. If I understand what you are saying, it really doesn't matter whether we use hard, weak, or soft keys, since the resulting cache will be hard no matter what -- since we're using Class objects in the cache. No, it does matter. And the type of value references we use matters way more. If we want a hard cache that drops entries for classes that are redeployed, then we should be using weak keys and hard values. If we want a memory sensitive cache, then we should be using hard keys and soft values. I'm not sure where the disconnect is coming from with these reference types. The benchmark is a set of primitives based on the SpecJApp application using the SunOne Application Server. The profiling data from this set of tests indicate that caching of the JNDI lookup is beneficial. Beneficial over the suggested use of an instance variable? Or beneficial over no caching of the TM whatsoever? There's a big difference. 5. Even if ImplHelper.isAssignable retains its map parameter (and per #1 above I question why it should), it should just be a Map; I don't see why you'd have the method require a ConcurrentMap. I did this way to be thread safe. If I only used a Map parameter, then the caller would have to ensure that any updates to the Cache are thread safe. The caller is giving you the Map in your scheme. It's up to him whether the Map he's giving you is used concurrently or not. 
The helper method itself has no threading issues at all, and only requires a Map. But I agree that if we move to a single cache in ImplHelper it's a moot point.

BrokerImpl:
+ * Cache from/to assignments to avoid Class.isAssignableFrom overhead
+ * @param from the target Class
+ * @param to the Class to test
+ * @return true if the to class could be assigned to from class
+ */
+private boolean isAssignable(Class from, Class to) {
+    boolean isAssignable;
+    ConcurrentReferenceHashMap assignableTo =
+        (ConcurrentReferenceHashMap) _assignableTypes.get(from);
+
+    if (assignableTo != null) { // to cache exists...
+        isAssignable = (assignableTo.get(to) != null);
+        if (!isAssignable) { // not in the map yet...
+            isAssignable = from.isAssignableFrom(to);
+            if (isAssignable) {
+                assignableTo.put(to, new Object());
+            }
+        }
+    } else { // no to cache yet...
+        isAssignable = from.isAssignableFrom(to);
+        if (isAssignable) {
+            assignableTo = new ConcurrentReferenceHashMap(
+                ReferenceMap.HARD, ReferenceMap.WEAK);
+            _assignableTypes.put(from, assignableTo);
+            assignableTo.put(to, new Object());
+        }
+    }
+    return isAssignable;
+}

This code could be simplified a lot. Also, I don't understand what you're trying to do from a memory management perspective. For the _assignableTypes member you've got the Class keys using hard refs and the Map values using weak refs.
[jira] Commented: (OPENJPA-141) More performance improvements (in response to changes for OPENJPA-138)
[ https://issues.apache.org/jira/browse/OPENJPA-141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12473243 ] Kevin Sutter commented on OPENJPA-141: -- Yes, good catch, Craig. The original location where I wanted to resolve the isAssignableFrom() overhead was BrokerImpl, and that was just a one-way check. The two-way check in FetchConfigurationImpl was overlooked. Thank you. But, that brings up a new question... Do we do the two-way check in this new utility method (even though BrokerImpl didn't require this in the past)? Or, is the one-way check sufficient for FetchConfigurationImpl's usage? Any historical perspective? 1. Why not keep a single assignable types map in ImplHelper? I actually thought about doing that, but then I had concerns about the size of the Cache. How are two static maps going to end up being smaller overall than one combined static map? Never mind. Since we're dealing with (hopefully) unique Class keys, a single map will suffice. Hmmm.. If I understand what you are saying, it really doesn't matter whether we use hard, weak, or soft keys, since the resulting cache will be hard no matter what -- since we're using Class objects in the cache. No, it does matter. And the type of value references we use matters way more. If we want a hard cache that drops entries for classes that are redeployed, then we should be using weak keys and hard values. If we want a memory sensitive cache, then we should be using hard keys and soft values. I'm not sure where the disconnect is coming from with these reference types. There have been several viewpoints on the use of these reference types and what the impact would be. To be honest, at this point, all that I am looking for is the ability to cache these assignable types. Whether it's redeployment-friendly or memory-friendly, I don't really care at this point. We can worry about that later. If you have a preference for this first iteration, let me know. Thanks. 
The benchmark is a set of primitives based on the SpecJApp application using the SunOne Application Server. The profiling data from this set of tests indicate that caching of the JNDI lookup is beneficial. Beneficial over the suggested use of an instance variable? Or beneficial over no caching of the TM whatsoever? There's a big difference. Definitely beneficial over no caching of the TM whatsoever. Sorry for the confusion. 5. Even if ImplHelper.isAssignable retains its map parameter (and per #1 above I question why it should), it should just be a Map; I don't see why you'd have the method require a ConcurrentMap. I did it this way to be thread safe. If I only used a Map parameter, then the caller would have to ensure that any updates to the Cache are thread safe. The caller is giving you the Map in your scheme. It's up to him whether the Map he's giving you is used concurrently or not. The helper method itself has no threading issues at all, and only requires a Map. But I agree that if we move to a single cache in ImplHelper it's a moot point. That's definitely one way around it. I prefer to enforce the requirement via the signature of the contract. Changing to a single map anyway... 
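The disagreement in point 5 is purely about the method signature, not about what the helper does. A minimal, hypothetical illustration (names invented; it assumes the caller maintains one cache per `from` class, and uses the later `computeIfAbsent` JDK convenience for brevity):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentMap;

// Both helpers below do identical work; the only difference is what the
// compiler lets callers pass. The cache is assumed to be per-'from' class.
public class SignatureDemo {

    // One position: the helper has no threading issues of its own, so a
    // plain Map suffices; concurrency of the map is the caller's business.
    static boolean isAssignable(Map<Class<?>, Boolean> cache,
                                Class<?> from, Class<?> to) {
        return cache.computeIfAbsent(to, from::isAssignableFrom);
    }

    // The other position: require ConcurrentMap so the signature itself
    // forces callers to supply a thread-safe map.
    static boolean isAssignableStrict(ConcurrentMap<Class<?>, Boolean> cache,
                                      Class<?> from, Class<?> to) {
        return cache.computeIfAbsent(to, from::isAssignableFrom);
    }
}
```

The bodies being identical is the point: the ConcurrentMap requirement documents intent rather than changing behavior.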
extraneous joins OPENJPA-134
Hi Patrick or Abe, I was wondering if any of you have noticed the following problem (this is written up as OPENJPA-134). This shows up as a performance problem in applications when we compare with Hibernate. If I have a M:1 relationship and I make both sides of the relationship EAGER in the mapping, OR I do a join fetch of the many-side relationship, we see an extra join in the generated SQL. Example: I have a Dept -> Emp relationship with inverse Emp -> Dept. If both sides are EAGER in the mapping I get the SQL

select .. from dept t0 join emp t1 on (...) join dept t2 on (...)

If both sides are LAZY and I run the EJB query select d from Dept d left join fetch d.emps, the generated SQL is

select ... from dept t0 left join emp t1 on (...) join dept t2 on (...)

The extra join dept t2 on (...) seems to be extraneous and has a serious impact on performance. Since I already joined Dept with Emp, there is no need to join it back to the Dept, because the Dept -> Emp and Emp -> Dept relationships are inverses of the same foreign key. Have you noticed this same problem? Might you have a quick fix that you can share? Otherwise we will start analyzing to see what might be causing it.
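For reference, a minimal Dept/Emp mapping matching the scenario above might look like the following. This is a hypothetical sketch: the field names and the EAGER settings are assumptions, not taken from the actual application. Emp owns the foreign key; Dept marks its side as the inverse via mappedBy, which is what tells the provider that both sides share one join column:

```java
import java.util.Collection;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

// Hypothetical Dept/Emp pair: a bidirectional M:1/1:M over one foreign key.
@Entity
public class Dept {
    @Id
    private int id;

    // inverse side: mappedBy names the owning field on Emp
    @OneToMany(mappedBy = "dept", fetch = FetchType.EAGER)
    private Collection<Emp> emps;
}

@Entity
class Emp {
    @Id
    private int id;

    // owning side: holds the dept foreign key column
    @ManyToOne(fetch = FetchType.EAGER)
    private Dept dept;
}
```

With this shape the provider should, in principle, be able to populate each Emp's dept field from the join it already made, without joining back to dept a second time.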
[jira] Commented: (OPENJPA-141) More performance improvements (in response to changes for OPENJPA-138)
[ https://issues.apache.org/jira/browse/OPENJPA-141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12473254 ] Patrick Linskey commented on OPENJPA-141: - If we want a hard cache that drops entries for classes that are redeployed, then we should be using weak keys and hard values. If we want a memory sensitive cache, then we should be using hard keys and soft values. I vaguely prefer a hard cache that drops entries for classes that are redeployed, since I care more about speed than memory footprint. 
This code could be simplified a lot. Also, I don't understand what you're trying to do from a memory management perspective. For the _assignableTypes member you've got the Class keys using hard refs and the Map values using weak refs. No outside code references the Map values, so all entries should be eligible for GC pretty much immediately. The way reference hash maps work prevents them from expunging stale entries except on mutators, but still... every time a new entry is added, all the old entries should be getting GC'd and removed. Same for the individual Map values, which again map a hard class ref to an unreferenced object value with a weak ref. Basically the whole map-of-maps system should never contain more than one entry total after a GC run and a mutation. 
I'd really like to see you run your tests under a different JVM, because it seems to me like (a) this shouldn't be necessary in the first place, and (b) if this is working, it's again only because of some JVM particulars or GC timing particulars or testing particulars (I've seen profilers skew results in random ways like this) or even a bug in ConcurrentReferenceHashMap. The same goes for all the repeat logic in FetchConfigurationImpl. And if we keep this code or some variant of it, I strongly suggest moving it to a common place like ImplHelper.

+/**
+ * Generate the hashcode for this Id. Cache the type's generated hashcode
+ * so that it doesn't have to be generated each time.
+ */
 public int hashCode() {
     if (_typeHash == 0) {
-        Class base = type;
-        while (base.getSuperclass() != null
-            && base.getSuperclass() != Object.class)
-            base =
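The "this code could be simplified a lot" remark quoted above might look like this in practice. This is a hypothetical rewrite, not a committed patch: it collapses the two code paths into one via computeIfAbsent (a later JDK convenience; the original targeted older Java), uses plain hard references, and deliberately leaves the weak/soft reference question out. One behavioral difference worth noting: unlike the original, this version caches negative results too.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical simplification of the quoted isAssignable method: the same
// map-of-maps cache, one code path, hard references throughout.
public class SimplerAssignable {

    private final Map<Class<?>, Map<Class<?>, Boolean>> _assignableTypes =
        new ConcurrentHashMap<>();

    public boolean isAssignable(Class<?> from, Class<?> to) {
        return _assignableTypes
            .computeIfAbsent(from, k -> new ConcurrentHashMap<>())
            .computeIfAbsent(to, from::isAssignableFrom);
    }
}
```

The two computeIfAbsent calls operate on different maps, so the documented restriction against recursive updates of the same ConcurrentHashMap does not apply here.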
[jira] Updated: (OPENJPA-144) JDBCConfigurationImpl does not support JNDI lookup for non-jta-data-source.
[ https://issues.apache.org/jira/browse/OPENJPA-144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Patrick Linskey updated OPENJPA-144: Attachment: OPENJPA-144-patch.diff JDBCConfigurationImpl does not support JNDI lookup for non-jta-data-source. - Key: OPENJPA-144 URL: https://issues.apache.org/jira/browse/OPENJPA-144 Project: OpenJPA Issue Type: Bug Components: jdbc Environment: WebSphere 6.1, DB2 v8.1 and sequences Reporter: Brad L Vandermoon Attachments: Both-JTAandNonJTASpecified.txt, Only-JTASpecified.txt, OPENJPA-144-patch.diff

A non-jta-data-source is required for DB2 sequences; however, org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl does not support a JNDI lookup for this data source from the openjpa.ConnectionFactory2Name property as documented (refer to sections 5.12 and 4.2.1 of the OpenJPA manual). It seems the same approach used for the jta-data-source should be implemented for the non-jta-data-source, i.e.:

// ADD createConnectionFactory2()
private DecoratingDataSource createConnectionFactory2() {
    DataSource ds = (DataSource) connectionFactory2.get();
    if (ds != null)
        return setupConnectionFactory(ds, true);
    ds = (DataSource) super.getConnectionFactory2(); // JNDI lookup
    if (ds == null)
        ds = DataSourceFactory.newDataSource(this, true);
    return setupConnectionFactory(ds, true);
}

// MODIFY this method
public Object getConnectionFactory2() { // override to configure data source
    if (dataSource2 == null) {
        DecoratingDataSource ds = createConnectionFactory2();
        dataSource2 = DataSourceFactory.installDBDictionary(
            getDBDictionaryInstance(), ds, this, true);
    }
    return dataSource2;
}

-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
RE: extraneous joins OPENJPA-134
What if just one side is eager? Also, is the @OneToMany's mappedBy attribute set up correctly? -Patrick -- Patrick Linskey BEA Systems, Inc. -Original Message- From: David Wisneski [mailto:[EMAIL PROTECTED] Sent: Wednesday, February 14, 2007 3:27 PM To: open-jpa-dev Subject: extraneous joins OPENJPA-134
[jira] Commented: (OPENJPA-141) More performance improvements (in response to changes for OPENJPA-138)
[ https://issues.apache.org/jira/browse/OPENJPA-141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12473285 ] Craig Russell commented on OPENJPA-141: --- If we want a hard cache that drops entries for classes that are redeployed, then we should be using weak keys and hard values. If we want a memory sensitive cache, then we should be using hard keys and soft values. I vaguely prefer a hard cache that drops entries for classes that are redeployed, since I care more about speed than memory footprint. This is also affected by whether we use a giant cache (including all the EMFs) or one cache per EMF. I prefer one cache per EMF because it simplifies the cache entry life cycle. During EMF close, the cache can be explicitly cleared, which allows the garbage collector to be more efficient since it doesn't have to scan all the entries in the cache. 
[jira] Updated: (OPENJPA-147) <T> T OpenJPAEntityManager.createInstance(Class<T> cls) fails when T is interface
[ https://issues.apache.org/jira/browse/OPENJPA-147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pinaki Poddar updated OPENJPA-147: -- Attachment: iface.trace.1.txt <T> T OpenJPAEntityManager.createInstance(Class<T> cls) fails when T is an interface - Key: OPENJPA-147 URL: https://issues.apache.org/jira/browse/OPENJPA-147 Project: OpenJPA Issue Type: Bug Components: jpa Reporter: Pinaki Poddar Attachments: iface.trace.1.txt, iface.trace.2.txt, iface.trace.3.txt, iface.trace.4.txt According to its JavaDoc, the OpenJPAEntityManager.createInstance() method public <T> T createInstance(Class<T> cls); behaves as follows: "Create a new instance of type <code>cls</code>. If <code>cls</code> is an interface or an abstract class whose abstract methods follow the JavaBeans convention, this method will create a concrete implementation according to the metadata that defines the class." The method fails when T is an interface. The failure may be due to incorrect user configuration; however, further information on this extension method is not available in the OpenJPA documentation. Firstly, how does one specify metadata for an interface that has bean-style methods? Possibilities are: a) Annotating the Java interface definition with @Entity b) Specifying <class>org.acme.IPerson</class> in persistence.xml Either of the above fails: a) fails at parsing, b) fails with no metadata. There may be a correct but undocumented way of specifying a managed interface. If that is the case, then this JIRA report should be treated as a documentation bug.
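One candidate shape for a managed interface is sketched below. Assumption: this relies on OpenJPA's @ManagedInterface annotation; whether that annotation is available in the 0.9.x releases discussed here is not confirmed by this report, and the interface/field names are invented for illustration.

```java
import org.apache.openjpa.persistence.ManagedInterface;
import org.apache.openjpa.persistence.OpenJPAEntityManager;

// Hypothetical bean-style interface marked as managed; OpenJPA would
// generate a concrete implementation class behind the scenes.
@ManagedInterface
public interface IPerson {
    int getId();
    void setId(int id);

    String getName();
    void setName(String name);
}

// Hypothetical usage, given an OpenJPAEntityManager em:
//   IPerson p = em.createInstance(IPerson.class);
//   p.setName("Alice");
//   em.persist(p);
```

If this annotation route is in fact the supported mechanism, the report's point stands: it needs to be documented, since neither @Entity on the interface nor a <class> element in persistence.xml works.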
[jira] Updated: (OPENJPA-147) <T> T OpenJPAEntityManager.createInstance(Class<T> cls) fails when T is interface

[ https://issues.apache.org/jira/browse/OPENJPA-147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pinaki Poddar updated OPENJPA-147: -- Attachment: iface.trace.2.txt
[jira] Updated: (OPENJPA-147) <T> T OpenJPAEntityManager.createInstance(Class<T> cls) fails when T is interface

[ https://issues.apache.org/jira/browse/OPENJPA-147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pinaki Poddar updated OPENJPA-147: -- Attachment: TestInterface.java