Re: Question regarding constant parameter replacement in JPQL
This looks like it might be a bug. As far as I can tell, we parse the hint value and then use it at runtime, but I can't find any tests that actually exercise this functionality. Perhaps you could put together a small OpenJPA unit test that recreates the problem[1]? Also, what version of OpenJPA are you using?

[parse] -- org.apache.openjpa.persistence.XMLPersistenceMetaDataParser.startQueryHint(...)
[runtime] -- org.apache.openjpa.persistence.EntityManagerImpl.createNamedQuery(...)

[1] http://openjpa.apache.org/testing.html

On Mon, Jul 20, 2015 at 9:58 AM, Kariem Hussein kariem.huss...@gmail.com wrote:

Hi (again),

I've asked the same question on StackOverflow [1], was pointed to a similar question [2], and actually found the well-hidden option to change this behavior by setting the hint UseLiteralInSQL to true [3]. It seems to work when I set this on the javax.persistence.Query, but it does not work when I set the appropriate hint in orm.xml:

<named-query name="person.v">
  <query>select p from Person p where p.type = 'V'</query>
  <hint name="openjpa.hint.UseLiteralInSQL" value="true"/>
</named-query>

Is something wrong with this, or is there a limitation I am running into?

Thank you very much,
Kariem

[1] How can I prevent OpenJPA from replacing "constant" parameters in my queries?
http://stackoverflow.com/questions/31516813
[2] jpa namedquery with literals changed to prepared statement
http://stackoverflow.com/questions/28317482
[3] Javadoc org.apache.openjpa.kernel.QueryHints.HINT_USE_LITERAL_IN_SQL
http://openjpa.apache.org/builds/2.4.0/apidocs/org/apache/openjpa/kernel/QueryHints.html#HINT_USE_LITERAL_IN_SQL

On Thu, Jul 16, 2015 at 2:30 PM, Kariem Hussein kariem.huss...@gmail.com wrote:

Hi there,

I am in the process of migrating a big (old) code base from JPA 1 with Hibernate 3.3 to JPA 2 with OpenJPA. A query that used to work in the old version no longer does, and I wanted to know whether my reasoning is correct. I have reduced the problem description to a minimum and hope I did not lose relevant information on the way.

Given this table in Oracle:

create table PERSON (
  id char(10) not null,
  type char(3) not null,
  primary key (id)
)

There are a lot of rows with three different types in total: WTW, WAI, and V (to be honest, I don't know what they stand for). We have an entity to work with this table:

@Entity
public class Person {
  @Id String id;
  String type;
}

The following query is used in the application from an orm.xml file:

<named-query name="person.v">
  <query>select p from Person p where p.type = 'V'</query>
</named-query>

As the `type` field is `char(3)`, Oracle will store `V  ` ('V' followed by two spaces) for the string V. With Hibernate I did not have a problem with this query, but OpenJPA applies a performance optimization that reduces the number of distinct SQL statements by normalizing literals into parameters -- at least, that is why I think my query was translated this way -- which results in the following SQL being sent to the DB:

select p.id, p.type from PERSON p where p.type = ?

The constant parameter for `type` in my query was replaced with an SQL parameter, and the OpenJPA log shows that V is passed as the value.
I believe that because of this replacement I no longer get any results. It works if I do one of the following:

- (a) Adapt the JPQL query to `where p.type = 'V  '`, effectively encoding knowledge of the underlying `char(3)` column.
- (b) Use a native query. OpenJPA will then not try to improve my query in a way that changes its semantics.

Is there something I can do to improve this behavior in JPA? Is there any benefit in replacing a constant parameter in a named query that is only used in this way (there are no permutations)? Shouldn't the parameter be converted correctly (including padding) into the DB type? Is this a bug, or do I have to specify some kind of hint?

Thank you for your comments,
Kariem

-- 
*Rick Curtis*
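[Editor's note] The padding mismatch described in this thread can be reproduced outside the database. A minimal, self-contained sketch (the class and helper names are made up for illustration, not OpenJPA or Oracle APIs): Oracle blank-pads values stored in a CHAR(3) column, so a bound parameter holding the unpadded string 'V' never compares equal, while the padded literal does.

```java
// Hypothetical demo (not OpenJPA code): why a parameterized 'V' misses
// rows in a CHAR(3) column that Oracle stores blank-padded as "V  ".
public class CharPadDemo {

    // Emulate Oracle's blank padding for a CHAR(n) column.
    static String padToCharWidth(String value, int width) {
        StringBuilder sb = new StringBuilder(value);
        while (sb.length() < width) {
            sb.append(' ');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String stored = padToCharWidth("V", 3);   // what CHAR(3) actually holds
        System.out.println(stored.equals("V"));   // false: bound parameter misses
        System.out.println(stored.equals("V  ")); // true: padded literal matches
    }
}
```

This is why workaround (a) above (padding the literal by hand) works, and why replacing the literal with a bind parameter changes the query's semantics.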
Re: OpenJPA dependency in osgi bundle with only data model classes
is it possible to have no package dependencies to openjpa as I am using standard jpa stuff?

Most likely not. The OpenJPA enhancement processing adds a hard dependency on classes in org.apache.openjpa.enhance.

2015-06-17 5:19 GMT-05:00 Mihael Schmidt mschm...@sgbs.de:

Hi,

I have a question about OpenJPA. I have an OSGi bundle with data model classes containing JPA annotations. I have seen that the Import-Package line contains org.apache.openjpa.enhance;version=[2.2,3),org.apache.openjpa.util;version=[2.2,3). Is it possible to have no package dependencies on OpenJPA, as I am using only standard JPA?

pom.xml: http://pastebin.com/wSvT03ca

Best Regards
Mihael Schmidt

-- 
*Rick Curtis*
Re: Serp, JDK 1.8 and Lambdas
If you are doing buildtime enhancement, the serp library is used very little at runtime. I know there are some uses, but very minimal. That being said, you'll need to be sure to have the updated library when enhancing.

On Wed, Jun 3, 2015 at 2:10 PM, Pawel Veselov pawel.vese...@gmail.com wrote:

Rick,

On Mon, Jun 1, 2015 at 7:22 AM, Rick Curtis curti...@gmail.com wrote:

Take a look at OPENJPA-2386[1]. In that JIRA we reference using the updated version of serp (1.15.1).

Thank you. Is serp only used for enhancement? So, if I'm pre-enhancing the entities, the serp library would not be used by OpenJPA at runtime at all?

[1] https://issues.apache.org/jira/browse/OPENJPA-2386

On Mon, Jun 1, 2015 at 3:31 AM, Pawel Veselov pawel.vese...@gmail.com wrote:

Hi. I had some lambda code in one of my entities, and the enhancement, using 2.2.2, failed with an exception. Is there a better version of serp with which I can replace whatever comes stock in 2.2.2 (that's 1.14.1)?

Thank you,
Pawel.

Exception in thread "main" java.lang.IllegalArgumentException: type = 18
    at serp.bytecode.lowlevel.Entry.create(Entry.java:78)
    at serp.bytecode.lowlevel.Entry.read(Entry.java:36)
    at serp.bytecode.lowlevel.ConstantPool.read(ConstantPool.java:412)
    at serp.bytecode.BCClass.read(BCClass.java:89)
    at serp.bytecode.BCClass.read(BCClass.java:144)
    at serp.bytecode.Project.loadClass(Project.java:139)
    at org.apache.openjpa.enhance.PCEnhancer.run(PCEnhancer.java:4884)
    at org.apache.openjpa.enhance.PCEnhancer.run(PCEnhancer.java:4831)
    at org.apache.openjpa.enhance.PCEnhancer$1.run(PCEnhancer.java:4801)
    at org.apache.openjpa.lib.conf.Configurations.launchRunnable(Configurations.java:761)
    at org.apache.openjpa.lib.conf.Configurations.runAgainstAllAnchors(Configurations.java:751)
    at org.apache.openjpa.enhance.PCEnhancer.run(PCEnhancer.java:4796)
    at org.apache.openjpa.enhance.PCEnhancer.main(PCEnhancer.java:4787)

-- 
With best of best regards
Pawel S. Veselov

-- 
*Rick Curtis*
Re: Serp, JDK 1.8 and Lambdas
Take a look at OPENJPA-2386[1]. In that JIRA we reference using the updated version of serp (1.15.1).

[1] https://issues.apache.org/jira/browse/OPENJPA-2386

On Mon, Jun 1, 2015 at 3:31 AM, Pawel Veselov pawel.vese...@gmail.com wrote:

Hi. I had some lambda code in one of my entities, and the enhancement, using 2.2.2, failed with an exception. Is there a better version of serp with which I can replace whatever comes stock in 2.2.2 (that's 1.14.1)?

Thank you,
Pawel.

Exception in thread "main" java.lang.IllegalArgumentException: type = 18
    at serp.bytecode.lowlevel.Entry.create(Entry.java:78)
    at serp.bytecode.lowlevel.Entry.read(Entry.java:36)
    at serp.bytecode.lowlevel.ConstantPool.read(ConstantPool.java:412)
    at serp.bytecode.BCClass.read(BCClass.java:89)
    at serp.bytecode.BCClass.read(BCClass.java:144)
    at serp.bytecode.Project.loadClass(Project.java:139)
    at org.apache.openjpa.enhance.PCEnhancer.run(PCEnhancer.java:4884)
    at org.apache.openjpa.enhance.PCEnhancer.run(PCEnhancer.java:4831)
    at org.apache.openjpa.enhance.PCEnhancer$1.run(PCEnhancer.java:4801)
    at org.apache.openjpa.lib.conf.Configurations.launchRunnable(Configurations.java:761)
    at org.apache.openjpa.lib.conf.Configurations.runAgainstAllAnchors(Configurations.java:751)
    at org.apache.openjpa.enhance.PCEnhancer.run(PCEnhancer.java:4796)
    at org.apache.openjpa.enhance.PCEnhancer.main(PCEnhancer.java:4787)

-- 
*Rick Curtis*
Re: java.lang.Exception: Errors encountered while resolving metadata
Posting the entire stacktrace would be helpful.

On Tue, May 26, 2015 at 8:45 AM, patrice patrice.au...@t2b.ch wrote:

Hello,

Using OpenJPA 2.3, I am encountering the exception "java.lang.Exception: Errors encountered while resolving metadata. See nested exceptions for details." when invoking this code:

emf = Persistence.createEntityManagerFactory("d2pDatastoreUpserter_1.06PU", puProps);
em = emf.createEntityManager();
Query query = em.createNamedQuery("D2padmManagedCodes.findAll"); // this raises the exception
...

I've turned on fine tracing and everything looks good. For example, for D2padmManagedCodes I see this:

1729 d2pDatastoreUpserter_1.06PU TRACE [Root Thread] openjpa.MetaData - Parsing query D2padmManagedCodes.findAll.
1785 d2pDatastoreUpserter_1.06PU TRACE [Root Thread] openjpa.MetaData - Processing registered persistence-capable class class com.novartis.d2p106.jpa.entities.D2padmManagedCodes.
1785 d2pDatastoreUpserter_1.06PU TRACE [Root Thread] openjpa.MetaData - Loading metadata for class com.novartis.d2p106.jpa.entities.D2padmManagedCodes under mode [META][QUERY].
1785 d2pDatastoreUpserter_1.06PU TRACE [Root Thread] openjpa.MetaData - Parsing class com.novartis.d2p106.jpa.entities.D2padmManagedCodes.
1794 d2pDatastoreUpserter_1.06PU TRACE [Root Thread] openjpa.MetaData - Set persistence-capable superclass of com.novartis.d2p106.jpa.entities.D2padmManagedCodes to null.
1794 d2pDatastoreUpserter_1.06PU TRACE [Root Thread] openjpa.MetaData - Resolving metadata for com.novartis.d2p106.jpa.entities.D2padmManagedCodes@-1614069114.
1794 d2pDatastoreUpserter_1.06PU TRACE [Root Thread] openjpa.MetaData - Resolving field com.novartis.d2p106.jpa.entities.D2padmManagedCodes@-1614069114.assigningAuthorityName.
etc.

The code was enhanced in Eclipse using the Ant script enhance.xml, as per the OpenJPA doc.
The persistence.xml declares all classes, and the following parameters are used:

<exclude-unlisted-classes>true</exclude-unlisted-classes>
<shared-cache-mode>NONE</shared-cache-mode>
<validation-mode>NONE</validation-mode>

Any thoughts would be helpful before plowing through the source code.

Thank you,
Patrice

-- 
*Rick Curtis*
Re: criteria API generates a parameter for literal in group by but does not provide the value
Shall I report this as a bug or am I doing something wrong in my code?

I vote bug.

On Thu, Apr 23, 2015 at 6:43 AM, Mark Struberg strub...@yahoo.de wrote:

Thanks Henno! Not quite sure whether this workaround is good enough or whether we should try to solve this properly. I plan to do a follow-up release for 2.4.0 rather soonish. So thanks for your test case. Did you already look at the OpenJPA codebase? Are you interested in turning this sample code into a unit test?

txs and LieGrue,
strub

On 23.04.2015 at 12:32, Henno Vermeulen he...@huizemolenaar.nl wrote:

One addition (my question is still open). I can confirm that a valid workaround for this problem is to use setHint("openjpa.hint.UseLiteralInSQL", true) after updating to OpenJPA 2.4.0, which has been available in Maven Central for a few days.

Henno

-----Original Message-----
From: Henno Vermeulen [mailto:he...@huizemolenaar.nl]
Sent: Thursday, 23 April 2015 11:49
To: users@openjpa.apache.org
Subject: criteria API generates a parameter for literal in group by but does not provide the value

Hello,

I have a query created using the criteria API where I group by an expression that contains a small calculation using literal values. OpenJPA generates the correct SQL but does not provide the value of the generated parameter in the group by clause. The query fails with the SQL exception "The value is not set for the parameter number 9." I can reproduce the issue with a minimal example.
Suppose we have a Person class with integer age and length columns, and we wish to select the average length grouped by the person's age / 10:

CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaQuery<Double> query = cb.createQuery(Double.class);
Root<Person> person = query.from(Person.class);
Expression<Double> averageLength = cb.avg(person.<Integer>get("length"));
CriteriaQuery<Double> select = query.select(averageLength);
select.groupBy(cb.quot(person.<Integer>get("age"), cb.literal(10)));
// optional where, useful to ensure parameters are logged
select.where(cb.gt(person.<Integer>get("age"), cb.literal(20)));
System.out.println("result: " + em.createQuery(query).getResultList());

When running this query with trace and parameter display enabled, I get:

1067 testPU TRACE [main] openjpa.Query - Executing query: Query: org.apache.openjpa.kernel.QueryImpl@be4f81; candidate class: class entities.Person; query: null
1108 testPU TRACE [main] openjpa.jdbc.SQL - <t 5763249, conn 7326702> executing prepstmnt 26531336 SELECT AVG(t0.length) FROM Person t0 WHERE (t0.age > ?) GROUP BY (t0.age / ?) [params=(int) 20]

You can clearly see that the query has two parameter placeholders but only one value is provided. Shall I report this as a bug, or am I doing something wrong in my code?

(As a workaround I can call setHint("openjpa.hint.UseLiteralInSQL", true) on em.createQuery(query). This doesn't work in my application because there is a bug where boolean literals aren't correctly handled: https://issues.apache.org/jira/browse/OPENJPA-2534. I think this is solved in the upcoming release.)

Thank you,
Henno

-- 
*Rick Curtis*
Re: Occasional IllegalArgumentException (Illegal Capacity) inside ManagedCache
I haven't ever encountered this problem, and I'm pretty certain it hasn't been fixed in 2.2.2... That being said, the runtime itself isn't multithreaded. If I were to take an off-the-cuff guess, I'd wager there is some odd interaction going on with the Aries container.

Once this has occurred, the only way to recover is a platform restart.

That is a very interesting comment. It doesn't make much sense that this sort of exception on EM.clear() would be persistent.

On Tue, Apr 14, 2015 at 6:30 AM, Raimund Klein raimund.kl...@monitise.com wrote:

Hi all,

We are using OpenJPA 2.2.0 as part of our JBoss Fuse installation. Every couple of months we run into a stack trace such as this one:

Caused by: <openjpa-2.2.0-r422266:1244990 nonfatal general error> org.apache.openjpa.persistence.PersistenceException: Illegal Capacity: -12
    at org.apache.openjpa.kernel.BrokerImpl.detachAll(BrokerImpl.java:3407)
    at org.apache.openjpa.kernel.DelegatingBroker.detachAll(DelegatingBroker.java:1206)
    at org.apache.openjpa.persistence.EntityManagerImpl.clear(EntityManagerImpl.java:1169)
    at org.apache.aries.jpa.container.impl.EntityManagerWrapper.clear(EntityManagerWrapper.java:49)
    at org.apache.aries.jpa.container.context.transaction.impl.SynchronizedEntityManagerWrapper.clear(SynchronizedEntityManagerWrapper.java:113)
    at org.apache.aries.jpa.container.context.transaction.impl.JTAEntityManager.createNamedQuery(JTAEntityManager.java:315)
    [...]
    ... 90 more
Caused by: java.lang.IllegalArgumentException: Illegal Capacity: -12
    at java.util.ArrayList.<init>(ArrayList.java:142)[:1.7.0_55]
    at org.apache.openjpa.kernel.ManagedCache.copy(ManagedCache.java:259)
    at org.apache.openjpa.kernel.BrokerImpl.getManagedStates(BrokerImpl.java:4054)
    at org.apache.openjpa.kernel.BrokerImpl.detachAllInternal(BrokerImpl.java:3418)
    at org.apache.openjpa.kernel.BrokerImpl.detachAll(BrokerImpl.java:3403)
    ... 101 more

Once this has occurred, the only way to recover is a platform restart.
Our impression is that this is a multithreading issue inside OpenJPA. Hoping for the best, we've just upgraded to 2.2.2 and will test with it, but I'd like to know if anyone has encountered this before and can help us with an idea of what's going on here. We scanned the recent release notes but haven't found anything like this mentioned.

Kind regards
Raimund Klein

-- 
*Rick Curtis*
Re: Java 8/Java 7 end of life
Yes, support wasn't added to 2.3.x. Try trunk or 2.2.x.

On Wed, Mar 11, 2015 at 8:44 AM, Hal Hildebrand hal.hildebr...@me.com wrote:

Sorry, this fell out of my inbox. I'm using 2.3.0 and JDK 1.8 and Maven. If I change the target to 1.8 from 1.7, I get:

java.lang.IllegalArgumentException
    at org.apache.xbean.asm4.ClassReader.<init>(Unknown Source)
    at org.apache.xbean.asm4.ClassReader.<init>(Unknown Source)
    at org.apache.xbean.asm4.ClassReader.<init>(Unknown Source)
    at org.apache.openjpa.enhance.AsmAdaptor.toJava7ByteArray(AsmAdaptor.java:93)
    at org.apache.openjpa.enhance.AsmAdaptor.writeJava7(AsmAdaptor.java:84)
    at org.apache.openjpa.enhance.AsmAdaptor.write(AsmAdaptor.java:54)
    at org.apache.openjpa.enhance.PCEnhancer.record(PCEnhancer.java:633)
    at org.apache.openjpa.enhance.PCEnhancer.record(PCEnhancer.java:619)
    at org.apache.openjpa.enhance.PCEnhancer.run(PCEnhancer.java:4900)
    at org.apache.openjpa.ant.PCEnhancerTask.executeOn(PCEnhancerTask.java:89)
    at org.apache.openjpa.lib.ant.AbstractTask.execute(AbstractTask.java:184)
    at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
    at org.apache.tools.ant.Task.perform(Task.java:348)
    at org.apache.tools.ant.Target.execute(Target.java:390)
    at org.apache.tools.ant.Target.performTasks(Target.java:411)
    at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
    at org.apache.tools.ant.Project.executeTarget(Project.java:1368)
    at org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:327)
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:133)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:108)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:76)
    at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:116)
    at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:361)
    at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155)
    at org.apache.maven.cli.MavenCli.execute(MavenCli.java:584)
    at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:213)
    at org.apache.maven.cli.MavenCli.main(MavenCli.java:157)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)

On Mar 9, 2015, at 11:30 AM, Rick Curtis curti...@gmail.com wrote:

Hal - What are you seeing for problems? We've done some amount of testing of entity enhancement when using Java 8 language features.

Thanks,
Rick

On Mon, Mar 9, 2015 at 10:46 AM, Hal Hildebrand hal.hildebr...@me.com wrote:

No.
On Mar 9, 2015, at 8:44 AM, Boblitz John john.bobl...@bertschi.com wrote:

Hello,

Does the byte code enhancement work when compiled for 1.8?

Thanks

Regards,
John Boblitz

-----Original Message-----
From: Hal Hildebrand [mailto:hal.hildebr...@me.com]
Sent: Monday, 9 March 2015 16:21
To: users@openjpa.apache.org
Subject: Re: Java 8/Java 7 end of life

I can certainly confirm that OpenJPA runs on Java 8, and even compiles when using source 1.7, target 1.7. Byte code enhancement works fine on code compiled in that fashion.

On Mar 9, 2015, at 6:06 AM, Rick Curtis curti...@gmail.com wrote:

OpenJPA 2.3.x and trunk should be functional with Java 8, but I don't think you can build OpenJPA with Java 8.
Re: Java 8/Java 7 end of life
I don't think my previous comment ("OpenJPA 2.3.x and trunk should be functional with java8, but I don't think you can build OpenJPA with java8.") was entirely correct. Java 8 works with 2.2.x and trunk. I can't say for certain about the 2.3.x branch... that is maintained by the TomEE folks, and I can't recall which changes they did / didn't pull in to support Java 8.

Thanks,
Rick

On Tue, Mar 10, 2015 at 3:31 AM, Henno Vermeulen he...@huizemolenaar.nl wrote:

Hal Hildebrand wrote: I can certainly confirm that OpenJPA runs on java 8. And even compiles when using source 1.7, target 1.7.

Thank you Hal, that is some very useful information! It gives a solution to possible future security issues from using an outdated VM. (I already tried using the released version 2.3.0 of OpenJPA with Java 8. Compiling works OK, but enhancing the entities failed with an error message.)

Henno

-----Original Message-----
From: Rick Curtis [mailto:curti...@gmail.com]
Sent: Monday, 9 March 2015 19:30
To: users
Subject: Re: Java 8/Java 7 end of life

Hal - What are you seeing for problems? We've done some amount of testing of entity enhancement when using Java 8 language features.

Thanks,
Rick

On Mon, Mar 9, 2015 at 10:46 AM, Hal Hildebrand hal.hildebr...@me.com wrote:

No.

On Mar 9, 2015, at 8:44 AM, Boblitz John john.bobl...@bertschi.com wrote:

Hello,

Does the byte code enhancement work when compiled for 1.8?

Thanks

Regards,
John Boblitz

-----Original Message-----
From: Hal Hildebrand [mailto:hal.hildebr...@me.com]
Sent: Monday, 9 March 2015 16:21
To: users@openjpa.apache.org
Subject: Re: Java 8/Java 7 end of life

I can certainly confirm that OpenJPA runs on Java 8, and even compiles when using source 1.7, target 1.7. Byte code enhancement works fine on code compiled in that fashion.

On Mar 9, 2015, at 6:06 AM, Rick Curtis curti...@gmail.com wrote:

OpenJPA 2.3.x and trunk should be functional with Java 8, but I don't think you can build OpenJPA with Java 8.
On Mon, Mar 9, 2015 at 3:52 AM, Henno Vermeulen he...@huizemolenaar.nl wrote:

Hello,

AFAIK, OpenJPA still doesn't work with Java 8. Are there any plans to fix this soon? Perhaps the OpenJPA committers could give this some more priority? Oracle's public support for Java 7 will end after April this year; see http://www.oracle.com/technetwork/java/javase/eol-135779.html

If I understand correctly, this means that security issues in Oracle's Java 7 runtime will no longer be fixed, so an application using OpenJPA on Java 7 will become more and more vulnerable over time. The ticket for Java 8 was last updated in October 2014: https://issues.apache.org/jira/browse/OPENJPA-2386

Regards,
Henno Vermeulen

-- 
*Rick Curtis*
Re: Java 8/Java 7 end of life
OpenJPA 2.3.x and trunk should be functional with Java 8, but I don't think you can build OpenJPA with Java 8.

On Mon, Mar 9, 2015 at 3:52 AM, Henno Vermeulen he...@huizemolenaar.nl wrote:

Hello,

AFAIK, OpenJPA still doesn't work with Java 8. Are there any plans to fix this soon? Perhaps the OpenJPA committers could give this some more priority? Oracle's public support for Java 7 will end after April this year; see http://www.oracle.com/technetwork/java/javase/eol-135779.html

If I understand correctly, this means that security issues in Oracle's Java 7 runtime will no longer be fixed, so an application using OpenJPA on Java 7 will become more and more vulnerable over time. The ticket for Java 8 was last updated in October 2014: https://issues.apache.org/jira/browse/OPENJPA-2386

Regards,
Henno Vermeulen

-- 
*Rick Curtis*
Re: Java 8/Java 7 end of life
Hal - What are you seeing for problems? We've done some amount of testing of entity enhancement when using Java 8 language features.

Thanks,
Rick

On Mon, Mar 9, 2015 at 10:46 AM, Hal Hildebrand hal.hildebr...@me.com wrote:

No.

On Mar 9, 2015, at 8:44 AM, Boblitz John john.bobl...@bertschi.com wrote:

Hello,

Does the byte code enhancement work when compiled for 1.8?

Thanks

Regards,
John Boblitz

-----Original Message-----
From: Hal Hildebrand [mailto:hal.hildebr...@me.com]
Sent: Monday, 9 March 2015 16:21
To: users@openjpa.apache.org
Subject: Re: Java 8/Java 7 end of life

I can certainly confirm that OpenJPA runs on Java 8, and even compiles when using source 1.7, target 1.7. Byte code enhancement works fine on code compiled in that fashion.

On Mar 9, 2015, at 6:06 AM, Rick Curtis curti...@gmail.com wrote:

OpenJPA 2.3.x and trunk should be functional with Java 8, but I don't think you can build OpenJPA with Java 8.

On Mon, Mar 9, 2015 at 3:52 AM, Henno Vermeulen he...@huizemolenaar.nl wrote:

Hello,

AFAIK, OpenJPA still doesn't work with Java 8. Are there any plans to fix this soon? Perhaps the OpenJPA committers could give this some more priority? Oracle's public support for Java 7 will end after April this year; see http://www.oracle.com/technetwork/java/javase/eol-135779.html

If I understand correctly, this means that security issues in Oracle's Java 7 runtime will no longer be fixed, so an application using OpenJPA on Java 7 will become more and more vulnerable over time. The ticket for Java 8 was last updated in October 2014: https://issues.apache.org/jira/browse/OPENJPA-2386

Regards,
Henno Vermeulen

-- 
*Rick Curtis*
Re: Commercial Support?
Tomitribe[1] might provide commercial support. [1] http://www.tomitribe.com/ On Thu, Feb 26, 2015 at 1:15 AM, Jörn Gersdorf jo...@gersdorf.info wrote: Hi all, are you aware of any company offering commercial support for OpenJPA? Google did not help me in that case. Thank you. Kind regards, Jörn -- *Rick Curtis*
Re: Lifecycle events?
This is a bug. It will work if you specify a list of classes that you're interested in, or null. (You shouldn't have to pass null, but that is the bug.)

openJpaEm.addLifecycleListener(this, Class1.class, Class2.class);
// or
openJpaEm.addLifecycleListener(this, null);

On Wed, Feb 25, 2015 at 8:14 AM, Hal Hildebrand hal.hildebr...@me.com wrote:

So, this is just a dead parrot? I couldn't find anything through web searches. Can I get a read as to whether this is supposed to work, or even a code sample? Thanks.

On Feb 22, 2015, at 1:29 PM, Hal Hildebrand hal.hildebr...@me.com wrote:

So, I'm trying to use the lifecycle events on the OpenJPAEntityManagerSPI. I've created a class which implements PersistListener, UpdateListener, DeleteListener. I've registered an instance of this class as a lifecycle listener:

OpenJPAEntityManagerSPI openJpaEm = em.unwrap(OpenJPAEntityManagerSPI.class);
openJpaEm.addLifecycleListener(this);

This does not seem to do anything at all. I've got printlns in every event and I get nothing. I have registered the txn listener successfully, and that works great. Is there something I'm missing here? Any help appreciated.

Thanks,
-Hal

-- 
*Rick Curtis*
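[Editor's note] A sketch of the workaround described in this thread, assuming OpenJPA is on the classpath. The listener class name is made up; the interfaces and the addLifecycleListener overload with a class filter are taken from the thread and the org.apache.openjpa.event package.

```java
// Sketch (assumes the OpenJPA jars): a PersistListener registered with an
// explicit class filter, working around the bug where registering with no
// filter delivers no events.
import org.apache.openjpa.event.LifecycleEvent;
import org.apache.openjpa.event.PersistListener;
import org.apache.openjpa.persistence.OpenJPAEntityManagerSPI;

public class AuditListener implements PersistListener {

    @Override
    public void beforePersist(LifecycleEvent event) {
        System.out.println("about to persist: " + event.getSource());
    }

    @Override
    public void afterPersist(LifecycleEvent event) {
        System.out.println("persisted: " + event.getSource());
    }

    public static void register(OpenJPAEntityManagerSPI openJpaEm, AuditListener listener) {
        // Passing an explicit class list (or null) is the workaround Rick
        // describes; the no-argument form is the one affected by the bug.
        openJpaEm.addLifecycleListener(listener, (Class[]) null);
    }
}
```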
Re: JPA metadata errors while starting TomEE
And configuring the javaagent fixed everything?

On Mon, Feb 23, 2015 at 7:51 PM, leo ks l6459...@gmail.com wrote:

I was having strange and random errors complaining about the mapping between entities, or unrecognized mappings between entity attributes and the columns in the DB.

2015-02-23 22:42 GMT-03:00 Rick Curtis curti...@gmail.com:

Can you expand a bit? Were you having problems with openjpa.RuntimeUnenhancedClasses?

On Mon, Feb 23, 2015 at 5:52 PM, leo ks l6459...@gmail.com wrote:

The enhancer seems to have fixed it :-)

2015-02-23 19:17 GMT-03:00 Rick Curtis curti...@gmail.com:

What sort of errors are you seeing?

On Mon, Feb 23, 2015 at 1:07 PM, leo ks l6459...@gmail.com wrote:

---------- Forwarded message ----------
From: leo ks l6459...@gmail.com
Date: 2015-02-23 16:05 GMT-03:00
Subject: JPA metadata errors while starting TomEE
To: us...@tomee.apache.org

Hi,

I am using TomEE 1.6.0 and I'm experiencing some strange errors during startup. Sometimes I get errors from JPA trying to resolve metadata (MetaDataException); sometimes it just starts without any problem and works OK. I wonder if some process is starting before others (such as MDBs processing persistent messages, for example). What's the best practice in this situation?

Cheers,
K.

-- 
*Rick Curtis*
Re: JPA metadata errors while starting TomEE
What sort of errors are you seeing?

On Mon, Feb 23, 2015 at 1:07 PM, leo ks l6459...@gmail.com wrote:

---------- Forwarded message ----------
From: leo ks l6459...@gmail.com
Date: 2015-02-23 16:05 GMT-03:00
Subject: JPA metadata errors while starting TomEE
To: us...@tomee.apache.org

Hi,

I am using TomEE 1.6.0 and I'm experiencing some strange errors during startup. Sometimes I get errors from JPA trying to resolve metadata (MetaDataException); sometimes it just starts without any problem and works OK. I wonder if some process is starting before others (such as MDBs processing persistent messages, for example). What's the best practice in this situation?

Cheers,
K.

-- 
*Rick Curtis*
Re: JPA metadata errors while starting TomEE
Can you expand a bit? Were you having problems with openjpa.RuntimeUnenhancedClasses? On Mon, Feb 23, 2015 at 5:52 PM, leo ks l6459...@gmail.com wrote: the enhancer seems to have fixed it :-) [earlier quoted messages trimmed; see above] -- *Rick Curtis*
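For readers hitting the same symptom: the fix mentioned in this thread ("the enhancer seems to have fixed it") is build-time enhancement, which avoids openjpa.RuntimeUnenhancedClasses entirely. A minimal sketch of wiring the openjpa-maven-plugin's enhance goal into a Maven build — the version number here is illustrative, not prescribed by the thread:

```xml
<!-- Sketch: build-time enhancement via the openjpa-maven-plugin.
     The version shown is an example; use the release matching your OpenJPA runtime. -->
<plugin>
  <groupId>org.apache.openjpa</groupId>
  <artifactId>openjpa-maven-plugin</artifactId>
  <version>2.4.0</version>
  <executions>
    <execution>
      <id>enhancer</id>
      <phase>process-classes</phase>
      <goals>
        <goal>enhance</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With classes enhanced at build time, startup no longer depends on the order in which container components (MDBs, servlets, etc.) first touch the persistence unit.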
Re: Huge String DB mapping
Are there any plans to add it? No. That being said, this is open source and all of the source is available[1]. I'll note that the MySQL dictionary has similar support for different blob types[2], but not for String types. If you wanted to make a change for all supported databases, that would be a larger effort than just adding support for MySQL. Good luck! Thanks, Rick [1] http://openjpa.apache.org/source-code.html [2] org.apache.openjpa.jdbc.sql.MySQLDictionary.getTypeName On Tue, Jan 27, 2015 at 1:44 PM, Maxim Solodovnik solomax...@gmail.com wrote: Are there any plans to add it? Or maybe some other way like: @Lob(type=MEDIUM) WBR, Maxim (from mobile, sorry for the typos) On Jan 28, 2015 1:37 AM, Rick Curtis curti...@gmail.com wrote: It doesn't appear that there is support to take the column size into account when determining the type of a String field. On Tue, Jan 27, 2015 at 1:17 PM, Maxim Solodovnik solomax...@gmail.com wrote: Mediumtext for MySQL, and something similar for the other databases we're currently supporting :) WBR, Maxim (from mobile, sorry for the typos) On Jan 28, 2015 1:16 AM, Rick Curtis curti...@gmail.com wrote: What is the column type that you want OpenJPA to map your String to? On Tue, Jan 27, 2015 at 12:53 PM, Maxim Solodovnik solomax...@gmail.com wrote: Hello All, I'm trying to map a huge String to the DB: public static final int MAX_LOG_SIZE = 1 * 1024 * 1024; @Lob @Column(name=ful_message, length = MAX_LOG_SIZE) private String fullMessage; Unfortunately I get a column of type TEXT in my MySQL database :( (64K max). I tried to remove the @Lob annotation, no luck. OpenJPA 2.3.0. Am I doing something wrong? Or is it a bug in OpenJPA? Thanks in advance! -- WBR Maxim aka solomax -- *Rick Curtis*
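One workaround not mentioned in the thread is the standard JPA column-definition escape hatch, which bypasses the dictionary's type selection and hard-codes the SQL type (at the cost of tying the mapping to MySQL). In annotations this is `@Column(columnDefinition = "MEDIUMTEXT")`; the orm.xml equivalent would be a sketch like:

```xml
<!-- Sketch, assuming an entity "LogEntry" with a String field "fullMessage";
     column-definition pins the DDL type, so this mapping is MySQL-specific. -->
<entity class="example.LogEntry">
  <attributes>
    <basic name="fullMessage">
      <column name="ful_message" column-definition="MEDIUMTEXT"/>
      <lob/>
    </basic>
  </attributes>
</entity>
```

The entity and field names above are taken from the poster's snippet for illustration; whether `<lob/>` is still wanted alongside an explicit column definition depends on how the driver streams the value.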
Re: OpenJPA support for JPA 2.1: when?
For those that are interested in this development effort, please take a look at the jpa-2.1 development tasks[1]. There are some low hanging prelim tasks that need to get taken care of prior to any real 2.1 work happening. Pinaki -- I see you created a 2.1 development sandbox, do you have any info on the changes that you've already started to prototype? Thanks, Rick [1] http://openjpa.apache.org/jpa-2.1-tasks.html On Tue, Dec 9, 2014 at 10:50 PM, David Blevins david.blev...@gmail.com wrote: On Dec 8, 2014, at 11:55 AM, David Blevins david.blev...@gmail.com wrote: Throwing this out there as it never hurts to be explicit. On the Tomitribe side I'm willing to hire someone to work on OpenJPA full-time if there are enough users out there willing to share the cost of said developer. A full-time developer (or two) is really not that expensive when split 10 different ways. Email me offline -- not really an appropriate conversation for this list. As well this is the first and only time I'll mention it. I will say that at some point our belief in Open Source has to match some level of commitment. Here's your opportunity. Already getting emails from OpenJPA committers lining up to be said full-time developer. Up to users at this point what happens to OpenJPA. Reach out if OpenJPA means something to you. We'll do our best to help you champion it internally. Never hurts to try. You might succeed. -- David Blevins http://twitter.com/dblevins http://www.tomitribe.com -- *Rick Curtis*
Re: OpenJPA support for JPA 2.1: when?
Now that 2.3.x has been cut, I think we can most likely develop 2.1 in trunk. Other opinions? Should we just pick a task from the list here: http://openjpa.apache.org/jpa-2.1-tasks.html and contribute to it? Yes. If there isn't already a JIRA for a given work item, go ahead and create one. Once you have code to contribute you can attach it to that task. Thanks, Rick On Thu, Dec 11, 2014 at 9:44 AM, Roberto Cortez radcor...@yahoo.com.invalid wrote: Hi, Forgive me for these very basic questions: - Should all the JPA 2.1 work be done in this svn sandbox: https://svn.apache.org/repos/asf/openjpa/sandboxes/21? - Should we just pick a task from the list here: http://openjpa.apache.org/jpa-2.1-tasks.html and contribute to it? - Since I'm not a committer, should I just send the patches for the code? Cheers, Roberto From: Rick Curtis curti...@gmail.com To: users users@openjpa.apache.org Cc: Pinaki Poddar pinaki.pod...@gmail.com Sent: Thursday, December 11, 2014 3:09 PM Subject: Re: OpenJPA support for JPA 2.1: when? [quoted message trimmed; see above]
-- *Rick Curtis*
Re: OpenJPA and auto-commit
Charlie - Sorry I meant to reply to this thread while I was out on vacation. Can I have you post the entire stacktrace for the problem you're running into? This problem sounds familiar, but I can't recall the details. Thanks, Rick On Tue, Dec 2, 2014 at 3:09 AM, Charlie Mordant cmorda...@gmail.com wrote: Hi Kevin, Thank you for the idea, I'll do so :). Best regards, Charlie 2014-12-01 23:31 GMT+01:00 Kevin Sutter kwsut...@gmail.com: Hi Charlie, Most of my experience is in the Java EE space, not the OSGi/Aries environment. Since WebSphere is using both Aries and OpenJPA, and WebSphere supports both Java EE and OSGi programming models, you should be able to get this combination to work. I'm just not sure what, if any, additional magic WebSphere had to include... Have you been posting on the Aries site as well? Good luck, Kevin On Mon, Dec 1, 2014 at 3:53 PM, Charlie Mordant cmorda...@gmail.com wrote: Hi Kevin, I removed the non-jta-datasource (referencing the same connection) because I thought it was the issue (and that was failing the same way). Even if I add it, it also fails (randomly, sometimes it passes). I'm not sure it is really OpenJpa related, as I'm using Aries-JPA/Tx, Pax-JDBC. I'm currently investigating, and if you've any other pointers I'll sure try :). Thank you, and best regards, Charlie PS: if you're also interested in the case, you can also try to see where's the catch compiling this: https://github.com/OsgiliathEnterprise/net.osgiliath.parent https://github.com/OsgiliathEnterprise/net.osgiliath.parent (it will once on three times fail on the hello sample blueprint test). Le 1 déc. 2014 à 22:41, Kevin Sutter kwsut...@gmail.com a écrit : Hi Charlie, Since you are using SynchronizeMappings, you should be providing an alternate datasource (non-jta-data-source) in addition to the jta-data-source. OpenJPA requires access to the database in order to define or adjust your schemas based on your Entity definitions. 
Without a non-jta-data-source, OpenJPA will attempt to do this work within the global transaction. Unfortunately, the auto-commit processing doesn't work well within a global transaction (as you have found out). Hope this helps. Kevin On Sun, Nov 30, 2014 at 9:05 AM, Charlie Mordant cmorda...@gmail.com wrote: Hi OpenJPA gurus, I'm encountering an issue when OpenJPA participates in a global transaction; I've got this weird error happening sometimes: Caused by: openjpa-2.3.0-r422266:1540826 nonfatal general error org.apache.openjpa.persistence.PersistenceException: setAutoCommit(true) invalid during global transaction. at org.apache.openjpa.jdbc.meta.MappingTool.record(MappingTool.java:559) at org.apache.openjpa.jdbc.meta.MappingTool.record(MappingTool.java:455) My persistence.xml is as simple as it can be: [code]
<persistence-unit name="${project.artifactId}Pu" transaction-type="JTA">
  <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
  <jta-data-source>osgi:service/javax.sql.DataSource/(&amp;(osgi.jndi.service.name=${project.parent.artifactId}.database)(aries.managed=true))</jta-data-source>
  <properties>
    <property name="openjpa.Log" value="slf4j"/>
    <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema(ForeignKeys=true,SchemaAction=refresh)"/>
    <property name="openjpa.jdbc.DBDictionary" value="derby"/>
  </properties>
</persistence-unit>
[/code] Is there any property/option to set somewhere? Regards, -- Charlie Mordant Full OSGI/EE stack made with Karaf: https://github.com/OsgiliathEnterprise/net.osgiliath.parent -- *Rick Curtis*
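Kevin's suggestion — giving OpenJPA a non-JTA connection for SynchronizeMappings work — amounts to adding a second data source reference to the persistence unit. A sketch, where the `NonJta` service filter is a hypothetical second registration of the same database outside JTA enlistment:

```xml
<!-- Sketch: same unit as above, plus a non-jta-data-source so schema
     synchronization runs on auto-commit connections outside the global
     transaction. The "...NonJta" service name is illustrative. -->
<persistence-unit name="examplePu" transaction-type="JTA">
  <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
  <jta-data-source>osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=example.database)</jta-data-source>
  <non-jta-data-source>osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=example.databaseNonJta)</non-jta-data-source>
  <properties>
    <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema(ForeignKeys=true,SchemaAction=refresh)"/>
  </properties>
</persistence-unit>
```

The key point from the thread is that the non-JTA source must be a connection pool that is not enlisted in the global transaction, otherwise the setAutoCommit(true) call fails the same way.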
Re: Set cascading-persist gobally or through ReverseMappingTool
Before we dive into looking at making a change to the ReverseMapping tool, let's look a bit closer at the problem that you've encountered. Can you post more details about what is going on when you get the Suggested actions: a) Set the cascade attribute for this field to CascadeType.PERSIST or CascadeType.ALL (JPA annotations) or persist or all (JPA orm.xml), message? Thanks, Rick On Sun, Nov 30, 2014 at 9:21 AM, Tester spamc...@gmail.com wrote: Dear OpenJPA community, I use the OpenJPA ReverseMappingTool to programmatically create my entities. The issue that I'm now encountering is the following error message: Suggested actions: a) Set the cascade attribute for this field to CascadeType.PERSIST or CascadeType.ALL (JPA annotations) or persist or all (JPA orm.xml), b) enable cascade-persist globally, c) ... I see 3 possible solutions to change the cascade type to PERSIST (the ReverseMappingTool puts CascadeType.MERGE by default, it seems): 1) Pass a configuration to the ReverseMappingTool to add CascadeType.PERSIST instead of the default CascadeType.MERGE = I have been looking in the docs of the ReverseMappingTool, but I cannot find a way to set it. Am I overlooking something? 2) Set CascadeType.PERSIST globally in a programmatic way. = I looked to add it through the JDBCConfiguration/OpenJPAConfiguration or EntityManager, but here I still don't see a way to set this globally 3) Manipulate the generated entity files. This is definitely possible, but I would prefer option 1 or 2. Does anybody have an idea on how to achieve option 1 or 2? Kind regards, -- *Rick Curtis*
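Option b) from the error message — enabling cascade-persist globally — does not require touching the generated entities at all; it is standard JPA orm.xml metadata. A minimal sketch of an orm.xml that turns it on for the whole persistence unit:

```xml
<!-- Sketch: persistence-unit-wide cascade-persist, per the JPA orm.xml schema.
     Place this in META-INF/orm.xml (or a mapping file listed in persistence.xml). -->
<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm"
                 version="2.0">
  <persistence-unit-metadata>
    <persistence-unit-defaults>
      <!-- Adds CascadeType.PERSIST to every relationship in the unit. -->
      <cascade-persist/>
    </persistence-unit-defaults>
  </persistence-unit-metadata>
</entity-mappings>
```

This sidesteps both the ReverseMappingTool configuration question and post-processing of generated files, though it applies to every relationship, not just the ones the tool generated.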
Re: OpenJPA and auto-commit
setAutoCommit(true) invalid during global transaction. ... 74 more
at org.apache.openjpa.kernel.AbstractBrokerFactory.newBroker(AbstractBrokerFactory.java:218)
at org.apache.openjpa.kernel.DelegatingBrokerFactory.newBroker(DelegatingBrokerFactory.java:155)
at org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:226)
at org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:59)
at org.apache.aries.jpa.container.impl.CountingEntityManagerFactory.createEntityManager(CountingEntityManagerFactory.java:71)
at org.apache.aries.jpa.container.context.transaction.impl.JTAPersistenceContextRegistry.getCurrentPersistenceContext(JTAPersistenceContextRegistry.java:152)[225:org.apache.aries.jpa.container.context:1.0.1]
at org.apache.aries.jpa.container.context.transaction.impl.JTAEntityManager.getPersistenceContext(JTAEntityManager.java:87)[225:org.apache.aries.jpa.container.context:1.0.1]
at org.apache.aries.jpa.container.context.transaction.impl.JTAEntityManager.getMetamodel(JTAEntityManager.java:409)[225:org.apache.aries.jpa.container.context:1.0.1]
at org.springframework.data.jpa.repository.support.JpaEntityInformationSupport.getMetadata(JpaEntityInformationSupport.java:60)
at org.springframework.data.jpa.repository.support.SimpleJpaRepository.init(SimpleJpaRepository.java:96)
at net.osgiliath.hello.model.jpa.repository.impl.HelloObjectJpaRepository.init(HelloObjectJpaRepository.java:59)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)[:1.7.0_71]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)[:1.7.0_71]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)[:1.7.0_71]
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)[:1.7.0_71]
at
org.apache.aries.blueprint.utils.ReflectionUtils.newInstance(ReflectionUtils.java:329)[15:org.apache.aries.blueprint.core:1.4.1]
at org.apache.aries.blueprint.container.BeanRecipe.newInstance(BeanRecipe.java:962)[15:org.apache.aries.blueprint.core:1.4.1]
at org.apache.aries.blueprint.container.BeanRecipe.getInstance(BeanRecipe.java:331)[15:org.apache.aries.blueprint.core:1.4.1]
... 32 more
Best regards, Charlie
Re: Dynamic DataSource Selection with JPA
Dave - Not sure if this will help, but you can pass in a DataSource at EntityManager creation time[1]. [1] http://ci.apache.org/projects/openjpa/trunk/docbook/manual.html#ref_guide_dbsetup_setDSPerEM Thanks, Rick On Wed, Nov 12, 2014 at 8:46 AM, Dave Westerman dlwes...@us.ibm.com wrote: I am working with a group that has an application with a very critical need for high availability and performance. To make a long story short, they want to use DB2 HADR for availability. But they also want to use the DB2 HADR read-on-standby facility if the application server is co-located with the database server, for performance reasons. The above details aren't really important here. But what I would like to know is if there is a way to dynamically choose which datasource the JPA calls will be using at runtime, based on some algorithm. All update calls to a JPA entity will always use the primary datasource. But if the entity is only being read, then the standby datasource may be the one used if the servers are co-located. I'm not sure if this is even feasible with JPA, but hopefully someone here can tell me. Thanks! -- *Rick Curtis*
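The per-EntityManager DataSource mechanism Rick links to works by passing properties to createEntityManager. A sketch of the routing logic Dave describes — updates to the primary, co-located reads to the standby. The class and parameter names here are hypothetical, and the JPA call itself is shown only in a comment; the OpenJPA-specific piece is the "openjpa.ConnectionFactory" property from the linked manual section:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Sketch: choose a DataSource per EntityManager. "primaryDs"/"standbyDs" and
// the co-location test are assumptions; swap in your own detection logic.
class ReadRouting {

    // Build the property map that OpenJPA reads at EntityManager creation time.
    static Map<String, Object> emProperties(Object dataSource) {
        Map<String, Object> props = new HashMap<>();
        props.put("openjpa.ConnectionFactory", Objects.requireNonNull(dataSource));
        return props;
    }

    // Updates always go to the primary; reads may use the standby when co-located.
    static Object pick(boolean readOnly, boolean coLocated, Object primaryDs, Object standbyDs) {
        return (readOnly && coLocated) ? standbyDs : primaryDs;
    }

    // Usage with a real EntityManagerFactory (not compiled here):
    // EntityManager em = emf.createEntityManager(
    //         emProperties(pick(readOnly, coLocated, primaryDs, standbyDs)));
}
```

Because the choice is made per EntityManager, a single persistence unit can serve both roles; the caller just has to know at creation time whether the unit of work is read-only.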
Re: Fw: Please unsubscribe me.
Sandhya - I tried to send you an unsubscribe email, but I'm not certain if it made it through. I also sent a direct email to you but it bounced. I'm not sure what is going on with your email address. Thanks, Rick On Thu, Nov 6, 2014 at 10:51 PM, Sandhya Turaga turagasa...@yahoo.com.invalid wrote: Hi Kevin, I have sent an email to users-unsubscr...@openjpa.apache.org to unsubscribe me from the user list as well. But I am not unsubscribed. Need help with this. Thanks. On Thursday, November 6, 2014 12:48 PM, Sandhya Turaga turagasa...@yahoo.com wrote: Please unsubscribe me. Thanks. -- *Rick Curtis*
Re: Enable Query Cache, but selectively
Is there any other reason that you need a custom BrokerFactory, other than this bit of functionality? On Tue, Nov 11, 2014 at 8:53 AM, Jörn Gersdorf joern.gersd...@gmail.com wrote: Hi all, I've reported the OSGi ClassLoader issue as https://issues.apache.org/jira/browse/OPENJPA-2542. Kind regards, Jörn On Tue, Nov 11, 2014 at 3:33 PM, Jörn Gersdorf joern.gersd...@gmail.com wrote: Hi Rick, thanks for your thoughts. The reason I'm using the custom JDBCBrokerFactory is that I want to have the QueryCache disabled _by default_ and only enable it on occasion. With the out-of-the-box JDBCBrokerFactory it's only possible the other way around (disable on occasion). Now, my custom JDBCBrokerFactory is working in a non-OSGi environment. Unfortunately, due to OPENJPA-1491, I'm currently not able to load my custom JDBCBrokerFactory (which is living in my own bundle) as OpenJPA will only load a BrokerFactory using the OpenJPA bundle's classloader. Kind regards, Jörn On Mon, Nov 10, 2014 at 7:24 PM, Rick Curtis curti...@gmail.com wrote: However I wonder if there is an easier way to achieve this? I thought there was an easier way... but after digging around a bit I didn't come up with anything better. While what you are doing is completely valid, I'd almost recommend the creation of a helper method that will disable/enable the QueryResultsCache when you create a query. A custom JDBCBrokerFactory seems like quite a lot of complexity when you could just call .setQueryCacheEnabled(false) when you do/don't want caching... then again, that is just my opinion. I think the ideal solution would involve the addition of a new configuration property to OpenJPAConfigurationImpl. Something similar to openjpa.MaxFetchDepth[1]. Thanks, Rick [1] http://ci.apache.org/projects/openjpa/trunk/docbook/manual.html#openjpa.MaxFetchDepth On Mon, Nov 10, 2014 at 1:30 AM, Jörn Gersdorf joern.gersd...@gmail.com wrote: Hi, I'd like to enable OpenJPA's query cache in a selective way, i.e.
it should be disabled unless I enable it explicitly using query.getFetchPlan().setQueryResult(true). I've figured out that this requires FetchConfigurationImpl$ConfigurationState#queryCache to be set to false by default; however, there does not seem to be a configuration property for this. So I ended up subclassing JDBCBrokerFactory and configuring this via the property openjpa.BrokerFactory (see code below). However I wonder if there is an easier way to achieve this? Thanks and best regards, Jörn Code:

<property name="openjpa.BrokerFactory" value="de.dwpbank.wp2d.wprecon.model.cache.CustomJDBCBrokerFactory"/>

public class CustomJDBCBrokerFactory extends JDBCBrokerFactory {

    public CustomJDBCBrokerFactory(JDBCConfiguration conf) {
        super(conf);
    }

    @Override
    protected StoreManager newStoreManager() {
        return new JDBCStoreManager() {
            @Override
            public FetchConfiguration newFetchConfiguration() {
                return super.newFetchConfiguration().setQueryCacheEnabled(false);
            }
        };
    }

    public static CustomJDBCBrokerFactory newInstance(ConfigurationProvider cp) {
        JDBCConfigurationImpl conf = new JDBCConfigurationImpl();
        cp.setInto(conf);
        return new CustomJDBCBrokerFactory(conf);
    }
}

-- *Rick Curtis*
Re: Enable Query Cache, but selectively
However I wonder if there is an easier way to achieve this? I thought there was an easier way... but after digging around a bit I didn't come up with anything better. While what you are doing is completely valid, I'd almost recommend the creation of a helper method that will disable/enable the QueryResultsCache when you create a query. A custom JDBCBrokerFactory seems like quite a lot of complexity when you could just call .setQueryCacheEnabled(false) when you do/don't want caching... then again, that is just my opinion. I think the ideal solution would involve the addition of a new configuration property to OpenJPAConfigurationImpl. Something similar to openjpa.MaxFetchDepth[1]. Thanks, Rick [1] http://ci.apache.org/projects/openjpa/trunk/docbook/manual.html#openjpa.MaxFetchDepth On Mon, Nov 10, 2014 at 1:30 AM, Jörn Gersdorf joern.gersd...@gmail.com wrote: Hi, I'd like to enable OpenJPA's query cache in a selective way, i.e. it should be disabled unless I enable it explicitly using query.getFetchPlan().setQueryResult(true). I've figured out that this requires FetchConfigurationImpl$ConfigurationState#queryCache to be set to false by default; however, there does not seem to be a configuration property for this. So I ended up subclassing JDBCBrokerFactory and configuring this via the property openjpa.BrokerFactory (see code below). However I wonder if there is an easier way to achieve this?
Thanks and best regards, Jörn Code:

<property name="openjpa.BrokerFactory" value="de.dwpbank.wp2d.wprecon.model.cache.CustomJDBCBrokerFactory"/>

public class CustomJDBCBrokerFactory extends JDBCBrokerFactory {

    public CustomJDBCBrokerFactory(JDBCConfiguration conf) {
        super(conf);
    }

    @Override
    protected StoreManager newStoreManager() {
        return new JDBCStoreManager() {
            @Override
            public FetchConfiguration newFetchConfiguration() {
                return super.newFetchConfiguration().setQueryCacheEnabled(false);
            }
        };
    }

    public static CustomJDBCBrokerFactory newInstance(ConfigurationProvider cp) {
        JDBCConfigurationImpl conf = new JDBCConfigurationImpl();
        cp.setInto(conf);
        return new CustomJDBCBrokerFactory(conf);
    }
}

-- *Rick Curtis*
Re: Standalone Example
as samples, depending on the specific function you are looking for. http://openjpa.apache.org/source-code.html http://openjpa.apache.org/testing.html Also, JPA is a widely accepted and widely documented programming model. So, any standard JPA application should work just fine with OpenJPA. Hope this helps, Kevin On Thu, Nov 6, 2014 at 1:25 AM, Trenton D. Adams tr...@trentonadams.ca wrote: Hi Guys, Where can I find the best openjpa standalone example? Preferably with a maven build. I have checked the site, and googled, and nothing decent seems to come up. Thanks. -- *Rick Curtis*
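As Kevin notes, any standard JPA application works with OpenJPA; for a standalone (Java SE, RESOURCE_LOCAL) setup the only OpenJPA-specific part is the provider class and connection properties in persistence.xml. A minimal sketch — the unit name, entity class, and embedded-Derby URL are illustrative, not from this thread:

```xml
<!-- Sketch: minimal standalone persistence.xml for OpenJPA.
     "standalonePu", "example.Person", and the Derby URL are examples. -->
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="standalonePu" transaction-type="RESOURCE_LOCAL">
    <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
    <class>example.Person</class>
    <properties>
      <property name="openjpa.ConnectionURL" value="jdbc:derby:memory:demo;create=true"/>
      <property name="openjpa.ConnectionDriverName" value="org.apache.derby.jdbc.EmbeddedDriver"/>
    </properties>
  </persistence-unit>
</persistence>
```

From there, bootstrapping is plain JPA: `Persistence.createEntityManagerFactory("standalonePu")`, with the OpenJPA runtime (and a build-time enhancement step) on the classpath via Maven dependencies.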
Re: javax.persistence.EntityListeners is never called
I don't see your patch... and I also don't see this same problem when running in Eclipse. What vendor / version of java are you running? I remember seeing similar problems when running with an early version of java 7 (or maybe it was 8... I don't really remember). Is your original problem resolved? On Fri, Oct 17, 2014 at 9:22 AM, goues...@orange.fr wrote: Please find enclosed my patch. It fixes the compile errors and it is better than a useless cast. Message of 16/10/14 18:32 From: Rick Curtis To: users, goues...@orange.fr Cc: Subject: Re: javax.persistence.EntityListeners is never called I had to disable checkstyle and to fix a compile error to build OpenJPA. The test passes. Yes, sorry I just committed a fix for that. Please do an update and let me know if you're still having compile problems. I think that you should have a separate listener class and have the singleton bean injected into it. On Thu, Oct 16, 2014 at 11:08 AM, wrote: I had to disable checkstyle and to fix a compile error to build OpenJPA. The test passes. However, the contract of the annotation javax.ejb.Singleton isn't respected by OpenEJB whereas it is respected by Hibernate. This is the only difference that I have found. I just put a log message into the constructor of the annotated class. Message of 15/10/14 17:25 From: Rick Curtis To: users, goues...@orange.fr Cc: Subject: Re: javax.persistence.EntityListeners is never called Yes, there are numerous unit tests, please take a look at the one that I've noted below. https://svn.apache.org/repos/asf/openjpa/trunk/openjpa-persistence-jdbc/src/test/java/org/apache/openjpa/persistence/callbacks/TestEntityListeners.java On Wed, Oct 15, 2014 at 10:00 AM, wrote: Are there any unit tests that I can run and modify to reproduce my problem? This is typically what I do with JogAmp.
Message of 15/10/14 16:42 From: Rick Curtis To: users, goues...@orange.fr Cc: Subject: Re: javax.persistence.EntityListeners is never called Getting rid of this property doesn't solve my problem. Sorry about muddying the waters. As I stated, that suggestion isn't related to the current problem... it is a best practice. That property is busted and you can fairly easily get into deadlocks. I am still confused as to why @PostLoad isn't called, because your snippets look good. Can I have you put together some sort of a recreatable test? That will help speed up diagnosis. Thanks, Rick On Wed, Oct 15, 2014 at 4:35 AM, wrote: Getting rid of this property doesn't solve my problem. My listener:

@Singleton
public class MultiLangStringEntityListener {

    @PostLoad
    @SuppressWarnings("UseSpecificCatch")
    public void postLoad(Object entity) {

An entity:

@Entity
@EntityListeners({MultiLangStringEntityListener.class})
@Table(name = "THEME")
@XmlRootElement(name = "Theme")
@NamedQueries({
    @NamedQuery(name = "DmTheme.findAll", query = "SELECT d FROM DmTheme d")})
public class Theme implements Serializable {

    private static final long serialVersionUID = 1L;
    // @Max(value=?) @Min(value=?) // if you know the range of your decimal fields consider using these annotations to enforce field validation
    @Id
    @Basic(optional = false)
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "SEQ_THEME")
    @SequenceGenerator(name = "SEQ_THEME", sequenceName = "SEQ_THEME", allocationSize = 1)
    @Column(name = "ID")
    private BigInteger id;

    @Embedded
    @AttributeOverrides({
        @AttributeOverride(name = "id", column = @Column(name = "DESCR", nullable = false)),
        @AttributeOverride(name = "lang", column = @Column(insertable = false, updatable = false, name = "DESCR")),
        @AttributeOverride(name = "text", column = @Column(insertable = false, updatable = false, name = "DESCR"))
    })
    private MultiLangString descr;

The embeddable class:

@Embeddable
public class MultiLangString implements Serializable {

    private static final long serialVersionUID = 1L;
    private String id;
    private String lang;
    private String text;

    public MultiLangString() {
    }

Some of my entity classes use both @Embedded and @EmbeddedId, but not on the same field. I don't know what is wrong as it still works with Hibernate, whereas I try to stay far from its specific features as you can see in this bug report: https://hibernate.atlassian.net/browse/HHH-9437 Message
Re: javax.persistence.EntityListeners is never called
I had to disable checkstyle and to fix a compile error to build OpenJPA. The test passes. Yes, sorry I just committed a fix for that. Please do an update and let me know if you're still having compile problems. I think that you should have a separate listener class and have the singleton bean injected into it. On Thu, Oct 16, 2014 at 11:08 AM, goues...@orange.fr wrote: I had to disable checkstyle and to fix a compile error to build OpenJPA. The test passes. However, the contract of the annotation javax.ejb.Singleton isn't respected by OpenEJB whereas it is respected by Hibernate. This is the only difference that I have found. I just put a log message into the constructor of the annotated class. [earlier quoted messages trimmed; see above] -- *Rick Curtis*
Re: javax.persistence.EntityListeners is never called
Yes, there are numerous unit tests, please take a look at the one that I've noted below. https://svn.apache.org/repos/asf/openjpa/trunk/openjpa-persistence-jdbc/src/test/java/org/apache/openjpa/persistence/callbacks/TestEntityListeners.java On Wed, Oct 15, 2014 at 10:00 AM, goues...@orange.fr wrote: Are there any unit tests that I can run and modify to reproduce my problem? This is typically what I do with JogAmp. Message of 15/10/14 16:42 From: Rick Curtis To: users, goues...@orange.fr Cc: Subject: Re: javax.persistence.EntityListeners is never called Getting rid of this property doesn't solve my problem. Sorry about muddying the waters. As I stated, that suggestion isn't related to the current problem... it is a best practice. That property is busted and you can fairly easily get into deadlocks. I am still confused as to why @PostLoad isn't called, because your snippets look good. Can I have you put together some sort of a recreatable test? That will help speed up diagnosis. Thanks, Rick On Wed, Oct 15, 2014 at 4:35 AM, wrote: Getting rid of this property doesn't solve my problem. My listener: @Singleton public class MultiLangStringEntityListener { @PostLoad @SuppressWarnings("UseSpecificCatch") public void postLoad(Object entity) { An entity: @Entity @EntityListeners({MultiLangStringEntityListener.class}) @Table(name = "THEME") @XmlRootElement(name = "Theme") @NamedQueries({ @NamedQuery(name = "DmTheme.findAll", query = "SELECT d FROM DmTheme d")}) public class Theme implements Serializable { private static final long serialVersionUID = 1L; // @Max(value=?)
@Min(value=?) // if you know the range of your decimal fields, consider using these annotations to enforce field validation @Id @Basic(optional = false) @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "SEQ_THEME") @SequenceGenerator(name = "SEQ_THEME", sequenceName = "SEQ_THEME", allocationSize = 1) @Column(name = "ID") private BigInteger id; @Embedded @AttributeOverrides({ @AttributeOverride(name = "id", column = @Column(name = "DESCR", nullable = false)), @AttributeOverride(name = "lang", column = @Column(insertable = false, updatable = false, name = "DESCR")), @AttributeOverride(name = "text", column = @Column(insertable = false, updatable = false, name = "DESCR")) }) private MultiLangString descr; The embeddable class: @Embeddable public class MultiLangString implements Serializable { private static final long serialVersionUID = 1L; private String id; private String lang; private String text; public MultiLangString() { } Some of my entity classes use both @Embedded and @EmbeddedId, but not on the same field. I don't know what is wrong, as it still works with Hibernate, whereas I try to stay far from its specific features as you can see in this bug report: https://hibernate.atlassian.net/browse/HHH-9437 Message of 14/10/14 17:45 From: Rick Curtis To: users, goues...@orange.fr Cc: Subject: Re: javax.persistence.EntityListeners is never called One thing that jumps out of your p.xml is the openjpa.Multithreaded property. I suggest you get rid of that property and ensure that you aren't sharing EntityManagers across threads... but I don't think that is related to the problem you are currently having. Can you post relevant snippets of your Entity? On Tue, Oct 14, 2014 at 10:39 AM, wrote: Sorry for the confusion. No, I'm not using those callbacks on an Embeddable, but when I switched to OpenJPA, I remember that I had to add the classes with @Embeddable into persistence.xml, whereas it wasn't necessary with Hibernate.
The class that uses those callbacks uses the annotation @Singleton; removing it doesn't solve my problem. Please find enclosed the file. Message of 14/10/14 17:31 From: Rick Curtis To: users, goues...@orange.fr Cc: Subject: Re: javax.persistence.EntityListeners is never called 2.4.x is the latest. If I understand your previous posts, are you using callbacks (@PrePersist, @PostLoad, etc.) on an Embeddable? If that is the case, I'm not sure it is supposed to work. Can I have you post some Entity/embeddable snippets so we can better understand what you want to do? Thanks, Rick On Tue, Oct 14, 2014 at 8:20 AM, wrote: I use OpenJPA 2.4.0. I'm going to try with a more recent version if any. Message of 14/10/14 00:14 From: Kevin Sutter To: users@openjpa.apache.org, goues...@orange.fr Cc: Subject: Re: javax.persistence.EntityListeners is never
Re: javax.persistence.EntityListeners is never called
2.4.x is the latest. If I understand your previous posts, are you using callbacks (@PrePersist, @PostLoad, etc.) on an Embeddable? If that is the case, I'm not sure it is supposed to work. Can I have you post some Entity/embeddable snippets so we can better understand what you want to do? Thanks, Rick On Tue, Oct 14, 2014 at 8:20 AM, goues...@orange.fr wrote: I use OpenJPA 2.4.0. I'm going to try with a more recent version if any. Message of 14/10/14 00:14 From: Kevin Sutter To: users@openjpa.apache.org, goues...@orange.fr Cc: Subject: Re: javax.persistence.EntityListeners is never called Hi, The complete trace can be turned on via this property in your p.xml: Good luck, Kevin On Mon, Oct 13, 2014 at 1:01 PM, wrote: Hi, I use Apache OpenEJB 4.7.1 (probably OpenJPA 2.2 or 2.3). I have looked at the logs and I have already done my best to force the persistence of all entity classes, including those Hibernate was able to discover alone, for example the class using @Embeddable. As I'm currently not at work, I can't post the persistence.xml, but I'll do it tomorrow. What should I turn on to get some more trace? Thank you for your help. Message of 13/10/14 19:07 From: Kevin Sutter To: users@openjpa.apache.org, goues...@orange.fr Cc: Subject: Re: javax.persistence.EntityListeners is never called Hi, EntityListeners should work just fine with OpenJPA. What version of OpenJPA are you using? The basic support is documented here: http://ci.apache.org/projects/openjpa/trunk/docbook/manual.html#jpa_overview_pc_callbacks Are there any other messages in the logs that indicate an issue? Have you tried turning on Trace to ensure that normal persistence processing is happening? Can you post your p.xml? Like I mentioned, this should all just work. We'll need a bit more context to help figure out the problem. Kevin On Mon, Oct 13, 2014 at 10:56 AM, wrote: Hello, I use javax.persistence.EntityListeners.
The persistent classes of the entities are correctly added into persistence.xml. My test case works correctly with Hibernate and OpenJPA except that the annotated methods (with @PostLoad, @PreUpdate, @PrePersist and @PostRemove) are never called by OpenJPA whereas they are called by Hibernate. Am I missing anything obvious? Best regards. -- *Rick Curtis*
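For readers hitting the same symptom: the wiring OpenJPA expects is a plain, no-arg-constructible listener class referenced from @EntityListeners, with the callback annotations on the listener's methods. The following is an illustrative sketch (the class and field names are made up, not taken from this thread, and it needs a JPA runtime on the classpath to actually fire):

```java
import javax.persistence.Entity;
import javax.persistence.EntityListeners;
import javax.persistence.Id;
import javax.persistence.PostLoad;
import javax.persistence.PrePersist;

// A plain listener class: the JPA provider instantiates it itself,
// so there is no @Singleton here. If container services are needed,
// delegate from these methods to an injected or looked-up singleton bean.
public class AuditListener {

    @PostLoad
    public void postLoad(Object entity) {
        System.out.println("loaded: " + entity);
    }

    @PrePersist
    public void prePersist(Object entity) {
        System.out.println("about to persist: " + entity);
    }
}

@Entity
@EntityListeners(AuditListener.class)
class Theme {
    @Id
    private long id;
}
```

This mirrors Rick's suggestion earlier in the thread: keep the listener a separate plain class that delegates to the singleton, rather than annotating the listener itself with @Singleton.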
Re: exclude-unlisted-classes ignored
It sounds like this is working as expected. Excerpt from the spec: <xsd:element name="exclude-unlisted-classes" type="xsd:boolean" default="true" minOccurs="0"> <xsd:annotation> <xsd:documentation> When set to true then only listed classes and jars will be scanned for persistent classes, otherwise the enclosing jar or directory will also be scanned. Not applicable to Java SE persistence units. </xsd:documentation> </xsd:annotation> </xsd:element> On Fri, Sep 26, 2014 at 11:08 PM, Mansour Al Akeel mansour.alak...@gmail.com wrote: When creating an EntityManagerFactory using OpenEJB, exclude-unlisted-classes works as expected. However, when creating it directly as in: props.setProperty("javax.persistence.jdbc.driver", this.driver); props.setProperty("javax.persistence.jdbc.url", this.url); props.setProperty("javax.persistence.jdbc.user", this.username); props.setProperty("javax.persistence.jdbc.password", this.password); props.setProperty("javax.persistence.transactionType", PersistenceUnitTransactionType.RESOURCE_LOCAL.name()); EntityManagerFactory emf = Persistence.createEntityManagerFactory(null, props); the classes are not scanned. The only classes that are loaded, and for which tables are created in the DB, are those listed in the persistence.xml. I noticed that OpenEJB and OpenJPA have their own implementations of PersistenceUnitInfoImpl. I just need to confirm whether this is a bug or I am missing something. Thank you. -- *Rick Curtis*
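The spec behavior quoted above can be seen in a minimal persistence.xml; this is a sketch (the unit and class names are placeholders), not taken from the thread:

```xml
<persistence-unit name="my-unit" transaction-type="RESOURCE_LOCAL">
  <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
  <class>com.example.MyEntity</class>
  <!-- true (the default): only the listed classes/jars are scanned.
       Per the spec excerpt above, the element is not applicable to
       Java SE persistence units, which is why an SE-style bootstrap
       effectively has to list every entity class explicitly. -->
  <exclude-unlisted-classes>true</exclude-unlisted-classes>
</persistence-unit>
```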
Re: EntityManager.find ClassCastException for wrong but existing id
I looked at your test, and it looks like an OpenJPA bug. I'd suggest opening a JIRA[1] and attaching your test case. Thanks, Rick [1] https://issues.apache.org/jira/browse/OPENJPA On Thu, Sep 11, 2014 at 2:38 AM, olyalikov oleg.lyali...@alcatel-lucent.com wrote: I made a mistake - I'm getting not a ClassNotFoundException but a ClassCastException: java.lang.ClassCastException: org.apache.openjpa.find.entities.Document cannot be cast to org.apache.openjpa.find.entities.Person at org.apache.openjpa.find.FindTest.testFind(FindTest.java:54) It's the result of executing final Person p = em.find(Person.class, doc.getId()); All this can be found/reproduced in the project attached to the previous post. Oleg Rick Curtis wrote: Can we see the exception? Thanks, Rick On Wed, Sep 10, 2014 at 9:26 AM, olyalikov <oleg.lyalikov@> wrote: Hello, I have a base entity and 2 inheritors, e.g. Person and Document. If I try to find a Person entity and provide the id of a Document entity, I get ClassNotFoundException, but it should return either null or an EntityNotFoundException. If I provide just some wrong, non-existing id, I get null. Here is a link to the maven project with the test: openjpa-find-test.zip <http://openjpa.208410.n2.nabble.com/file/n7587085/openjpa-find-test.zip> Should I post a bug in the issue tracker? Thanks, Oleg -- View this message in context: http://openjpa.208410.n2.nabble.com/EntityManager-find-ClassCastException-for-wrong-but-existing-id-tp7587085.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis* -- View this message in context: http://openjpa.208410.n2.nabble.com/EntityManager-find-ClassCastException-for-wrong-but-existing-id-tp7587085p7587090.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
Re: EntityManager.find ClassCastException for wrong but existing id
Can we see the exception? Thanks, Rick On Wed, Sep 10, 2014 at 9:26 AM, olyalikov oleg.lyali...@alcatel-lucent.com wrote: Hello, I have a base entity and 2 inheritors, e.g. Person and Document. If I try to find a Person entity and provide the id of a Document entity, I get ClassNotFoundException, but it should return either null or an EntityNotFoundException. If I provide just some wrong, non-existing id, I get null. Here is a link to the maven project with the test: openjpa-find-test.zip http://openjpa.208410.n2.nabble.com/file/n7587085/openjpa-find-test.zip Should I post a bug in the issue tracker? Thanks, Oleg -- View this message in context: http://openjpa.208410.n2.nabble.com/EntityManager-find-ClassCastException-for-wrong-but-existing-id-tp7587085.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
Re: OpenJPA Enhancement in Play Framework
Have you taken a read through the OpenJPA build-time enhancement documentation[1][2]? Please let us know if you are having problems with that approach. Thanks, Rick [1] http://openjpa.apache.org/enhancement-with-ant.html [2] http://openjpa.apache.org/enhancement-with-maven.html On Mon, Aug 25, 2014 at 8:15 AM, rrac robert@asr-group.hr wrote: We are trying to set up Play Framework 2.3.3 with OpenJPA. We are unable to make runtime or build-time enhancement work. OpenJPA works when setting the RuntimeUnenhancedClasses property to supported in persistence.xml, but that's not recommended for production and is a no-go. We tried using the javaagent JVM parameter with no luck, like this: activator -J-javaagent:lib/openjpa-all-2.3.0.jar run. How can the enhancer be configured? -- View this message in context: http://openjpa.208410.n2.nabble.com/OpenJPA-Enhancement-in-Play-Framework-tp7587064.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
Re: openjpa.Enhance - You have enabled runtime enhancement, but have not specified the set of persistent classes.
If I can see that the enhancement is running when I clean and install the pom of my maven project, then why is this message shown? I believe this message is just telling you that TomEE has enabled the OpenJPA runtime enhancer. I'm pretty certain you can ignore the message. Thanks, Rick On Tue, Aug 19, 2014 at 12:26 PM, José Luis Cetina maxtorz...@gmail.com wrote: Hi, I have a maven project; in my project I have a maven plugin for enhancement. When I execute the plugin I can see the enhancement running lines for each entity, like this: INFO [main] openjpa.Tool - Enhancer running on type class com.xx.yy.zz.MyClassName. But when I run my server (Apache TomEE 1.6.0) I see a warning message: 239 classpath-bootstrap INFO [main] openjpa.Enhance - You have enabled runtime enhancement, but have not specified the set of persistent classes. OpenJPA must look for metadata for every loaded class, which might increase class load times significantly. If I can see that the enhancement is running when I clean and install the pom of my maven project, then why is this message shown? Plugin:

<!-- OPENJPA ENHANCEMENT -->
<plugin>
  <groupId>org.apache.openjpa</groupId>
  <artifactId>openjpa-maven-plugin</artifactId>
  <version>${plugins.openjpa.maven.plugin}</version>
  <configuration>
    <includes>com/xx/yy/entities/**/*</includes>
    <excludes>com/xx/yy/entities/**/*_.class</excludes>
    <addDefaultConstructor>true</addDefaultConstructor>
    <enforcePropertyRestrictions>true</enforcePropertyRestrictions>
  </configuration>
  <executions>
    <execution>
      <id>enhancer</id>
      <phase>process-classes</phase>
      <goals>
        <goal>enhance</goal>
      </goals>
    </execution>
  </executions>
  <dependencies>
    <dependency>
      <groupId>org.apache.openjpa</groupId>
      <artifactId>openjpa</artifactId>
      <version>${apache.openjpa.version}</version>
    </dependency>
  </dependencies>
</plugin>

-- *Rick Curtis*
Re: Container Managed EntityManager - is there a chance to control flush calls ?
Nothing immediately comes to mind. Perhaps a better explanation of what you are trying to do will expose some other solution. One question, though: what is currently driving the unwanted flush() calls? Thanks, Rick On Mon, Aug 11, 2014 at 1:24 PM, Christoph Weiss christoph.we...@de.ibm.com wrote: Dear Community, We are running OpenJPA with container-managed EntityManagers (using WebSphere Application Server). For technical reasons we want to explicitly control when the EntityManager executes the flush. First I thought that the FlushMode parameter could be a solution, but reading the documentation I understood it isn't. So is there any other way to really control when the flush is executed? Any reply is appreciated. Thanks for your help in advance! Cheers Christoph Weiss -- *Rick Curtis*
Re: Container Managed EntityManager - is there a chance to control flush calls ?
Yes, OpenJPA will occasionally flush when/if it is deemed necessary. One example that comes to mind is the openjpa.FlushBeforeQueries property[1]... I'm sure there are others, but I can't come up with all of the different ones. The thing that concerns me is that you have Entities that aren't really Entities... is there a reason why these things that shouldn't be persisted aren't just treated as POJOs? That seems like a better way to go, so you won't have to fake OpenJPA out. [1] http://ci.apache.org/projects/openjpa/trunk/docbook/manual.html#openjpa.FlushBeforeQueries On Mon, Aug 11, 2014 at 1:55 PM, Christoph Weiss christoph.we...@de.ibm.com wrote: Hi Rick, Thanks for your reply! Well, actually we do not have any unwanted flush calls, but we are scared that they might happen ;) But let's talk about the details: We are trying to upgrade an old JEE application (with a custom-developed persistence framework) to JPA. For effort and QA reasons we want to maintain the old business logic. Here we had objects which are created and modified but never stored in the database. However, those objects are now becoming JPA entities, and so we implemented our own UpdateManager. This UpdateManager takes care that only selected objects are flushed (as written in one of the previous posts). The complete concept and implementation is working fine as long as we can control when the flush is executed (goal: at the end of our EJB transaction). However, talking to colleagues, I understand that the EntityManager might do flush operations outside the transactions. The reason could be, e.g., to free up cache or memory. And that is something we want to avoid. Now I am not sure whether these are just rumors from the colleagues or whether there are cases where the EntityManager might execute a flush without any association/trigger from a transaction. Can you help on this topic?
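The FlushBeforeQueries behavior referenced in [1] is itself configurable through a persistence.xml property; a minimal sketch (see the linked manual section for the exact accepted values, e.g. true/false/with-connection):

```xml
<property name="openjpa.FlushBeforeQueries" value="false"/>
```

Note that this only governs the automatic flush before query execution; it does not, on its own, guarantee that no other implicit flushes ever happen.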
Cheers Christoph From: Rick Curtis curti...@gmail.com To: users users@openjpa.apache.org Date: 11.08.2014 20:38 Subject:Re: Container Managed EntityManager - is there a chance to control flush calls ? Nothing immediately comes to my mind. Perhaps a better explanation of what you are trying to do will expose some other solution. One question though, what is currently driving the unwanted flush() calls? Thanks, Rick On Mon, Aug 11, 2014 at 1:24 PM, Christoph Weiss christoph.we...@de.ibm.com wrote: Dear Community, We are running OpenJPa with container managed EntityManagers (using WebSphere Application Server). Because of technical reasons we want to explicitly control when the EntityManager executes the flush. First I thought that the parameter FlushMode could be a solution, but reading the documentation I understood it isn't. So is there any other way to really control when the flush is executed? Any reply is appreciated. Thanks for your help in advance! Cheers Christoph Weiss -- *Rick Curtis* -- *Rick Curtis*
Re: Fetch Plain. what it is?
Fetch Plans allow you to dynamically/statically control which fields are loaded from the datastore. Please take a read through the OpenJPA user manual[1][2] for additional details. [1] http://ci.apache.org/projects/openjpa/trunk/docbook/manual.html#ref_guide_runtime_jpafetch [2] http://ci.apache.org/projects/openjpa/trunk/docbook/manual.html#ref_guide_fetch Thanks, Rick On Wed, Aug 6, 2014 at 3:47 AM, maurojava mauro2java2...@gmail.com wrote: Hi all. While reading http://www.slideshare.net/pinaki.poddar/jest-rest-on-openjpa I saw that OpenJPA has Fetch Plans. What are they, please? I don't know about them. Thank you. Mauro -- View this message in context: http://openjpa.208410.n2.nabble.com/Fetch-Plain-what-it-is-tp7587024.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
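As a small taste of the dynamic variant, OpenJPA's EntityManager extension exposes the fetch plan at runtime; a sketch along the lines of the manual sections above (the `Person` entity and `address` field are hypothetical names, and `em` is an already-obtained standard EntityManager):

```java
import org.apache.openjpa.persistence.OpenJPAEntityManager;
import org.apache.openjpa.persistence.OpenJPAPersistence;

// Unwrap the OpenJPA-specific EntityManager from the standard one...
OpenJPAEntityManager oem = OpenJPAPersistence.cast(em);
// ...and widen the fetch plan so an extra field is loaded eagerly
// for subsequent finds/queries on this EntityManager.
oem.getFetchPlan().addField(Person.class, "address");
```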
Re: Own UpdateManager - but issue with isPersistent and isFlushed for new JPA entities
"This means that we use the UpdateManager to filter the set of JPA objects and to pass forward only those JPA entities that we want to explicitly insert or update in the database." I don't think that your approach is going to work. If you look at the UpdateManager interface, you'll see that the contract states that when UpdateManager.flush(..) is called, all instances must be flushed to the datastore. Since you are selectively deciding which StateManagers are going to be flushed, you're not upholding that contract. The reason that you're running into problems is that BrokerImpl drives the calls to the UpdateManager and it expects that all data was flushed; as such, the Broker marks all StateManagers as being flushed. If you want to see where OpenJPA is transitioning the state of your Entities, take a look at BrokerImpl:2220 (trunk). Search for "mark states as flushed". If you wanted to really dive into OpenJPA, you could potentially also write your own Broker.flush(..) implementation that would do the filtering prior to passing Entities to your UpdateManager. That being said, this would be a rather complicated task. Hope this helps! Thanks, Rick On Thu, Jul 31, 2014 at 2:13 PM, Christoph Weiss christoph.we...@de.ibm.com wrote: Dear Community, For our project we added our own UpdateManager for OpenJPA. We use the UpdateManager to do explicit updates for our JPA entities. This means that we use the UpdateManager to filter the set of JPA objects and to pass forward only those JPA entities that we want to explicitly insert or update in the database. The UpdateManager is defined in the persistence.xml and working fine: <property name="openjpa.jdbc.UpdateManager" value="OurOwnUpdateManager"/> However, we have an issue with new JPA entities we create within our use case/transaction. The issue: The new entities are created at the beginning and known to the EntityManager when we create them. They are in the initial new state with isFlushed = false and isPersistent = false.
So far so good. However, when we run a flush() operation, e.g. to update other objects, the new objects change state to isPersistent and isFlushed. However, no update is happening on the database, even though they are not passed from our UpdateManager to BatchingConstraintUpdateManager.update(). At the end, when we pass the objects (e.g. in the third flush operation), the EntityManager thinks that the objects already exist and does an update on the tables instead of an insert. And so our application fails. Questions: Is there a chance to prevent OpenJPA from setting newly created objects to isPersistent and isFlushed? Is it the expected behavior of OpenJPA (meaning that the EntityManager might change the state of objects during the flush operations even when they are not passed to BatchingConstraintUpdateManager.update...)? Is there a way to reset the state of the objects to tell OpenJPA in one of the following flushes to do the insert instead of an update? Looking forward to any helpful hints, tips and explanations! Thanks in advance ;) Cheers Christoph -- *Rick Curtis*
Re: Create H2 database even before adding records
Hrmm, odd. What version of TomEE are you using? On Thu, Jul 24, 2014 at 12:25 AM, Kalpa Welivitigoda callka...@gmail.com wrote: Well, it worked for me for some reason which I don't know. Yes, your point is valid; it is a JPA 2.1 property and OpenJPA supports JPA 2.0. I observed the following: without any property set in persistence.xml, if I restart TomEE it works as expected. However, when I tried the same process again, it doesn't work (it complains that the table doesn't exist). It is strange. If I redeploy the app (without any modifications) and try the functionality, it complains that the table doesn't exist. On Tue, Jul 22, 2014 at 6:37 PM, Rick Curtis curti...@gmail.com wrote: "The <property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/> entry in persistence.xml worked for me." ... that doesn't make much sense to me. That property is a JPA 2.1 spec-defined property, and OpenJPA doesn't yet support that spec. Are you sure that you are really using OpenJPA? If you are, I'm quite certain that the openjpa.jdbc.SynchronizeMappings property is doing all of the work. Thanks, Rick On Tue, Jul 22, 2014 at 12:11 AM, Kalpa Welivitigoda callka...@gmail.com wrote: The <property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/> entry in persistence.xml worked for me. Since it is an in-memory database and used temporarily, dropping at the end of the application is ok for me. On Tue, Jul 22, 2014 at 12:40 AM, Rick Curtis curti...@gmail.com wrote: The configuration (openjpa.jdbc.SynchronizeMappings) you have provided should cause tables to be created the first time an EntityManager is created. Thanks, Rick On Sun, Jul 20, 2014 at 1:14 PM, Kalpa Welivitigoda callka...@gmail.com wrote: Hi, I am developing an application with an H2 in-memory database. The issue is that the table is created only after I add a record. Before that, if I search for a record it says that the table is not found.
I want to create the table at the time the application starts rather than waiting for a record to be added. Is there any property that serves this requirement? Following is the content of persistence.xml:

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="rest-jpa">
    <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
    <jta-data-source>java:/comp/env/jdbc/restDB</jta-data-source>
    <class>org.wso2.as.ee.Student</class>
    <properties>
      <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema(ForeignKeys=true)"/>
    </properties>
  </persistence-unit>
</persistence>

I have the datasource defined in context.xml as follows:

<?xml version="1.0" encoding="UTF-8"?>
<Context>
  <Resource name="jdbc/restDb" auth="Container" type="javax.sql.DataSource" driverClassName="org.h2.Driver" url="jdbc:h2:mem:restDb" username="admin" password="admin" JtaManaged="true"/>
</Context>

-- Best Regards, Kalpa Welivitigoda +94776509215 http://about.me/callkalpa -- *Rick Curtis* -- Best Regards, Kalpa Welivitigoda +94776509215 http://about.me/callkalpa -- *Rick Curtis* -- Best Regards, Kalpa Welivitigoda +94776509215 http://about.me/callkalpa -- *Rick Curtis*
Re: MariaDB vs. MySQL
I haven't seen any MariaDB-specific problems reported on this list yet... can you explain a bit better what you're observing? Thanks, Rick On Thu, Jul 24, 2014 at 4:05 PM, Marc Logemann marc.logem...@gmail.com wrote: Hi, I am fighting weird issues with an OpenJPA application on MariaDB, while all of these are non-existent on MySQL. Things like fetch join are just different, or better said, problematic in some JPQLs with OpenJPA 2.2.0 and MariaDB. I even see different SQL being created from the same JPQL queries. Is this possible at all? Thanks for the infos. -- *Rick Curtis*
Re: Create H2 database even before adding records
"The <property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/> entry in persistence.xml worked for me." ... that doesn't make much sense to me. That property is a JPA 2.1 spec-defined property, and OpenJPA doesn't yet support that spec. Are you sure that you are really using OpenJPA? If you are, I'm quite certain that the openjpa.jdbc.SynchronizeMappings property is doing all of the work. Thanks, Rick On Tue, Jul 22, 2014 at 12:11 AM, Kalpa Welivitigoda callka...@gmail.com wrote: The <property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/> entry in persistence.xml worked for me. Since it is an in-memory database and used temporarily, dropping at the end of the application is ok for me. On Tue, Jul 22, 2014 at 12:40 AM, Rick Curtis curti...@gmail.com wrote: The configuration (openjpa.jdbc.SynchronizeMappings) you have provided should cause tables to be created the first time an EntityManager is created. Thanks, Rick On Sun, Jul 20, 2014 at 1:14 PM, Kalpa Welivitigoda callka...@gmail.com wrote: Hi, I am developing an application with an H2 in-memory database. The issue is that the table is created only after I add a record. Before that, if I search for a record it says that the table is not found. I want to create the table at the time the application starts rather than waiting for a record to be added. Is there any property that serves this requirement? Following is the content of persistence.xml:

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="rest-jpa">
    <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
    <jta-data-source>java:/comp/env/jdbc/restDB</jta-data-source>
    <class>org.wso2.as.ee.Student</class>
    <properties>
      <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema(ForeignKeys=true)"/>
    </properties>
  </persistence-unit>
</persistence>

I have the datasource defined in context.xml as follows:

<?xml version="1.0" encoding="UTF-8"?>
<Context>
  <Resource name="jdbc/restDb" auth="Container" type="javax.sql.DataSource" driverClassName="org.h2.Driver" url="jdbc:h2:mem:restDb" username="admin" password="admin" JtaManaged="true"/>
</Context>

-- Best Regards, Kalpa Welivitigoda +94776509215 http://about.me/callkalpa -- *Rick Curtis* -- Best Regards, Kalpa Welivitigoda +94776509215 http://about.me/callkalpa -- *Rick Curtis*
Re: Create H2 database even before adding records
The configuration (openjpa.jdbc.SynchronizeMappings) you have provided should cause tables to be created the first time an EntityManager is created. Thanks, Rick On Sun, Jul 20, 2014 at 1:14 PM, Kalpa Welivitigoda callka...@gmail.com wrote: Hi, I am developing an application with an H2 in-memory database. The issue is that the table is created only after I add a record. Before that, if I search for a record it says that the table is not found. I want to create the table at the time the application starts rather than waiting for a record to be added. Is there any property that serves this requirement? Following is the content of persistence.xml:

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="rest-jpa">
    <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
    <jta-data-source>java:/comp/env/jdbc/restDB</jta-data-source>
    <class>org.wso2.as.ee.Student</class>
    <properties>
      <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema(ForeignKeys=true)"/>
    </properties>
  </persistence-unit>
</persistence>

I have the datasource defined in context.xml as follows:

<?xml version="1.0" encoding="UTF-8"?>
<Context>
  <Resource name="jdbc/restDb" auth="Container" type="javax.sql.DataSource" driverClassName="org.h2.Driver" url="jdbc:h2:mem:restDb" username="admin" password="admin" JtaManaged="true"/>
</Context>

-- Best Regards, Kalpa Welivitigoda +94776509215 http://about.me/callkalpa -- *Rick Curtis*
Re: Problems with OpenJPA when deploying in Apache Servicemix and Spring. (OSGi environment)
Please keep us updated when you figure out the problem. Sent from my iPhone On Jul 16, 2014, at 2:35 AM, artaxerxe mapand...@gmail.com wrote: Thanks Rick. I'll consider your suggestion. So, I think that's a spring or servicemix issue. Have a great day! artaxerxe -- View this message in context: http://openjpa.208410.n2.nabble.com/Problems-with-OpenJPA-when-deploying-in-Apache-Servicemix-and-Spring-OSGi-environment-tp7586952p7586956.html Sent from the OpenJPA Users mailing list archive at Nabble.com.
Re: Problems with OpenJPA when deploying in Apache Servicemix and Spring. (OSGi environment)
.run(J2DoPrivHelper.java:946) at org.apache.openjpa.lib.util.J2DoPrivHelper$43.run(J2DoPrivHelper.java:944) at java.security.AccessController.doPrivileged(Native Method) at org.apache.openjpa.meta.AbstractCFMetaDataFactory.parsePersistentTypeNames(AbstractCFMetaDataFactory.java:769) at org.apache.openjpa.meta.AbstractCFMetaDataFactory.getPersistentTypeNames(AbstractCFMetaDataFactory.java:623) ... 26 more Here is how I configured the EntityManagerFactory in my spring config file:

<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
  <property name="persistenceUnitName" value="openjpa-test"/>
  <property name="jpaVendorAdapter" ref="jpaAdapter"/>
  <property name="loadTimeWeaver">
    <bean class="org.springframework.instrument.classloading.SimpleLoadTimeWeaver"/>
  </property>
  <property name="dataSource" ref="dataSource"/>
  <property name="jpaProperties">
    <map>
      <entry key="openjpa.Log" value="DefaultLevel=TRACE, Tool=INFO"/>
      <entry key="openjpa.jdbc.SynchronizeMappings" value="validate"/>
    </map>
  </property>
</bean>

Can anybody help me to solve my issue? -- View this message in context: http://openjpa.208410.n2.nabble.com/Problems-with-OpenJPA-when-deploying-in-Apache-Servicemix-and-Spring-OSGi-environment-tp7586952.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
Re: Avoiding N+1 on collection in recursive relation
David - It seems like this question has been asked numerous times on this mailing list and no one has come up with a good answer. I too don't have an answer, but if you come up with anything please post back to the list. Thanks, Rick On Tue, Jul 8, 2014 at 6:56 PM, David Minor davemi...@gmail.com wrote: I have an entity (A) that has a recursive OneToMany relation with itself (children). A also has a OneToMany relation to entity B. Whenever A is fetched, the queries to fetch B for entity A and its children are all separate queries. Is there any way to avoid this? -- _ David Minor -- *Rick Curtis*
Re: Something is missing in OpenJPA 2.3.0 distribution
You're right, I'll check with Kevin to see if he has any ideas why that was missed when we built the binaries. On Sat, Jul 5, 2014 at 11:23 PM, Pawel Veselov pawel.vese...@gmail.com wrote: On Sat, Jul 5, 2014 at 9:20 PM, Pawel Veselov pawel.vese...@gmail.com wrote: Hi. Something seems to be missing from the libraries in the OpenJPA distro. I don't normally use the all jar, and without it, the enhancer fails with class not found of org/apache/xbean/asm4/ClassVisitor.class. The class is there in all jar, but not in any of the library jars... Seems that what's missing is Apache Geronimo XBean, and its download is utterly broken. -- *Rick Curtis*
Re: Something is missing in OpenJPA 2.3.0 distribution
In OPENJPA-2283 we added a dependency on xbean-asm but the assembly.xml files in openjpa-project weren't updated. I'll try to get changes put in today and then you could grab one of the nightly 2.3.1-SNAPSHOT builds. On Tue, Jul 8, 2014 at 6:43 AM, Rick Curtis curti...@gmail.com wrote: You're right, I'll check with Kevin to see if he has any ideas why that was missed when we built the binaries. On Sat, Jul 5, 2014 at 11:23 PM, Pawel Veselov pawel.vese...@gmail.com wrote: On Sat, Jul 5, 2014 at 9:20 PM, Pawel Veselov pawel.vese...@gmail.com wrote: Hi. Something seems to be missing from the libraries in the OpenJPA distro. I don't normally use the all jar, and without it, the enhancer fails with class not found of org/apache/xbean/asm4/ClassVisitor.class. The class is there in all jar, but not in any of the library jars... Seems that what's missing is Apache Geronimo XBean, and its download is utterly broken. -- *Rick Curtis* -- *Rick Curtis*
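Until fixed binaries are published, a possible workaround (assuming the missing classes come from the xbean-asm artifact referenced by OPENJPA-2283; the exact coordinates and version below are a guess and should be verified against that issue and the OpenJPA pom) would be to add the jar to the classpath explicitly, e.g. via Maven:

```xml
<dependency>
  <groupId>org.apache.xbean</groupId>
  <artifactId>xbean-asm4-shaded</artifactId>
  <!-- version is an assumption; check OPENJPA-2283 / the OpenJPA pom for the exact one -->
  <version>3.14</version>
</dependency>
```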
Re: OpenJPA 2.3.0 packages Derby?
Yes, you can safely remove derby if you aren't using it. On Sat, Jul 5, 2014 at 10:58 PM, Pawel Veselov pawel.vese...@gmail.com wrote: Hi. I see that 2.3.0 has a packaged derby jar (10.8.2.2) I was wondering what the reason is, and can I yank it out even if I'm not using Derby (in my deployment, derby.jar is deployed at application server level, and openjpa is packaged along .war file) Thank you, Pawel. -- *Rick Curtis*
Re: Cannot use local transaction during installDBDictionary.
Try to set the following property in your persistence.xml : property name=openjpa.jdbc.sql.DBDictionary value=oracle/ Thanks, Rick On Thu, Jun 26, 2014 at 7:54 AM, Rupert Smith rupertlssm...@googlemail.com wrote: I am getting this error during installDBDictionary: Caused by: java.sql.SQLException: could not use local transaction commit in a global transaction at oracle.jdbc.driver.PhysicalConnection.disallowGlobalTxnMode(PhysicalConnection.java:6825) at oracle.jdbc.driver.PhysicalConnection.commit(PhysicalConnection.java:3812) at oracle.jdbc.driver.PhysicalConnection.commit(PhysicalConnection.java:3857) at oracle.jdbc.OracleConnectionWrapper.commit(OracleConnectionWrapper.java:140) at org.tranql.connector.jdbc.ManagedXAConnection.localTransactionCommit(ManagedXAConnection.java:102) at org.tranql.connector.AbstractManagedConnection$LocalTransactionImpl.commit(AbstractManagedConnection.java:199) at org.tranql.connector.jdbc.ConnectionHandle.setAutoCommit(ConnectionHandle.java:160) at org.apache.openjpa.lib.jdbc.DelegatingConnection.setAutoCommit(DelegatingConnection.java:167) at org.apache.openjpa.lib.jdbc.DelegatingConnection.setAutoCommit(DelegatingConnection.java:167) at org.apache.openjpa.lib.jdbc.ConfiguringConnectionDecorator$ConfiguringConnection.setAutoCommit(ConfiguringConnectionDecorator.java:117) at org.apache.openjpa.lib.jdbc.ConfiguringConnectionDecorator$ConfiguringConnection.init(ConfiguringConnectionDecorator.java:111) at org.apache.openjpa.lib.jdbc.ConfiguringConnectionDecorator.decorate(ConfiguringConnectionDecorator.java:93) at org.apache.openjpa.lib.jdbc.DecoratingDataSource.decorate(DecoratingDataSource.java:99) at org.apache.openjpa.lib.jdbc.DecoratingDataSource.getConnection(DecoratingDataSource.java:94) at org.apache.openjpa.jdbc.schema.DataSourceFactory.installDBDictionary(DataSourceFactory.java:236) ... 
82 more I am using an XADataSource, and this data source is set up in my persistence.xml: <jta-data-source>osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=jdbc/AppDataSource)</jta-data-source> <non-jta-data-source>osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=jdbc/AppDataSource)</non-jta-data-source> I am thinking the above error might be caused because the installDBDictionary process tries to take the data source and put it in autocommit mode? Should I be setting up a non-XA datasource and configuring that as the non-jta-data-source in the persistence.xml? Thanks for your help. Rupert -- *Rick Curtis*
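Rick's suggested workaround, written out as a persistence.xml fragment (the surrounding elements are assumed; note that the documented configuration key is openjpa.jdbc.DBDictionary — the `.sql` segment in the reply above is part of the implementing class's package, not of the property name):

```xml
<properties>
  <!-- name the dictionary explicitly so OpenJPA can skip auto-detection,
       which needs a raw connection (and toggles autocommit) at startup -->
  <property name="openjpa.jdbc.DBDictionary" value="oracle"/>
</properties>
```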
Re: openjpa2.2 cannot enhance multiple persistence-unit?
Click on the nabble post from the first email on this thread... it looks like they are reporting a problem with the OpenJPA ant enhancer task. On Thu, Jun 26, 2014 at 11:45 AM, Kevin Sutter kwsut...@gmail.com wrote: The issue described is not clear enough? Ahhh, no... Especially when Jose replies with this... Yes im using that exactly situtation, and i could enhance multiple persistence unit. So, something is not consistent. You haven't described your persistence unit definitions (persistence.xml). You haven't described your enhancement process -- dynamic or build time. Are you running within an application server, or as a standalone Java SE application? There are many variables that can affect the entity enhancement processing. Kevin On Thu, Jun 26, 2014 at 11:05 AM, gembin gem...@gmail.com wrote: The issue described is not clear enough? only classes in unit1 are enhanced, none is enhanced in unit2 Enhanced org.example.Entity1 **NOT Enhanced** org.example.Entity2 Kevin Sutter wrote I don't think there ever was consensus that a problem existed... What exact scenario is not working with 2.2.0? On Tue, Jun 24, 2014 at 7:15 PM, robertgass lt; robertgass@ gt; wrote: Anyone find the solution to this yet? -- View this message in context: http://openjpa.208410.n2.nabble.com/openjpa2-2-cannot-enhance-multiple-persistence-unit-tp7582993p7586879.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- View this message in context: http://openjpa.208410.n2.nabble.com/openjpa2-2-cannot-enhance-multiple-persistence-unit-tp7582993p7586884.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
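For anyone trying to reproduce the report, a minimal build-time enhancement setup using the Ant task in question looks roughly like the following (the classpath id and directory property are illustrative, not taken from the original build):

```xml
<taskdef name="openjpac"
         classname="org.apache.openjpa.ant.PCEnhancerTask"
         classpathref="enhance.classpath"/>
<openjpac>
  <classpath refid="enhance.classpath"/>
  <!-- run once per persistence unit's compiled classes -->
  <fileset dir="${build.classes.dir}">
    <include name="**/*.class"/>
  </fileset>
</openjpac>
```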
Re: @MappedSuperclass fields not found
Are you listing the class elements in your persistence.xml? If so, try to add the MappedSuperclass to that list. On Tue, Jun 24, 2014 at 11:17 AM, Maxim Solodovnik solomax...@gmail.com wrote: Hello All, I'm trying to use @MappedSuperclass as follows: @MappedSuperclass public abstract class FileItem implements Serializable { private static final long serialVersionUID = 1L; @Column(name = "deleted") protected boolean deleted; public boolean isDeleted() { return deleted; } public void setDeleted(boolean deleted) { this.deleted = deleted; } } @Entity @NamedQueries({ @NamedQuery(name = "get", query = "SELECT f FROM FlvRecording f WHERE f.deleted = false") }) @Table(name = "flvrecording") public class FlvRecording extends FileItem { private static final long serialVersionUID = 1L; } While trying to call the named query I got: An error occurred while parsing the query filter "SELECT f FROM FlvRecording f WHERE f.deleted = false". Error message: No field named "deleted" in "FlvRecording". I also tried to make the fields of the abstract superclass private, and tried to add @Inheritance(strategy = InheritanceType.JOINED) to the FlvRecording class. What am I doing wrong? Thanks in advance for any help -- WBR Maxim aka solomax -- *Rick Curtis*
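Rick's suggestion spelled out as a persistence.xml fragment. The package name and persistence-unit name below are placeholders (the original mail doesn't show them); the point is that when classes are listed explicitly, the @MappedSuperclass must be listed alongside the entities that extend it:

```xml
<persistence-unit name="my-unit">
  <!-- list the mapped superclass too, or its fields (e.g. "deleted")
       are not visible to the unit's metadata -->
  <class>org.example.FileItem</class>
  <class>org.example.FlvRecording</class>
</persistence-unit>
```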
Re: Enable cascade persist globally and programmatically
Hmm, I don't think there is an easy/elegant way to modify the cascade behavior on the fly. I think with some work you could write some code where if a field didn't have cascade set, we would just look at some default setting... but that would take some hacking at OpenJPA. Sorry I don't have a better answer for you. On Sat, Jun 14, 2014 at 2:56 PM, Mansour Al Akeel mansour.alak...@gmail.com wrote: Yes, you are right. This is very hacky. But it even gets worse. I need to switch this temporarily, and revert it back. So the cascade-all setting will take effect only while loading the graph and inserting/updating the database from a file. Then I need it back to the original configuration. I think I need to reconsider this approach. If you have any advice please let me know. On Fri, Jun 13, 2014 at 4:43 PM, Rick Curtis curti...@gmail.com wrote: I am wondering if this can be done programmatically AND at runtime (After the entityManager is created). Yes I'm sure it is possible to do programmatically. The problem you're going to run into with changing this after the EntityManager is loaded is that we will have already processed all of our metadata and the setting will have already taken hold. It's possible to walk through the entire metadata tree and reset this value for each relationship, but that would be very hacky. On Fri, Jun 13, 2014 at 3:14 PM, Mansour Al Akeel mansour.alak...@gmail.com wrote: Rick, Thank you a lot for your help. I am wondering if this can be done programmatically AND at runtime (After the entityManager is created). Thank you.
On Fri, Jun 13, 2014 at 9:18 AM, Rick Curtis curti...@gmail.com wrote: You can enable cascade persist globally via the JPA-defined persistence-unit-metadata defaults element in the orm.xml. See the snippet below: <entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence/orm orm_2_0.xsd" version="2.0"> <persistence-unit-metadata> <persistence-unit-defaults> <cascade-persist/> </persistence-unit-defaults> </persistence-unit-metadata> ... </entity-mappings> Thanks, Rick On Thu, Jun 12, 2014 at 9:09 PM, Mansour Al Akeel mansour.alak...@gmail.com wrote: Is there a way to enable cascade persist globally and programmatically using an EntityManager instance? Thank you. -- *Rick Curtis* -- *Rick Curtis* -- *Rick Curtis*
Re: Data cache does not work in OpenJPA 2.2.2
I cc'd the owners of 2.2.x and 2.3.x to see if either of them will merge it into their branches. On Mon, Jun 16, 2014 at 2:50 AM, LYALIKOV, Oleg (Oleg) oleg.lyali...@alcatel-lucent.com wrote: Hello I would appreciate it if anybody can tell me whether it is possible to merge this fix to the 2.2.x or 2.3.x branches and what is required to do that merge... Without the fix there is a big data cache performance impact for the majority of systems which make use of the data cache. Thanks, Oleg -Original Message- From: LYALIKOV, Oleg (Oleg) [mailto:oleg.lyali...@alcatel-lucent.com] Sent: Tuesday, May 27, 2014 2:47 PM To: users@openjpa.apache.org Subject: RE: Data cache does not work in OpenJPA 2.2.2 Hi Rick, Yes, it passes with the latest 2.4.0-SNAPSHOT, and the OpenJPA log file does not contain SQL queries while executing the JPQL query for the second time. So do you know when this fix will be available in stable versions? And for which stable versions? Thanks, Oleg -Original Message- From: Rick Curtis [mailto:curti...@gmail.com] Sent: Tuesday, May 27, 2014 5:12 AM To: users Subject: Re: Data cache does not work in OpenJPA 2.2.2 Does this test pass on trunk? This sounds pretty similar to a change[1] that I put into trunk.
[1] https://issues.apache.org/jira/browse/OPENJPA-2285 On Mon, May 26, 2014 at 9:11 AM, LYALIKOV, Oleg (Oleg) oleg.lyali...@alcatel-lucent.com wrote: Forgot to update project archive to successfully run in OpenJPA 2.0.0 - you need to not only change openjpa version in pom file but also add property property name=openjpa.RemoteCommitProvider value=sjvm/ in persistence.xml file (and also use -XX:-UseSplitVerifier as project configured to be compiled by jdk 7) Thanks, Oleg From: LYALIKOV, Oleg (Oleg) Sent: Monday, May 26, 2014 5:56 PM To: users@openjpa.apache.org Subject: Data cache does not work in OpenJPA 2.2.2 Hello, I have a simple project with several entities and several relations (a little more complex than trivial) and when I execute query SELECT obj FROM MyEntity obj several times OpenJPA still executes SQL queries on database to retrieve some data. Some details: here are entities (you can find full descriptions in the attached project): Person 1-* DocumentBase Document 1-* Link1 Document 1-* Link2 and Document entity inherits from DocumentBase entity. All relations are eager OneToMany. I create one Person with one Document which also has one Link1 and one Link2 objects. Then I execute query SELECT DISTINCT obj FROM Person obj twice and still OpenJPA executes SQL queries for the second time to retrieve Document/Link1/Link2 data. It makes 2 SQL queries. If I use following setting (which I primarily interested in): property name=openjpa.jdbc.SubclassFetchMode value=none/ then these queries are: SELECT t0.docName, t1.DOCUMENT_ID, t2.id FROM Document t0 LEFT OUTER JOIN Document_Link1 t1 ON t0.id = t1.DOCUMENT_ID LEFT OUTER JOIN Link1 t2 ON t1.LINK1_ID = t2.id WHERE t0.id = ? ORDER BY t1.DOCUMENT_ID ASC [params=(String) 51] and SELECT t1.id FROM Document_Link2 t0 INNER JOIN Link2 t1 ON t0.LINK2_ID = t1.id WHERE t0.DOCUMENT_ID = ? 
[params=(String) 51] if I use the default value for the openjpa.jdbc.SubclassFetchMode property then 2 queries are still executed: SELECT t1.id FROM Document_Link1 t0 INNER JOIN Link1 t1 ON t0.LINK1_ID = t1.id WHERE t0.DOCUMENT_ID = ? [params=(String) 51] and SELECT t1.id FROM Document_Link2 t0 INNER JOIN Link2 t1 ON t0.LINK2_ID = t1.id WHERE t0.DOCUMENT_ID = ? [params=(String) 51] If there are a lot of such objects and a lot of such relations then the application just spends all its time in the database. It is a serious performance impact... There is a junit test in the attached project which fails in OpenJPA 2.2.2 and passes in OpenJPA 2.0.0, so it looks like a regression. After some analysis it seems that the change was in the DataCacheStoreManager::load method. In OpenJPA 2.0.0 the method cacheStateManager is executed, which properly puts fully loaded objects in the cache, while in OpenJPA 2.2.2 it first checks the CacheStoreMode; the value is USE, the objects are alreadyCached (they are actually in the cache but only partially loaded), so OpenJPA does not update the cache, only partially loaded objects remain in the cache, and OpenJPA always executes SQL queries to complete these objects on every query. So should I post it as a bug in the issue tracker? And maybe you know some workaround for this issue? Thanks, Oleg -- *Rick Curtis* -- *Rick Curtis*
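Until a fix lands in a release, one workaround worth trying — an untested sketch, not taken from the thread; `em` and the Person entity are assumed from Oleg's project — is the standard JPA 2.0 cache store-mode hint, which asks the provider to overwrite existing data cache entries with the freshly loaded state instead of skipping them because they are already cached:

```java
import java.util.List;
import javax.persistence.CacheStoreMode;
import javax.persistence.TypedQuery;

// CacheStoreMode.REFRESH forces the data cache entry to be rewritten with the
// fully loaded object state, instead of being left partially loaded as the
// default mode (CacheStoreMode.USE, identified in the analysis above) does.
TypedQuery<Person> q = em.createQuery(
        "SELECT DISTINCT obj FROM Person obj", Person.class);
q.setHint("javax.persistence.cache.storeMode", CacheStoreMode.REFRESH);
List<Person> people = q.getResultList();
```

Whether this actually avoids the stale partial entries in 2.2.2 would need to be verified against the attached test project.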
Re: Enable cascade persist globally and programmatically
You can enable cascade persist globally via the JPA-defined persistence-unit-metadata defaults element in the orm.xml. See the snippet below: <entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence/orm orm_2_0.xsd" version="2.0"> <persistence-unit-metadata> <persistence-unit-defaults> <cascade-persist/> </persistence-unit-defaults> </persistence-unit-metadata> ... </entity-mappings> Thanks, Rick On Thu, Jun 12, 2014 at 9:09 PM, Mansour Al Akeel mansour.alak...@gmail.com wrote: Is there a way to enable cascade persist globally and programmatically using an EntityManager instance? Thank you. -- *Rick Curtis*
Re: Enable cascade persist globally and programmatically
I am wondering if this can be done programmatically AND at runtime (After the entityManager is created). Yes I'm sure it is possible to do programmatically. The problem you're going to run into with changing this after the EntityManager is loaded is that we will have already processed all of our metadata and the setting will have already taken hold. It's possible to walk through the entire metadata tree and reset this value for each relationship, but that would be very hacky. On Fri, Jun 13, 2014 at 3:14 PM, Mansour Al Akeel mansour.alak...@gmail.com wrote: Rick, Thank you a lot for your help. I am wondering if this can be done programmatically AND at runtime (After the entityManager is created). Thank you. On Fri, Jun 13, 2014 at 9:18 AM, Rick Curtis curti...@gmail.com wrote: You can enable cascade persist globally via the JPA-defined persistence-unit-metadata defaults element in the orm.xml. See the snippet below: <entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence/orm orm_2_0.xsd" version="2.0"> <persistence-unit-metadata> <persistence-unit-defaults> <cascade-persist/> </persistence-unit-defaults> </persistence-unit-metadata> ... </entity-mappings> Thanks, Rick On Thu, Jun 12, 2014 at 9:09 PM, Mansour Al Akeel mansour.alak...@gmail.com wrote: Is there a way to enable cascade persist globally and programmatically using an EntityManager instance? Thank you. -- *Rick Curtis* -- *Rick Curtis*
Re: behavior change from 2.1.0 to 2.2.0 on persist
Marc -- I'm thinking that there was a change in cascade persist behavior that you might be running into. http://openjpa.apache.org/builds/2.2.2/apache-openjpa/docs/jpa_2.2.html#jpa_2.2_cascadePersist On Mon, Jun 2, 2014 at 9:53 AM, Marc Logemann marc.logem...@gmail.com wrote: Kevin, thanks for fast feedback. To your questions: 1) of course we could do the em.find() and do it the way it should be done ;-) 2) no, we have not tried using em.merge(), this would be an option we could check out. And yes. WE dont want to persist the CustomerType since its already there. We just want to create the relationship. Thanks again. And now we will happily wait for Java8 Support in your bytecode enhancer so that we could upgrade to latest Version of OpenJPA instead of being stuck to 2.2.0 ;-) Marc 2014-06-02 16:11 GMT+02:00 Kevin Sutter kwsut...@gmail.com: Hi Marc, Sorry for the troubles. Technically, it looks like you were lucky and coding to a bug in the OpenJPA code. Since you just created this CustomerType, we have to assume that it's unmanaged. And, we can't automatically cascade the persist operation to this unmanaged entity. And, in your particular case, we wouldn't want to persist this entity since it already exists. Just to be clear, you don't want this CustomerType to be persisted, right? You are just creating this to satisfy the relationship from Person, right? A couple of ideas come to mind... 1) Can you do an em.find() operation on your CustomerType? I realize this is an extra SQL, but then this CustomerType would be managed and satisfy the requirement. 2) Have you tried using em.merge(p) instead of em.persist(p)? The merge should do either the update or insert based on the state of the object. When we get to the CustomerType, we might have to do the extra SQL to determine if it exists already, but then we should be okay. This JIRA [1] from the 2.2.0 Release Notes [2] makes me think this might work... 
Maybe somebody else has some ideas on how to get around this scenario. [1] https://issues.apache.org/jira/browse/OPENJPA-1896 [2] http://openjpa.apache.org/builds/2.2.0/apache-openjpa/RELEASE-NOTES.html On Mon, Jun 2, 2014 at 7:48 AM, Marc Logemann marc.logem...@gmail.com wrote: Hey, we recently switched to 2.2.0 (can't go higher because we use Java8) and we found a change in behavior. Assume we created a new entity which looks like this: Person.java -- int oid String name CustomerType address We created the object like so: Person p = new Person(); p.setName("foo"); CustomerType ct = new CustomerType(); ct.setOid(1); // THIS OID already exists and we want to map the existing object to Person p.setCustomerType(ct); persist(p); In 2.1.0 OpenJPA knew that there is a CustomerType in the DB with this oid, loaded it automatically, and the child object was managed. With 2.2.0 this is no longer the case and we get an Unmanaged bla bla bla Exception. We relied on that behavior heavily and the rewrite is tough across all areas. Is there some kind of config setting where I can set the old behavior? Or was this old behavior a bug? ;-) Thanks for hints. Marc -- *Rick Curtis*
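Besides Kevin's two suggestions, a third option that may avoid the extra SELECT is the standard EntityManager.getReference(), which hands back a managed proxy for the existing row. This is a sketch based on Marc's classes, not something confirmed in the thread, so whether it satisfies the 2.2.0 cascade check without a round trip would need testing:

```java
Person p = new Person();
p.setName("foo");
// getReference returns a managed (lazily initialized) proxy for the existing
// CustomerType row, so persist(p) no longer sees an unmanaged instance.
CustomerType ct = em.getReference(CustomerType.class, 1);
p.setCustomerType(ct);
em.persist(p);
```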
Re: Utilizing a slice for online migration
Your post seems to be in rather poor taste for an open source developer. I disagree, as my post had the best interest of the developer in mind. From my point of view, Slice doesn't come up on the list very often, and when it does, there isn't anyone with experience to help. OpenJPA has very, very few people supporting this community these days and I don't want to steer people toward a feature that they won't get much/any help with. It sucks, but that's the way it is. On Sat, May 31, 2014 at 2:11 AM, Pinaki Poddar ppod...@apache.org wrote: Hello Rick, Your post seems to be in rather poor taste for an open source developer. Instead of encouraging users to use a module from a project of which you are a member, a module that has been in operation for the last four years and has been successfully used by others, your answer seems to be: Nobody uses Slice, so do not bother. It is not only factually incorrect, it is inappropriate. - Pinaki Poddar Chair, Apache OpenJPA Project -- View this message in context: http://openjpa.208410.n2.nabble.com/Utilizing-a-slice-for-online-migration-tp7586062p7586726.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
Re: Data cache does not work in OpenJPA 2.2.2
Does this test pass on trunk? This sounds pretty familiar to a change[1] that I put into trunk. [1] https://issues.apache.org/jira/browse/OPENJPA-2285 On Mon, May 26, 2014 at 9:11 AM, LYALIKOV, Oleg (Oleg) oleg.lyali...@alcatel-lucent.com wrote: Forgot to update project archive to successfully run in OpenJPA 2.0.0 - you need to not only change openjpa version in pom file but also add property property name=openjpa.RemoteCommitProvider value=sjvm/ in persistence.xml file (and also use -XX:-UseSplitVerifier as project configured to be compiled by jdk 7) Thanks, Oleg From: LYALIKOV, Oleg (Oleg) Sent: Monday, May 26, 2014 5:56 PM To: users@openjpa.apache.org Subject: Data cache does not work in OpenJPA 2.2.2 Hello, I have a simple project with several entities and several relations (a little more complex than trivial) and when I execute query SELECT obj FROM MyEntity obj several times OpenJPA still executes SQL queries on database to retrieve some data. Some details: here are entities (you can find full descriptions in the attached project): Person 1-* DocumentBase Document 1-* Link1 Document 1-* Link2 and Document entity inherits from DocumentBase entity. All relations are eager OneToMany. I create one Person with one Document which also has one Link1 and one Link2 objects. Then I execute query SELECT DISTINCT obj FROM Person obj twice and still OpenJPA executes SQL queries for the second time to retrieve Document/Link1/Link2 data. It makes 2 SQL queries. If I use following setting (which I primarily interested in): property name=openjpa.jdbc.SubclassFetchMode value=none/ then these queries are: SELECT t0.docName, t1.DOCUMENT_ID, t2.id FROM Document t0 LEFT OUTER JOIN Document_Link1 t1 ON t0.id = t1.DOCUMENT_ID LEFT OUTER JOIN Link1 t2 ON t1.LINK1_ID = t2.id WHERE t0.id = ? ORDER BY t1.DOCUMENT_ID ASC [params=(String) 51] and SELECT t1.id FROM Document_Link2 t0 INNER JOIN Link2 t1 ON t0.LINK2_ID = t1.id WHERE t0.DOCUMENT_ID = ? 
[params=(String) 51] if I use the default value for the openjpa.jdbc.SubclassFetchMode property then 2 queries are still executed: SELECT t1.id FROM Document_Link1 t0 INNER JOIN Link1 t1 ON t0.LINK1_ID = t1.id WHERE t0.DOCUMENT_ID = ? [params=(String) 51] and SELECT t1.id FROM Document_Link2 t0 INNER JOIN Link2 t1 ON t0.LINK2_ID = t1.id WHERE t0.DOCUMENT_ID = ? [params=(String) 51] If there are a lot of such objects and a lot of such relations then the application just spends all its time in the database. It is a serious performance impact... There is a junit test in the attached project which fails in OpenJPA 2.2.2 and passes in OpenJPA 2.0.0, so it looks like a regression. After some analysis it seems that the change was in the DataCacheStoreManager::load method. In OpenJPA 2.0.0 the method cacheStateManager is executed, which properly puts fully loaded objects in the cache, while in OpenJPA 2.2.2 it first checks the CacheStoreMode; the value is USE, the objects are alreadyCached (they are actually in the cache but only partially loaded), so OpenJPA does not update the cache, only partially loaded objects remain in the cache, and OpenJPA always executes SQL queries to complete these objects on every query. So should I post it as a bug in the issue tracker? And maybe you know some workaround for this issue? Thanks, Oleg -- *Rick Curtis*
Re: Broken Link
is the link still valid / the page still reachable? I don't believe that was ever a valid link, I'm quite certain that it is just used for illustrative purposes. Thanks, Rick On Tue, May 20, 2014 at 8:21 AM, Boblitz John john.bobl...@bertschi.com wrote: Hello, I was looking about for an example of the JSON format in JEST and thought I found a link on the page: http://openjpa.apache.org/jest-syntax.html The page contains, in part: For example, to find a persistent Person instance with primary key 1234 and receive the result in JSON format will be: http://www.example.com:8080/jest/find/format=json?type=Person1234 following that link however does not seem to lead anywhere ... is the link still valid / the page still reachable? Freundliche Grüsse / best regards John Boblitz EDV-Abteilung Bertschi AG __ Hutmattstr. 22, CH-5724 Dürrenäsch Tel: +41 (0)62 767 67 04 john.bobl...@bertschi.com www.bertschi.com -- *Rick Curtis*
Re: Slow startup when using a postgresql db and connecting remotely
I am not aware of any properties that will avoid reading metadata from the connection. It would be helpful to take a few thread dumps during the minutes where opencms is attempting to connect to the DB so we can see what is taking all of the time. Hope this helps. Thanks, Rick On Tue, May 6, 2014 at 1:02 AM, Christian Bjørnbak c...@touristonline.dk wrote: Hi I'm trying to get opencms (which uses openjpa) to connect to a remote postgresql db. When I start opencms on my laptop at home connecting to a database in the office it takes minutes to start. I had the same experience with another app using hibernate but there I solved it by using this secret setting: hibernate.temp.use_jdbc_metadata_defaults. See http://stackoverflow.com/questions/10075081/hibernate-slow-to-acquire-postgres-connection Does openjpa have a similar setting to avoid reading all metadata? Opencms uses apache commons dbcp as a layer between openjpa and the postgresql driver. I don't know if this influences the problem. Med venlig hilsen / Kind regards, Christian Bjørnbak Chefudvikler / Lead Developer TouristOnline A/S Islands Brygge 43 2300 København S Denmark TLF: +45 32888230 Dir. TLF: +45 32888235 -- *Rick Curtis*
Re: Advice Requested: Instance Loading bug in 2.3.0
I'm unsure as to why this code/data combo used to work in version 1.2.2 and stopped working in 2.x This sounds somewhat familiar, but I'm unable to find any posts on the mailing lists. My best guess is that in 2.x we may have found a bug where we weren't properly enforcing fetch depth... but that is just a gut feeling. I'm glad you were able to figure it out, as I know these things can be pretty painful to debug. Thanks, Rick On Mon, May 5, 2014 at 11:46 AM, Jeff Oh jeff...@elasticpath.com wrote: Hi Rick, Thanks for your suggestions. The good news is that I managed to determine what was causing the issue. The bad news is that it turns out that, despite my best intentions, many of my initial diagnoses and thoughts were quite wrong. So - the parts that were incorrect: Initially, I thought that each instance was created multiple times, and later on a winner was selected. It turns out that each instance is only created once, as (I should have) expected. The multiple instances were an unfortunate side-effect of viewing multiple concurrent threads loading the same data in the debugger. Instead, the actual problem occurred because OpenJPA was halting the load because it hit the maximum recursion depth (defaulted to 1). I was able to fix the problem by adding "recursionDepth = -1" to the @FetchAttribute annotations between A-B and B-A. I'm unsure as to why this code/data combo used to work in version 1.2.2 and stopped working in 2.x. My best theory is that perhaps OpenJPA 1.x might have ignored the recursionDepth because the recursive/cyclic relationship wasn't direct (e.g. A-C-B-A instead of A-B-A), but that 2.x might be a little bit smarter. Or perhaps something else, perhaps not related to OpenJPA at all, might have jiggled the starting conditions enough to cause the problem. In any case, the ultimate fix was simple, and I'm happy about that.
Thanks, Jeff On 2014-05-04, 6:24 AM, Rick Curtis curti...@gmail.com wrote: A couple quick thoughts/debug suggestions for you : - Can we get a copy of your persistence.xml file? - Can you try to run with 2.0.x to see if something was introduced in the original 2.0 implementation effort, or if it came sometime after? - Have you tried setting your fetch graph to be entirely loaded by the default fetch group? I know this probably isn't ideal, but it might be an interesting data point. If you are able to trim the test down into something consumable by others, I'd be interested in taking a look at it. Good luck, Rick On Fri, May 2, 2014 at 5:43 PM, Jeff Oh oh.j...@gmail.com wrote: Hello All, We've recently upgraded to 2.3.0 (from 1.2.2) and have encountered a nasty bug where when loading a complicated cyclic object graph using a fetch group, some relationships are not being populated under some circumstances. Unfortunately, the graph really is quite complicated, and while I have a set of test data where the problem can be reliably duplicated, I haven't been able to make an integration test do the same yet. A simplified version of the graph is that A is 1:M to B and A is 1:M to C, B is M:1 to A (Bidirectional), and C is M:1 to A and C is also M:1 to B. The net effect is a graph where cycles can exist, although in practice they generally don't other than the bidirectional relationship between A and B. Our failing case is interesting. When we load single instances of B (call them B1 and B2) one at a time the load is always successful, with all of the A, B, and C relationships populated - but only if loaded one at a time. In other words, if we run entityManager.find(B1) or entityManager.find(B2), then all is well. An ascii art version of the graph traversals might look something like this: B1 - A1 - C1- B3 - A3 - B3- A3 - B3-1 - A3 - B3-2 - A3 - C1-2 - ... B2 - A2 - C2- A3 - B3 - A3 - B3-1- A3 - B3-2- A3 - C2-2 - ... where B is M:1 to A, A is 1:M to C and A is 1:M to B. 
Interestingly, while B1 and B2 are separate objects, they do share several common objects in their graphs - call them A3 and B3 (as well as B3-1 and B3-2). If we run a query that loads both B1 and B2 in the same query - entityManager.find(B1 + B2), then one of the relationships from one of the other B objects in the graph (call it B3) B3-A is null (not populated), where B3-A should == A3. To clarify, B1.A1.C1.B3.getA() should equal A3, and instead is null, and B2.A2.C2.A3.B3.getA() should also equal A3, but instead is null. Of course, the graph is being detached after load, so unfortunately lazy loading the B3
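Jeff's fix, sketched with OpenJPA's fetch-group annotations. The class and attribute names here are illustrative stand-ins for his A/B entities; the detail that mattered in his mail was recursionDepth = -1, i.e. unlimited recursion instead of the default depth of 1:

```java
import javax.persistence.Entity;
import org.apache.openjpa.persistence.FetchAttribute;
import org.apache.openjpa.persistence.FetchGroup;

@Entity
@FetchGroup(name = "graph", attributes = {
    // -1 lifts the default recursion-depth limit of 1 on this cyclic path
    @FetchAttribute(name = "bs", recursionDepth = -1)
})
public class A {
    // ... 1:M to B ("bs") and 1:M to C, as described in the thread ...
}
```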
Re: CheckDatabaseForCascadePersistToDetachedEntity=true in OSGi
If I understand you correctly, you're going to need to manually keep track of the FK from model - catalog. On Thu, May 1, 2014 at 10:57 AM, john j...@zode64.com wrote: Hi Rick Thanks for the response. There are too many classes with different characteristics for me to enjoy pulling it into 1 persistence unit. I can't make the catalog transient as I need to save a reference of it to the model. However I only need to save the foreign key to the model and don't want to be able to modify it. Is there a simple way of doing this? Cheers John On 1 May 2014 12:07, Rick Curtis [via OpenJPA] ml-node+s208410n7586327...@n2.nabble.com wrote: When I try to save a model which has a reference to one of the readonly catalogs (which I don't want to update) and which I get through the catalogs-persistence service (so the model-persistence is handling an object from another persistence unit) The problem is that you're trying to persist an Entity (model) that has a relationship to another Entity (catalog) that is from a different persistence unit. From the model-persistence persistence unit's point of view, the catalog that is referenced by your model is a non-persistent field. I'm reaching a bit here, but I think you have two options: either compress down to one persistence unit, or mark model-catalog as @Transient so that relationship will be ignored by OpenJPA. Thanks, Rick On Wed, Apr 30, 2014 at 1:26 PM, John Bower [hidden email] http://user/SendEmail.jtp?type=nodenode=7586327i=0 wrote: Hi My setup is - servicemix 5.0.0, openjpa 2.2.2, MySQL 5.5 database. I have 4 bundles: model-persistence, catalogs-persistence, models and catalogs. The 2 persistence bundles each have a persistence unit. The model-persistence bundle provides an OSGi service to save a model which is defined in the models bundle. The catalogs-persistence bundle provides a read-only get service which returns the catalogs that are referenced by the model. 
When I try to save a model which has a reference to one of the readonly catalogs (which I don't want to update) and which I get through the catalogs-persistence service (so the model-persistence is handling an object from another persistence unit) 15:09:53,324 | DEBUG | ectronica/create | ServiceRecipe | 7 - org.apache.aries.blueprint.core - 1.4.0 | Method entry: getService, args org.apache.karaf.jndi.KarafInitialContextFactory@2cf5e0f0 15:09:53,339 | DEBUG | ectronica/create | context | 199 - org.apache.aries.jpa.container.context - 1.0.1 | Created a new persistence context org.apache.aries.jpa.container.impl.EntityManagerWrapper@7933da2e for transaction [Xid:globalId=10ffe51effb4451006f72672e6170616368652e61726965732e7472616e73616374696f6e,length=64,branchId=,length=64]. 15:09:53,512 | WARN | ectronica/create | Transaction | 230 - org.apache.aries.transaction.manager - 1.1.0 | Unexpected exception from beforeCompletion; transaction will roll back openjpa-2.2.2-r422266:1468616 nonfatal user error org.apache.openjpa.persistence.InvalidStateException: Encountered unmanaged object catalog.Catalog-673 in life cycle state unmanaged while cascading persistence via field model.Model.catalog during flush. However, this field does not allow cascade persist. You cannot flush unmanaged objects or graphs that have persistent associations to unmanaged objects. Suggested actions: a) Set the cascade attribute for this field to CascadeType.PERSIST or CascadeType.ALL (JPA annotations) or persist or all (JPA orm.xml), b) enable cascade-persist globally, c) manually persist the related field value prior to flushing. d) if the reference belongs to another context, allow reference to it by setting StoreContext.setAllowReferenceToSiblingContext(). 
FailedObject: catalog.Catalog-673 at org.apache.openjpa.kernel.SingleFieldManager.preFlushPC(SingleFieldManager.java:786) at org.apache.openjpa.kernel.SingleFieldManager.preFlush(SingleFieldManager.java:621) at org.apache.openjpa.kernel.SingleFieldManager.preFlush(SingleFieldManager.java:589) at org.apache.openjpa.kernel.SingleFieldManager.preFlush(SingleFieldManager.java:505) at org.apache.openjpa.kernel.StateManagerImpl.preFlush(StateManagerImpl.java:3018) at org.apache.openjpa.kernel.PNewState.beforeFlush(PNewState.java:44) at org.apache.openjpa.kernel.StateManagerImpl.beforeFlush(StateManagerImpl.java:1034) at org.apache.openjpa.kernel.BrokerImpl.flush(BrokerImpl.java:2122) at org.apache.openjpa.kernel.BrokerImpl.flushSafe(BrokerImpl.java:2082
Re: Advice Requested: Instance Loading bug in 2.3.0
A couple quick thoughts/debug suggestions for you : - Can we get a copy of your persistence.xml file? - Can you try to run with 2.0.x to see if something was introduced in the original 2.0 implementation effort, or if it came sometime after? - Have you tried setting your fetch graph to be entirely loaded by the default fetch group? I know this probably isn't ideal, but it might be an interesting data point. If you are able to trim the test down into something consumable by others, I'd be interested in taking a look at it. Good luck, Rick On Fri, May 2, 2014 at 5:43 PM, Jeff Oh oh.j...@gmail.com wrote: Hello All, We've recently upgraded to 2.3.0 (from 1.2.2) and have encountered a nasty bug where when loading a complicated cyclic object graph using a fetch group, some relationships are not being populated under some circumstances. Unfortunately, the graph really is quite complicated, and while I have a set of test data where the problem can be reliably duplicated, I haven't been able to make an integration test do the same yet. A simplified version of the graph is that A is 1:M to B and A is 1:M to C, B is M:1 to A (Bidirectional), and C is M:1 to A and C is also M:1 to B. The net effect is a graph where cycles can exist, although in practice they generally don't other than the bidirectional relationship between A and B. Our failing case is interesting. When we load single instances of B (call them B1 and B2) one at a time the load is always successful, with all of the A, B, and C relationships populated - but only if loaded one at a time. In other words, if we run entityManager.find(B1) or entityManager.find(B2), then all is well. An ascii art version of the graph traversals might look something like this: B1 - A1 - C1- B3 - A3 - B3- A3 - B3-1 - A3 - B3-2 - A3 - C1-2 - ... B2 - A2 - C2- A3 - B3 - A3 - B3-1- A3 - B3-2- A3 - C2-2 - ... where B is M:1 to A, A is 1:M to C and A is 1:M to B. 
Interestingly, while B1 and B2 are separate objects, they do share several common objects in their graphs - call them A3 and B3 (as well as B3-1 and B3-2). If we run a query that loads both B1 and B2 in the same query - entityManager.find(B1 + B2), then one of the relationships from one of the other B objects in the graph (call it B3) B3-A is null (not populated), where B3-A should == A3. To clarify, B1.A1.C1.B3.getA() should equal A3, and instead is null, and B2.A2.C2.A3.B3.getA() should also equal A3, but instead is null. Of course, the graph is being detached after load, so unfortunately lazy loading the B3-A relationship is not possible. When running through a debugger, it looks like what happens is that the A3 and B3-* instances are each being created more than one time. This seems to make sense because the graph has cycles, relationships are loaded recursively, and instances are not added to the transactional cache (ManagedCache) until after they are fully initialized with all fields loaded. Therefore new instances will not always be fully created when they are needed again. As the call stack goes forward, the newest A and B instances get fully initialized with all fields loaded. However, as the call stack unwinds, the oldest A and B instances are the ones that eventually win, and one of the B instances that wins is not fully initialized and has a null field in its A relationship (e.g. B3.getA() is null, because B3's A was never set). I'm currently tracing this through to try to determine exactly why the outer (eldest) B3 isn't getting loaded with its A3, but while I do so, I was wondering if anyone else has encountered a similar problem or has any suggestions as to where I should focus my efforts. My current thinking is that the multiple loading issue is OK and expected, and that the problem is that the oldest B's aren't getting loaded with their A's. 
But it is possible that the problem is that each individual entity should only be initialized once, and that this is the root issue. Comments would be welcome. Thanks, Jeff -- *Rick Curtis*
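The failure mode Jeff describes — instances not registered in the managed cache until they are fully initialized, so cycles re-create them — is easiest to see next to the usual countermeasure: register each instance in an identity map before recursing into its relations. A toy, non-OpenJPA sketch (all class and field names here are hypothetical, for illustration only):

```java
import java.util.HashMap;
import java.util.Map;

// Toy graph loader: registering a node *before* recursing into its
// relations guarantees that cycles resolve to the same instance
// rather than a half-built duplicate.
public class IdentityMapLoad {
    static class Node {
        final int id;
        Node related;                       // may point back, forming a cycle
        Node(int id) { this.id = id; }
    }

    // Pretend "database": node 1 relates to node 2, node 2 back to node 1.
    static final Map<Integer, Integer> REL = Map.of(1, 2, 2, 1);

    static Node load(int id, Map<Integer, Node> cache) {
        Node cached = cache.get(id);
        if (cached != null) return cached;  // a cycle hits the same instance
        Node n = new Node(id);
        cache.put(id, n);                   // register BEFORE recursing
        n.related = load(REL.get(id), cache);
        return n;
    }

    public static void main(String[] args) {
        Node n1 = load(1, new HashMap<>());
        // The cycle closes on the identical instance, not a partial copy.
        System.out.println(n1.related.related == n1);
    }
}
```

If registration happened only after the relations were loaded, the recursive call for node 2 would build a second node 1 with its `related` field unset — the same shape as the null `B3.getA()` above.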
Re: CheckDatabaseForCascadePersistToDetachedEntity=true in OSGi
But it doesn't change anything. Should this option remove the problem I am having? Should it work in OSGi? Are there any suggestions as to what I am doing wrong and if it is possible at all? Cheers John -- *Rick Curtis*
Re: Left fetch join = distinct
I don't have time to dig into this issue today, but I know there is quite a bit of history on the mailing list[1] revolving around left fetch join and distinct Thanks, Rick [1] http://openjpa.markmail.org/search/?q=left%20fetch%20join%20distinct On Tue, Apr 29, 2014 at 11:51 AM, Jim Talbut jtal...@spudsoft.co.uk wrote: Hi, This is from the definition of a named query: query = SELECT o from AssessmentImpl o + left join fetch o.assessmentResults + where + o.assessmentKey.dateCreated :outstandingLimit + and o.pollResultsAttempts :pollCountLimit + and ( + o.resultsAvailable = true + or o.status = AssessmentStatus.Pending + or o.status = AssessmentStatus.InProgress + )) The assessmentResults field is a OneToMany join, usually fetched lazily. For the purposes of this query I want to pull in all the assessmentResults so I want it eagerly and I added the left join fetch line. Unfortunately this adds a DISTINCT to the SQL query and fails because one of the columns is a LOB. Why does adding left fetch join make it DISTINCT? And is there any way to avoid it? Thanks. Jim -- *Rick Curtis*
Re: openjpa.jdbc.SynchronizeMappings Values...
As Kevin mentioned in a prior email, set this[1] property if you only want validation. Thanks, Rick [1] <property name="openjpa.jdbc.SynchronizeMappings" value="validate"/> On Tue, Apr 29, 2014 at 6:53 AM, tdias tiagoftd...@gmail.com wrote: I know this is an old post but I'm facing a similar scenario. Currently I'm using buildSchema during development, which is great. But I need to configure my project to be able to build the schema when it is executed for the first time and only to validate the schema the next times it runs. Does anybody have a suggestion? I'm not so familiar with OpenJPA properties, sorry. Dias, T. -- View this message in context: http://openjpa.208410.n2.nabble.com/openjpa-jdbc-SynchronizeMappings-Values-tp7582837p7586293.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
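For the development-vs-production split tdias asks about, the usual approach (a sketch based on the documented openjpa.jdbc.SynchronizeMappings values; adjust for your own setup) is to keep two persistence.xml variants:

```xml
<!-- development: create/alter tables so they match the mappings -->
<property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema(ForeignKeys=true)"/>

<!-- production: only verify that the schema matches the mappings, fail fast otherwise -->
<property name="openjpa.jdbc.SynchronizeMappings" value="validate"/>
```

There is no single built-in value that builds on first run and validates afterwards, so switching the property per environment (e.g. via build profiles) is the common workaround.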
Re: OpenJPA retrieving one column with a separate select
Sorry but I've been completely underwater. I'll take a look as soon as I get a few spare cycles. On Tue, Apr 8, 2014 at 2:59 AM, mxvs mxvs...@gmail.com wrote: Hello Rick, Were you able to download my Unit Test and make sense of what might be going on? Thanks Max -- View this message in context: http://openjpa.208410.n2.nabble.com/OpenJPA-retrieving-one-column-with-a-seperate-select-tp7586156p7586194.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
Re: Confusing conflicts between OpenJPA and Postgresql
(DelegatingStoreManager.java:163) at org.apache.openjpa.kernel.BrokerImpl.retainConnection(BrokerImpl.java:3710) at org.apache.openjpa.kernel.BrokerImpl.beginStoreManagerTransaction(BrokerImpl.java:1283) at org.apache.openjpa.kernel.BrokerImpl.flush(BrokerImpl.java:1968) at org.apache.openjpa.kernel.BrokerImpl.flushSafe(BrokerImpl.java:1908) at org.apache.openjpa.kernel.BrokerImpl.beforeCompletion(BrokerImpl.java:1826) at bitronix.tm.BitronixTransaction.fireBeforeCompletionEvent(BitronixTransaction.java:532) at bitronix.tm.BitronixTransaction.commit(BitronixTransaction.java:235) ... 8 more Caused by: org.postgresql.util.PSQLException: Cannot change transaction isolation level in the middle of a transaction at org.postgresql.jdbc2.AbstractJdbc2Connection.setTransactionIsolation(AbstractJdbc2Connection.java:944) at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at bitronix.tm.resource.jdbc.BaseProxyHandlerClass.invoke(BaseProxyHandlerClass.java:64) at $Proxy5.setTransactionIsolation(Unknown Source) at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at bitronix.tm.resource.jdbc.BaseProxyHandlerClass.invoke(BaseProxyHandlerClass.java:64) at $Proxy7.setTransactionIsolation(Unknown Source) at org.apache.openjpa.lib.jdbc.DelegatingConnection.setTransactionIsolation(DelegatingConnection.java:257) at org.apache.openjpa.lib.jdbc.ConfiguringConnectionDecorator.decorate(ConfiguringConnectionDecorator.java:95) at org.apache.openjpa.lib.jdbc.DecoratingDataSource.decorate(DecoratingDataSource.java:100) at org.apache.openjpa.lib.jdbc.DecoratingDataSource.getConnection(DecoratingDataSource.java:88) at 
org.apache.openjpa.jdbc.kernel.JDBCStoreManager.connectInternal(JDBCStoreManager.java:941) at org.apache.openjpa.jdbc.kernel.JDBCStoreManager.connect(JDBCStoreManager.java:926) ... 17 more Regards, Vito -- *Rick Curtis*
Re: Unnecessary database update?
Alain - I looked at your unit test and the reason that an extra update is being sent to the database is because you're updating your Entity in the @PreUpdate method. If you didn't update that field, no update would be issued. That being said, I think we're getting down to the real problem. I don't think @PreUpdate should be called in this instance because even though Parent is dirty, it doesn't result in a DB update and as such, @PreUpdate shouldn't be invoked. I thought we had a feature to control this behavior, but I'm struggling to find it right now. I'll attach my unit test for future reference. Thanks, Rick On Tue, Mar 25, 2014 at 9:22 AM, Alain Brazeau abraz...@rogers.com wrote: Hi Rick, As requested, here is a test I wrote in the 'openjpa-persistence-jdbc' project: package org.apache.openjpa.persistence.jdbc.update; import java.util.Date; import javax.persistence.EntityManager; import org.apache.openjpa.persistence.test.SingleEMFTestCase; public class TestCascadePersist extends SingleEMFTestCase { public void setUp() throws Exception { super.setUp(CLEAR_TABLES, Parent.class, Child.class); } public void testAddChildShouldNotUpdateParent() { EntityManager em = emf.createEntityManager(); em.getTransaction().begin(); Parent parent = new Parent(); parent.setName("parent"); em.persist(parent); em.getTransaction().commit(); long parentId = parent.getId(); Date expectedLastModifiedDate = parent.getLastModifiedDate(); em.getTransaction().begin(); parent = em.find(Parent.class, parentId); parent.newChild("child"); em.getTransaction().commit(); Date actualModifiedDate = parent.getLastModifiedDate(); assertEquals("The last modified date should not change.", expectedLastModifiedDate.getTime(), actualModifiedDate.getTime()); } } In order for the test to work, the following instance variable and methods have to be added to the existing org.apache.openjpa.persistence.jdbc.update.Parent class: private Date lastModifiedDate; public Date getLastModifiedDate() { return 
lastModifiedDate; } @PrePersist @PreUpdate public void onUpdate() { this.lastModifiedDate = new Date(); } Thanks! Alain -- View this message in context: http://openjpa.208410.n2.nabble.com/Unecessary-database-update-tp7586121p7586153.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
Re: OpenJPA retrieving one column with a separate select
Can you put together a small unit test that recreates the issue? That will help us understand what is going on. Thanks, Rick On Tue, Mar 25, 2014 at 1:51 PM, mxvs mxvs...@gmail.com wrote: Hello, I've been using OpenJPA 2.1.2 (JPA 2.0) to retrieve data from a table called LOGISCHRAPPORT which has about 28 columns. In some cases OpenJPA decides to issue separate select statements for one of the columns for no particular reason, leading to a dramatic decrease in query performance. Initially everything goes fine and all my columns are retrieved in a performant, single SELECT statement by JPA. As soon as I add a relationship to another entity called RAPTAALMETADATA @OneToMany(fetch=FetchType.EAGER, cascade = CascadeType.ALL) @JoinColumns({ @JoinColumn(name = "RAPPORTNR", referencedColumnName = "RAPPORTNR"), @JoinColumn(name = "RAPPORTTYPE", referencedColumnName = "RAPPORTTYPE") }) private List<Raptaalmetadata> raptaalmetadata; --- Queried using Criteria API as follows: --- Join<LogischRapport, Raptaalmetadata> metadata = reportRoot.join("raptaalmetadata"); JPA no longer includes one of my original columns called REPORT_COMMENTS; instead it is issuing separate select statements to retrieve the REPORT_COMMENTS column for each instance of LOGISCHRAPPORT. All other columns (including the ones coming from RAPTAALMETADATA) are retrieved properly as part of the initial SELECT. REPORT_COMMENTS is of the HUGEBLOB type in Oracle and I've mapped it in my Entity as follows: @Lob @Basic @Column(name = "REPORT_COMMENTS") private byte[] reportComments; I now get tons of these: SELECT t0.REPORT_COMMENTS FROM dwhsd001.LogischRapport t0 WHERE t0.rapportnr = ? AND t0.rapporttype = ? [params=(long) 1473, (String) RAP] Additionally: as soon as I remove the fetch=FetchType.EAGER attribute from the @OneToMany annotation described above I start seeing the exact same behavior for the relationship as I've been getting for the REPORT_COMMENTS column. 
This means I'm also getting separate SELECT statements for retrieving the entity relationship on top of the separate selects for the column, thereby further degrading performance. In other words I'm then also getting tons of these: SELECT t0.isotaalcode, t0.rapportnr, t0.rapporttype, t0.FUNCDESC_MODIFIED_BY, t0.FUNCDESC_MODIFIED_DATE, t0.FUNCTIONAL_DESCRIPTION, t0.omschrijving, t0.titel FROM dwhsd001.Raptaalmetadata t0 WHERE t0.rapportnr = ? AND t0.rapporttype = ? It's not a LAZY loading problem as I've specifically tested that case. I don't see any other reason why OpenJPA decides to retrieve this one column using separate statements. Can anyone point out why I might be seeing this behavior and how I can avoid it? -- View this message in context: http://openjpa.208410.n2.nabble.com/OpenJPA-retrieving-one-column-a-seperate-select-tp7586156.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
Re: Unnecessary database update?
Can you put together a small unit test[1] that recreates the issue? [1] https://openjpa.apache.org/writing-test-cases-for-openjpa.html Thanks, Rick On Sat, Mar 22, 2014 at 6:53 PM, Alain Brazeau abraz...@rogers.com wrote: My application has the following bidirectional one-to-many association: @Entity @Table(name="TB_CUSTOMERS") public class Customer { @OneToMany(mappedBy="customer", cascade={CascadeType.PERSIST, CascadeType.REMOVE}) @OrderBy("createdDate DESC") private List<Order> orders = new ArrayList<Order>(); ... } @Entity @Table(name="TB_ORDERS") public class Order { @ManyToOne @JoinColumn(name="CUSTOMER_ID") private Customer customer; ... } I recently noticed that when a new Order object is added to a Customer object's Order List, the underlying Customer database table record gets updated, even though the Customer object was not modified. This is especially problematic since I have a database trigger defined on the Customer table that inserts the old version of a record in a Customer history table for auditing purposes whenever an update occurs. I am currently using OpenJPA version 2.0.1. I have tried upgrading to version 2.2.2, but still have the same problem. If I use Hibernate instead of OpenJPA as the JPA implementation, the Customer database table does NOT get updated. Based on my understanding of JPA, the Customer record should not get updated unless LockModeType.OPTIMISTIC_FORCE_INCREMENT is used (for optimistic locking). In case it matters, I am using build time enhancement. 
Here is the relevant section from my Maven pom.xml file: <plugin> <groupId>org.apache.openjpa</groupId> <artifactId>openjpa-maven-plugin</artifactId> <version>${openjpa.version}</version> <configuration> <addDefaultConstructor>true</addDefaultConstructor> </configuration> <executions> <execution> <id>enhancer</id> <phase>process-classes</phase> <goals> <goal>enhance</goal> </goals> </execution> </executions> <dependencies> <dependency> <groupId>org.apache.openjpa</groupId> <artifactId>openjpa</artifactId> <version>${openjpa.version}</version> </dependency> </dependencies> </plugin> Any help or advice would be greatly appreciated. Thanks, Alain -- View this message in context: http://openjpa.208410.n2.nabble.com/Unecessary-database-update-tp7586121.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
Re: The id class specified by type does not match the primary key fields of the class
Joe - First off, the exception that I assume you meant to post isn't showing up? Second, the patch I posted wasn't for you to modify your Entities. The important part of that patch is a code fix to the OpenJPA runtime. I believe this is an OpenJPA code bug and I wanted to get others to take a look at it. I'll try to bug someone else to review the patch today. Thanks, Rick On Thu, Mar 20, 2014 at 5:34 AM, Joe_DAMS orl...@kizux.fr wrote: Hello Rick, Thank you for your answer. I tried to update my entities as mentioned in your link (not the same entities as in my first post, but the same problem): the entity with the embeddedId, the embeddable class, and persistence.xml. But I always get this log message: Are there any solutions to fix it? Thank you! (I'm copying this reply to SO) -- View this message in context: http://openjpa.208410.n2.nabble.com/The-id-class-specified-by-type-does-not-match-the-primary-key-fields-of-the-class-tp7586109p7586114.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
Re: Using a time UUID as a sequence
Todd - Honestly, I don't have much experience in this area. If you are able to put together a small unit test I'll try to take a look when I get some time. Thanks, Rick On Wed, Mar 19, 2014 at 10:05 PM, Todd Nine t...@spidertracks.com wrote: Just to follow up my own email, my configuration in the persistence XML does work as expected. However, I'm receiving this error. Caused by: org.apache.openjpa.lib.jdbc.ReportingSQLException: BLOB/TEXT column 'id' used in key specification without a key length {stmnt 992554479 CREATE TABLE AlertAcknowlege (id BLOB NOT NULL, createTime DATETIME, imeiNumber VARCHAR(255), queuedDate DATETIME, statusCode INTEGER, DTYPE VARCHAR(255), PRIMARY KEY (id)) ENGINE = innodb} [code=1170, state=42000] at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.wrap(LoggingConnectionDecorator.java:247) Here is my mapping class. /** * The value handler for converting com.eaio.uuid.UUID objects to byte arrays. * * @author Todd Nine */ public class UUIDValueHandler extends ByteArrayValueHandler { @Override public Object toDataStoreValue(ValueMapping vm, Object val, JDBCStore store) { final byte[] data = UUIDSerializer.toBytes((com.eaio.uuid.UUID) val); return super.toDataStoreValue(vm, data, store); } @Override public Object toObjectValue(ValueMapping vm, Object val) { byte[] data = (byte[]) val; final UUID uuid = UUIDSerializer.fromBytes(data); return uuid; } @Override public Column[] map(ValueMapping vm, DBIdentifier name, ColumnIO io, boolean adapt) { Column col = new Column(); col.setIdentifier(name); col.setJavaType(JavaSQLTypes.BYTES); // we should always be binary 16 for the uuid col.setType(Types.BINARY); col.setSize(UUIDSerializer.LENGTH); return new Column[]{ col }; } } As you can see on the type, I'm definitely setting the type to binary, and the length to 16 for every UUID type I encounter. Am I doing this incorrectly for the schema generation to work properly? 
Thanks, Todd On 19 March 2014 18:56, Todd Nine t...@spidertracks.com wrote: Thanks for the reply Rick. That does the trick for one field, but this class is used heavily throughout the model. If possible I'd like to create a custom field mapping, so that every time a UUID is encountered, this mapping happens automatically. I've created a custom field mapping. public class UUIDValueHandler extends ByteArrayValueHandler { @Override public Object toDataStoreValue(ValueMapping vm, Object val, JDBCStore store) { final byte[] data = UUIDSerializer.toBytes((com.eaio.uuid.UUID) val); return super.toDataStoreValue(vm, data, store); } @Override public Object toObjectValue(ValueMapping vm, Object val) { byte[] data = (byte[]) val; final UUID uuid = UUIDSerializer.fromBytes(data); return uuid; } } However, it's not clear to me how to configure this as a plugin via the JPA configuration. I searched through the documentation, but I can't find any examples for how to do this. I referenced this section. http://ci.apache.org/projects/openjpa/trunk/docbook/manual.html#ref_guide_mapping_custom_field_conf However when I navigate to the reference of section 4 it takes me to this section. http://ci.apache.org/projects/openjpa/trunk/docbook/manual.html#ref_guide_mapping_defaults Should I define a property as follows? <property name="openjpa.jdbc.MappingDefaults" value="FieldStrategies='com.eaio.uuid.UUID=com.spidertracks.aviator.dataaccess.jpa.mysql.UUIDValueHandler'"/> On 19 March 2014 08:56, Rick Curtis curti...@gmail.com wrote: Todd - Take a look at @Externalizer/@Factory in the user manual[1]. Below is a snippet of code where I have a String field in my Entity, but the backing column in the DB is an int. The Externalizer/Factory methods convert values from/to the database. Let me know how this goes. 
Thanks, Rick @Id @Externalizer("org.apache.openjpa.persistence.kernel.common.apps.RuntimeTest1.toDb") @Factory("org.apache.openjpa.persistence.kernel.common.apps.RuntimeTest1.fromDb") private String intField; public static int toDb(String val){ return Integer.valueOf(val); } public static String fromDb(int val) { return String.valueOf(val); } [1] http://ci.apache.org/projects/openjpa/trunk/docbook/manual.html#ref_guide_pc_extern On Wed, Mar 19, 2014 at 12:10 AM, Todd Nine t...@spidertracks.com wrote: Hi all, We're migrating from a Key/Value system to MySQL for some components of our system for easier administration and maintenance. As part of this migration, we need to retain the time UUIDs that have been generated for primary keys
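The value handler in this thread round-trips a UUID through a 16-byte binary column. As a stand-alone illustration of that packing (using java.util.UUID and ByteBuffer in place of com.eaio.uuid.UUID and the thread's UUIDSerializer, which are not shown here), one might write:

```java
import java.nio.ByteBuffer;
import java.util.UUID;

// Hypothetical stand-in for the UUIDSerializer referenced in the thread:
// packs a UUID into the 16-byte binary form a BINARY(16) column stores.
public class UuidBytes {
    public static final int LENGTH = 16;

    public static byte[] toBytes(UUID uuid) {
        ByteBuffer buf = ByteBuffer.allocate(LENGTH);
        buf.putLong(uuid.getMostSignificantBits());
        buf.putLong(uuid.getLeastSignificantBits());
        return buf.array();
    }

    public static UUID fromBytes(byte[] data) {
        ByteBuffer buf = ByteBuffer.wrap(data);
        return new UUID(buf.getLong(), buf.getLong());
    }

    public static void main(String[] args) {
        UUID original = UUID.randomUUID();
        byte[] packed = toBytes(original);
        System.out.println(packed.length);
        System.out.println(original.equals(fromBytes(packed)));
    }
}
```

The schema error Todd hit suggests the column was still being created as BLOB rather than the fixed-length BINARY(16) his map() override requests; the packing itself is independent of that mapping question.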
Re: The id class specified by type does not match the primary key fields of the class
Joe - As I mentioned on your SO post, it appears that this problem (or a very similar one) was reported a number of times long back[1] but was never resolved. Attached below is a code fix for trunk along with a unit test; could I get some additional eyes on this change, as I'm not 100% confident that the change is correct. Thanks, Rick [1] http://openjpa.markmail.org/thread/zbjghovmx3u7hdjy#query:+page:1+mid:bqjomb4mtqj66p2p+state:results http://openjpa.markmail.org/thread/bqjomb4mtqj66p2p On Wed, Mar 19, 2014 at 9:03 AM, Joe_DAMS orl...@kizux.fr wrote: Hello everybody! I am facing a project migration problem. I've got a very, very old Java EE project that currently runs on a JBoss 4 server. I'm trying to migrate it to a fresh Apache TomEE installation. The entity classes were coded like this: PK class, Box class. This actually works on the JBoss 4 server installation, but when I try to start it on Apache TomEE I get the following error: I need to keep the @ManyToOne relation inside the PK class. Does someone know of a fix that could help me? Thanks a lot :) -- View this message in context: http://openjpa.208410.n2.nabble.com/The-id-class-specified-by-type-does-not-match-the-primary-key-fields-of-the-class-tp7586109.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
Re: Oracle error ORA-00604 and ORA-01000
No, your question still isn't very clear. Actually I just want to find an OpenJPA equivalent to the close() method of the JDBC Statement class I believe that after OpenJPA is done processing a native query we will close the statement. If that isn't happening, you could try to cast your query to an OpenJPAQuery and call .closeAll() on it once you are done using it. Hope this helps. Rick On Tue, Mar 11, 2014 at 5:21 AM, yu wang wangy...@gmail.com wrote: Actually I just want to find an OpenJPA equivalent to the close() method of the JDBC Statement class On Tue, Mar 11, 2014 at 5:56 PM, yu wang wangy...@gmail.com wrote: Hi Rick, Is my case description clear enough? Any suggestions? Regards, Yu Wang On Sat, Mar 8, 2014 at 3:36 PM, yu wang wangy...@gmail.com wrote: Hi Rick, I have two very big master/slave tables that I made equal-partitioned by timestamp columns. So when users query something from the two tables, I split the SQL into many small SQL statements over many very small intervals in a loop to speed up execution. Then I got a too many cursors opened error from Oracle. What I am trying to do is find a way to close the cursor explicitly after getting its result list. Regards, Yu Wang On Fri, Mar 7, 2014 at 10:00 PM, Rick Curtis curti...@gmail.com wrote: You're going to have to give a better description of your scenario for us to help you. Thanks, Rick On Fri, Mar 7, 2014 at 3:37 AM, yu wang wangy...@gmail.com wrote: Hi Gurus, I have manager.createNativeQuery() in a loop that eventually leads to the Oracle errors ORA-00604 and ORA-01000, which means the cursors open in Oracle exceed the maximum. My question is how can I close some cursors explicitly in a loop? I tried manager.clear() but it seems it does not work. We are using OpenJPA 1.2.3. Regards, Yu Wang -- *Rick Curtis* -- *Rick Curtis*
Re: Utilizing a slice for online migration
Todd - Does anyone have any experience with using the policy in this manner? My general feeling is that not many people are using slice, as there has been very little mailing list traffic regarding its usage. Be sure to let us know how it goes! Thanks, Rick On Sun, Mar 9, 2014 at 12:54 PM, Todd Nine t...@spidertracks.com wrote: Hi all, We're migrating from a Cassandra based JPA adapter to Amazon's RDS. In order to do this, we want a no-downtime migration. To do this, we would need the following flow. 1) Start writing to both systems. Cassandra is still the authoritative record, and records are replicated to RDS. All queries will still be served from Cassandra on read. 2) In the background, read all records from Cassandra and write them to RDS. Use update timestamps to ensure we don't overwrite newly updated data. 3) Switch our read path (probably with a new deployment configured to only read from RDS) I've been looking at the Slices documentation, and it seems like defining a Data Replication Policy will perform the dual writes I need in steps 1) and 2). Does anyone have any experience with using the policy in this manner? Thanks in advance, Todd -- *Rick Curtis*
Re: Issue with OpenJPA default cache, use external cache
http://ci.apache.org/projects/openjpa/trunk/docbook/manual.html#ref_guide_cache_conf On Tue, Mar 11, 2014 at 11:04 AM, Alice ashanai...@yahoo.com wrote: Thank you Rick for your response. I will check again to make sure we have bi-directional relation. We see this issue only when the parent entity has been saved before the FK entity is saved. I will confirm the bi-directional relation. Is there a way to plug in an external cache regardless of the issue? best, Alice -- View this message in context: http://openjpa.208410.n2.nabble.com/Issue-with-OpenJPA-default-cache-use-external-cache-tp7586064p7586073.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
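If Alice does end up experimenting with the built-in cache that manual section describes, enabling OpenJPA's L2 data cache is a persistence.xml change along these lines (a sketch; the property names and plugin values are from the OpenJPA caching documentation, and the capacity is an arbitrary example):

```xml
<!-- turn on the L2 data cache, here with an example 1000-entry capacity -->
<property name="openjpa.DataCache" value="true(CacheSize=1000)"/>

<!-- a RemoteCommitProvider must be configured when the data cache is on;
     sjvm is the single-JVM provider -->
<property name="openjpa.RemoteCommitProvider" value="sjvm"/>
```

Note this only changes where cached state lives; it would not fix the stale-relationship symptom, which comes from the inverse side of the relationship never being set.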
Re: Issue saving entities with fields that reference read only data
If you are trying to persist an Entity that has a relationship to a (readonly) Entity that already exists in your DB, you should ensure that the existing Entity is a part of your persistence context (ie managed). OpenJPA is most likely complaining because we encountered an unmanaged field when persisting your non-readonly Entity to the DB. So: ReadOnlyEntity e = em.find(...); // I assume this is different than what you are currently doing. NewEntity ne = new NewEntity(); ne.setRelationship(e); em.persist(ne); Thanks, Rick On Tue, Mar 11, 2014 at 11:26 AM, Alice ashanai...@yahoo.com wrote: Hello, We are running into issues with saving entities with fields that reference read only data. If the entity has a field mapped with 'insertable = false' and/or 'updatable = false' for @Column or @JoinColumn, OpenJPA still tries to do an insert into the read only entity. It also complains that CascadeType.PERSIST has not been specified on the relationship. Is there a way to fix this issue? -- View this message in context: http://openjpa.208410.n2.nabble.com/Issue-saving-entities-with-fields-that-reference-read-only-data-tp7586076.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
Re: Issue with OpenJPA default cache, use external cache
An external cache provider will not solve the problem you've described. If you use bi-directional relationships AND you set both sides correctly, you shouldn't have this problem. Thanks, Rick On Mon, Mar 10, 2014 at 6:39 PM, Alice ashanai...@yahoo.com wrote: When we use OpenJPA's cache, there are issues when retrieving data with foreign key constraints. For example, we have an entity A with FK entity B. We add a row in Table A and save it. We create a row in table B and set it to A. When we retrieve that data now, if cache is enabled, it will only get A, and not B. This happens only in cases where the FK was created and saved after the parent entity was saved. Right now, we disabled cache to get around this issue, but we may run into performance issues later on. Is there a way to fix this issue? Can we use an external cache provider like memcache or ehcache? thanks, Alice -- View this message in context: http://openjpa.208410.n2.nabble.com/Issue-with-OpenJPA-default-cache-use-external-cache-tp7586064.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
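The "set both sides correctly" advice above can be sketched in plain Java. The entity names `A` and `B` below are hypothetical stand-ins for the parent and FK entities in Alice's example, with JPA annotations omitted so the sketch is self-contained:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for the parent entity (A) and its FK entity (B);
// JPA annotations omitted to keep the sketch self-contained.
class B {
    A parent; // inverse side of the bi-directional relationship
}

class A {
    final List<B> children = new ArrayList<>();

    // Keep BOTH sides in sync before committing, so a cached copy of A
    // already references the new B.
    void addChild(B b) {
        children.add(b); // side 1: the parent's collection
        b.parent = this; // side 2: the child's back-reference
    }
}
```

If only `b.parent` is set, a cached copy of `A` has no reason to re-query for its children, which matches the stale-read symptom described above.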
Re: Oracle error ORA-00604: ?? SQL ?? 1 ???? ORA-01000
You're going to have to give a better description of your scenario for us to help you. Thanks, Rick On Fri, Mar 7, 2014 at 3:37 AM, yu wang wangy...@gmail.com wrote: Hi Gurus, I have manager.createNativeQuery() in a loop, which eventually leads to Oracle errors ORA-00604 and ORA-01000, meaning the cursors open in Oracle exceed the maximum. My question is how can I close some cursors explicitly in a loop? I tried manager.clear() but it seems it does not work. We are using OpenJPA 1.2.3. Regards, Yu Wang -- *Rick Curtis*
Re: DataCache Performance Issues
Jeff - I recommitted the changes for OPENJPA-2285 to trunk. Please give them a try and let me know how it goes. Thanks, Rick On Mon, Feb 17, 2014 at 10:35 AM, Jeff Oh jeff...@elasticpath.com wrote: Rick - that would be fantastic. Thanks for your help! Jeff On 2/15/2014, 5:32 AM, Rick Curtis curti...@gmail.com wrote: Jeff - I'll recommit the changes in OPENJPA-2285 back to trunk on Monday. Will those changes be sufficient for your scenarios? Thanks, Rick Jeff Oh, Sr. Software Engineer Phone: 604.408.8078 ext. 104 Email: jeff...@elasticpath.com Elastic Path Software Inc. Web: www.elasticpath.com Blog: www.getelastic.com Community: grep.elasticpath.com Careers: www.elasticpath.com/jobs Confidentiality Notice: This message is intended only for the use of the designated addressee(s), and may contain information that is privileged, confidential and exempt from disclosure. Any unauthorized viewing, disclosure, copying, distribution or use of information contained in this e-mail is prohibited and may be unlawful. If you received this e-mail in error, please reply to the sender immediately to inform us you are not the intended recipient and delete the email from your computer system. Thank you. -- *Rick Curtis*
Re: OpenJpa 2.2.0 - Literal is not truly literal when generating sql.
Craig - It looks like OPENJPA-2324[1] might be what you are looking for? Heath -- Any chance that we can get some documentation for this change? [1] https://issues.apache.org/jira/browse/OPENJPA-2324 Thanks, Rick On Tue, Feb 18, 2014 at 1:48 PM, Craig Taylor ctalk...@ctalkobt.net wrote: It appears that criteriaBuilder.literal() is not treated as a literal consistently during generation of HQL / SQL. For criteria queries, the HQL (OpenJPAQuery.getQueryString()) is generated as what I would consider proper HQL in that parameters are :prefixed and literals are hard valued within the query. eg: (SELECT sample, sample2 FROM table WHERE sample=:parameter AND sample2='literal value'; ) When the sql is generated, however, via OpenJPAQuery.getDataStoreActions(parm)[0], the parameters (other than data parameters) are directly substituted (eg: SELECT sample, sample2 FROM table WHERE sample='parameter value' AND sample2='literal') When the query is actually performed, however, via logging I see that everything is treated as a parameter. (SELECT sample, sample2 FROM table WHERE sample=:parameter AND sample2=:sample2 ). This is an issue as when dealing with partitioned tables Postgres is unable to perform a valid query plan without the associated values on the column that we have partitioned on. Running this from within OpenJPA 2.2.0 and OpenJPA 2.2.2 yields the same results. I believe that this is an attempt to help the database cache query plans, yet this optimization is actually causing a major performance problem. Suggestions? -- --- Craig Taylor ctalk...@ctalkobt.net -- *Rick Curtis*
Re: ReverseMapping generated code not compiling (due to non-escaped delimiter)
I would start with option #1. Thanks, Rick On Mon, Feb 17, 2014 at 11:31 AM, Aron Lurie a...@cambridgesemantics.com wrote: Hi All, Running on trunk, I'm currently facing a problem very similar to OPENJPA-1540. When I run the ReverseMappingTool on an Oracle database with a lowercase column name, I end up with syntactically incorrect source code. Whereas I should get output like this: @Column(name="\"foobar\"") I instead get un-compilable output like this: @Column(name=""foobar"") The reason why the problem only affects lowercase column names is that Oracle generally requires names in all upper case. If a name is specified with quotes around it, then it can be allowed to contain lowercase characters. I generally see two ways I can fix this: 1) Change the way the DBIdentifiers are initialized to surround the identifier with a delimiter that includes the escape \". 2) Change the serialization stack (i.e. AnnotationPersistenceMetaDataSerializer.serialize and AnnotationEntry.toString) to run some character escape utility (like StringEscapeUtils.escapeJava) at String construction time. Which approach here is best? Or is there a better way I haven't considered? Thanks, Aron -- *Rick Curtis*
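Option #2 amounts to escaping the delimited identifier when the annotation source text is generated. A minimal sketch of that escaping step (`IdentifierEscaper` is a hypothetical helper for illustration, not actual OpenJPA code):

```java
// Hypothetical helper illustrating option #2: given the raw identifier as it
// should appear in SQL (quotes included, e.g. "foobar"), produce the Java
// source text for the annotation value, with backslashes and embedded
// double quotes escaped so the generated file compiles.
class IdentifierEscaper {
    static String toAnnotationValue(String identifier) {
        String escaped = identifier
                .replace("\\", "\\\\")  // escape backslashes first
                .replace("\"", "\\\""); // then embedded double quotes
        return "\"" + escaped + "\"";
    }
}
```

For the delimited identifier `"foobar"` this emits the source text `"\"foobar\""`, i.e. a compilable `@Column(name="\"foobar\"")`.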
Re: DataCache Performance Issues
Jeff - I'll recommit the changes in OPENJPA-2285 back to trunk on Monday. Will those changes be sufficient for your scenarios? Thanks, Rick On Fri, Feb 14, 2014 at 9:15 AM, Rick Curtis curti...@gmail.com wrote: Jeff - As I said earlier, if there's work to be done, I'm happy to submit a patch - but I'd like some consensus around what the patch should do first... I talked with a few teammates yesterday and I think we're on the same page as you. I will do some digging to try to figure out why I backed this change out originally (rather than changing the testcase). I'll get you an update a bit later today. Thanks, Rick On Thu, Feb 13, 2014 at 12:41 PM, Jeff Oh jeff...@elasticpath.com wrote: As I alluded to in the JIRA, the root problem is that in the JPA 2.0 spec, the concept of Cache/CacheStoreMode/CacheRetrieveMode/etc/etc was added and this change was not honoring that contract. The crux of the problem is that the default value for CacheStoreMode is USE and the spec says[1] that elements in the cache are not to be updated. So in your example, after Foo-1[b,c] gets loaded, the subsequent load of Foo-1[a] will not be reflected in the cache. I agree that most likely isn't the expected behavior, but we need to satisfy the spec. Hmm. So upon reading the spec, this is one possible interpretation of the spec - but I'm not sure that it's the only possible interpretation. The spec for CacheStoreMode.USE states: /** * Insert/update entity data into cache when read * from database and when committed into database: * this is the default behavior. Does not force refresh * of already cached items when reading from database. */ And the spec for CacheStoreMode.REFRESH states: /** * Insert/update entity data into cache when read * from database and when committed into database. * Forces refresh of cache for items read from database.
*/ So the interesting thing is that CacheStoreMode.USE does not forbid insert/update operations - it merely states that refreshes of already cached items should not be "forced". I think there are multiple ways to interpret this. Assume we have an entity A with an eagerly fetched many-to-one relationship to B. If we load an instance of A from the db, OpenJPA will almost certainly also load B via a join when it issues the SQL select for A. Normally, if B is already in the L2 cache, and CacheStoreMode.USE is in effect, then I agree that OpenJPA should not refresh the instance of B in the L2 cache. However, if B is already in the L2 cache, *and* B's data loaded from the db is both newer and different than B's representation in cache, then I think there is a grey area in the spec. Specifically, the spec does not claim that the JPA implementation must refresh the item in cache, but it also does not claim that the JPA implementation must not. In an ideal world, I think if JPA becomes aware that cached data is out of date it should remove/replace it, and I don't think that anything in the spec forbids this. So ideally, in the second case, I claim that OpenJPA should update the entity data for B in cache, even if the data is read using CacheStoreMode.USE. I think that there is a similar argument to be had with the fetch group example I used earlier. CacheStoreMode.USE states that refreshes of already cached items should not be forced, but it does not *forbid* inserts and updates of already cached items when reading from the database, particularly when reading data that was not already in cache. Reading the JSR-317 spec (pg. 105) doesn't shed any more light on the issue. While CacheStoreMode.REFRESH may be an option to work around this, it's not a one-to-one replacement.
My worry with using CacheStoreMode.REFRESH instead of CacheStoreMode.USE is that, depending on the object model and the load, CacheStoreMode.REFRESH may prevent objects from ever expiring from cache, because they're constantly being REFRESHed. Worse, the data merge mechanics mean that being refreshed in cache doesn't necessarily imply that the data in cache is being refreshed as well. That could be a pretty significant downside for us. As I said earlier, if there's work to be done, I'm happy to submit a patch - but I'd like some consensus around what the patch should do first... Cheers, Jeff On 2/13/2014, 7:00 AM, Rick Curtis curti...@gmail.com wrote: Another question we have is what exactly was the internal test regression that caused the rollback? The company I work for (IBM) does a huge amount of functional server based testing internally. When changes are made, the OpenJPA unit tests are the first line of tests and occasionally the UTs miss a scenario that the internal tests catch. This is one of those scenarios. As I alluded to in the JIRA, the root problem is that in the JPA 2.0 spec, the concept of Cache/CacheStoreMode
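The USE vs. REFRESH distinction being debated above can be modeled in a few lines of plain Java. This is a toy cache purely to illustrate the two store-mode readings of the spec wording, not OpenJPA's actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of an L2 cache honoring the two store modes under discussion.
// Illustration of the spec wording only, not OpenJPA internals.
class ToyL2Cache {
    enum StoreMode { USE, REFRESH }

    private final Map<Integer, String> cache = new HashMap<>();

    // Invoked when entity data for `id` is read from the database.
    void onDatabaseRead(int id, String data, StoreMode mode) {
        if (mode == StoreMode.REFRESH || !cache.containsKey(id)) {
            // USE inserts only when absent ("does not force refresh of
            // already cached items"); REFRESH always overwrites.
            cache.put(id, data);
        }
    }

    String get(int id) {
        return cache.get(id);
    }
}
```

Under USE, a stale cached copy survives later, fuller reads — the partially loaded entity symptom in this thread; REFRESH avoids that at the cost of the constant cache churn Jeff describes.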
Re: DataCache Performance Issues
Jeff - As I said earlier, if there's work to be done, I'm happy to submit a patch - but I'd like some consensus around what the patch should do first... I talked with a few teammates yesterday and I think we're on the same page as you. I will do some digging to try to figure out why I backed this change out originally (rather than changing the testcase). I'll get you an update a bit later today. Thanks, Rick On Thu, Feb 13, 2014 at 12:41 PM, Jeff Oh jeff...@elasticpath.com wrote: As I alluded to in the JIRA, the root problem is that in the JPA 2.0 spec, the concept of Cache/CacheStoreMode/CacheRetrieveMode/etc/etc was added and this change was not honoring that contract. The crux of the problem is that the default value for CacheStoreMode is USE and the spec says[1] that elements in the cache are not to be updated. So in your example, after Foo-1[b,c] gets loaded, the subsequent load of Foo-1[a] will not be reflected in the cache. I agree that most likely isn't the expected behavior, but we need to satisfy the spec. Hmm. So upon reading the spec, this is one possible interpretation of the spec - but I'm not sure that it's the only possible interpretation. The spec for CacheStoreMode.USE states: /** * Insert/update entity data into cache when read * from database and when committed into database: * this is the default behavior. Does not force refresh * of already cached items when reading from database. */ And the spec for CacheStoreMode.REFRESH states: /** * Insert/update entity data into cache when read * from database and when committed into database. * Forces refresh of cache for items read from database. */ So the interesting thing is that CacheStoreMode.USE does not forbid insert/update operations - it merely states that refreshes of already cached items should not be "forced". I think there are multiple ways to interpret this. Assume we have an entity A with an eagerly fetched many-to-one relationship to B.
If we load an instance of A from the db, OpenJPA will almost certainly also load B via a join when it issues the SQL select for A. Normally, if B is already in the L2 cache, and CacheStoreMode.USE is in effect, then I agree that OpenJPA should not refresh the instance of B in the L2 cache. However, if B is already in the L2 cache, *and* B's data loaded from the db is both newer and different than B's representation in cache, then I think there is a grey area in the spec. Specifically, the spec does not claim that the JPA implementation must refresh the item in cache, but it also does not claim that the JPA implementation must not. In an ideal world, I think if JPA becomes aware that cached data is out of date it should remove/replace it, and I don't think that anything in the spec forbids this. So ideally, in the second case, I claim that OpenJPA should update the entity data for B in cache, even if the data is read using CacheStoreMode.USE. I think that there is a similar argument to be had with the fetch group example I used earlier. CacheStoreMode.USE states that refreshes of already cached items should not be forced, but it does not *forbid* inserts and updates of already cached items when reading from the database, particularly when reading data that was not already in cache. Reading the JSR-317 spec (pg. 105) doesn't shed any more light on the issue. While CacheStoreMode.REFRESH may be an option to work around this, it's not a one-to-one replacement. My worry with using CacheStoreMode.REFRESH instead of CacheStoreMode.USE is that, depending on the object model and the load, CacheStoreMode.REFRESH may prevent objects from ever expiring from cache, because they're constantly being REFRESHed. Worse, the data merge mechanics mean that being refreshed in cache doesn't necessarily imply that the data in cache is being refreshed as well. That could be a pretty significant downside for us.
As I said earlier, if there's work to be done, I'm happy to submit a patch - but I'd like some consensus around what the patch should do first... Cheers, Jeff On 2/13/2014, 7:00 AM, Rick Curtis curti...@gmail.com wrote: Another question we have is what exactly was the internal test regression that caused the rollback? The company I work for (IBM) does a huge amount of functional server based testing internally. When changes are made, the OpenJPA unit tests are the first line of tests and occasionally the UTs miss a scenario that the internal tests catch. This is one of those scenarios. As I alluded to in the JIRA, the root problem is that in the JPA 2.0 spec, the concept of Cache/CacheStoreMode/CacheRetrieveMode/etc/etc was added and this change was not honoring that contract. The crux of the problem is that the default value for CacheStoreMode is USE and the spec says[1] that elements in the cache
Re: DataCache Performance Issues
with the newly loaded object. The disadvantage would be it's hard to see how this could reliably extend to entities related to the original entity that are loaded as part of the query. However, it avoids the whipsaw problems in option #2. This option also results in the most db access, at least initially. 4. Remove incomplete entities from cache. Merge cached and loaded data together as is done currently, but remove the cached entity afterwards. This isn't much of a solution, but at least a sparsely loaded entity doesn't have the potential to degrade the cache indefinitely... 5. Some ability to enable one or more of the solutions via a config option, if none of these solutions are considered acceptable for core use. If there's agreement on what behaviour folks would like to see, I'd be happy to submit a patch. Cheers, Jeff Jeff Oh, Sr. Software Engineer Phone: 604.408.8078 ext. 104 Email: jeff...@elasticpath.com Elastic Path Software Inc. Web: elasticpath.com | Blog: getelastic.com | Twitter: twitter.com/elasticpath | Careers: elasticpath.com/jobs | Community: grep.elasticpath.com -- *Rick Curtis*
Re: [org.apache.openjpa.persistence.ArgumentException - An error occurred while parsing the query filter select x from Person x. Error message: The name Person is not a recognized entity or identi
Is it possibly some sort of classloading issue? I know we've seen the 'The name x is not a recognized entity or identifier. Perhaps you meant n' error in the past, but it seems like most of those were cleaned up. One shot in the dark would be to try enabling <property name="openjpa.MetaDataRepository" value="Preload=true"/> to see if that helps. Thanks, Rick On Wed, Feb 12, 2014 at 12:52 PM, Sebarry seanjamesba...@yahoo.co.uk wrote: Hi Rick, Here's the contents of my persistence.xml:

<persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="1.0">
  <persistence-unit name="person2" transaction-type="JTA">
    <jta-data-source>osgi:service/javax.sql.XADataSource/(osgi.jndi.service.name=jdbc/derbydsxa)</jta-data-source>
    <class>net.lr.tutorial.karaf.camel.jpa2jms.model.Person</class>
    <exclude-unlisted-classes>true</exclude-unlisted-classes>
    <properties>
      <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema"/>
      <property name="openjpa.jdbc.DBDictionary" value="derby"/>
    </properties>
  </persistence-unit>
</persistence>

Regards, Sean -- View this message in context: http://openjpa.208410.n2.nabble.com/org-apache-openjpa-persistence-ArgumentException-An-error-occurred-while-parsing-the-query-filter-se-tp7585976p7585979.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
Re: [org.apache.openjpa.persistence.ArgumentException - An error occurred while parsing the query filter select x from Person x. Error message: The name Person is not a recognized entity or identi
Please post the contents of your persistence.xml. Thanks, Rick On Wed, Feb 12, 2014 at 10:45 AM, Sebarry seanjamesba...@yahoo.co.uk wrote: Hi, I'm getting the following error trying to poll a JPA entity Person. There's no error in the code. Everything compiles ok with a mvn clean install and the Person entity does exist in the net.lr.tutorial.karaf.camel.jpa2jms.model package. I have installed all the necessary features I believe and I know it's correctly connecting to the database because it's creating a sequence in the database for the primary key. 2014-01-30 21:33:59,848 | WARN | jms.model.Person | JpaConsumer | 125 - org.apache.camel.camel-core - 2.12.0 | Consumer Consumer[jpa://net.lr.tutorial.karaf.camel.jpa2jms.model.Person?consumer.delay=3500] failed polling endpoint: Endpoint[jpa://net.lr.tutorial.karaf.camel.jpa2jms.model.Person?consumer.delay=3500]. Will try again at next poll. Caused by: [org.apache.openjpa.persistence.ArgumentException - An error occurred while parsing the query filter select x from Person x. Error message: The name Person is not a recognized entity or identifier. Perhaps you meant Person, which is a close match. Known entity names: [Person]] openjpa-2.1.1-r422266:1148538 nonfatal user error org.apache.openjpa.persistence.ArgumentException: An error occurred while parsing the query filter select x from Person x. Error message: The name Person is not a recognized entity or identifier. Perhaps you meant Person, which is a close match. Known entity names: [Person] Any ideas? Sean -- View this message in context: http://openjpa.208410.n2.nabble.com/org-apache-openjpa-persistence-ArgumentException-An-error-occurred-while-parsing-the-query-filter-se-tp7585976.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
Re: PostgreSQL BIT/Boolean madness
Is the problem that we're calling stmnt.setNull(idx, Types.BOOLEAN); and the driver wants us to call stmnt.setNull(idx, Types.BIT);? Just so we're on the same page, your Entity has a boolean field that maps to a BIT column? On Wed, Jan 8, 2014 at 11:18 AM, Hal Hildebrand hal.hildebr...@me.com wrote: I tried this same trick in my app server - DB test and oddly it had no effect either way. I’m wondering if I’m actually doing anything here with this property. I do see the log line: 606 CoRE INFO [main] openjpa.jdbc.JDBC - Using dictionary class “org.apache.openjpa.jdbc.sql.PostgresDictionary”. So I’m assuming that this property will, in fact, do something to the db dictionary. But the default already is “BIT”, so it’s unclear that this is where the problem lies. On Jan 8, 2014, at 8:29 AM, Hal Hildebrand hal.hildebr...@me.com wrote: So, I tried it with both “BIT” and “BOOLEAN” with the same result. Wonder where OpenJPA is getting this metadata from? Here’s the stack trace, in case that catches someone’s eye: Caused by: org.apache.openjpa.lib.jdbc.ReportingSQLException: column boolean_value is of type boolean but expression is of type bit {prepstmnt 1254242409 INSERT INTO ruleform.job_attribute (id, notes, update_date, binary_value, boolean_value, integer_value, numeric_value, sequence_number, text_value, timestamp_value, job, updated_by, attribute, unit) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
[params=(long) 301, (null) null, (null) null, (null) null, (null) null, (int) 1500, (null) null, (int) 1, (null) null, (null) null, (long) 605, (long) 4, (long) 56, (null) null]} [code=0, state=42804] at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.wrap(LoggingConnectionDecorator.java:219) at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.wrap(LoggingConnectionDecorator.java:195) at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.access$1000(LoggingConnectionDecorator.java:59) at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator$LoggingConnection$LoggingPreparedStatement.executeUpdate(LoggingConnectionDecorator.java:1134) at org.apache.openjpa.lib.jdbc.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:275) at org.apache.openjpa.lib.jdbc.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:275) at org.apache.openjpa.jdbc.kernel.JDBCStoreManager$CancelPreparedStatement.executeUpdate(JDBCStoreManager.java:1792) at org.apache.openjpa.jdbc.kernel.PreparedStatementManagerImpl.executeUpdate(PreparedStatementManagerImpl.java:268) at org.apache.openjpa.jdbc.kernel.PreparedStatementManagerImpl.flushAndUpdate(PreparedStatementManagerImpl.java:119) On Jan 7, 2014, at 7:31 PM, Rick Curtis curti...@gmail.com wrote: It isn't clear to me what is going on, but with the DBDictionary you can change the type of column that boolean / bit field types are mapped to. Perhaps you can change the bitTypeName / booleanTypeName to see if you can get something working? To change these values you can change the type to set the property openjpa.jdbc.DBDictionary=postgres(bitTypeName=BIT). HTH, Rick On Tue, Jan 7, 2014 at 7:38 PM, Hal Hildebrand hal.hildebr...@me.com wrote: So, I have a serious problem. I have tables that are defined to have BOOLEAN columns. When I run this with PostgreSQL JDBC drivers, from the middle tier, this works just fine. Everything peachy keen. However…. 
I’m also running this code inside the database via PL/Java. When running inside the session as a stored procedure, a different JDBC driver is used - i.e. the one integrated into PL/Java. This JDBC driver works fine for the most part. But the problem is that when I try to set NULL to BOOLEAN columns, OpenJPA barfs: Caused by: openjpa-2.2.2-r422266:1468616 fatal general error org.apache.openjpa.persistence.PersistenceException: column boolean_value is of type boolean but expression is of type bit {prepstmnt 108675190 INSERT INTO ruleform.job_attribute (id, notes, update_date, binary_value, boolean_value, integer_value, numeric_value, sequence_number, text_value, timestamp_value, job, research, updated_by, attribute, unit) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) [params=(long) 1, (null) null, (null) null, (null) null, (null) null, (null) null, (BigDecimal) 1500, (int) 1, (null) null, (null) null, (long) 55, (null) null, (long) 3, (long) 56, (null) null]} [code=0, state=42804] I have created a test case where I create a simple table with a BOOLEAN column and then use the raw JDBC driver to insert a NULL value into a row when running inside of PL/Java and it just works fine. So I’m guessing there’s some weird meta data thing going on. I did a lot of googling to see what I could find out, and basically
Re: PostgreSQL BIT/Boolean madness
I'm not certain if DatabaseMetaData has anything to do with what is going on. I'm going to suggest debugging down to see what values OpenJPA is passing to the PL/Java JDBC driver, and then see what values need to be passed in. That might help us understand what is going on. On Wed, Jan 8, 2014 at 2:33 PM, Hal Hildebrand hal.hildebr...@me.com wrote: Anyone know if java.sql.DatabaseMetaData might have something to do with this? The implementation that runs in PL/Java, I have the source code to, so I can change it. In particular, I’m suspicious that DatabaseMetaData.getTypeInfo() might be messing with me. On Jan 8, 2014, at 10:19 AM, Hal Hildebrand hal.hildebr...@me.com wrote: Sorry I didn’t make that clear. The table is defined to have type “BOOLEAN”. The Java type is Boolean for the column (the aptly named “boolean_value”). This works, as is, when I run from the middle tier to the database, using PostgreSQL JDBC drivers, using OpenJPA, with no changes to the postgres db dictionary. When running inside a stored procedure (using PL/Java), I’m using a different JDBC driver (i.e. the one integrated into PL/Java), and I’m still using OpenJPA. However, this fails with the stack trace below, complaining that the expression is of type BIT, but the column is defined as type BOOLEAN. On Jan 8, 2014, at 9:53 AM, Rick Curtis curti...@gmail.com wrote: Is the problem that we're calling stmnt.setNull(idx, Types.BOOLEAN); and the driver wants us to call stmnt.setNull(idx, Types.BIT);? Just so we're on the same page, your Entity has a boolean field that maps to a BIT column? On Wed, Jan 8, 2014 at 11:18 AM, Hal Hildebrand hal.hildebr...@me.com wrote: I tried this same trick in my app server - DB test and oddly it had no effect either way. I’m wondering if I’m actually doing anything here with this property. I do see the log line: 606 CoRE INFO [main] openjpa.jdbc.JDBC - Using dictionary class “org.apache.openjpa.jdbc.sql.PostgresDictionary”.
So I’m assuming that this property will in fact, do something to the db dictionary. But the default already is “BIT”, so it’s unclear that this is where the problem lies. On Jan 8, 2014, at 8:29 AM, Hal Hildebrand hal.hildebr...@me.com wrote: So, I tried it with both “BIT” and “BOOLEAN” with the same result. Wonder where OpenJPA is getting this metadata from? Here’s the stack trace, in case that catches someone’s eye: Caused by: org.apache.openjpa.lib.jdbc.ReportingSQLException: column boolean_value is of type boolean but expression is of type bit {prepstmnt 1254242409 INSERT INTO ruleform.job_attribute (id, notes, update_date, binary_value, boolean_value, integer_value, numeric_value, sequence_number, text_value, timestamp_value, job, updated_by, attribute, unit) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) [params=(long) 301, (null) null, (null) null, (null) null, (null) null, (int) 1500, (null) null, (int) 1, (null) null, (null) null, (long) 605, (long) 4, (long) 56, (null) null]} [code=0, state=42804] at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.wrap(LoggingConnectionDecorator.java:219) at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.wrap(LoggingConnectionDecorator.java:195) at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.access$1000(LoggingConnectionDecorator.java:59) at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator$LoggingConnection$LoggingPreparedStatement.executeUpdate(LoggingConnectionDecorator.java:1134) at org.apache.openjpa.lib.jdbc.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:275) at org.apache.openjpa.lib.jdbc.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:275) at org.apache.openjpa.jdbc.kernel.JDBCStoreManager$CancelPreparedStatement.executeUpdate(JDBCStoreManager.java:1792) at org.apache.openjpa.jdbc.kernel.PreparedStatementManagerImpl.executeUpdate(PreparedStatementManagerImpl.java:268) at 
org.apache.openjpa.jdbc.kernel.PreparedStatementManagerImpl.flushAndUpdate(PreparedStatementManagerImpl.java:119) On Jan 7, 2014, at 7:31 PM, Rick Curtis curti...@gmail.com wrote: It isn't clear to me what is going on, but with the DBDictionary you can change the type of column that boolean / bit field types are mapped to. Perhaps you can change the bitTypeName / booleanTypeName to see if you can get something working? To change these values you can change the type to set the property openjpa.jdbc.DBDictionary=postgres(bitTypeName=BIT). HTH, Rick On Tue, Jan 7, 2014 at 7:38 PM, Hal Hildebrand hal.hildebr...@me.com wrote: So, I have a serious problem. I have tables that are defined to have BOOLEAN columns. When I run this with PostgreSQL JDBC drivers, from the middle tier, this works just fine. Everything peachy keen
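Rick's setNull question above turns on two distinct JDBC type codes. A quick way to see the values in play (standard `java.sql.Types` constants from the JDK; the driver-mapping explanation is a hypothesis consistent with the 42804 error, not a confirmed diagnosis):

```java
import java.sql.Types;

// The two JDBC type codes involved in the setNull discussion. If the
// PL/Java driver renders Types.BOOLEAN parameters as the Postgres "bit"
// type, that would match the 42804 type-mismatch in the stack trace.
public class TypeCodes {
    public static void main(String[] args) {
        System.out.println("Types.BOOLEAN = " + Types.BOOLEAN); // 16
        System.out.println("Types.BIT     = " + Types.BIT);     // -7
    }
}
```

Logging which of these codes each driver receives in `setNull` is one concrete way to carry out the "debug down to see what values OpenJPA is passing" suggestion.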
Re: PostgreSQL BIT/Boolean madness
It isn't clear to me what is going on, but with the DBDictionary you can change the type of column that boolean / bit field types are mapped to. Perhaps you can change the bitTypeName / booleanTypeName to see if you can get something working? To change these values you can set the property openjpa.jdbc.DBDictionary=postgres(bitTypeName=BIT). HTH, Rick On Tue, Jan 7, 2014 at 7:38 PM, Hal Hildebrand hal.hildebr...@me.com wrote: So, I have a serious problem. I have tables that are defined to have BOOLEAN columns. When I run this with PostgreSQL JDBC drivers, from the middle tier, this works just fine. Everything peachy keen. However…. I’m also running this code inside the database via PL/Java. When running inside the session as a stored procedure, a different JDBC driver is used - i.e. the one integrated into PL/Java. This JDBC driver works fine for the most part. But the problem is that when I try to set NULL to BOOLEAN columns, OpenJPA barfs: Caused by: openjpa-2.2.2-r422266:1468616 fatal general error org.apache.openjpa.persistence.PersistenceException: column boolean_value is of type boolean but expression is of type bit {prepstmnt 108675190 INSERT INTO ruleform.job_attribute (id, notes, update_date, binary_value, boolean_value, integer_value, numeric_value, sequence_number, text_value, timestamp_value, job, research, updated_by, attribute, unit) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) [params=(long) 1, (null) null, (null) null, (null) null, (null) null, (null) null, (BigDecimal) 1500, (int) 1, (null) null, (null) null, (long) 55, (null) null, (long) 3, (long) 56, (null) null]} [code=0, state=42804] I have created a test case where I create a simple table with a BOOLEAN column and then use the raw JDBC driver to insert a NULL value into a row when running inside of PL/Java and it just works fine. So I’m guessing there’s some weird meta data thing going on.
I did a lot of googling to see what I could find out, and basically it seems like there's an issue between type 2 and type 3 drivers with PostgreSQL BIT and BOOLEAN. Thus, the question I have: is there some setting in OpenJPA I can use to get around this? Things seem to turn nightmarish if I convert all my columns to BIT and try to deal with that. But hey, if there's a SQL way out of this level of hell, I'll gladly do that as well. Any help would be appreciated. -Hal -- *Rick Curtis*
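Rick's DBDictionary suggestion would typically go into the persistence unit's descriptor. A minimal sketch, assuming a persistence unit named "my-unit" (the unit name is illustrative; the property name and the postgres(bitTypeName=BIT) plugin string are taken directly from the thread):

```xml
<persistence-unit name="my-unit">
  <properties>
    <!-- Override how bit/boolean fields are mapped, per Rick's suggestion.
         bitTypeName / booleanTypeName are the DBDictionary knobs to try. -->
    <property name="openjpa.jdbc.DBDictionary"
              value="postgres(bitTypeName=BIT)"/>
  </properties>
</persistence-unit>
```

The same plugin string can also be passed as a property to Persistence.createEntityManagerFactory at runtime.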
Re: Inner enum cannot be used as literal in JPQL
Try changing your query to: select f from Foo f where f.type = com.myorg.jpa.Foo$FooType.BAR Thanks, Rick

On Mon, Jan 6, 2014 at 9:19 AM, Kevin Sutter kwsut...@gmail.com wrote:

The use of a qualified enum value should be valid for JPQL. But the spec doesn't seem to specify whether the enum type needs to be in its own file or not. I've looked through the spec and some references, and they all seem to assume that these enums are defined in their own files and not embedded within the Entity definition. I'd be hard pressed to consider this a bug. Maybe a feature request, but not a bug. As you mentioned, if you define this enum separately, everything works as expected, right? Is there some reason why that approach can't be used? Kevin

On Sat, Jan 4, 2014 at 4:50 PM, twelveeighty twelve.eig...@gmail.com wrote:

Can someone confirm whether this is a known issue or a known limitation? When I define an enum as part of my entity class, I am not able to use its values as literals in a JPQL statement:

package com.myorg.jpa;

public class Foo {
    public enum FooType { FOO, BAR }

    @Enumerated(EnumType.STRING)
    private FooType type;
}

select f from Foo f where f.type = com.myorg.jpa.Foo.FooType.BAR

Error message: Attempt to query field com.myorg.jpa.Foo.FooType.BAR from non-entity variable com. Perhaps you forgot to prefix the path in question with an identification variable from your FROM clause?

My version: OpenJPA 2.3.0-nonfinal-1540826

However, if I take FooType and define it as its own Enum class (com.myorg.jpa.FooType), the literal com.myorg.jpa.FooType.BAR works as expected. Should I log a bug for this? -- View this message in context: http://openjpa.208410.n2.nabble.com/Inner-enum-cannot-be-used-as-literal-in-JPQL-tp7585806.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
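Rick's workaround relies on the fact that the JVM's binary name for a nested type separates it from its enclosing class with '$' rather than '.'. A small standalone sketch (class names are illustrative, not from the thread) showing the name the workaround spells out:

```java
public class NestedEnumName {
    // Nested enum, analogous to Foo.FooType in the thread.
    public enum FooType { FOO, BAR }

    public static void main(String[] args) {
        // At the source level the qualified name is NestedEnumName.FooType,
        // but the binary name -- what Class.getName() returns -- uses '$'.
        System.out.println(FooType.class.getName());
        // prints "NestedEnumName$FooType"
    }
}
```

This is why spelling the literal as com.myorg.jpa.Foo$FooType.BAR can succeed where the dotted form is parsed as a path expression starting at a (nonexistent) identification variable.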
Re: Using ReverseMappingTool.run(...) to generate Java strings?
Take a look at https://issues.apache.org/jira/browse/OPENJPA-2466 to see if that'll work for you. Thanks, Rick On Tue, Dec 10, 2013 at 9:37 AM, Aron L a.lurie+o...@gmail.com wrote: Rick, That does what I need for now, but might I also suggest adding another method that parallels the one you just suggested: run(JDBCConfiguration conf, String[] args, Options opts, Map output)? Or combining the two, say: run(JDBCConfiguration conf, String[] args, Flags flags, Options opts, Map output)? It seems like there is a good bit of code in run(JDBCConfiguration, String[], Options) that one would want to reuse in line with the Map. -Aron -- View this message in context: http://openjpa.208410.n2.nabble.com/Using-ReverseMappingTool-run-to-generate-Java-strings-tp7585730p7585752.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
Re: Using ReverseMappingTool.run(...) to generate Java strings?
Aron - If you're using trunk, I could modify ReverseMappingTool to allow for extension. Let me know if you want me to put something in. Thanks, Rick

On Sun, Dec 8, 2013 at 3:50 PM, Aron Lurie a...@cambridgesemantics.com wrote:

Rick, Thanks for the idea. I did some reading into that, and while there is some potential to extend ReverseMappingTool and override the recordCode() method, the static run(...) method itself is responsible for instantiating the class [1]. So, without copying all of that method into the subclass, no dice. [1] http://grepcode.com/file/repository.springsource.com/org.apache.openjpa/com.springsource.org.apache.openjpa/1.1.0/org/apache/openjpa/jdbc/meta/ReverseMappingTool.java#1987 Thanks, Aron

On Sat, Dec 7, 2013 at 12:54 PM, Rick Curtis curti...@gmail.com wrote:

Have you tried to extend the ReverseMappingTool so that you can intercept the generated code rather than write it to a file? ... I'm not sure how feasible this might be as I'm writing from my phone.

On Fri, Dec 6, 2013 at 3:35 PM, Aron L a.lurie+o...@gmail.com wrote:

Hi All, I've managed to get ReverseMappingTool running from within a plain-old Java method, using the following:

JDBCConfiguration result = new JDBCConfigurationImpl();
result.setDBDictionary(xyz);
result.setConnectionURL(xyz);
result.setConnectionDriverName(xyz);
result.setConnectionUserName(xyz);
result.setConnectionPassword(xyz);
Options rmOpts = new Options();
String argString = "-Log DefaultLevel=INFO -metaDataFactory jpa() -metadata none";
rmOpts.setFromCmdLine(StringUtils.split(argString, " "));
try {
    ReverseMappingTool.run(result, new String[0], rmOpts);
} catch (Exception e) {
    e.printStackTrace();
}

This successfully writes Java files reflecting the specified database to the directory my code is running in, as expected. However, I would much prefer to have the generated Java returned in a map as Strings rather than written to files.
I see a way of changing the code slightly to make this happen: the recordCode method called by run at [1] below can take a Map, and if the Map is present, it will write the code I want into that map. So if a variant of run(...) is added that lets the user pass a map, which is then forwarded to that method, the user will get the code back. [1] http://grepcode.com/file/repository.springsource.com/org.apache.openjpa/com.springsource.org.apache.openjpa/1.1.0/org/apache/openjpa/jdbc/meta/ReverseMappingTool.java#2017 Does anyone see a better way to do this that doesn't involve changing the code? And if not, should I submit this customization as a patch? Thanks, Aron -- View this message in context: http://openjpa.208410.n2.nabble.com/Using-ReverseMappingTool-run-to-generate-Java-strings-tp7585730.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
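The change Aron describes — letting the generator record source into a caller-supplied Map instead of the file system — boils down to the following pattern. This is a self-contained sketch under assumed names (CodeGenerator, recordCode, writeToFile are illustrative, not the actual ReverseMappingTool API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed output path: when the caller passes a Map, the
// generated source is captured in memory; otherwise it is written to disk
// as before.
public class CodeGenerator {

    public void recordCode(String className, String source,
                           Map<String, String> output) {
        if (output != null) {
            output.put(className, source);   // in-memory capture
        } else {
            writeToFile(className, source);  // legacy file-writing behavior
        }
    }

    private void writeToFile(String className, String source) {
        // placeholder for the existing file-writing logic
    }

    public static void main(String[] args) {
        Map<String, String> generated = new HashMap<>();
        new CodeGenerator().recordCode("com.example.Person",
                "public class Person {}", generated);
        System.out.println(generated.get("com.example.Person"));
        // prints "public class Person {}"
    }
}
```

An overload of the static run(...) taking such a Map and forwarding it down to recordCode is what OPENJPA-2466 tracks.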
Re: Incorrect concurrent persist of ManyToOne field
Oleg - Can you post the maven project with this test? My first thought is that this *should* work, but I can come up with some hypothetical cases where it might not. Thanks, Rick

On Mon, Dec 9, 2013 at 12:02 PM, olyalikov oleg.lyali...@alcatel-lucent.com wrote:

Hello, In short: I have 2 entities, Person and Document; Person has a ManyToOne link to the Document. I create the Document in a separate thread/entity manager, close the entity manager (so the document becomes detached), and then concurrently create Person objects (in separate entity managers), setting the document field to the same detached Document object I created previously. As a result, some of the Person objects are persisted with a null value in the document field (in my tests, 1 such Person object for ~2000 good Person objects).

Full story: OpenJPA 2.2.2 is used + tomcat connection pool + no cache, for simplicity. Here are the entities (without setters and getters):

@Entity
public class Person {
    @Id @GeneratedValue
    private String id;
    @ManyToOne
    private Document document;
}

@Entity
public class Document {
    @Id @GeneratedValue
    private String id;
}

The code which creates the Document:

final Document document = new Document();
final EntityManager em = emf.createEntityManager();
try {
    final EntityTransaction tx = em.getTransaction();
    tx.begin();
    em.persist(document);
    tx.commit();
} finally {
    em.close();
}

Then, concurrently, I create Person objects:

final EntityManager emCreation = emf.createEntityManager();
try {
    for (int j = 0; j < threadObjectCount; j++) {
        final Person person = new Person();
        person.setDocument(document);
        final EntityTransaction tx = emCreation.getTransaction();
        tx.begin();
        emCreation.persist(person);
        tx.commit();
        emCreation.refresh(person);
        if (person.getDocument() == null) {
            System.err.println("Person with null Document found, id=" + person.getId());
        } else {
            succeded.incrementAndGet();
        }
    }
} finally {
    emCreation.close();
}

For some Person objects, person.getDocument() returns null.
Here is a SQL trace when a Person is persisted with a null Document:

1687 openjpa-concurrent-creation-test TRACE [pool-2-thread-3] openjpa.jdbc.SQL - t 4934637, conn 22701741 executing prepstmnt 5327894 INSERT INTO Person (id, DOCUMENT_ID) VALUES (?, ?) [params=(String) 1453, (null) null]

And here is the normal case:

1687 openjpa-concurrent-creation-test TRACE [pool-2-thread-1] openjpa.jdbc.SQL - t 4358252, conn 30226657 executing prepstmnt 32228587 INSERT INTO Person (id, DOCUMENT_ID) VALUES (?, ?) [params=(String) 1452, (String) 1]

There is no more information in the OpenJPA logs. I have a maven project with this test and can send it; the test consistently fails. As a fix, it's enough to find the Document through each new EntityManager and use that for the Person objects. It seems that the Document is correctly published with regard to my code and other threads (it is not changed by my code after these threads start), but it may be changed by OpenJPA during the Person persist, and in this case that may lead to an improper view of the Document object in different threads and explain the observed behaviour (perhaps). So I have several questions: 1) Is the Document object changed by OpenJPA when I persist a Person object (does it become managed or something)? 2) Either way, is it possible for OpenJPA to detect this situation and react with an exception? Or maybe the Document object is not changed and there is some other issue? Thanks, Oleg -- View this message in context: http://openjpa.208410.n2.nabble.com/Incorrect-concurrent-persist-of-ManyToOne-field-tp7585744.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
Re: Using ReverseMappingTool.run(...) to generate Java strings?
Would something like this work for you? http://pastebin.com/gwrFg48c Thanks, Rick On Mon, Dec 9, 2013 at 10:19 AM, Aron L a.lurie+o...@gmail.com wrote: Rick, That would be much appreciated - thank you. Please let me know if I can help at all. -Aron -- View this message in context: http://openjpa.208410.n2.nabble.com/Using-ReverseMappingTool-run-to-generate-Java-strings-tp7585730p7585742.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*
Re: Incorrect concurrent persist of ManyToOne field
The short story is that in this case you can't share the detached Document across threads, as OpenJPA does use some internal structures to get the id out. I'd suggest changing your code to use the EntityManager.getReference method:

person.setDocument(emCreation.getReference(Document.class, document.getId()));

Thanks, Rick

On Mon, Dec 9, 2013 at 1:39 PM, LYALIKOV, Oleg (Oleg) oleg.lyali...@alcatel-lucent.com wrote:

Thanks Rick for the quick reply. The project is in the attachment. Forgot to mention - it uses an H2 in-memory DB, so no additional setup is needed. Also, on different computers I saw different counts of failed objects - on one it's ~1-10 failed objects of 3 total, on another (slower, with fewer cores) ~10-100 failed of 3 total. So feel free to change the total object count and/or thread count (I create Max(4, availableProcessors()) threads; one computer has 4 cores and another 2 cores). Thanks, Oleg

-Original Message- From: Rick Curtis [mailto:curti...@gmail.com] Sent: Monday, December 09, 2013 11:23 PM To: users Subject: Re: Incorrect concurrent persist of ManyToOne field
Re: Using ReverseMappingTool.run(...) to generate Java strings?
I would rather add an additional static run method [1] that adds an output parameter to the existing parameter list. [1] http://pastebin.com/jZ7vUtM8 Thanks, Rick

On Mon, Dec 9, 2013 at 4:29 PM, Aron L a.lurie+o...@gmail.com wrote:

Rick, That could work, but I would end up using JDBCConfiguration in an unusual way to get the result I want, something like this: http://pastebin.com/Uqngcuaj I'm not sure this would be a lasting fix; there's nothing in the method contract that ensures the added value will be propagated through the other run methods. An easier (for me) way to do this might be adding a flag of type Map to ReverseMappingTool.Flags, and, if that flag is not null, calling tool.recordCode(Map) instead of tool.recordCode() where I linked before. However, I could see where this might be an issue, as the flag wouldn't really be usable from the command line. Thanks, Aron -- View this message in context: http://openjpa.208410.n2.nabble.com/Using-ReverseMappingTool-run-to-generate-Java-strings-tp7585730p7585750.html Sent from the OpenJPA Users mailing list archive at Nabble.com. -- *Rick Curtis*