Re: Performance with large lists

2010-06-18 Thread Adam Hardy
1) I'm using tomcat, so no Java EE container. I'm using Spring to handle the 
transactions.


2) I'm using one EMF for the whole app from app start, and one EM per 
transaction - or sometimes two - launched by a tomcat filter which opens it at the 
start of the HTTP request and closes it at the end. The two transactions I'm 
talking about are two distinct user operations, so each gets its own EM.


3) Build-time enhancement.

4) Actually no, because in OpenJPA 1.2.2 there is a bug that throws NPEs, which 
means I can't run with it enabled. I just upgraded to 2.0.0 and I think I've got the 
same bug - albeit with a different stack trace - but a quick google didn't turn up 
any immediately available work-around. I guess this is the key issue, right?
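[Editor's note: until the NPE is resolved, one interim option is to keep both caches explicitly off in persistence.xml. A sketch - the property names are from the OpenJPA reference guide; both default to off anyway, so this only makes the intent explicit:]

```xml
<!-- Keep the data and query caches disabled until the
     DataCachePCDataImpl NPE is resolved. -->
<property name="openjpa.DataCache" value="false"/>
<property name="openjpa.QueryCache" value="false"/>
```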




Here's my persistence.xml:

  <persistence-unit name="OpenJpaJdbc">
    <description>Pattern Repo JPA Config with OpenJPA</description>
    <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
    <mapping-file>org/permacode/atomic/domain/entity/AtomicEntity.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/BrokerAccount.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/Category.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/DollarReturn.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/Exchange.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/ExchangeSymbol.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/Fill.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/Market.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/MarketSymbol.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/MarketSystem.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/Order.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/Pattern.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/Portfolio.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/PortfolioItem.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/RequiredParam.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/TestAnalysis.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/TestRun.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/Trade.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/TradingParam.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/TradingSystem.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/Weighting.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/WeightingGroup.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/MiniAnalysis.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/MiniResult.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/TestQueries.xml</mapping-file>
    <properties>
      <property name="openjpa.RuntimeUnenhancedClasses" value="unsupported"/>
      <property name="openjpa.Log" value="SQL=TRACE"/>
      <property name="openjpa.ConnectionFactoryProperties"
                value="PrettyPrint=true, PrettyPrintLineLength=72"/>
    </properties>
  </persistence-unit>




Donald Woods on 18/06/10 13:36, wrote:

Some quick initial questions -
1) Are you using Java SE or a Java EE container (more of a transactions
and XA question)?
2) Are you using one EMF and EM for both the parent, child and
grandchild operations?  Are you calling flush() anywhere?
3) Are you using build time or run time enhancement of entities?
4) Have you enabled the DataCache or QueryCache?
http://openjpa.apache.org/builds/2.0.0/apache-openjpa-2.0.0/docs/manual/manual.html#ref_guide_cache_query

Also, what do the entities and your persistence.xml look like?


-Donald


On 6/18/10 8:11 AM, Adam Hardy wrote:

I have a transaction that submits around 20K records, followed by
another transaction in the user process which inserts the same into a
related table.

The performance issues I talked about below initially caused processing times of 
45 mins, but I worked on the Java code and tweaked the MySQL database, which 
reduced it to 15 mins.

This is still a problem, though - I've been over the optimization guidelines in 
the documentation and there's nothing there that I'm not already implementing.

The first transaction I mentioned takes 5 mins, but the second takes 15 mins, 
and it inserts child records of the records created in the first transaction. 
It looks like OpenJPA is fetching all of those first records again. Shouldn't 
they already be in memory?


Thanks
Adam
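[Editor's note: for insert loops of this size, a common JPA pattern - a sketch, not taken from the original posts - is to persist in fixed-size chunks, flushing and clearing the persistence context so it never holds all 20K managed instances at once. The `BatchTarget` interface below is a hypothetical stand-in for the three `javax.persistence.EntityManager` methods used, so the chunking logic is self-contained; in real code you would pass the EntityManager itself and run the loop inside one transaction:]

```java
import java.util.List;

// Hypothetical stand-in for the javax.persistence.EntityManager methods
// used below, so the chunking logic can run without a database.
interface BatchTarget {
    void persist(Object entity);
    void flush();
    void clear();
}

class BatchInsert {
    // Persist a large list in chunks of batchSize, flushing pending
    // INSERTs and clearing managed state at each chunk boundary.
    // Returns the number of flush/clear cycles performed.
    static int persistInBatches(List<?> entities, BatchTarget em, int batchSize) {
        int cycles = 0;
        for (int i = 0; i < entities.size(); i++) {
            em.persist(entities.get(i));
            if ((i + 1) % batchSize == 0) {
                em.flush();  // push this chunk's INSERTs to the database
                em.clear();  // detach the chunk so memory use stays flat
                cycles++;
            }
        }
        em.flush();          // final, possibly partial chunk
        em.clear();
        return cycles + 1;
    }
}
```

Note that cleared entities become detached, so any later code that still holds references must re-load them; that trade-off is the point of the pattern.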

Adam Hardy on 12/06/10 13:21, wrote:

I am trying to get a handle on what I should be able to achieve.

Can someone give me some idea of the metrics I should be able to get
optimistically when persisting an object that has a child list with
20,000 child objects and 20,000 grandchildren? (one-to-one child -
grandchild)

Can I reasonably expect to get this done in under a minute?

I think that would work out at a rate of about 1.5 milliseconds per
object.

Thanks
Adam

Adam Hardy on 11/06/10 17:34

Re: Performance with large lists

2010-06-18 Thread Adam Hardy

Donald Woods on 18/06/10 20:54, wrote:

4) Actually no because in OpenJPA 1.2.2 there is a bug that throws NPEs
and means I can't run with it enabled. I just upgraded to 2.0.0 I think
I've got the same bug - albeit a different stack trace - but a quick
google didn't show up any immediately available work-around. I guess
this is the key issue, right?



Which JIRA and what does your 2.0.0 stack trace look like?


OK, you caught me there. I did not find it in the JIRA just now (I searched on 
both NullPointerException and DataCachePCDataImpl); I was referring to this 
message:


http://openjpa.208410.n2.nabble.com/NullPointerException-in-DataCache-td532523.html

The same stack trace as mine in 1.2.2, and this is my stack trace with 2.0.0:

Caused by: java.lang.NullPointerException
	at org.apache.openjpa.datacache.DataCachePCDataImpl.clearInverseRelationCache(DataCachePCDataImpl.java:179)
	at org.apache.openjpa.datacache.DataCachePCDataImpl.storeField(DataCachePCDataImpl.java:159)
	at org.apache.openjpa.kernel.PCDataImpl.store(PCDataImpl.java:235)
	at org.apache.openjpa.datacache.DataCachePCDataImpl.store(DataCachePCDataImpl.java:125)
	at org.apache.openjpa.datacache.DataCacheStoreManager.updateCaches(DataCacheStoreManager.java:179)
	at org.apache.openjpa.datacache.DataCacheStoreManager.commit(DataCacheStoreManager.java:90)
	at org.apache.openjpa.kernel.DelegatingStoreManager.commit(DelegatingStoreManager.java:95)
	at org.apache.openjpa.kernel.BrokerImpl.endStoreManagerTransaction(BrokerImpl.java:1436)
	at org.apache.openjpa.kernel.BrokerImpl.endTransaction(BrokerImpl.java:2316)
	at org.apache.openjpa.kernel.BrokerImpl.afterCompletion(BrokerImpl.java:1975)
	... 69 more


Re: Performance with large lists

2010-06-12 Thread Adam Hardy

I am trying to get a handle on what I should be able to achieve.

Can someone give me some idea of the metrics I should be able to get 
optimistically when persisting an object that has a child list with 20,000 child 
objects and 20,000 grandchildren? (one-to-one child - grandchild)


Can I reasonably expect to get this done in under a minute?

I think that would work out at a rate of about 1.5 milliseconds per object.

Thanks
Adam

Adam Hardy on 11/06/10 17:34, wrote:
I have performance problems with large lists of beans due to the base 
class I am using for my entities.


This is slightly non-OpenJPA specific, so I hope nobody minds, but it is 
Friday afternoon so I'm hoping you give me a bit of slack here.


The problem arises when I start building lists with over 10,000 items on 
a parent class.


The trouble is in the base class for the entities, which is quite clever 
(but obviously not clever enough): it has non-facile equals() and 
hashCode() algorithms which make use of reflection. That's where the 
slow-down comes from.


When I link the child with a parent that already has 10,000 children, 
the equals() method is called by ArrayList before the new child is 
placed in the index.


As far as I can tell I have a couple of options.

(1) ditch the reflection-based equals method and hard-code an equals 
method.


(2) don't use ArrayList but find a Collection-based class that uses 
hashes or similar to identify items instead of equals. This is just 
speculation - perhaps there is no such thing or it wouldn't help anyway:


- would a collection using hashes cache the hashes of the items 
already indexed?

   - would such a collection be persistable?

If anyone has been in this situation before, or has an idea about it, 
I'd really appreciate the help.


Regards
Adam
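[Editor's note: option (1) and (2) combine naturally. A sketch of a hand-coded, reflection-free equals()/hashCode() on an immutable business key - the class and field names below are invented, not from the poster's model. With equality pinned this way, a hash-based Set checks membership via hashCode() buckets instead of the linear equals() scan that ArrayList.contains() performs:]

```java
// Hypothetical entity: class and field names are illustrative only.
class Child {
    private final Long id;           // surrogate key; null until persisted
    private final String naturalKey; // immutable business key

    Child(Long id, String naturalKey) {
        this.id = id;
        this.naturalKey = naturalKey;
    }

    // Hand-coded, reflection-free equality on the business key: adding
    // the 10,001st child to a hash-based collection no longer scans the
    // previous 10,000 with a reflective equals().
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Child)) return false;
        return naturalKey.equals(((Child) o).naturalKey);
    }

    @Override
    public int hashCode() {
        return naturalKey.hashCode();
    }
}
```

A java.util.LinkedHashSet keeps insertion order in memory and maps as a plain Set in JPA, so switching the relationship field from List to Set is usually the smaller change.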






deleting with cascades - performance?

2010-02-04 Thread Adam Hardy
I've got a performance issue doing deletes. I have a simple model where a 
grandparent, parent, child, grandchild relationship exists, with one grandparent 
having 2 or 3 thousand grandchildren. I delete the parent and cascade it down 
the relationships to delete the whole lot.


This takes 5 mins on a 1.6GHz Core2 machine.

I'm just doing this:

entityManager.remove(grandparent);

Is there a better way or do I just have to put some indices on the database? I 
thought standard foreign key constraints acted as indices in databases - isn't 
that so?


Thanks
Adam
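[Editor's note: one alternative worth measuring is bulk JPQL deletes issued leaf-first, which avoid loading every row into the persistence context - though they bypass cascade settings and leave any already-loaded entities stale. The entity names below are illustrative, not from the original post; the question describes deleting a parent, so the sketch starts at the grandchild level:]

```
DELETE FROM Grandchild gc
  WHERE gc.child IN (SELECT c FROM Child c WHERE c.parent = :p)
DELETE FROM Child c WHERE c.parent = :p
DELETE FROM Parent p WHERE p = :p
```

(These are JPQL statements run via em.createQuery(...).executeUpdate(), one per level, inside a single transaction.)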


Re: Eclipse WTP tomcat plugin

2009-12-11 Thread Adam Hardy
Is anybody using OpenJPA and doing in-container development and debugging with a 
 servlet container like tomcat?



Adam Hardy on 09/12/09 09:52, wrote:
yes, I can run it without, but I have several persistence units defined 
and I would have to delete all but one for it to work.


I have actually done that as a work-around, and I ran straight into 
class loading problems with tomcat, and ClassNotFoundExceptions 
appearing for anything that I had in the tomcat lib directory for the 
javaagent operation.



Regards
Adam

Rick Curtis on 08/12/09 17:25, wrote:

Adam -

Have you tried setting the javaagent without the =properties=persis ?
Another option would be build/package time enhancement?

Thanks,
Rick

Adam Hardy-4 wrote:

I'm using the Eclipse Galileo and WTP to run an embedded instance of
tomcat in Eclipse and I am trying to run enhancement on my webapp's 
entities using

any means possible but am having no luck so far.

The furthest I get is with the javaagent as a parameter on the tomcat
launch config. However this fails when I try to specify the 
persistence.xml file.

I tried:

-javaagent:/disk2/java/m2-repo/org/apache/openjpa/openjpa/1.2.2-SNAPSHOT/openjpa-1.2.2-SNAPSHOT.jar=properties=persistence.xml#OpenJpaJdbc 



and I get this:

Caused by: java.util.MissingResourceException: persistence.xml#OpenJpaJdbc
	at org.apache.openjpa.lib.conf.ProductDerivations.load(ProductDerivations.java:272)
	at org.apache.openjpa.lib.conf.Configurations.populateConfiguration(Configurations.java:344)
	at org.apache.openjpa.enhance.PCEnhancerAgent.registerClassLoadEnhancer(PCEnhancerAgent.java:101)
	at org.apache.openjpa.enhance.PCEnhancerAgent.premain(PCEnhancerAgent.java:82)
	... 6 more

The line works fine when I'm using it as a parameter to run junit tests,
but not here.

I can see that Eclipse-Tomcat has put the persistence.xml file deep in 
its directory structure somewhere, but giving the full path as a 
parameter didn't help. In any case, it should be picked up from the 
classpath, I assume.







Re: Eclipse WTP tomcat plugin

2009-12-09 Thread Adam Hardy

Hi

yes, I can run it without, but I have several persistence units defined and I 
would have to delete all but one for it to work.


I have actually done that as a work-around, and I ran straight into class 
loading problems with tomcat, and ClassNotFoundExceptions appearing for anything 
that I had in the tomcat lib directory for the javaagent operation.



Regards
Adam

Rick Curtis on 08/12/09 17:25, wrote:

Adam -

Have you tried setting the javaagent without the =properties=persis ?
Another option would be build/package time enhancement?

Thanks,
Rick

Adam Hardy-4 wrote:

I'm using the Eclipse Galileo and WTP to run an embedded instance of
tomcat in 
Eclipse and I am trying to run enhancement on my webapp's entities using
any 
means possible but am having no luck so far.


The furthest I get is with the javaagent as a parameter on the tomcat
launch 
config. However this fails when I try to specify the persistence.xml file.

I tried:

-javaagent:/disk2/java/m2-repo/org/apache/openjpa/openjpa/1.2.2-SNAPSHOT/openjpa-1.2.2-SNAPSHOT.jar=properties=persistence.xml#OpenJpaJdbc

and I get this:

Caused by: java.util.MissingResourceException: persistence.xml#OpenJpaJdbc
	at org.apache.openjpa.lib.conf.ProductDerivations.load(ProductDerivations.java:272)
	at org.apache.openjpa.lib.conf.Configurations.populateConfiguration(Configurations.java:344)
	at org.apache.openjpa.enhance.PCEnhancerAgent.registerClassLoadEnhancer(PCEnhancerAgent.java:101)
	at org.apache.openjpa.enhance.PCEnhancerAgent.premain(PCEnhancerAgent.java:82)
	... 6 more

The line works fine when I'm using it as a parameter to run junit tests,
but not 
here.


I can see that Eclipse-Tomcat has put the persistence.xml file deep in its 
directory structure somewhere, but giving the full path as a parameter
didn't 
help anyway, it should be picked up from the classpath, I assume.




Re: Eclipse WTP tomcat plugin

2009-12-08 Thread Adam Hardy
I had to give up trying to get the javaagent to run on the Eclipse Tomcat plugin. 
Maybe it works, maybe it doesn't - I can't really tell at tomcat startup - and the 
problem is that every time I make a request, I see ClassNotFound errors.


I think the ClassNotFound errors arise from the requirement to have OpenJPA.jar 
in the classpath for the server rather than the app. Ditto for the other jars 
that OpenJPA depends on.


So I figure it is a class loading issue with the classes loaded by javaagent 
then becoming inaccessible to the webapp.


Pure speculation though.

Adam

Adam Hardy on 07/12/09 19:24, wrote:
I'm using the Eclipse Galileo and WTP to run an embedded instance of 
tomcat in Eclipse and I am trying to run enhancement on my webapp's 
entities using any means possible but am having no luck so far.


The furthest I get is with the javaagent as a parameter on the tomcat 
launch config. However this fails when I try to specify the 
persistence.xml file. I tried:


-javaagent:/disk2/java/m2-repo/org/apache/openjpa/openjpa/1.2.2-SNAPSHOT/openjpa-1.2.2-SNAPSHOT.jar=properties=persistence.xml#OpenJpaJdbc 



and I get this:

Caused by: java.util.MissingResourceException: persistence.xml#OpenJpaJdbc
	at org.apache.openjpa.lib.conf.ProductDerivations.load(ProductDerivations.java:272)
	at org.apache.openjpa.lib.conf.Configurations.populateConfiguration(Configurations.java:344)
	at org.apache.openjpa.enhance.PCEnhancerAgent.registerClassLoadEnhancer(PCEnhancerAgent.java:101)
	at org.apache.openjpa.enhance.PCEnhancerAgent.premain(PCEnhancerAgent.java:82)
	... 6 more

The line works fine when I'm using it as a parameter to run junit tests, 
but not here.


I can see that Eclipse-Tomcat has put the persistence.xml file deep in 
its directory structure somewhere, but giving the full path as a 
parameter didn't help anyway, it should be picked up from the classpath, 
I assume.


Does anyone have this working?


Eclipse WTP tomcat plugin

2009-12-07 Thread Adam Hardy
I'm using the Eclipse Galileo and WTP to run an embedded instance of tomcat in 
Eclipse and I am trying to run enhancement on my webapp's entities using any 
means possible but am having no luck so far.


The furthest I get is with the javaagent as a parameter on the tomcat launch 
config. However this fails when I try to specify the persistence.xml file. I tried:


-javaagent:/disk2/java/m2-repo/org/apache/openjpa/openjpa/1.2.2-SNAPSHOT/openjpa-1.2.2-SNAPSHOT.jar=properties=persistence.xml#OpenJpaJdbc

and I get this:

Caused by: java.util.MissingResourceException: persistence.xml#OpenJpaJdbc
	at org.apache.openjpa.lib.conf.ProductDerivations.load(ProductDerivations.java:272)
	at org.apache.openjpa.lib.conf.Configurations.populateConfiguration(Configurations.java:344)
	at org.apache.openjpa.enhance.PCEnhancerAgent.registerClassLoadEnhancer(PCEnhancerAgent.java:101)
	at org.apache.openjpa.enhance.PCEnhancerAgent.premain(PCEnhancerAgent.java:82)
	... 6 more

The line works fine when I'm using it as a parameter to run junit tests, but not 
here.


I can see that Eclipse-Tomcat has put the persistence.xml file deep in its 
directory structure somewhere, but giving the full path as a parameter didn't 
help. In any case, it should be picked up from the classpath, I assume.


Does anyone have this working?

Thanks
Adam


Re: openjpa-maven-plugin snapshot

2009-12-03 Thread Adam Hardy
I'm not on the Mojo list, so I'm not sure what they're saying there, but I do 
know from their JIRA that there are only 2 issues in the way of a release, and 
one of them is relatively trivial:


http://jira.codehaus.org/browse/MOPENJPA-3

You could vote for it I guess.

The other issue is something I know nothing about.

I downloaded the source from svn and it compiles easily with maven and you can 
install the snapshot to your local repo:


svn checkout https://svn.codehaus.org/mojo/trunk/mojo/openjpa-maven-plugin openjpa-maven-plugin



Michael Vorburger on 03/12/09 10:45, wrote:

Hello Mojoers, I support the request for an e.g. non-SNAPSHOT v1.1
OpenJPA Maven plugin published to a repo somewhere... 


Would be nice to be able to use the
http://jira.codehaus.org/browse/MOPENJPA-8 fix!


-Original Message-
From: Adam Hardy [mailto:adam@cyberspaceroad.com] 
Sent: Wednesday, December 02, 2009 7:04 PM

To: users@openjpa.apache.org
Subject: Re: openjpa-maven-plugin snapshot

Oh I thought the dev guys for the plugin preferred this user list.

No problem. It looks like I'll have to download the source and have a go
at it myself anyway.


Regards
Adam

Michael Dick on 02/12/09 16:14, wrote:

Hi all,

The maven plugin is something different. The code is in codehaus SVN

and the

resulting binaries are probably published to a codehaus m2-snapshot
repository.

I don't know how often those get published, so I've cross-posted to

their

users email list.

Sorry for the confusion,

-mike

On Tue, Dec 1, 2009 at 5:13 PM, Adam Hardy

adam@cyberspaceroad.comwrote:

ljnelson on 01/12/09 20:47, wrote:


On Tue, Dec 1, 2009 at 1:48 PM, Michael Dick [via OpenJPA] wrote:


I publish the snapshots to the m2-snapshot repository on people.apache.org fairly regularly. You can find 1.1.1-SNAPSHOT at

http://people.apache.org/repo/m2-snapshot-repository/org/apache/openjpa/openjpa/1.1.1-SNAPSHOT/



(Not part of this conversation, but is that for the OpenJPA Maven

plugin,

or
just OpenJPA?)


Looks like OpenJPA itself. I should have mentioned the

openjpa-maven-plugin

in the text as well as the title.

Interesting to know about the repository for Apache snapshots though.

Regards
Adam











openjpa-maven-plugin snapshot

2009-12-01 Thread Adam Hardy

Hi

I can't find any snapshot repository that has 1.1-SNAPSHOT available. Is there 
no publicly available snapshot or product of some CI build?


Thanks
Adam




Re: openjpa-maven-plugin snapshot

2009-12-01 Thread Adam Hardy

ljnelson on 01/12/09 20:47, wrote:

On Tue, Dec 1, 2009 at 1:48 PM, Michael Dick [via OpenJPA] wrote:



I publish the snapshots to the m2-snapshot repository on people.apache.org fairly regularly. You can find 1.1.1-SNAPSHOT at

http://people.apache.org/repo/m2-snapshot-repository/org/apache/openjpa/openjpa/1.1.1-SNAPSHOT/



(Not part of this conversation, but is that for the OpenJPA Maven plugin, or
just OpenJPA?)


Looks like OpenJPA itself. I should have mentioned the openjpa-maven-plugin in 
the text as well as the title.


Interesting to know about the repository for Apache snapshots though.

Regards
Adam


Re: resultsets with Object[] lists

2009-07-26 Thread Adam Hardy

Miłosz Tylenda on 25/07/09 18:06, wrote:

Daryl Stultz on 24/07/09 13:45, wrote:

Adam's original question is about portability of the Object[] return type,
yes?  Will this query:

select distinct o.periodEnd, o.portfolioItem.portfolio
from Order o
order by o.periodEnd asc, o.portfolioItem.portfolio.title

return a List of Object[] in all JPA implementations or just OpenJPA?

That is my original question, yes. Anyone care to comment?


Yes. All JPA implementations should return a List of Object[] for that query. 
What they say in JPA 1 spec:

The elements of the result of a Java Persistence query whose SELECT clause 
consists of more than one
select expression are of type Object[].


Haha! So that's why I didn't find it. Wrong section and wrong search string. I 
normally read the paragraphs that have the most relevant headings in the 
contents list and then search for other stuff on keywords. My tried and trusty 
RTFM method is obviously lacking.
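[Editor's note: since the spec guarantees the Object[] shape, the unpacking boilerplate can be isolated in one small mapper. A sketch - the row-holder class and the hard-coded rows are invented; a real List<Object[]> would come from Query.getResultList():]

```java
import java.util.ArrayList;
import java.util.List;

class RowMapper {
    // Holder for one row of the two-column SELECT; the names mirror the
    // select list (o.periodEnd, o.portfolioItem.portfolio).
    static class PeriodPortfolio {
        final Object periodEnd;
        final Object portfolio;
        PeriodPortfolio(Object periodEnd, Object portfolio) {
            this.periodEnd = periodEnd;
            this.portfolio = portfolio;
        }
    }

    // Array index positions follow the SELECT list order: a SELECT
    // clause with more than one expression yields Object[] rows.
    static List<PeriodPortfolio> map(List<Object[]> rows) {
        List<PeriodPortfolio> out = new ArrayList<PeriodPortfolio>();
        for (Object[] row : rows) {
            out.add(new PeriodPortfolio(row[0], row[1]));
        }
        return out;
    }
}
```

The other portable route is a JPQL constructor expression, SELECT NEW com.example.Dto(o.periodEnd, o.portfolioItem.portfolio), which removes the Object[] handling entirely (the DTO class name here is illustrative).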


Re: resultsets with Object[] lists

2009-07-24 Thread Adam Hardy

Daryl Stultz on 24/07/09 13:45, wrote:

Adam's original question is about portability of the Object[] return type,
yes?  Will this query:

select distinct o.periodEnd, o.portfolioItem.portfolio
from Order o
order by o.periodEnd asc, o.portfolioItem.portfolio.title

return a List of Object[] in all JPA implementations or just OpenJPA?


That is my original question, yes. Anyone care to comment?


Thanks
Adam


Re: openJPA generates select per row - impossible to use for simple select statements

2009-07-22 Thread Adam Hardy

Kevin Sutter on 21/07/09 14:42, wrote:

For those of you interested, here are the details on this problem and what's
required to reproduce it.  I will be opening a JIRA and posting a testcase
shortly.

Requirements...

o  Your Entity needs to have an attribute that OpenJPA wrappers with a
proxy.  This proxy is used to detect changes to these object types.  Example
object types that OpenJPA wrappers are Calendar (culprit for this scenario),
Date, Collection, and Map.

o  In the Setter method for the proxied attribute, you must modify the value
getting set.  In this scenario, the Calendar object was being modified in
the setDate method via the set() method.


I thought putting code in your entities' setters and getters was considered bad 
practice. Are you saying that there is much code in OpenJPA to cater for this 
kind of thing?



Regards
Adam


resultsets with Object[] lists

2009-07-22 Thread Adam Hardy

Hi,

I just coded up a webpage to show the resultset from a query like this:

select distinct o.periodEnd, o.portfolioItem.portfolio
from Order o
order by o.periodEnd asc, o.portfolioItem.portfolio.title

I required quite a lot of code to put the resulting Object[] list into a format 
I can use. So it struck me that it would be a major pain if this implementation 
isn't portable.


I read through the JPA v1 spec and it was unfortunately pretty non-specific on 
the matter ;)


Is this type of resultset with Object[] items guaranteed for portability and 
future compatibility? Or have I just coded up stuff that is OpenJPA v1.2 only?


Thanks
Adam


Re: Nullable unique constraints

2009-06-26 Thread Adam Hardy

Daryl Stultz on 26/06/09 13:15, wrote:

I am finding that OpenJPA does not handle my unique constraints the way I
expect. When I have a nullable column, say a foreign key that is an integer,
and it must be unique (if not null), OpenJPA inserts a 0 instead of null,
which violates the FK constraint, of course. Currently I am using the unique
constraint on the database but omitting it from the entity metadata (and
wondering what benefit I might be sacrificing - the manual speaks only of
table creation DDL and ordering SQL statements). Is there a way to define a
Unique Constraint that allows null?



Presumably you have tried setting the column definition in your metadata to 
nullable="true" and unique="true"?
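[Editor's note: for reference, a mapping sketch of what that would look like in orm.xml - the field and column names are invented. The 0-instead-of-null symptom is also the classic sign of a primitive field (int cannot hold null); a wrapper type such as Integer or Long is needed for the column to be truly nullable:]

```xml
<!-- Illustrative orm.xml fragment: a nullable column that also
     carries a unique constraint. The entity field should be a
     wrapper type (Integer/Long), not a primitive. -->
<basic name="counterpartyId">
  <column name="COUNTERPARTY_ID" nullable="true" unique="true"/>
</basic>
```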


execution order of sql statements for multiple persists and updates

2009-06-26 Thread Adam Hardy

Hi All,

I have played with this all day and managed to find one work-around but I can't 
really live with it for long - I hope I'm just missing something in my config 
but I can't work out what:


I have four entities, a grandparent, two parents (each with many-to-one with the 
grandparent) and one grandchild (related many-to-one to both parents). (And all 
are bi-directional).


OpenJPA 1.2 is having problems ordering the inserts and updates when I'm adding 
new entities and seems to ignore my metadata.


Despite setting nullable="false" insertable="true" updatable="false" on the 
grandchild join-column definition, I see that the grandchild is inserted first 
without the parent foreign keys, then the parent is inserted, after which 
OpenJPA issues an UPDATE on the grandchild. This breaks the referential 
integrity constraints when the INSERT executes.


Perhaps the cascading is important? I have tried cascading only through one 
parent from grandparent to grandchild, and then through the other, and also by 
allowing both to cascade (cascade-all) but only one configuration works which is 
when I'm persisting everything. If I try it with an existing grandparent, I 
can't get it to work.


Thanks in advance for any help

Adam
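[Editor's note: two settings that come up repeatedly on this list for foreign-key-aware statement ordering, shown as an illustrative persistence.xml fragment; whether they resolve this particular case is untested:]

```xml
<!-- Make OpenJPA read existing foreign keys from the schema... -->
<property name="openjpa.jdbc.SchemaFactory" value="native(ForeignKeys=true)"/>
<!-- ...and order INSERT/UPDATE statements by operation dependency. -->
<property name="openjpa.jdbc.UpdateManager" value="operation-order"/>
```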



Re: cascading deletes and entity relationship constraints

2009-05-11 Thread Adam Hardy
Just a quick apology up-front because I haven't had time to follow up my problem 
myself. I fully intend to at some point soon, but in the meantime, I am having 
problems with cascading inserts, which I think may be closely related, so I 
thought I'd add it into this thread.


I create a child and a parent entity in the same transaction, and call the 
EntityManager operations on the child. It seems from my current state that 
OpenJPA is trying to insert the child before the parent, which won't work due to 
the referential integrity constraints on the child table.


Due to time pressure at the moment I'm going to work around it by swapping over 
the calls and pass in the parent to the entity manager.



Paul Copeland on 16/04/09 05:02, wrote:
Are you saying that CascadeType.REMOVE does not propagate through 
multiple levels of OneToMany relations and this only happens when using 
database foreign keys?


The SQL errors in JIRA-39 look like they might be due to operation 
ordering - hence the suggestion to use the UpdateManager setting below - 
The documentation is very brief so it is unclear when you need that 
setting and why it isn't the default in the first place - apparently you 
lose some batch statement efficiency?


Side question - is the database schema reflection that happens with 
openjpa.jdbc.SchemaFactory=native(ForeignKeys=true)  expensive?  Does 
this only happen once per EntityManagerFactory?


On 4/15/2009 4:03 PM, Adam Hardy wrote:


My observation is that one level of parent-child relationship 
cascading will work, but OpenJPA won't descend the relationship tree 
any further.



Paul Copeland on 15/04/09 23:28, wrote:
JIRA-39 is pretty old. Possibly JIRA-1004 is related to your 
question? There was recent discussion on this list about this.


Also there was recent mention on this list of using either one of 
these settings regarding ordering of operations (reading the docs 
leaves me with some questions about what these do).


 <property name="openjpa.jdbc.SchemaFactory" value="native(ForeignKeys=true)"/>

 <property name="openjpa.jdbc.UpdateManager" value="operation-order"/>

It seems like by default (this is just my observation) OpenJPA 
manages effective foreign key behavior without actually generating 
foreign keys when it does forward mapping.  If you have existing 
foreign keys on a database or the database is updated by non-JPA 
clients then you might need to apply annotations and configuration to 
account for that.


On 4/12/2009 7:31 PM, Adam Hardy wrote:
Just wanted to check that this old issue still describes the 
situation properly:


http://issues.apache.org/jira/browse/OPENJPA-39
Cascade delete does not work with foreign key constraints

- marked as 'resolved - wont fix'.

Just wondering if maybe in the in-between time something has been 
implemented for this.


I'm trying to do exactly this (delete a large part of my object 
model cascading down several parent-child relationships) but I can't 
get it to work.


cascading deletes and entity relationship constraints

2009-04-12 Thread Adam Hardy

Just wanted to check that this old issue still describes the situation properly:

http://issues.apache.org/jira/browse/OPENJPA-39
Cascade delete does not work with foreign key constraints

- marked as 'resolved - wont fix'.

Just wondering if maybe in the in-between time something has been implemented 
for this.


I'm trying to do exactly this (delete a large part of my object model cascading 
down several parent-child relationships) but I can't get it to work.



Regards
Adam


Re: enum = null becomes blank string in db

2009-04-05 Thread Adam Hardy
OK after a bit of cursory testing, it seems that putting the nullable=false 
attribute on the column metadata prevents the situation.


Do you consider this a bug? Do you want a bug report?


Adam Hardy on 04/04/09 12:39, wrote:
OpenJPA is not persisting null as you had thought, it is changing the 
null value to an empty string.


I also realised I wrote some logic into the enum which might be causing 
the problem, since I stripped the logic out and it now throws an 
'ArgumentException: Field 
org.permacode.patternrepo.domain.entity.TestRun.timeFrame of 
org.permacode.patternrepo.domain.entity.test...@a82a10 can not be set 
to null value.' So that is good.


However I don't know why my logic in the enum should affect this. Here's 
what I've got:


public enum PatternRepoTimeFrame
{
    NOT_USED("NOT USED"),
    DAILY("DAILY");

    private String period;

    private PatternRepoTimeFrame(String newPeriod)
    {
        this.period = newPeriod;
    }

    public static PatternRepoTimeFrame get(String whichPeriod)
    {
        for (PatternRepoTimeFrame value : PatternRepoTimeFrame.values())
        {
            if (value.period.equals(whichPeriod))
            {
                return value;
            }
        }
        return NOT_USED;
    }
}




Adam Hardy on 04/04/09 12:24, wrote:
I followed your suggestion and placed the 'nullable' attribute on the 
column, but it had no impact.


I'm having problems grasping the logic behind that nullable attribute.

If the column is not nullable and I also configure JPA with 
nullable=false, then surely OpenJPA should throw an exception before 
it tries to execute a prepared statement with that field set to null?


Failing that, OpenJPA should try to persist the entity with that 
column = null, but the database should throw a constraint violation 
exception.


However in my case here, I don't see either behaviour - what I am 
seeing is the value of the data changed from null to an empty string.


Surely a bug?



Michael Dick on 03/04/09 21:52, wrote:

Have you tried the following ?

<basic name="numericDisplay">
  <column name="NUMERIC_DISPLAY" nullable="false"/>
  <enumerated>STRING</enumerated>
</basic>

By default OpenJPA doesn't check for constraints on your columns. So if your 
mappings (or annotations) aren't consistent with the constraints in the 
database you can run into problems.

Alternatively you can configure OpenJPA to read the data from the database 
by adding this property:

<property name="openjpa.jdbc.SchemaFactory" value="native"/>

If you've tried either of those and we're still persisting a null value then 
it's definitely a bug.

-mike

On Fri, Apr 3, 2009 at 1:51 PM, Craig L Russell 
craig.russ...@sun.comwrote:



Hi Adam,

Sounds like a bug. Can you please file a JIRA?

Thanks,

Craig


On Apr 3, 2009, at 9:26 AM, Adam Hardy wrote:

 Just tested this with static enhancement against mysql and have the 
same
problem. OpenJPA is inserting a blank string into the not-null 
field when

the the enum variable is null.

Is this a bug or to be expected?

Regards
Adam

Adam Hardy on 01/04/09 17:38, wrote:


I have an entity bean with this property in v1.2.0 and H2 db:
basic name=numericDisplay
 column name=NUMERIC_DISPLAY/
 enumeratedSTRING/enumerated
/basic
I just discovered that I can set the property on the bean to null and
save it to a field in the DB with a not-null constraint. It saves a
zero-length string.
On reading back the row however OpenJPA throws this:
openjpa-1.2.0-r422266:683325 nonfatal general error
org.apache.openjpa.persistence.PersistenceException: No enum const 
class

org.permacode.patternrepo.PatternRepoNumericDisplay.
Surely this is inconsistent? Shouldn't I get an error when trying 
to do

the write first of all?
Admittedly I have yet to test it with pre-enhanced beans but I 
figured it

would be the same (or is that a completely different code base?)





Re: enum = null becomes blank string in db

2009-04-04 Thread Adam Hardy

Hi Michael,

I just re-read your message and saw the misunderstanding:

OpenJPA is not persisting null as you had thought, it is changing the null value 
to an empty string.


I also realised I wrote some logic into the enum which might be causing the 
problem, since I stripped the logic out and it now throws an 'ArgumentException: 
Field org.permacode.patternrepo.domain.entity.TestRun.timeFrame of 
org.permacode.patternrepo.domain.entity.test...@a82a10 can not be set to 
null value.' So that is good.


However I don't know why my logic in the enum should affect this. Here's what 
I've got:


public enum PatternRepoTimeFrame
{
    NOT_USED("NOT USED"),
    DAILY("DAILY");

    private String period;

    private PatternRepoTimeFrame(String newPeriod)
    {
        this.period = newPeriod;
    }

    public static PatternRepoTimeFrame get(String whichPeriod)
    {
        for (PatternRepoTimeFrame value : PatternRepoTimeFrame.values())
        {
            if (value.period.equals(whichPeriod))
            {
                return value;
            }
        }
        return NOT_USED;
    }
}
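Given that OpenJPA was observed to write an empty string for a null enum field, one defensive option is to make the lookup handle null and blank input explicitly. This is a sketch, not the thread's actual code; the class below is a trimmed stand-in for PatternRepoTimeFrame:

```java
public enum TimeFrame {
    NOT_USED("NOT USED"),
    DAILY("DAILY");

    private final String period;

    TimeFrame(String newPeriod) {
        this.period = newPeriod;
    }

    // Null-safe lookup: null and the empty string (which OpenJPA 1.2 was
    // seen to persist for a null enum field) both map to NOT_USED.
    public static TimeFrame get(String whichPeriod) {
        if (whichPeriod == null || whichPeriod.isEmpty()) {
            return NOT_USED;
        }
        for (TimeFrame value : values()) {
            if (value.period.equals(whichPeriod)) {
                return value;
            }
        }
        return NOT_USED;
    }

    public static void main(String[] args) {
        System.out.println(get(""));      // prints NOT_USED
        System.out.println(get("DAILY")); // prints DAILY
    }
}
```
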




Adam Hardy on 04/04/09 12:24, wrote:
I followed your suggestion and placed the 'nullable' attribute on the 
column, but it had no impact.


I'm having problems grasping the logic behind that nullable attribute.

If the column is not nullable and I also configure JPA with 
nullable=false, then surely OpenJPA should throw an exception before it 
tries to execute a prepared statement with that field set to null?


Failing that, OpenJPA should try to persist the entity with that column 
= null, but the database should throw a constraint violation exception.


However in my case here, I don't see either behaviour - what I am seeing 
is the value of the data changed from null to an empty string.


Surely a bug?



Michael Dick on 03/04/09 21:52, wrote:

Have you tried the following ?

<basic name="numericDisplay">
  <column name="NUMERIC_DISPLAY" nullable="false"/>
  <enumerated>STRING</enumerated>
</basic>

By default OpenJPA doesn't check for constraints on your columns. So 
if your

mappings (or annotations) aren't consistent with the constraints in the
database you can run into problems.

Alternatively you can configure OpenJPA to read the data from the 
database

by adding this property :
<property name="openjpa.jdbc.SchemaFactory" value="native"/>

If you've tried either of those and we're still persisting a null 
value then

it's definitely a bug.

-mike

On Fri, Apr 3, 2009 at 1:51 PM, Craig L Russell 
craig.russ...@sun.com wrote:



Hi Adam,

Sounds like a bug. Can you please file a JIRA?

Thanks,

Craig


On Apr 3, 2009, at 9:26 AM, Adam Hardy wrote:

 Just tested this with static enhancement against mysql and have the 
same
problem. OpenJPA is inserting a blank string into the not-null field 
when

the enum variable is null.

Is this a bug or to be expected?

Regards
Adam

Adam Hardy on 01/04/09 17:38, wrote:


I have an entity bean with this property in v1.2.0 and H2 db:
<basic name="numericDisplay">
  <column name="NUMERIC_DISPLAY"/>
  <enumerated>STRING</enumerated>
</basic>
I just discovered that I can set the property on the bean to null and
save it to a field in the DB with a not-null constraint. It saves a
zero-length string.
On reading back the row however OpenJPA throws this:
openjpa-1.2.0-r422266:683325 nonfatal general error
org.apache.openjpa.persistence.PersistenceException: No enum const 
class

org.permacode.patternrepo.PatternRepoNumericDisplay.
Surely this is inconsistent? Shouldn't I get an error when trying 
to do

the write first of all?
Admittedly I have yet to test it with pre-enhanced beans but I 
figured it

would be the same (or is that a completely different code base?)




Re: enum = null becomes blank string in db

2009-04-03 Thread Adam Hardy
Just tested this with static enhancement against mysql and have the same 
problem. OpenJPA is inserting a blank string into the not-null field when the 
enum variable is null.


Is this a bug or to be expected?

Regards
Adam

Adam Hardy on 01/04/09 17:38, wrote:

I have an entity bean with this property in v1.2.0 and H2 db:

<basic name="numericDisplay">
  <column name="NUMERIC_DISPLAY"/>
  <enumerated>STRING</enumerated>
</basic>

I just discovered that I can set the property on the bean to null and 
save it to a field in the DB with a not-null constraint. It saves a 
zero-length string.


On reading back the row however OpenJPA throws this:

openjpa-1.2.0-r422266:683325 nonfatal general error 
org.apache.openjpa.persistence.PersistenceException: No enum const class 
org.permacode.patternrepo.PatternRepoNumericDisplay.



Surely this is inconsistent? Shouldn't I get an error when trying to do 
the write first of all?


Admittedly I have yet to test it with pre-enhanced beans but I figured 
it would be the same (or is that a completely different code base?)




enumeration handling

2009-04-01 Thread Adam Hardy

I have an entity bean with this property in v1.2.0 and H2 db:

<basic name="numericDisplay">
  <column name="NUMERIC_DISPLAY"/>
  <enumerated>STRING</enumerated>
</basic>

I just discovered that I can set the property on the bean to null and save it to 
a field in the DB with a not-null constraint. It saves a zero-length string.


On reading back the row however OpenJPA throws this:

openjpa-1.2.0-r422266:683325 nonfatal general error 
org.apache.openjpa.persistence.PersistenceException: No enum const class 
org.permacode.patternrepo.PatternRepoNumericDisplay.



Surely this is inconsistent? Shouldn't I get an error when trying to do the 
write first of all?


Admittedly I have yet to test it with pre-enhanced beans but I figured it would 
be the same (or is that a completely different code base?)


Regards
Adam


usage of embeddable

2009-03-24 Thread Adam Hardy
I have a requirement to display some price data using different formats 
according to its origin, i.e. decimal places vary, or are shown as fractions 
e.g. 129 21/32


My idea is to use 'embeddable' to persist a field of type MyBigDecimalWrapper 
which knows how to display itself, i.e. it has a BigDecimal and an enum 
specifying display type.


However I then realised I would need to have an extra column for every price 
column.


This seems like overkill, considering that some persistent entities might have 
five price fields and the display type is the same for each.


There is no way around this, is there, because I can't map a column more than 
once?
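As a sketch of the wrapper idea (all names hypothetical; the JPA @Embeddable/@Enumerated annotations are omitted so the snippet stays self-contained), using 32nds for the fractional style mentioned above:

```java
import java.io.Serializable;
import java.math.BigDecimal;

// Hypothetical sketch of the embeddable price wrapper described above.
// In a real mapping this class would carry @Embeddable, with the enum field
// @Enumerated(EnumType.STRING); each embedded use would need its own
// display-type column, which is the overhead discussed in the thread.
public class MyBigDecimalWrapper implements Serializable {

    public enum DisplayType { DECIMAL, THIRTY_SECONDS }

    private final BigDecimal value;
    private final DisplayType displayType;

    public MyBigDecimalWrapper(BigDecimal value, DisplayType displayType) {
        this.value = value;
        this.displayType = displayType;
    }

    public String display() {
        if (displayType == DisplayType.THIRTY_SECONDS) {
            // Split into whole points and 32nds, e.g. 129.65625 -> "129 21/32"
            long whole = value.longValue();
            long numerator = value.subtract(BigDecimal.valueOf(whole))
                                  .multiply(BigDecimal.valueOf(32))
                                  .longValue();
            return whole + " " + numerator + "/32";
        }
        return value.toPlainString();
    }
}
```
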


Re: How do I persist timestamp in UTC timezone?

2009-03-17 Thread Adam Hardy
What I understand is that Java will store the date in milliseconds from 
1970-01-01 UTC. However your operating system and your database will probably 
affect what is stored in the database, and then JDBC will work out the correct 
millisecond value when it instantiates a date from your database.




Fay Wang on 17/03/09 21:15, wrote:

Hi Fazi,
   I found that by putting 


TimeZone.setDefault(TimeZone.getTimeZone("Etc/UTC"));

   to make your java app in UTC time zone (see below in testDate), openjpa will store the dates in UTC in the database. 


public void testDate(){
    TimeZone.setDefault(TimeZone.getTimeZone("Etc/UTC"));

    DateTest dt = new DateTest();
    dt.setId(id);
    dt.setCreatedTime(Calendar.getInstance(TimeZone.getTimeZone("UTC")));
    dt.setStartTime(new Date());



-Fay


--- On Tue, 3/17/09, fazi faisal.ans...@gmail.com wrote:


From: fazi faisal.ans...@gmail.com
Subject: How do I persist timestamp in UTC timezone?
To: users@openjpa.apache.org
Date: Tuesday, March 17, 2009, 9:56 AM
Hi 


I need to persist all timestamps in UTC timezone. I found
this documentation
which talks about using Calendar by setting the timezone
value to the
desired timezone
(http://openjpa.apache.org/builds/1.1.0/apache-openjpa-1.1.0/docs/manual/ref_guide_pc_scos.html#ref_guide_pc_calendar_timezone),
plus this JIRA
(https://issues.apache.org/jira/browse/OPENJPA-322) that
gives me some idea on how I can initialize Calendar to
insert/update time in
UTC but nothing has worked so far. The timestamps are
always entered and
read in DB local timezone (PDT). I noticed that the
Calendar timezone of the
retrieved object is set correctly to UTC, however, the time
is still in
local timezone (PDT). I also noticed that the retrieved
object has JPA
implementation of the Calendar object:
org.apache.openjpa.util.java$util$GregorianCalendar$proxy. 



We are using openJPA version 1.1.0. 


I am copying my artifacts below. The test class has two
timestamp fields,
one is for UTC timezone and the other one is for local
timezone to show that
both values are same. 


Please let me know if any other information can help
understand this problem
better. 

Any help on this matter will be greatly appreciated. 


-
TestDate table:
CREATE TABLE DATETEST (
ID VARCHAR(255) NOT NULL,
CREATEDTIME TIMESTAMP, 
STARTTIME TIMESTAMP)
 
--


--
JPA class:
--
package com.my.package.entity;

import java.io.Serializable;
import java.util.Calendar;
import java.util.Date;
import java.util.TimeZone;

import javax.persistence.*;


@Entity
@Table(name="DATETEST")
public class DateTest implements Serializable {

@Id
@Column(name="ID")
private String id;

@Temporal(TemporalType.TIMESTAMP)
@Column(name="CREATEDTIME")
private Calendar createdTime =
    Calendar.getInstance(TimeZone.getTimeZone("UTC"));

@Temporal(TemporalType.TIMESTAMP)
@Column(name="STARTTIME")
private Date startTime;

public Calendar getCreatedTime() {
return createdTime;
}

public void setCreatedTime(Calendar createdTime) {
this.createdTime = createdTime;
}

public Date getStartTime() {
return startTime;
}

public void setStartTime(Date startTime) {
this.startTime = startTime;
}

public String getId() {
return id;
}

public void setId(String id) {
this.id = id;
}
}
--

--
JUnit test:
--
    @Test
    public void testDate()
    {
        DateTest dt = new DateTest();
        dt.setId(id);
        dt.setCreatedTime(Calendar.getInstance(TimeZone.getTimeZone("UTC")));
        dt.setStartTime(new Date());

        try
        {
            //persist
        }
        catch (Exception e)
        {
            fail(e.getMessage());
        }

        // Check result
        DateTest returned = null;
        try
        {
            returned = //find by id;
            Calendar createdTimeC = returned.getCreatedTime();
            System.out.println("createdTime type: " + createdTimeC.getClass().getName());
            System.out.println("createdTime timezone: " + createdTimeC.getTimeZone());
            System.out.println("Created time: " + createdTimeC.getTime());
            System.out.println("Start time  : " + returned.getStartTime());
            System.out.println("Created time (millisecs): " + createdTimeC.getTimeInMillis());
            System.out.println("Start time (millisecs)  : " + returned.getStartTime().getTime());
        }
        catch 

Re: How do I persist timestamp in UTC timezone?

2009-03-17 Thread Adam Hardy
By the way, the timezone can also be stored in the timestamp column in the 
database, but I think not by default and you must specify TIMESTAMP WITH LOCAL 
TIMEZONE or similar, depending on your DB software.


Adam Hardy on 17/03/09 21:55, wrote:
What I understand is that Java will store the date in milliseconds from 
1970-01-01 UTC. However your operating system and your database will 
probably affect what is stored in the database, and then JDBC will work 
out the correct millisecond value when it instantiates a date from your 
database.




Fay Wang on 17/03/09 21:15, wrote:

Hi Fazi,
   I found that by putting
TimeZone.setDefault(TimeZone.getTimeZone("Etc/UTC"));

   to make your java app in UTC time zone (see below in testDate), 
openjpa will store the dates in UTC in the database.

public void testDate(){
    TimeZone.setDefault(TimeZone.getTimeZone("Etc/UTC"));

    DateTest dt = new DateTest();
    dt.setId(id);
    dt.setCreatedTime(Calendar.getInstance(TimeZone.getTimeZone("UTC")));
    dt.setStartTime(new Date());



-Fay


--- On Tue, 3/17/09, fazi faisal.ans...@gmail.com wrote:


From: fazi faisal.ans...@gmail.com
Subject: How do I persist timestamp in UTC timezone?
To: users@openjpa.apache.org
Date: Tuesday, March 17, 2009, 9:56 AM
Hi
I need to persist all timestamps in UTC timezone. I found
this documentation
which talks about using Calendar by setting the timezone
value to the
desired timezone
(http://openjpa.apache.org/builds/1.1.0/apache-openjpa-1.1.0/docs/manual/ref_guide_pc_scos.html#ref_guide_pc_calendar_timezone), 


plus this JIRA
(https://issues.apache.org/jira/browse/OPENJPA-322) that
gives me some idea on how I can initialize Calendar to
insert/update time in
UTC but nothing has worked so far. The timestamps are
always entered and
read in DB local timezone (PDT). I noticed that the
Calendar timezone of the
retrieved object is set correctly to UTC, however, the time
is still in
local timezone (PDT). I also noticed that the retrieved
object has JPA
implementation of the Calendar object:
org.apache.openjpa.util.java$util$GregorianCalendar$proxy.

We are using openJPA version 1.1.0.
I am copying my artifacts below. The test class has two
timestamp fields,
one is for UTC timezone and the other one is for local
timezone to show that
both values are same.
Please let me know if any other information can help
understand this problem
better.
Any help on this matter will be greatly appreciated.
-
TestDate table:
CREATE TABLE DATETEST (
ID VARCHAR(255) NOT NULL,
CREATEDTIME TIMESTAMP, STARTTIME 
TIMESTAMP)
 
--


--
JPA class:
--
package com.my.package.entity;

import java.io.Serializable;
import java.util.Calendar;
import java.util.Date;
import java.util.TimeZone;

import javax.persistence.*;


@Entity
@Table(name="DATETEST")
public class DateTest implements Serializable {

@Id
@Column(name="ID")
private String id;

@Temporal(TemporalType.TIMESTAMP)
@Column(name="CREATEDTIME")
private Calendar createdTime =
    Calendar.getInstance(TimeZone.getTimeZone("UTC"));

@Temporal(TemporalType.TIMESTAMP)
@Column(name="STARTTIME")
private Date startTime;

public Calendar getCreatedTime() {
return createdTime;
}

public void setCreatedTime(Calendar createdTime) {
this.createdTime = createdTime;
}

public Date getStartTime() {
return startTime;
}

public void setStartTime(Date startTime) {
this.startTime = startTime;
}

public String getId() {
return id;
}

public void setId(String id) {
this.id = id;
}
}
--

--
JUnit test:
--
    @Test
    public void testDate()
    {
        DateTest dt = new DateTest();
        dt.setId(id);
        dt.setCreatedTime(Calendar.getInstance(TimeZone.getTimeZone("UTC")));
        dt.setStartTime(new Date());

        try
        {
            //persist
        }
        catch (Exception e)
        {
            fail(e.getMessage());
        }

        // Check result
        DateTest returned = null;
        try
        {
            returned = //find by id;
            Calendar createdTimeC = returned.getCreatedTime();
            System.out.println("createdTime type: " + createdTimeC.getClass().getName());
            System.out.println("createdTime timezone: " + createdTimeC.getTimeZone());
            System.out.println("Created time: " + createdTimeC.getTime());
            System.out.println("Start time  : " + returned.getStartTime());
            System.out.println("Created time (millisecs): " + createdTimeC.getTimeInMillis

Re: How do I persist timestamp in UTC timezone?

2009-03-17 Thread Adam Hardy

Hello Fay,

no reason to worry, it is logical that the time saved to the database depends on 
the default time zone in Java. If you don't set the default time zone, it uses 
the time in your operating system.


I did say that the database field TIMESTAMP default is without time zone - you 
can specify that it does save the timezone, but that is an optional extra and 
database vendor specific (I think).


The issue here is that Java times and dates are time-zone-independent, because 
they are held as milliseconds past 1970-01-01 GMT/UTC


As Paul said, your database stores time-zone-dependent values (i.e. times 
without a timezone) and the time zone that they are dependent on is the time 
zone of your operating system, because JDBC uses that to work out what the 
millisecond past 1970 value is. So if you change your OS timezone, you change 
all the timestamps in your database - as far as Java is concerned.


I figure that this is not optimal, since you have to be careful if you move a 
database from a host in one timezone to a host in another timezone, but how 
often does that happen?
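To make the epoch point concrete, here is a small self-contained sketch (the class name is mine, not from the thread) showing that changing the default time zone changes how a Date prints, but never its millisecond value:

```java
import java.util.Date;
import java.util.TimeZone;

public class EpochDemo {
    public static void main(String[] args) {
        Date d = new Date(0L); // 1970-01-01T00:00:00 UTC, regardless of zone

        TimeZone.setDefault(TimeZone.getTimeZone("Etc/UTC"));
        String inUtc = d.toString();   // formatted for UTC
        long millisUtc = d.getTime();

        TimeZone.setDefault(TimeZone.getTimeZone("America/Los_Angeles"));
        String inPdt = d.toString();   // formatted for US Pacific time
        long millisPdt = d.getTime();

        // The formatted strings differ, but the underlying instant does not:
        System.out.println(inUtc);
        System.out.println(inPdt);
        System.out.println(millisUtc == millisPdt); // true
    }
}
```

This is why a round trip through JDBC can silently change the instant: the driver serializes the wall-clock rendering for the default zone, not the millisecond value itself.
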



Fay Wang on 17/03/09 23:41, wrote:
To my knowledge, as far as DB2 is concerned, DB2 v9 and below does not have timestamp with time zone data type. The value stored in the DB2 timestamp column is without time zone information. It is up to the application to determine the time zone. For example, in my db2 table, I have the following value 

   2009-03-17-22.05.37.569000 


stored in my timestamp column.

If I set the default time zone to UTC in my application, I get the value  back as: 


Created time: Tue Mar 17 22:05:37 UTC 2009

If I did not set the default time zone to UTC, I get this value:

Created time: Tue Mar 17 22:05:37 PDT 2009

A new data type Timestamp with time zone may be introduced in the next DB2 
release, but currently there is no way to store the time zone information in the 
timestamp column. DB2 experts, please correct me if I am wrong.

Having said that, with this statement,  
	dt.setCreatedTime(Calendar.getInstance(TimeZone.getTimeZone("Etc/UTC")));


when application sets default time zone to UTC, the timestamp value in the database becomes 2009-03-17-23.26.53.32. Without setting the default time zone, the timestamp value in the database is 2009-03-17-16.23.27.494000. Let me try a simple POJO test case to see if this is an openjpa problem or not. 



--- On Tue, 3/17/09, Paul Copeland t...@jotobjects.com wrote:


From: Paul Copeland t...@jotobjects.com
Subject: Re: How do I persist timestamp in UTC timezone?
To: users@openjpa.apache.org
Date: Tuesday, March 17, 2009, 3:04 PM
Of course java.util.Date is already measured in milliseconds
UTC without regard to TimeZone.  So it may seem that you are
converting your Date objects to a different timezone, but
that's not the case.  This is why you can use
Calendar.compareTo() with objects in different TimeZones.

By definition - new Date() is the same thing as new
Date(System.currentTimeMillis()) no matter what the default
TimeZone!

The link cited by Fazi implies that you have to store the
TimeZone along with the date if you want to load the date
(milliseconds) back into a Calendar object representing that
TimeZone.

If you change the default timezone to UTC (or Moscow, etc.)
then all the other Calendar objects that are meant to
represent the default Locale will be wrong!

- Paul

On 3/17/2009 2:15 PM, Fay Wang wrote:

Hi Fazi,
   I found that by putting

TimeZone.setDefault(TimeZone.getTimeZone("Etc/UTC"));

   to make your java app in UTC time zone (see below in testDate), openjpa will store the dates in UTC in the database.

public void testDate(){
    TimeZone.setDefault(TimeZone.getTimeZone("Etc/UTC"));

    DateTest dt = new DateTest();
    dt.setId(id);
    dt.setCreatedTime(Calendar.getInstance(TimeZone.getTimeZone("UTC")));
    dt.setStartTime(new Date());



-Fay


--- On Tue, 3/17/09, fazi

faisal.ans...@gmail.com wrote:
  

From: fazi faisal.ans...@gmail.com
Subject: How do I persist timestamp in UTC

timezone?

To: users@openjpa.apache.org
Date: Tuesday, March 17, 2009, 9:56 AM
Hi 

I need to persist all timestamps in UTC timezone. I found this documentation
which talks about using Calendar by setting the timezone value to the desired timezone
(http://openjpa.apache.org/builds/1.1.0/apache-openjpa-1.1.0/docs/manual/ref_guide_pc_scos.html#ref_guide_pc_calendar_timezone),
plus this JIRA (https://issues.apache.org/jira/browse/OPENJPA-322) that
gives me some idea on how I can initialize Calendar to insert/update time in
UTC but nothing has worked so far. The timestamps are always entered and
read in DB local timezone (PDT). I noticed that the Calendar timezone of the
retrieved object is set correctly to UTC, however, the time is still in
local timezone (PDT). I also noticed that the retrieved object has JPA
implementation of the Calendar object:


Re: OpenJPA Maven Plugin

2009-03-13 Thread Adam Hardy

Hi Mark,
didn't realise you wanted my feedback - your email made me think too much about 
skiing (or not skiing, to be precise) ;)


I would definitely consider Randy's suggestion:


I believe that we could add a new tag/mojo list member called 
additionalClasspathElements. AFAICT, that is the conventional approach to 
this. We could also add a mojo member for testClasspathElements and use those if 
available... i.e. in the process-test-classes phase.



Regarding the other classes beyond PCEnhancer, sorry I have no knowledge of 
them.

Regards
Adam
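For reference, a typical configuration of the plugin under discussion looks roughly like this (the version, the includes path, and the commented-out additionalClasspathElements member are illustrative; the latter was only Randy's proposal at this point, not a released feature):

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>openjpa-maven-plugin</artifactId>
  <version>1.1-SNAPSHOT</version>
  <configuration>
    <includes>org/permacode/atomic/domain/entity/*.class</includes>
    <!-- proposed, not yet released: extra entries for the enhancer classpath -->
    <!-- <additionalClasspathElements>...</additionalClasspathElements> -->
  </configuration>
  <executions>
    <execution>
      <phase>process-classes</phase>
      <goals>
        <goal>enhance</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```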

Mark Struberg on 13/03/09 14:06, wrote:

Randy, Adam, Kevin!

Any info on this?
We are now at 95% but I personally would prefer to have the 
openjpa-maven-plugin just working out of the box for (almost) all situations. 
But I need your input what is still missing...

txs and LieGrue,
strub

--- Mark Struberg strub...@yahoo.de schrieb am Mo, 9.3.2009:


Von: Mark Struberg strub...@yahoo.de
Betreff: Re: OpenJPA Maven Plugin
An: users@openjpa.apache.org
Datum: Montag, 9. März 2009, 17:50

Hi!

I'm back from skiing and catching up the current status.

So imho the following things are due to be done:

1.) add a config property to specify a specific
persistence.xml location and name.

2.) add a config property to specify one specific
PersistenceUnit in the persistence.xml from 1.)

3.) add a way to also resolve the test scope dependencies.
Please note that simply adding the test classpath in
target/test-classes may be not enough since there may be
missing dependencies left open in this case.

All those functions should get implemented for
openjpa:enhance, openjpa:sql and openjpa:schema. I already
know the property for the PCEnhancer but what about the
respective parameters for the
org.apache.openjpa.jdbc.meta.MappingTool?


txs and LieGrue,
strub


--- Randy Watler wat...@wispertel.net
schrieb am Mi, 4.3.2009:


Von: Randy Watler wat...@wispertel.net
Betreff: Re: OpenJPA Maven Plugin
An: users@openjpa.apache.org
Datum: Mittwoch, 4. März 2009, 18:32
Sorry... I meant Mark!

Mark, let me know if you want me to submit a patch.

Randy

Randy Watler wrote:

Adam,

After rereading Matt's question and your original problem report, I believe
you are asking how to get the target/test-classes directory within a single
project added to the class path used to invoke the enhancer. By default,
the existing target/classes directory is included, but not
target/test-classes.

I believe that we could add a new tag/mojo list member called
additionalClasspathElements. AFAICT, that is the conventional approach to
this. We could also add a mojo member for testClasspathElements and use
those if available... i.e. in the process-test-classes phase.

Matt... do you want me to submit a patch with either change, or do you want
to handle this?

Randy

Randy Watler wrote:

Adam,

I stand corrected. Let me take a look for

you.

Randy

Adam Hardy wrote:

Randy Watler on 04/03/09 14:28, wrote:

Just to clarify this is also using version 1.0 of the plugin? I don't think
classes is supported if so. Did you try this?

<includes>${build.testOutputDirectory}/org/permacode/atomic/domain/entity/*.class</includes>

Just checking before I diagnose for you,

Yes, I mean 1.1-SNAPSHOT actually.

Is this not the correct documentation?

http://mojo.codehaus.org/openjpa-maven-plugin/enhance-mojo.html















  





Re: Time handling in Dates

2009-03-10 Thread Adam Hardy


logged in Jira:

https://issues.apache.org/jira/browse/OPENJPA-971

Adam Hardy on 09/03/09 13:25, wrote:
I have what looks like a bug in time handling, that probably stems from 
the database Dictionaries. My testing shows it occurs with MySQL and 
Derby, but not for PostgreSQL, H2 or Hypersonic (using latest GA 
versions of Java, OpenJPA and JDBC drivers).


If I save an entity with this set-up, I see incorrect handling of dates:

class MyEntity
{
...
private java.util.Date opening;
...
}

and:

<entity>
  <attributes>
    <basic name="opening">
      <column name="OPENING"/>
      <temporal>TIME</temporal>
    </basic>
    ...

and:

create table MY_ENTITY (
...
OPENING time not null,
...

In my test I assign a new value to 'opening', save the entity and 
retrieve it from the database again. Then I compare the date on the 
retrieved entity against the value originally assigned, and I get this 
result:


with Derby: original Calendar.HOUR_OF_DAY == 9, retrieved value = 10

(probably due to Daylight Saving Time handling)

with MySQL: original Calendar.YEAR == 1970, retrieved value = 1969

(could be declared irrelevant since the value comes from a Time database 
type and the Java comparisons should blank out or ignore non-time Date 
values).


I couldn't find this in JIRA but thought I'd bring it up here before 
logging it. I can post a unit test showing the bug. Shall I do that now?


Regards
Adam





Time handling in Dates

2009-03-09 Thread Adam Hardy
I have what looks like a bug in time handling, that probably stems from the 
database Dictionaries. My testing shows it occurs with MySQL and Derby, but not 
for PostgreSQL, H2 or Hypersonic (using latest GA versions of Java, OpenJPA and 
JDBC drivers).


If I save an entity with this set-up, I see incorrect handling of dates:

class MyEntity
{
...
private java.util.Date opening;
...
}

and:

<entity>
  <attributes>
    <basic name="opening">
      <column name="OPENING"/>
      <temporal>TIME</temporal>
    </basic>
    ...

and:

create table MY_ENTITY (
...
OPENING time not null,
...

In my test I assign a new value to 'opening', save the entity and retrieve it 
from the database again. Then I compare the date on the retrieved entity against 
the value originally assigned, and I get this result:


with Derby: original Calendar.HOUR_OF_DAY == 9, retrieved value = 10

(probably due to Daylight Saving Time handling)

with MySQL: original Calendar.YEAR == 1970, retrieved value = 1969

(could be declared irrelevant since the value comes from a Time database type 
and the Java comparisons should blank out or ignore non-time Date values).


I couldn't find this in JIRA but thought I'd bring it up here before logging it. 
I can post a unit test showing the bug. Shall I do that now?


Regards
Adam
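One way to make such comparisons robust in tests is to normalize both java.util.Date values onto the epoch date before comparing, so only the time-of-day fields matter. This is a sketch under my own naming; it sidesteps the MySQL 1970-vs-1969 year difference, but does not address the Derby DST hour shift:

```java
import java.util.Calendar;
import java.util.Date;

public class TimeOnly {
    // Re-anchor a Date's wall-clock time onto 1970-01-01 so that values
    // read back from a TIME column compare equal to the originals even if
    // the driver reports a different year for the date portion.
    public static Date timePortion(Date d) {
        Calendar cal = Calendar.getInstance();
        cal.setTime(d);
        cal.set(Calendar.YEAR, 1970);
        cal.set(Calendar.MONTH, Calendar.JANUARY);
        cal.set(Calendar.DAY_OF_MONTH, 1);
        cal.set(Calendar.MILLISECOND, 0);
        return cal.getTime();
    }
}
```
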


Re: Quick question re date, time, timestamp or java.util.Date/Calendar

2009-03-05 Thread Adam Hardy
Actually the JPA spec (1.0 and 2.0) has a knock-on effect concerning the use of 
entity beans in the front-end.


Since I must use either java.util.Date or Calendar as the type for my temporal 
properties, I can't rely on the property type to distinguish between times and 
dates when it comes to displaying the values or for parsing incoming HTTP 
parameters.


This gives the programmer extra coding burden in the front-end, and I can't see 
any counter-balancing advantage in the persistence layer from banning the use of 
java.sql.Date and Time.


Have I missed something or is this an improvement that should be put into JPA 2 
before it goes final?




Adam Hardy on 04/03/09 23:54, wrote:

Thanks Mike.

Looks like the same wording in JPA 2.0 too.

Regards Adam

Michael Dick on 04/03/09 19:39, wrote:

Hi Adam,

Looks like we're less stringent about the @Temporal annotation. I'd have to
look closer to see if that's the case.

Regarding the JPA 2.0 spec you can find a copy of the public review draft 
here http://jcp.org/aboutJava/communityprocess/pr/jsr317/index.html


-mike

On Wed, Mar 4, 2009 at 10:57 AM, Adam Hardy 
adam@cyberspaceroad.com wrote:



I converted my project over from java.util.Date to java.sql.Timestamp for
 entity fields after I figured that would give me more room to maneuver
with a new requirement for time fields.

It went smoothly with OpenJPA and made the MVC layer's type converter 
code a cinch to refactor.


However I then ran my tests under Hibernate JPA and Toplink Essentials,
and both complained bitterly that I was violating the spec and threw 
exceptions.


Looking through the JPA 1 spec, I see where I have transgressed (9.1.20):


The Temporal annotation must be specified for persistent fields or 
properties of type java.util.Date and java.util.Calendar. It may only be 
specified for fields or properties of these types.


Is the OpenJPA interpretation deliberately including Timestamp or is 
that considered an OpenJPA feature?


Is there any change in JPA 2?

Also, can anyone give a URL for the JPA 2 spec pdf? Google turned up 
nothing.
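In the XML mapping style used elsewhere in these threads, the spec rule above reads roughly like this (field and column names hypothetical): a temporal element is required for java.util.Date/Calendar properties and, strictly, must not appear for the java.sql types:

```xml
<!-- java.util.Date or java.util.Calendar property: <temporal> is required -->
<basic name="opening">
  <column name="OPENING"/>
  <temporal>TIMESTAMP</temporal>
</basic>

<!-- java.sql.Timestamp property: already a concrete JDBC type, so per
     JPA 1.0 section 9.1.20 no temporal element should be applied -->
<basic name="createdAt">
  <column name="CREATED_AT"/>
</basic>
```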




Re: Quick question re date, time, timestamp or java.util.Date/Calendar

2009-03-05 Thread Adam Hardy

Hi Craig,

thanks for shedding some light on the issue.

It did seem like a severe omission since Dates and Calendars can't do 
nanoseconds.

Regards
Adam

Craig L Russell on 05/03/09 15:33, wrote:

Hi Adam,

I think there is a misunderstanding. From the spec, 2.2:
The persistent fields or properties of an entity may be of the following 
types: Java primitive types;
java.lang.String; other Java serializable types (including wrappers of 
the primitive types,

java.math.BigInteger, java.math.BigDecimal, java.util.Date,
java.util.Calendar[5], java.sql.Date, java.sql.Time, java.sql.Timestamp,
byte[], Byte[], char[], Character[], and user-defined types that 
implement the Serial-
izable interface); enums; entity types; collections of entity types; 
embeddable classes (see Section

2.5); collections of basic and embeddable types (see Section 2.6).

So there is no problem using a java.sql Time, Date, or Timestamp as a 
persistent field or property type.


The @Temporal annotation was introduced so the provider would be able to 
figure out the correct methods to persist java.util.Date and 
java.util.Calendar, since these have no standard representation in the 
database.


Your code might work if you simply omit the @Temporal annotation entirely.

Craig

On Mar 5, 2009, at 4:39 AM, Adam Hardy wrote:

Actually the JPA spec (1.0 and 2.0) has a knock-on effect concerning 
the use of entity beans in the front-end.


Since I must use either java.util.Date or Calendar as the type for my 
temporal properties, I can't rely on the property type to distinguish 
between times and dates when it comes to displaying the values or for 
parsing incoming HTTP parameters.


This gives the programmer extra coding burden in the front-end, and I 
can't see any counter-balancing advantage in the persistence layer 
from banning the use of java.sql.Date and Time.


Have I missed something or is this an improvement that should be put 
into JPA 2 before it goes final?




Adam Hardy on 04/03/09 23:54, wrote:

Thanks Mike.
Looks like the same wording in JPA 2.0 too.
Regards Adam
Michael Dick on 04/03/09 19:39, wrote:

Hi Adam,
Looks like we're less stringent about the @Temporal annotation. I'd have
to look closer to see if that's the case.
Regarding the JPA 2.0 spec you can find a copy of the public review 
draft here 
http://jcp.org/aboutJava/communityprocess/pr/jsr317/index.html

-mike
On Wed, Mar 4, 2009 at 10:57 AM, Adam Hardy adam@cyberspaceroad.com wrote:
I converted my project over from java.util.Date to 
java.sql.Timestamp for

entity fields after I figured that would give me more room to maneuver
with a new requirement for time fields.
It went smoothly with OpenJPA and made the MVC layer's type 
converter code a cinch to refactor.
However I then ran my tests under Hibernate JPA and Toplink 
Essentials,
and both complained bitterly that I was violating the spec and 
threw exceptions.
Looking through the JPA 1 spec, I see where I have transgressed 
(9.1.20):
The Temporal annotation must be specified for persistent fields or 
properties of type java.util.Date and java.util.Calendar. It may 
only be specified for fields or properties of these types.
Is the OpenJPA interpretation deliberately including Timestamp, or is
that considered an OpenJPA feature?

Is there any change in JPA 2?
Also, can anyone give a URL for the JPA 2 spec pdf? Google turned 
up nothing.




Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:craig.russ...@sun.com
P.S. A good JDO? O, Gasp!





OpenJPA Maven Plugin

2009-03-04 Thread Adam Hardy
I now use the openjpa-maven-plugin to enhance my persistence-enabled classes, 
but I have a couple of projects which rely on a superclass, which needs 
enhancing and testing.


I put this in a separate project and wrote some unit tests for it, which needed
a few real entity beans to demonstrate parent-child relationships etc. So I
coded up a couple of entity beans, for testing only, in this project.


I don't want these test-only entity beans in my jar, so I put them in the 
src/test/java directory, and this caused me some confusion when configuring 
openjpa-maven-plugin.


I have a couple of issues getting this to work. From the debug logging, I can
see that openjpa-maven-plugin is not including the test directory in the
classpath, despite this config:


<execution>
  <phase>process-test-classes</phase>
  <id>enhanceTestEntities</id>
  <goals>
    <goal>enhance</goal>
  </goals>
  <configuration>
    <classes>
      ${build.testOutputDirectory}/org/permacode/atomic/domain/entity
    </classes>
    <outputDirectory>${build.testOutputDirectory}</outputDirectory>
    <toolProperties>
      <property>
        <name>properties</name>
        <value>
          ${build.testOutputDirectory}/META-INF/persistence.xml#OpenJpaTest
        </value>
      </property>
    </toolProperties>
  </configuration>
</execution>

So it fails with a ClassNotFoundException - is there anything I can configure to 
get the test dir into the classpath?


Thanks
Adam


Re: OpenJPA Maven Plugin

2009-03-04 Thread Adam Hardy

Randy Watler on 04/03/09 14:28, wrote:
Just to clarify, is this also using version 1.0 of the plugin? I don't
think 'classes' is supported if so. Did you try this?


<includes>${build.testOutputDirectory}/org/permacode/atomic/domain/entity/*.class</includes>



Just checking before I diagnose for you,


Yes, I mean 1.1-SNAPSHOT actually.

Is this not the correct documentation?

http://mojo.codehaus.org/openjpa-maven-plugin/enhance-mojo.html



Re: OpenJPA Maven Plugin

2009-03-04 Thread Adam Hardy

Hi Mark,

not quite.

I have module A with one Entity in src/main/java and several test entities which 
extend it (to allow testing) in src/test/java.


I have module B, which relies on module A's normal (non-test) jar. I don't need
the test entities outside module A.


Therefore I created 2 executions - one for the main Entity and one for the test 
entities.


openjpa-maven-plugin is not including the test directory in the classpath.  This 
is the execution for the test entities (the other works fine):


<execution>
  <phase>process-test-classes</phase>
  <id>enhanceTestEntities</id>
  <goals>
    <goal>enhance</goal>
  </goals>
  <configuration>
    <classes>
      ${build.testOutputDirectory}/org/permacode/atomic/domain/entity
    </classes>
    <outputDirectory>${build.testOutputDirectory}</outputDirectory>
    <toolProperties>
      <property>
        <name>properties</name>
        <value>
          ${build.testOutputDirectory}/META-INF/persistence.xml#OpenJpaTest
        </value>
      </property>
    </toolProperties>
  </configuration>
</execution>

and it fails with a ClassNotFoundException because of the classpath omission of 
target/test-classes.




Mark Struberg on 04/03/09 15:24, wrote:
Adam, 


If I understood your problem correctly, then you have a

.) module A which contains @Entities in src/test/java
.) module B which also has @Entities in src/test/java and relies on the
@Entities from module A

Is this the scenario you have?

If so, you need to tell module A that it should package and add the test-jar as 
attached artifact. Simply add this to the pom.xml of module A:


<plugin>
  <artifactId>maven-jar-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>test-jar</goal>
      </goals>
    </execution>
  </executions>
</plugin>


after a 
$ mvn clean install

you can add the dependency to the test sources jar in the pom.xml of module B:


<dependency>
  <groupId>org.apache.projectX</groupId>
  <artifactId>moduleA</artifactId>
  <version>1.0-SNAPSHOT</version>
  <classifier>tests</classifier>
</dependency>





Re: OpenJPA Maven Plugin

2009-03-04 Thread Adam Hardy
No problem! I'm just glad someone's willing to look at it. I answered Mark 
Struberg's message - although I didn't say much more than I've said already.


Randy Watler on 04/03/09 15:31, wrote:

Adam,

I stand corrected. Let me take a look for you.

Randy

Adam Hardy wrote:

Randy Watler on 04/03/09 14:28, wrote:
Just to clarify, is this also using version 1.0 of the plugin? I
don't think 'classes' is supported if so. Did you try this?


<includes>${build.testOutputDirectory}/org/permacode/atomic/domain/entity/*.class</includes>



Just checking before I diagnose for you,


Yes, I mean 1.1-SNAPSHOT actually.

Is this not the correct documentation?

http://mojo.codehaus.org/openjpa-maven-plugin/enhance-mojo.html









Quick question re date, time, timestamp or java.util.Date/Calendar

2009-03-04 Thread Adam Hardy


I converted my project over from java.util.Date to java.sql.Timestamp for entity 
fields after I figured that would give me more room to maneuver with a new 
requirement for time fields.


It went smoothly with OpenJPA and made the MVC layer's type converter code a 
cinch to refactor.


However I then ran my tests under Hibernate JPA and Toplink Essentials, and both 
complained bitterly that I was violating the spec and threw exceptions.


Looking through the JPA 1 spec, I see where I have transgressed (9.1.20):

The Temporal annotation must be specified for persistent fields or properties 
of type java.util.Date and java.util.Calendar. It may only be specified for 
fields or properties of these types.
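For anyone mapping via XML rather than annotations, the orm.xml equivalent of @Temporal would look like this (a minimal sketch only; the entity class is taken from the mapping files listed in the persistence.xml, and the attribute name "created" is illustrative, not from the thread):

```xml
<entity class="org.permacode.patternrepo.domain.entity.Fill">
  <attributes>
    <!-- Required by the spec for java.util.Date / java.util.Calendar fields -->
    <basic name="created">
      <temporal>TIMESTAMP</temporal>
    </basic>
  </attributes>
</entity>
```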


Is the OpenJPA interpretation deliberately including Timestamp, or is that
considered an OpenJPA feature?


Is there any change in JPA 2?

Also, can anyone give a URL for the JPA 2 spec pdf? Google turned up nothing.


Thanks
Adam


Re: OpenJPA Maven Plugin

2009-03-04 Thread Adam Hardy

Maven won't let me override that annotation in my execution config then?

Mark Struberg on 04/03/09 16:45, wrote:

Hmm, this is because the OpenJpaEnhancerMojo has

@requiresDependencyResolution compile

LieGrue,
strub

--- Adam Hardy adam@cyberspaceroad.com wrote on Wed, 4 Mar 2009:

From: Adam Hardy adam@cyberspaceroad.com
Subject: Re: OpenJPA Maven Plugin
To: users@openjpa.apache.org
Date: Wednesday, 4 March 2009, 17:36
Hi Mark,

not quite.

I have module A with one Entity in src/main/java and several test entities
which extend it (to allow testing) in src/test/java.

I have module B, which relies on module A's normal (non-test) jar. I don't
need the test entities outside module A.

Therefore I created 2 executions - one for the main Entity and one for the
test entities.

openjpa-maven-plugin is not including the test directory in the classpath.
This is the execution for the test entities (the other works fine):

<execution>
  <phase>process-test-classes</phase>
  <id>enhanceTestEntities</id>
  <goals>
    <goal>enhance</goal>
  </goals>
  <configuration>
    <classes>
      ${build.testOutputDirectory}/org/permacode/atomic/domain/entity
    </classes>
    <outputDirectory>${build.testOutputDirectory}</outputDirectory>
    <toolProperties>
      <property>
        <name>properties</name>
        <value>
          ${build.testOutputDirectory}/META-INF/persistence.xml#OpenJpaTest
        </value>
      </property>
    </toolProperties>
  </configuration>
</execution>

and it fails with a ClassNotFoundException because of the classpath omission
of target/test-classes.


Mark Struberg on 04/03/09 15:24, wrote:
Adam,
If I understood your problem correctly, then you have a

.) module A which contains @Entities in src/test/java
.) module B which also has @Entities in src/test/java and relies on the
@Entities from module A

Is this the scenario you have?

If so, you need to tell module A that it should package and add the test-jar
as attached artifact. Simply add this to the pom.xml of module A:

<plugin>
  <artifactId>maven-jar-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>test-jar</goal>
      </goals>
    </execution>
  </executions>
</plugin>

after a
$ mvn clean install

you can add the dependency to the test sources jar in the pom.xml of module B:

<dependency>
  <groupId>org.apache.projectX</groupId>
  <artifactId>moduleA</artifactId>
  <version>1.0-SNAPSHOT</version>
  <classifier>tests</classifier>
</dependency>




Re: Quick question re date, time, timestamp or java.util.Date/Calendar

2009-03-04 Thread Adam Hardy

Thanks Mike.

Looks like the same wording in JPA 2.0 too.

Regards
Adam

Michael Dick on 04/03/09 19:39, wrote:

Hi Adam,

Looks like we're less stringent about the @Temporal annotation. I'd have to
look closer to see if that's the case.

Regarding the JPA 2.0 spec you can find a copy of the public review draft
here http://jcp.org/aboutJava/communityprocess/pr/jsr317/index.html

-mike

On Wed, Mar 4, 2009 at 10:57 AM, Adam Hardy adam@cyberspaceroad.com wrote:


I converted my project over from java.util.Date to java.sql.Timestamp for
entity fields after I figured that would give me more room to maneuver with
a new requirement for time fields.

It went smoothly with OpenJPA and made the MVC layer's type converter code
a cinch to refactor.

However I then ran my tests under Hibernate JPA and Toplink Essentials, and
both complained bitterly that I was violating the spec and threw exceptions.

Looking through the JPA 1 spec, I see where I have transgressed (9.1.20):

The Temporal annotation must be specified for persistent fields or
properties of type java.util.Date and java.util.Calendar. It may only be
specified for fields or properties of these types.

Is the OpenJPA interpretation deliberately including Timestamp, or is that
considered an OpenJPA feature?

Is there any change in JPA 2?

Also, can anyone give a URL for the JPA 2 spec pdf? Google turned up
nothing.


Thanks
Adam







Re: [ANN] OpenJPA Maven Plugin 1.0 Released

2009-03-03 Thread Adam Hardy
)
at 
org.codehaus.mojo.openjpa.OpenJpaEnhancerMojo.enhance(OpenJpaEnhancerMojo.java:125)




So basically you need to get that persistence-unit name to the PCEnhancer or it 
looks like it processes all the persistence-units.


Thanks
Adam

Mark Struberg on 28/02/09 22:03, wrote:

Hi!

@Randy, txs 4 the patch, good catch.

As I also commented in JIRA: A few points are still open

1.) using a different configuration XML (other than persistence.xml) should
be provided as its own plugin attribute (and not in the 'additional
properties' list).

2.) using a different configuration XML must also work for the openjpa:sql
and openjpa:schema mojos.

I will implement this after I'm back from vacation next week.

@Adam: Can you please checkout the latest from SVN and try it with this
version?

svn co https://svn.codehaus.org/mojo/trunk/mojo/openjpa-maven-plugin

Do you have other wishes? Then we can plan a 1.1 for around the end of March.

txs and LieGrue, strub



--- Randy Watler wat...@wispertel.net wrote on Sat, 28 Feb 2009:

From: Randy Watler wat...@wispertel.net
Subject: Re: [ANN] OpenJPA Maven Plugin 1.0 Released
To: users@openjpa.apache.org
Date: Saturday, 28 February 2009, 5:38

Mark/Adam,

I have filed a JIRA ticket with a patch for this issue: 
http://jira.codehaus.org/browse/MOJO-1309


I am not sure why Adam is specifying the 'properties' property, but we use
it to ensure that enhancement processes a specific project persistence.xml
file... there are multiple persistence.xml files on the classpath. I feel
this is a fairly important bug to fix since I think it is generally a
regression from the 1.0-alpha version.

HTH,

Randy Watler Apache Portals Jetspeed2 Committer

Mark Struberg wrote:

Adam,

From what I've seen, the 'properties' property specifies a file which
contains the configuration. So I'm honestly a bit confused about the value:
META-INF/persistence.xml#OpenJpaTest

Can you please tell me what you would like to achieve and how your config
file really looks (including the real filename)?

Please note that the ant task imho calls the PCEnhancer only via spawning
its own java cmd, so any error occurring may not reach your eyes there.

txs, strub

--- Mark Struberg strub...@yahoo.de wrote on Fri, 27 Feb 2009:

From: Mark Struberg strub...@yahoo.de
Subject: Re: [ANN] OpenJPA Maven Plugin 1.0 Released
To: users@openjpa.apache.org
Date: Friday, 27 February 2009, 0:11
Thanks for the response Adam! I will try to create an integration test
based on your info.

LieGrue, strub

--- Adam Hardy adam@cyberspaceroad.com wrote on Thu, 26 Feb 2009:

From: Adam Hardy adam@cyberspaceroad.com
Subject: Re: [ANN] OpenJPA Maven Plugin 1.0 Released
To: users@openjpa.apache.org
Date: Thursday, 26 February 2009, 0:59
Mark Struberg on 25/02/09 09:49, wrote:

The Maven team is pleased to announce the release of the
openjpa-maven-plugin-1.0

The plugin documentation can be found here:
http://mojo.codehaus.org/openjpa-maven-plugin/


I gave it a test run and I have an error. After looking through the docs,
the source and the debug output, I still can't figure it out. I get this
error:

org.apache.openjpa.lib.util.ParseException:
org.apache.openjpa.conf.openjpaconfigurationi...@52dd4e95.properties =
META-INF/persistence.xml#OpenJpaTest
    at org.apache.openjpa.lib.util.Options.setInto(Options.java:237)
    at org.apache.openjpa.lib.util.Options.setInto(Options.java:181)
    at org.apache.openjpa.lib.conf.Configurations.populateConfiguration(Configurations.java:357)
    at org.apache.openjpa.enhance.PCEnhancer.run(PCEnhancer.java:4438)
    at org.apache.openjpa.enhance.PCEnhancer$1.run(PCEnhancer.java:4409)
    at org.apache.openjpa.lib.conf.Configurations.launchRunnable(Configurations.java:708)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.openjpa.lib.util.Options.invoke(Options.java:401)
    at org.apache.openjpa.lib.util.Options.setInto(Options.java:234)
    ... 27 more
Caused by: java.util.MissingResourceException:
META-INF/persistence.xml#OpenJpaTest#null
    at org.apache.openjpa.lib.conf.ProductDerivations.load(ProductDerivations.java:272)
    at org.apache.openjpa.lib.conf.ConfigurationImpl.setProperties(ConfigurationImpl.java:762)
    ... 33 more

My config currently uses antrun, so I know I don't have any major issues
here. This is in my pom:

<plugin>
  <artifactId>maven-war-plugin</artifactId>
  <configuration>
    <warName>${project.artifactId}</warName>
  </configuration>
</plugin>
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>openjpa-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>JPA Enhance</id>
      <phase>process-classes</phase>
      <goals>
        <goal>enhance</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <toolProperties>
      <property>
        <name>properties</name>
        <value>META-INF/persistence.xml#OpenJpaTest</value>
      </property>
    </toolProperties>
    <classes>
      ${build.outputDirectory}/org/permacode/patternrepo/domain/entity/
    </classes>
  </configuration>
</plugin>
Re: [ANN] OpenJPA Maven Plugin 1.0 Released

2009-03-03 Thread Adam Hardy

Randy Watler on 03/03/09 17:03, wrote:
If you have a sample persistence.xml file, that might help as well... 
perhaps it is ignoring your persistence.xml for some reason?


False alarm!

I have 2 projects and I copied the config over from one pom to the other without 
remembering to change the location of the persistence.xml - they're in different 
places (there is a reason for it).


So the format works for me as it does for you, with the #pu name on the end.

Sorry for the aggro.


Regards
Adam


Re: [ANN] OpenJPA Maven Plugin 1.0 Released

2009-02-25 Thread Adam Hardy

Mark Struberg on 25/02/09 09:49, wrote:

The Maven team is pleased to announce the release of the

openjpa-maven-plugin-1.0


The plugin documentation can be found here:

http://mojo.codehaus.org/openjpa-maven-plugin/



I gave it a test run and I have an error. After looking through the docs, the
source and the debug output, I still can't figure it out. I get this error:


 org.apache.openjpa.lib.util.ParseException: 
org.apache.openjpa.conf.openjpaconfigurationi...@52dd4e95.properties = 
META-INF/persistence.xml#OpenJpaTest

at org.apache.openjpa.lib.util.Options.setInto(Options.java:237)
at org.apache.openjpa.lib.util.Options.setInto(Options.java:181)
at 
org.apache.openjpa.lib.conf.Configurations.populateConfiguration(Configurations.java:357)

at org.apache.openjpa.enhance.PCEnhancer.run(PCEnhancer.java:4438)
at org.apache.openjpa.enhance.PCEnhancer$1.run(PCEnhancer.java:4409)
at 
org.apache.openjpa.lib.conf.Configurations.launchRunnable(Configurations.java:708)


Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.openjpa.lib.util.Options.invoke(Options.java:401)
at org.apache.openjpa.lib.util.Options.setInto(Options.java:234)
... 27 more
Caused by: java.util.MissingResourceException: 
META-INF/persistence.xml#OpenJpaTest#null
at 
org.apache.openjpa.lib.conf.ProductDerivations.load(ProductDerivations.java:272)
at 
org.apache.openjpa.lib.conf.ConfigurationImpl.setProperties(ConfigurationImpl.java:762)

... 33 more


My config currently uses antrun, so I know I don't have any major issues here.
This is in my pom:

<plugin>
  <artifactId>maven-war-plugin</artifactId>
  <configuration>
    <warName>${project.artifactId}</warName>
  </configuration>
</plugin>
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>openjpa-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>JPA Enhance</id>
      <phase>process-classes</phase>
      <goals>
        <goal>enhance</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <toolProperties>
      <property>
        <name>properties</name>
        <value>META-INF/persistence.xml#OpenJpaTest</value>
      </property>
    </toolProperties>
    <classes>
      ${build.outputDirectory}/org/permacode/patternrepo/domain/entity/
    </classes>
  </configuration>
</plugin>

Any idea what it could be, seeing that extra #null on the end of the persistence 
filename?


Regards
Adam


Re: Running the enhancer in Maven

2009-02-12 Thread Adam Hardy

https://issues.apache.org/jira/browse/OPENJPA-609

Donald Woods on 11/02/09 19:39, wrote:

Which JIRA?

-Donald

Adam Hardy wrote:

Rick Curtis on 09/02/09 22:53, wrote:
I'm putting together a set of examples that show the different ways 
to use

enhancement and I'm wondering if there is a recommended way to run the
enhancer with maven? So far I've found the 
http://mojo.codehaus.org/openjpa-maven-plugin/usage.html OpenJPA Maven
Plugin ,  http://openjpa.apache.org/enhancingwithmaven.html launching 
the
PCEnhancer java class via ant , and I'm sure there are N other ways 
to do

this Any recommendations or best practices?


I just left some comments on a JIRA issue about this.

Plus the code for the openjpa-maven-plugin mojo is so simple, it would be
easy to update it:


https://svn.codehaus.org/mojo/tags/openjpa-maven-plugin-1.0-alpha/src/main/java/org/codehaus/mojo/openjpa/OpenJpaEnhancerMojo.java 



it makes me wonder why the original coder hasn't done so. The 
associated pom.xml is badly out of date.




Re: maven-openjpa-plugin problem

2009-02-12 Thread Adam Hardy
Coincidentally, there was another thread on the same problem just a day or two
back.


Here's the mojo code:

https://svn.codehaus.org/mojo/tags/openjpa-maven-plugin-1.0-alpha/src/main/java/org/codehaus/mojo/openjpa/OpenJpaEnhancerMojo.java

for me, the reason I don't use it is that the 'classes' property of the mojo 
only takes a single directory, and enhances all it can find in that dir.


I have two directories with different roots containing entities, and both
roots have too many other classes with dependencies that aren't involved with
JPA.


I could have modified the mojo locally for my own purposes, but I found it was
easier to work out how to configure the maven-antrun plugin than to build
myself a maven plugin. Marginally easier, I guess, but I haven't done that
before.


Regards
Adam




Kevin Sutter on 12/02/09 14:05, wrote:

David,
I totally agree that enhancing the JPA POJOs is a desired operation.  I was
referring to the maven plugin itself.  Is this tool worthwhile to support?
It sounds like it could help the build time processing for OpenJPA
enhancement.

Kevin

On Thu, Feb 12, 2009 at 4:04 AM, David Goodenough 
david.goodeno...@btconnect.com wrote:


On Wednesday 11 February 2009, Kevin Sutter wrote:

Is this worth enhancing (pardon the pun)?  Seriously, is this maven
plugin worth supporting?  Or, possibly bringing into our svn?

Well yes there is a good reason to enhance.

If you are using BeanBindings (and I guess that the other bean binding
mechanisms have the same problem) it gets very confused if there are two
versions of the same class around, the one that is bound and the one that
gets delivered with the data.  The only way I found to get binding to work
correctly was to enhance the JPA POJOs.

This has been discussed on several occasions on this forum (I initiated one
of the threads, and I was then pointed at other threads).




Re: [Resolved] enhancement roadblock - but with a bug

2009-02-09 Thread Adam Hardy


ok, here it is:

https://issues.apache.org/jira/browse/OPENJPA-914



Michael Dick on 09/02/09 14:08, wrote:

Hi Adam,

The issue caused you more than enough pain - please go ahead and open a JIRA
issue. I can't promise that we'll have it fixed immediately but we should do
a better job here.

-mike

On Mon, Feb 9, 2009 at 7:19 AM, Adam Hardy adam@cyberspaceroad.com wrote:


After a frustrating time building a test project to isolate what I thought
was a new issue, I discovered that there was nothing wrong with my project;
I had just forgotten to add one entity class to the PCEnhancer command
during my build-time enhancement routine.

So the class was unenhanced.

With RuntimeUnenhancedClasses set to unsupported, instead of receiving a
"cannot run with unenhanced classes" error, OpenJPA threw the exceptions
below.

I think this definitely qualifies as a bug - OpenJPA is throwing the wrong
exception. It definitely misled me for a while.

Shall I open a JIRA for it?


Regards
Adam


Adam Hardy on 08/02/09 23:46, wrote:


Following on from my initial response, I copied the superclass into the
same project so that the enhancer can process the class file properly.

It looks like I'm taking the scenic route around all the OpenJPA issues
right now.

I hit a problem with enhanced files that doesn't happen with the
unenhanced ones.

I carefully set the property openjpa.RuntimeUnenhancedClasses=unsupported
so I can be certain I am running with the enhanced classes.
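For reference, that setting goes in the persistence-unit's properties section; a minimal sketch of the relevant fragment (surrounding persistence.xml elements omitted):

```xml
<properties>
  <!-- Fail fast on unenhanced classes instead of falling back to
       runtime class redefinition -->
  <property name="openjpa.RuntimeUnenhancedClasses" value="unsupported"/>
</properties>
```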

So with build-time enhancement, I get stacktraces of exceptions as
follows, although it's not clear which is the original cause:

Caused by: java.lang.IllegalStateException: No registered metadata for
type class org.permacode.patternrepo.domain.entity.CollatedRun.
   at
org.apache.openjpa.enhance.PCRegistry.getMeta(PCRegistry.java:255)

NestedThrowables:
openjpa-1.2.0-r422266:683325 fatal user error
org.apache.openjpa.persistence.ArgumentException: Unable to resolve type
org.permacode.patternrepo.domain.entity.TestAnalysis due to previous
errors resolving related type
org.permacode.patternrepo.domain.entity.CollatedRun.
   at
org.apache.openjpa.meta.MetaDataRepository.processBuffer(MetaDataRepository.java:731)


openjpa-1.2.0-r422266:683325 fatal user error
org.apache.openjpa.persistence.ArgumentException: Unable to resolve type
org.permacode.patternrepo.domain.entity.Weighting due to previous errors
resolving related type
org.permacode.patternrepo.domain.entity.CollatedRun.
   at
org.apache.openjpa.meta.MetaDataRepository.processBuffer(MetaDataRepository.java:731)


openjpa-1.2.0-r422266:683325 fatal user error
org.apache.openjpa.persistence.ArgumentException: Attempt to map
org.permacode.patternrepo.domain.entity.CollatedRun.portfolio failed: the
owning entity is not mapped.
   at
org.apache.openjpa.jdbc.meta.MappingInfo.assertTable(MappingInfo.java:547)

java.lang.IllegalStateException: No registered metadata for type class
org.permacode.patternrepo.domain.entity.CollatedRun.
    at org.apache.openjpa.enhance.PCRegistry.getMeta(PCRegistry.java:255)


I figure that the "no registered metadata for type ..." error must be the
original cause, since it appears first and last. It runs fine unenhanced, so
what can it be? Do enhanced entities only work with annotations? These
entities are all mapped via xml.









Re: Running the enhancer in Maven

2009-02-09 Thread Adam Hardy

Rick Curtis on 09/02/09 22:53, wrote:

I'm putting together a set of examples that show the different ways to use
enhancement and I'm wondering if there is a recommended way to run the
enhancer with maven? So far I've found the 
http://mojo.codehaus.org/openjpa-maven-plugin/usage.html OpenJPA Maven

Plugin ,  http://openjpa.apache.org/enhancingwithmaven.html launching the
PCEnhancer java class via ant , and I'm sure there are N other ways to do
this Any recommendations or best practices?


I just left some comments on a JIRA issue about this.

Plus the code for the openjpa-maven-plugin mojo is so simple, it would be
easy to update it:


https://svn.codehaus.org/mojo/tags/openjpa-maven-plugin-1.0-alpha/src/main/java/org/codehaus/mojo/openjpa/OpenJpaEnhancerMojo.java

it makes me wonder why the original coder hasn't done so. The associated pom.xml 
is badly out of date.





Re: enhancement roadblock

2009-02-08 Thread Adam Hardy
Yes I was attempting to enhance the entity superclass which is in a jar - from 
the framework project. The implementation project depends on the framework 
project for stuff like common superclasses (not just JPA stuff).


I hadn't thought about where the enhanced class would be written to - I just 
assumed it would get put in the output directory. But I guess that's the output 
stream throwing the exception - blast!


OK, so I'll have to copy that superclass into each project that uses it, no big 
deal.




Kevin Sutter on 06/02/09 18:59, wrote:

First thing that jumped out at me...  Are you attempting to enhance classes
that are contained within a jar?  We can't write enhanced classes back out
to a jar file.  The classes have to be enhanced outside of a jar and then
re-packaged.  Maybe you already realized this and are working around it, but
the error message sort of indicated the use of classes within a jar file...

I've also asked another member of the team to dive into your question.  He's
developing a lot of experience with the various means of enhancement (maven,
ant, command line, eclipse, etc).  Unfortunately, I think he is out this
afternoon...

Kevin

On Fri, Feb 6, 2009 at 12:12 PM, Adam Hardy adam@cyberspaceroad.com wrote:


I am concentrating now on getting build-time enhancement working.

I thought I'd try with the maven-antrun-plugin using the config shown
(copied from the list here) but I get this error - the offending class is an
entity superclass which is in the jar in the error message. Java knows
exactly which jar it is in, yet can't find it so I guess I've missed a
simple piece of the config.

  [java] Caused by: java.io.FileNotFoundException:
file:/home/java/m2-repo/org/permacode/atomic/0.0.1-SNAPSHOT/atomic-0.0.1-SNAPSHOT.jar!/org/permacode/atomic/domain/AtomicEntity.class
(No such file or directory)
[java] at java.io.FileOutputStream.open(Native Method)
[java] at java.io.FileOutputStream.<init>(FileOutputStream.java:179)
[java] at java.io.FileOutputStream.<init>(FileOutputStream.java:70)
[java] at serp.bytecode.BCClass.write(BCClass.java:179)
[java] at
org.apache.openjpa.enhance.PCEnhancer.record(PCEnhancer.java:593)


This is the PCEnhancer launch config:


<plugin>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>process-classes</phase>
      <configuration>
        <tasks>
          <path id="cp">
            <path refid="maven.test.classpath" />
            <path refid="maven.compile.classpath" />
            <path refid="maven.runtime.classpath" />
            <path refid="maven.dependency.classpath" />
          </path>
          <fileset id="enhance.path.ref" dir=".">
            <include name="${build.outputDirectory}/org/permacode/patternrepo/domain/entity/*.class" />
            <include name="${build.testOutputDirectory}/org/permacode/patternrepo/domain/entity/*.class" />
          </fileset>
          <echo message="Enhancing classes" />
          <java classname="org.apache.openjpa.enhance.PCEnhancer"
                classpathref="cp" dir="${build.outputDirectory}"
                fork="true">
            <arg line="-properties META-INF/persistence.xml#OpenJpaJDBC" />
          </java>
          <echo message="Enhancing classes for OpenJPA done!" />
        </tasks>
      </configuration>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
</plugin>

It seems I might need to reference the entity bean in the fileset
'enhance.path.ref', but I'm not sure how to include a jar with a class in
there.










Kevin Sutter on 06/02/09 15:45, wrote:


Hi Adam,
You are right in your thinking not to use the fall back enhancement
processing for production use.  It was meant as an easy out of the box
experience for simple apps.  It was not meant for production use.  All of
this has been documented in our forums and JIRAs (which you have already
found).

We are doing more investigation into the Java 6 class redefinition
support.
This doesn't seem to be 100% complete either, at least for some scenarios.

So, the best bet (and most proven) is to stick with standard
PCEnhancement.
This can be done in several ways -- statically during build, or
dynamically
via the -javaagent approach or tied into the classloader of a suitable
EJB3
container.

In the WebSphere environment, the classloader enhancement process is tied
into both the EJB and Web Containers (same Java EE runtime
implementation).
But, it doesn't sound like this linkage is provided by Tomcat.  I have no
direct experience with Tomcat -- just reading your note below.

So, it sounds like your best approach is to do the enhancement during the
build process.  If you are experiencing un-enhanced entities at runtime,
then either your build-time enhancement is not working, or your packaged
application isn't picking up the enhanced classes, or you accidentally

Re: Multi-language database and JPA/OpenJPA

2009-02-02 Thread Adam Hardy

Simone Gianni on 01/02/09 22:11, wrote:

Hi all,
the problem of having a database containing localized data is quite 
common, and a number of ways to do it are used here and there on the 
internet.


What is, from your experience or from your knowledge of JPA, the best
way of doing it in JPA? Are there any recommendations or study papers on
this? (Google seems to find nothing useful.)


Does the JPA 2.0 spec add some useful new features for this problem?


One project which I worked on in the past, pre-JPA, had this requirement.

The language strings were heavily cached and not mapped as part of any entity
bean; rather, they were added to the bean post-select by a listener.


It worked well.

Regards
Adam



Re: Eclipse - javaagent enhancement

2009-02-02 Thread Adam Hardy

Hi Rick,

actually I use maven but I also run them all or partially in Eclipse, especially 
when debugging, and especially when there is some fiendish hidden static 
somewhere that carries over state from one test to the next, despite all the 
precautionary set-up and tear-down code.


Rick Curtis on 01/02/09 16:36, wrote:

Adam -
Is there some reason that you must run your batch tests from within eclipse?
If eclipse isn't a must have, I'd suggest using ant to launch your batch of
tests. That would give you the flexibility that you're looking for.

Thanks,
Rick


Adam Hardy (SQL) wrote:

Hi Jeremy,

your suggestion is exactly what I do, but unfortunately since I have to
specify 
this in Eclipse in the 'installed JRE' default VM args, I can only change
it 
manually, so in one batch run, I can't stop OpenJPA modifying the entity
beans 
during a Hibernate or Toplink test.


Regards
Adam

Jeremy Bauer on 30/01/09 18:17, wrote:

Adam,

Provided you have your pu's defined separately, the agent allows you to
specify a particular persistence unit for enhancement.  Example:

-javaagent:C:/eclipse.workspaces/3.2.2/openjpa-1.2.0/openjpa-all/target/openjpa-1.2.0.jar=pu=OpenJPAPU

-Jeremy

On Fri, Jan 30, 2009 at 11:05 AM, Adam Hardy
adam@cyberspaceroad.com wrote:


Hi Kevin,

whilst on the subject, I'm using this in Eclipse too but there is one
thing
that it appears cannot be done.

One of the projects I work on is a framework that is designed to be used
with any JPA provider, and the testing runs against OpenJPA as well as
Hibernate and Toplink Essentials.

Obviously I don't want the javaagent to act on my entity classes when
I'm
running Hibernate or Toplink, but there appears to be no easy way to
stop it
from one test to another in a batch run.

It would be interesting to know whether there are plans to abandon the
byte
code rewriting, and what the history behind its adoption is. I would
assume
that it's performance but I haven't seen any comparative stats
between
the different JPA providers.

Regards
Adam




Kevin Sutter on 30/01/09 15:57, wrote:


Hi Gianny,
I use this all the time.  On my Run configuration for the application
that
I
am running I will put something like the following as a JVM argument...


-javaagent:C:/eclipse.workspaces/3.2.2/openjpa-1.2.0/openjpa-all/target/openjpa-1.2.0.jar

This should kick in the PCEnhancerAgent for the Entity classes that are
being loaded.

Kevin

On Fri, Jan 30, 2009 at 3:53 AM, Gianny Damour 
gianny.dam...@optusnet.com.au wrote:

 Hi,

I am unable to get javaagent based enhancement work within Eclipse.

Based on a cursory review of PCEnhancerAgent, I can see that no
ClassFileTransformer is installed when

  List<String> anchors = Configurations.getFullyQualifiedAnchorsInPropertiesLocation(opts);

returns an empty list, which it does in my case when I simply specify
the
following JVM arg

-javaagent:<path to>/openjpa-1.2.0.jar










Re: InvalidStateException: Attempt to set column client.version to two different values

2009-02-02 Thread Adam Hardy (SQL)


John Leach wrote:
 
 I must stop talking to myself. Yes it is a bug 
 https://issues.apache.org/jira/browse/OPENJPA-327 OPENJPA-327 
 
 The test case uses @PostUpdate, but the same thing seems to happen for
 @PreUpdate. Sigh.
 

This old bug came out of the woodwork to bite me today, presumably because I
started using javaagent to enhance my entities. 

It looks like the so-called low hanging fruit in terms of easy bug fixes,
but it's been around since version 1.0 so I guess there must be another
reason why it's not fixed. 

Does anybody have some feedback on this, and perhaps a work-around for doing
auditing and saving a user-defined 'last-modified' field?

Regards
Adam
-- 
View this message in context: 
http://n2.nabble.com/InvalidStateException%3A-Attempt-to-set-column-%22client.version%22-to-two-different-values-tp210609p2258068.html
Sent from the OpenJPA Users mailing list archive at Nabble.com.



Re: Eclipse - javaagent enhancement

2009-02-01 Thread Adam Hardy

Hi Jeremy,

your suggestion is exactly what I do, but unfortunately since I have to specify 
this in Eclipse in the 'installed JRE' default VM args, I can only change it 
manually, so in one batch run, I can't stop OpenJPA modifying the entity beans 
during a Hibernate or Toplink test.


Regards
Adam

Jeremy Bauer on 30/01/09 18:17, wrote:

Adam,

Provided you have your pu's defined separately, the agent allows you to
specify a particular persistence unit for enhancement.  Example:

-javaagent:C:/eclipse.workspaces/3.2.2/openjpa-1.2.0/openjpa-all/target/openjpa-1.2.0.jar=pu=OpenJPAPU

-Jeremy

On Fri, Jan 30, 2009 at 11:05 AM, Adam Hardy adam@cyberspaceroad.com wrote:


Hi Kevin,

whilst on the subject, I'm using this in Eclipse too but there is one thing
that it appears cannot be done.

One of the projects I work on is a framework that is designed to be used
with any JPA provider, and the testing runs against OpenJPA as well as
Hibernate and Toplink Essentials.

Obviously I don't want the javaagent to act on my entity classes when I'm
running Hibernate or Toplink, but there appears to be no easy way to stop it
from one test to another in a batch run.

It would be interesting to know whether there are plans to abandon the byte
code rewriting, and what the history behind its adoption is. I would assume
that it's performance but I haven't seen any comparative stats between
the different JPA providers.

Regards
Adam




Kevin Sutter on 30/01/09 15:57, wrote:


Hi Gianny,
I use this all the time.  On my Run configuration for the application that
I
am running I will put something like the following as a JVM argument...


-javaagent:C:/eclipse.workspaces/3.2.2/openjpa-1.2.0/openjpa-all/target/openjpa-1.2.0.jar

This should kick in the PCEnhancerAgent for the Entity classes that are
being loaded.

Kevin

On Fri, Jan 30, 2009 at 3:53 AM, Gianny Damour 
gianny.dam...@optusnet.com.au wrote:

 Hi,

I am unable to get javaagent based enhancement work within Eclipse.

Based on a cursory review of PCEnhancerAgent, I can see that no
ClassFileTransformer is installed when

  List<String> anchors = Configurations.getFullyQualifiedAnchorsInPropertiesLocation(opts);

returns an empty list, which it does in my case when I simply specify the
following JVM arg

-javaagent:<path to>/openjpa-1.2.0.jar







Re: Eclipse - javaagent enhancement

2009-02-01 Thread Adam Hardy

Kevin Sutter on 30/01/09 18:23, wrote:

There are many advantages to the enhancement processing.  Overall, our
performance is better with byte-code enhancement.  The industry benchmarks
seem to back up this claim.

But, where we would like to get is to do the bytecode enhancement without
any special options or processing by the user.  The Java 6 feature for class
redefinition may give us that possibility.  We would like to automatically
detect whether enhancement has been specified and, if it hasn't, insert our
enhancement processing into the classloading mechanism automatically.  Prior
to Java 6, we had to rely on the javaagent parameter, or the app server's
container hooks, or static enhancement.  Hopefully, Java 6 will allow us to
improve on this processing.

Hope this helps explain.  We have some of the same concerns and are looking
to improve on them.


Thanks v. much for the info. Most interesting, although I'll only know after a 
couple of months of progress with my project what relevance that has to what I'm 
doing.


Initially I'd rather not have to restrict the project to Java 6, but then if the 
load tests with non-Sun JREs uncover a blitz-schnell (lightning-fast) JRE, it 
wouldn't be a problem.


However I don't think the chances of that are great, and it wouldn't be good to 
demand that users have Java 6.




Re: [URGENT] performance issues

2009-01-29 Thread Adam Hardy

Pinaki Poddar on 29/01/09 05:09, wrote:

  2. Build-time enhancement i.e. to execute one command after compilation
performs best and minimizes complexity of load-time-weaving. I have never
understood what really makes people avoid 'build-time enhancement' -
especially when most of the apps are built, packaged and deployed via
Ant/Maven.


This was one of the things that made EJB2.x so unpopular. Obviously it's not a 
big deal, but it's one extra step in the build cycle. Talking about Maven, I 
couldn't actually find a current approach that worked for this. Do you know of one?



Adam
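For reference, a build-time enhancement step can be wired into a Maven build with the openjpa-maven-plugin's enhance goal. The fragment below is a sketch only: the groupId, version, and includes pattern are assumptions from the plugin's 1.x era, so verify them against the plugin's own documentation before copying.

```xml
<!-- pom.xml build/plugins fragment (coordinates and version assumed;
     check the openjpa-maven-plugin documentation for your release).
     Runs PCEnhancer over the compiled entity classes after compilation. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>openjpa-maven-plugin</artifactId>
  <version>1.0</version>
  <configuration>
    <includes>org/permacode/**/entity/*.class</includes>
    <addDefaultConstructor>true</addDefaultConstructor>
    <enforcePropertyRestrictions>true</enforcePropertyRestrictions>
  </configuration>
  <executions>
    <execution>
      <id>enhancer</id>
      <phase>process-classes</phase>
      <goals>
        <goal>enhance</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Binding to the process-classes phase means the enhanced classes are what end up in the packaged jar, so no javaagent is needed at runtime.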


Re: orderBy

2009-01-29 Thread Adam Hardy

Is there any chance that I'll be able to do this in the future (like JPA 2):

  <one-to-many name="weightings" mapped-by="collatedRun">
    <order-by>testAnalysis.marketSystem.systemRun.market.symbol</order-by>
  </one-to-many>


instead of writing a Comparator?



Pinaki Poddar on 29/01/09 05:16, wrote:

Hi,
  Check @OrderBy annotation in OpenJPA docs for something similar.


That would be just the standard field ordering you mean?

Just checked the docs you referred to and don't see anything new.

I would like to order by the field of a field of a field - since it's no big 
deal to put the order by clause for it into the SQL, I thought it would be 
reasonable to hope for.


Or are you saying I only need to quote the table column?
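Until a nested-path order-by is available, the Comparator route mentioned above might look like the following sketch. The entity stubs (Weighting, TestAnalysis, etc.) are simplified stand-ins modelling only the navigation path from the mapping, not code from this thread.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class NestedOrderBy {
    // Simplified stand-ins for the entities named in the mapping;
    // only the path testAnalysis.marketSystem.systemRun.market.symbol is modelled.
    static class Market { final String symbol; Market(String s) { symbol = s; } }
    static class SystemRun { final Market market; SystemRun(Market m) { market = m; } }
    static class MarketSystem { final SystemRun systemRun; MarketSystem(SystemRun r) { systemRun = r; } }
    static class TestAnalysis { final MarketSystem marketSystem; TestAnalysis(MarketSystem m) { marketSystem = m; } }
    static class Weighting {
        final TestAnalysis testAnalysis;
        Weighting(String symbol) {
            testAnalysis = new TestAnalysis(new MarketSystem(new SystemRun(new Market(symbol))));
        }
        String symbol() { return testAnalysis.marketSystem.systemRun.market.symbol; }
    }

    // Comparator walking the same path the hoped-for order-by clause would.
    static final Comparator<Weighting> BY_MARKET_SYMBOL = new Comparator<Weighting>() {
        public int compare(Weighting a, Weighting b) {
            return a.symbol().compareTo(b.symbol());
        }
    };

    public static void main(String[] args) {
        List<Weighting> weightings = new ArrayList<Weighting>();
        weightings.add(new Weighting("GBP"));
        weightings.add(new Weighting("AUD"));
        weightings.add(new Weighting("EUR"));
        Collections.sort(weightings, BY_MARKET_SYMBOL);
        System.out.println(weightings.get(0).symbol()); // AUD sorts first
    }
}
```

The obvious drawback is that the sort happens in memory after the select, rather than in the database via an ORDER BY clause.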




controlling number of SQL operations

2009-01-29 Thread Adam Hardy


I am running a load test for a performance-sensitive operation and I'm surprised 
by the number of SQL calls I am seeing, so I want to check that I have the JPA 
and OpenJPA config set up right.


Hopefully someone can say if any of the SQL calls are avoidable.

This is the entity model:

CollatedAnalysis - Trade - Balance

The operation, CollatedAnalysis.analyze(), causes the entity to run over its 
large collection of Trade entities, producing a new 'grandchild' Balance entity 
for each Trade.


What happens in terms of SQL hits is this:

select all Balance where Trade = ? (because I put the Balance in Trade.balances)

insert Balance

update Trade



For the load test I run the operation on 30K trades - naively I thought I would 
only see 30K inserts - is there any way to avoid the other 2 operations?



Thanks
Adam
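Two things may help here, though both are assumptions rather than advice from the thread: avoiding adding each new Balance to the loaded Trade.balances inverse collection (touching the lazy collection is what typically triggers the per-Trade select), and enabling OpenJPA's SQL statement batching so the inserts go to the database in groups. The batchLimit plugin syntax below is taken from the OpenJPA 1.x documentation; verify it against your version's DBDictionary settings.

```xml
<!-- persistence.xml fragment (syntax assumed; check your OpenJPA
     version's DBDictionary documentation). A positive batchLimit caps
     the statement batch size; -1 means unlimited batching. -->
<property name="openjpa.jdbc.DBDictionary" value="batchLimit=100"/>
```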




Re: out of memory issues

2009-01-28 Thread Adam Hardy

Hi Mike,

thanks for the input.

The child objects are mapped entities and they total around 25 * 300, so 7500.

This is their structure:

private Long id;
private BigDecimal total;
private DollarReturn dollarReturn;
private TestAnalysis testAnalysis;

where DollarReturn and TestAnalysis are also mapped entities. TestAnalysis will 
have 7500 of these in its child list. They extend a superclass:


private Long version;
private Long ownerId;
private Date created;
private Date modified;

For what it's worth, they also have several transient properties.

I didn't get a heap dump before. Now I have changed the architecture a bit to 
try to streamline the process, and the process doesn't crash now, it just takes 
a very long time.


Regards
Adam


Michael Dick on 27/01/09 16:31, wrote:

Hi Adam, just some quick thoughts,

How many child objects are we talking about, and are they entities or just
objects which are persisted to the database?

If you have heap dumps it might be interesting to see which object(s) are
taking up the most memory. It sounds like the likely culprit is the number
of child objects and any OpenJPA objects associated with them.

-mike

On Mon, Jan 26, 2009 at 11:46 AM, Adam Hardy adam@cyberspaceroad.com wrote:


The webapp I am working on uses OpenJPA (or Hibernate depending on bugs
blocking progress) and I have just hit a problem that currently looks like
it will only be solved by violating the current architecture.

I'm hoping I'm wrong and thought I'd ask first.

The scenario is that the user can choose a whole range of financial
instruments to put in a portfolio, and the webapp grabs them all on
submission of the request and builds the object graph.

The Manager class which is also the transactional interface then creates an
Analysis entity to hold performance stats, and which may create for itself a
large number of small child objects representing past performance.

When the Analysis is done, the Manager call finishes and the transaction
commits (or I commit the transaction in my unit test), and I get an
out-of-memory exception.

Presumably it's all the child object inserts causing the problem.

Obviously I would like to do a flush before I run out of memory, but the
Analysis entity object has no access to the entity manager. Or at least it
shouldn't have.

The other problem is that the Analysis entity can't really be saved until
the child objects are all created, so I would have to think of a dirty
work-around to allow me to save it first, to allow me to flush the child
objects.
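The periodic-flush idea can be kept out of the entity by having the transactional Manager drive the loop. The sketch below is not from the thread: the Persister interface is a hypothetical stand-in for the few EntityManager calls involved (persist/flush/clear), so the chunking logic is runnable here without a JPA provider; in the real Manager those methods would delegate to em.persist(), em.flush(), and em.clear().

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedPersist {
    // Hypothetical stand-in for the EntityManager operations used, so this
    // sketch runs without a JPA provider on the classpath.
    interface Persister {
        void persist(Object entity);
        void flush();
        void clear();
    }

    // Persist a large batch, flushing and clearing every batchSize entities
    // so the persistence context never holds the whole graph in memory.
    // Returns the number of flushes performed.
    static int persistInChunks(List<?> entities, Persister p, int batchSize) {
        int flushes = 0;
        for (int i = 0; i < entities.size(); i++) {
            p.persist(entities.get(i));
            if ((i + 1) % batchSize == 0) {
                p.flush();
                p.clear(); // detach the chunk so it can be garbage-collected
                flushes++;
            }
        }
        p.flush(); // push any final partial chunk
        return flushes + 1;
    }

    public static void main(String[] args) {
        List<Integer> fakeEntities = new ArrayList<Integer>();
        for (int i = 0; i < 30000; i++) fakeEntities.add(i);
        Persister noop = new Persister() {
            public void persist(Object e) { }
            public void flush() { }
            public void clear() { }
        };
        System.out.println(persistInChunks(fakeEntities, noop, 500)); // 61
    }
}
```

Note the caveat clear() brings with it: everything in the context is detached, so the parent Analysis would need to be persisted first and re-attached (or its children persisted via cascades per chunk) for this to work in the real code.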




Re: out of memory issues

2009-01-28 Thread Adam Hardy

Hi Jeremy,

compile-time enhancement is being used.

I figured it would have to be something like that. I just upgraded to JDK 1.6 to 
see if the enhancement provided that way is better, so fingers crossed.


Otherwise I'll have to build the enhancement step into the maven build and 
from what it looks like, the current way of doing that hasn't changed since 
OpenJPA v0.9.6.



regards
Adam

Jeremy Bauer on 28/01/09 14:35, wrote:

Adam,
Are you using compile time enhancement or does the web container do
enhancement?  If not, runtime enhancement will be used and I've seen cases
where non-agent runtime enhancement can significantly increase memory usage.

If you aren't certain, an easy way to check is to disable runtime
enhancement via:

  <property name="openjpa.RuntimeUnenhancedClasses" value="unsupported"/>

If your classes are not enhanced, the app will fail with an exception to
that effect.

-Jeremy

On Wed, Jan 28, 2009 at 4:39 AM, Adam Hardy adam@cyberspaceroad.com wrote:


Hi Mike,

thanks for the input.

The child objects are mapped entities and they total around 25 * 300, so
7500.

This is their structure:

   private Long id;
   private BigDecimal total;
   private DollarReturn dollarReturn;
   private TestAnalysis testAnalysis;

where DollarReturn and TestAnalysis are also mapped entities. TestAnalysis
will have 7500 of these in its child list. They extend a superclass:

   private Long version;
   private Long ownerId;
   private Date created;
   private Date modified;

For what it's worth, they also have several transient properties.

I didn't get a heap dump before. Now I have changed the architecture a bit
to try to streamline the process, and the process doesn't crash now, it just
takes a very long time.

Regards
Adam



Michael Dick on 27/01/09 16:31, wrote:


Hi Adam, just some quick thoughts,

How many child objects are we talking about, and are they entities or just
objects which are persisted to the database?

If you have heap dumps it might be interesting to see which object(s) are
taking up the most memory. It sounds like the likely culprit is the number
of child objects and any OpenJPA objects associated with them.

-mike

On Mon, Jan 26, 2009 at 11:46 AM, Adam Hardy adam@cyberspaceroad.com

wrote:

 The webapp I am working on uses OpenJPA (or Hibernate depending on bugs

blocking progress) and I have just hit a problem that currently looks
like
it will only be solved by violating the current architecture.

I'm hoping I'm wrong and thought I'd ask first.

The scenario is that the user can choose a whole range of financial
instruments to put in a portfolio, and the webapp grabs them all on
submission of the request and builds the object graph.

The Manager class which is also the transactional interface then creates
an
Analysis entity to hold performance stats, and which may create for
itself a
large number of small child objects representing past performance.

When the Analysis is done, the Manager call finishes and the transaction
commits (or I commit the transaction in my unit test), and I get an
out-of-memory exception.

Presumably it's all the child object inserts causing the problem.

Obviously I would like to do a flush before I run out of memory, but the
Analysis entity object has no access to the entity manager. Or at least
it
shouldn't have.

The other problem is that the Analysis entity can't really be saved until
the child objects are all created, so I would have to think of a dirty
work-around to allow me to save it first, to allow me to flush the child
objects.







Re: out of memory issues

2009-01-28 Thread Adam Hardy
Sorry, didn't mean compile-time enhancement, I was just mindlessly parroting the 
words I was reading :$


I started off using nothing, making no changes. No javaagent, nada.

Now I've upgraded to Java 1.6 and I'm trying to get that working but today's not 
being very productive. However Java 6 class retransformation certainly sounds 
hopeful.


I do see on my workstation that I still get the warning in my test run that

 This means that your application will be less efficient than it would if you 
ran the OpenJPA enhancer. (openjpa)


although at least it no longer tells me, as it did with Java 1.5, that:

 Additionally, lazy loading will not be available for one-to-one and 
many-to-one persistent attributes in types using field access; they will be 
loaded eagerly instead.




Jeremy Bauer on 28/01/09 16:35, wrote:

Adam,

Did you mean runtime enhancement is being used?  (Want to make sure I didn't
steer you down the wrong path - I don't think so, based on your statement
about adding enhancement to the maven build).
I'm not sure if 1.6 will help, but it may be worth a shot.  (Anyone?)
 Besides using compile-time enhancement you could also try to configure your
web app server to use the agent enhancer.

-Jeremy

On Wed, Jan 28, 2009 at 9:51 AM, Adam Hardy adam@cyberspaceroad.com wrote:


Hi Jeremy,

compile-time enhancement is being used.

I figured it would have to be something like that. I just upgraded to JDK
1.6 to see if the enhancement provided that way is better, so fingers
crossed.

Otherwise I'll have to build the enhancement step into the maven build
and from what it looks like, the current way of doing that hasn't
changed since OpenJPA v0.9.6.


regards
Adam


Jeremy Bauer on 28/01/09 14:35, wrote:


Adam,
Are you using compile time enhancement or does the web container do
enhancement?  If not, runtime enhancement will be used and I've seen cases
where non-agent runtime enhancement can significantly increase memory
usage.

If you aren't certain, an easy way to check is to disable runtime
enhancement via:

 <property name="openjpa.RuntimeUnenhancedClasses" value="unsupported"/>

If your classes are not enhanced, the app will fail with an exception to
that effect.

-Jeremy

On Wed, Jan 28, 2009 at 4:39 AM, Adam Hardy adam@cyberspaceroad.com

wrote:

 Hi Mike,

thanks for the input.

The child objects are mapped entities and they total around 25 * 300, so
7500.

This is their structure:

  private Long id;
  private BigDecimal total;
  private DollarReturn dollarReturn;
  private TestAnalysis testAnalysis;

where DollarReturn and TestAnalysis are also mapped entities.
TestAnalysis
will have 7500 of these in its child list. They extend a superclass:

  private Long version;
  private Long ownerId;
  private Date created;
  private Date modified;

For what it's worth, they also have several transient properties.

I didn't get a heap dump before. Now I have changed the architecture a
bit
to try to streamline the process, and the process doesn't crash now, it
just
takes a very long time.

Regards
Adam



Michael Dick on 27/01/09 16:31, wrote:

 Hi Adam, just some quick thoughts,

How many child objects are we talking about, and are they entities or
just
objects which are persisted to the database?

If you have heap dumps it might be interesting to see which object(s)
are
taking up the most memory. It sounds like the likely culprit is the
number
of child objects and any OpenJPA objects associated with them.

-mike

On Mon, Jan 26, 2009 at 11:46 AM, Adam Hardy 
adam@cyberspaceroad.com


wrote:


 The webapp I am working on uses OpenJPA (or Hibernate depending on bugs


blocking progress) and I have just hit a problem that currently looks
like
it will only be solved by violating the current architecture.

I'm hoping I'm wrong and thought I'd ask first.

The scenario is that the user can choose a whole range of financial
instruments to put in a portfolio, and the webapp grabs them all on
submission of the request and builds the object graph.

The Manager class which is also the transactional interface then
creates
an
Analysis entity to hold performance stats, and which may create for
itself a
large number of small child objects representing past performance.

When the Analysis is done, the Manager call finishes and the
transaction
commits (or I commit the transaction in my unit test), and I get an
out-of-memory exception.

Presumably it's all the child object inserts causing the problem.

Obviously I would like to do a flush before I run out of memory, but
the
Analysis entity object has no access to the entity manager. Or at least
it
shouldn't have.

The other problem is that the Analysis entity can't really be saved
until
the child objects are all created, so I would have to think of a dirty
work-around to allow me to save it first, to allow me to flush the
child
objects.








Re: out of memory issues

2009-01-28 Thread Adam Hardy
I'm now trying the build-time enhancement. My set-up is not playing well with 
the PCEnhancer. I get this:


158  OpenJPA  INFO   [main] openjpa.Tool - No targets were given.  Running on 
all classes in your persistent classes list, or all metadata files in classpath 
directories if you have not listed your persistent classes.  Use -help to 
display tool usage information.
Exception in thread "main" java.util.MissingResourceException: 
org.apache.openjpa.persistence.PersistenceProductDerivation:java.util.MissingResourceException: 
Persistence provider "org.hibernate.ejb.HibernatePersistence" specified in 
persistence unit "Hibernate" in META-INF/persistence.xml is not a recognized 
provider.
	at org.apache.openjpa.lib.conf.ProductDerivations.reportErrors(ProductDerivations.java:365)
	at org.apache.openjpa.lib.conf.ProductDerivations.load(ProductDerivations.java:270)
	at org.apache.openjpa.lib.conf.Configurations.populateConfiguration(Configurations.java:344)
	at org.apache.openjpa.enhance.PCEnhancer.run(PCEnhancer.java:4438)


Is there by some remote chance an undocumented config param to set the 
persistence unit I want it to look at in my persistence.xml?


This is a framework I'm working on which is theoretically JPA-implementation 
neutral.
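For what it's worth, the OpenJPA configuration tools do accept an anchor suffix for selecting one unit out of a persistence.xml: the -p (properties) option takes a #unit-name fragment. The invocation below is an assumption based on the OpenJPA configuration documentation, so double-check the option name and classpath against your version.

```shell
# Point PCEnhancer at a single persistence unit via the #anchor suffix
# (option name and syntax assumed from the OpenJPA configuration docs).
java -cp target/classes:openjpa-1.2.0.jar \
  org.apache.openjpa.enhance.PCEnhancer \
  -p META-INF/persistence.xml#OpenJpaJdbc
```

With the unit pinned this way, the enhancer should no longer try to parse the Hibernate unit's provider.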




Adam Hardy on 28/01/09 17:04, wrote:
Sorry, didn't mean compile-time enhancement, I was just mindlessly 
parroting the words I was reading :$


I started off using nothing, making no changes. No javaagent, nada.

Now I've upgraded to Java 1.6 and I'm trying to get that working but 
today's not being very productive. However Java 6 class 
retransformation certainly sounds hopeful.


I do see on my workstation that I still get the warning in my test run that

 This means that your application will be less efficient than it would 
if you ran the OpenJPA enhancer. (openjpa)


although at least it no longer tells me, as it did with Java 1.5, that:

 Additionally, lazy loading will not be available for one-to-one and 
many-to-one persistent attributes in types using field access; they will 
be loaded eagerly instead.




Jeremy Bauer on 28/01/09 16:35, wrote:

Adam,

Did you mean runtime enhancement is being used?  (Want to make sure I 
didn't

steer you down the wrong path - I don't think so, based on your statement
about adding enhancement to the maven build).
I'm not sure if 1.6 will help, but it may be worth a shot.  (Anyone?)
 Besides using compile-time enhancement you could also try to 
configure your

web app server to use the agent enhancer.

-Jeremy

On Wed, Jan 28, 2009 at 9:51 AM, Adam Hardy 
adam@cyberspaceroad.com wrote:



Hi Jeremy,

compile-time enhancement is being used.

I figured it would have to be something like that. I just upgraded to 
JDK

1.6 to see if the enhancement provided that way is better, so fingers
crossed.

Otherwise I'll have to build the enhancement step into the maven 
build

and from what it looks like, the current way of doing that hasn't 
changed since OpenJPA v0.9.6.


regards
Adam


Jeremy Bauer on 28/01/09 14:35, wrote:


Adam,
Are you using compile time enhancement or does the web container do
enhancement?  If not, runtime enhancement will be used and I've seen 
cases

where non-agent runtime enhancement can significantly increase memory
usage.

If you aren't certain, an easy way to check is to disable runtime
enhancement via:

 <property name="openjpa.RuntimeUnenhancedClasses" value="unsupported"/>

If your classes are not enhanced, the app will fail with an 
exception to

that effect.

-Jeremy

On Wed, Jan 28, 2009 at 4:39 AM, Adam Hardy 
adam@cyberspaceroad.com

wrote:

 Hi Mike,

thanks for the input.

The child objects are mapped entities and they total around 25 * 
300, so

7500.

This is their structure:

  private Long id;
  private BigDecimal total;
  private DollarReturn dollarReturn;
  private TestAnalysis testAnalysis;

where DollarReturn and TestAnalysis are also mapped entities.
TestAnalysis
will have 7500 of these in its child list. They extend a superclass:

  private Long version;
  private Long ownerId;
  private Date created;
  private Date modified;

For what it's worth, they also have several transient properties.

I didn't get a heap dump before. Now I have changed the architecture a
bit
to try to streamline the process, and the process doesn't crash 
now, it

just
takes a very long time.

Regards
Adam



Michael Dick on 27/01/09 16:31, wrote:

 Hi Adam, just some quick thoughts,

How many child objects are we talking about, and are they entities or
just
objects which are persisted to the database?

If you have heap dumps it might be interesting to see which object(s)
are
taking up the most memory. It sounds like the likely culprit is the
number
of child objects and any OpenJPA objects associated with them.

-mike

On Mon, Jan 26, 2009 at 11:46 AM, Adam Hardy 
adam@cyberspaceroad.com


wrote:

 The webapp I am working on uses OpenJPA (or Hibernate depending 
on bugs

Re: out of memory issues

2009-01-28 Thread Adam Hardy
Just to report back on my results using Java 6 class retransformation: the 
performance isn't any better.


I just double checked what was going on in terms of workload and I 
underestimated the number of child entities that I am creating. It's more in the 
area of 30,000 or 4 to 5 times more than I thought.




Adam Hardy on 28/01/09 17:04, wrote:
Sorry, didn't mean compile-time enhancement, I was just mindlessly 
parroting the words I was reading :$


I started off using nothing, making no changes. No javaagent, nada.

Now I've upgraded to Java 1.6 and I'm trying to get that working but 
today's not being very productive. However Java 6 class 
retransformation certainly sounds hopeful.


I do see on my workstation that I still get the warning in my test run that

 This means that your application will be less efficient than it would 
if you ran the OpenJPA enhancer. (openjpa)


although at least it no longer tells me, as it did with Java 1.5, that:

 Additionally, lazy loading will not be available for one-to-one and 
many-to-one persistent attributes in types using field access; they will 
be loaded eagerly instead.




Jeremy Bauer on 28/01/09 16:35, wrote:

Adam,

Did you mean runtime enhancement is being used?  (Want to make sure I 
didn't

steer you down the wrong path - I don't think so, based on your statement
about adding enhancement to the maven build).
I'm not sure if 1.6 will help, but it may be worth a shot.  (Anyone?)
 Besides using compile-time enhancement you could also try to 
configure your

web app server to use the agent enhancer.

-Jeremy

On Wed, Jan 28, 2009 at 9:51 AM, Adam Hardy 
adam@cyberspaceroad.com wrote:



Hi Jeremy,

compile-time enhancement is being used.

I figured it would have to be something like that. I just upgraded to 
JDK

1.6 to see if the enhancement provided that way is better, so fingers
crossed.

Otherwise I'll have to build the enhancement step into the maven 
build

and from what it looks like, the current way of doing that hasn't 
changed since OpenJPA v0.9.6.


regards
Adam


Jeremy Bauer on 28/01/09 14:35, wrote:


Adam,
Are you using compile time enhancement or does the web container do
enhancement?  If not, runtime enhancement will be used and I've seen 
cases

where non-agent runtime enhancement can significantly increase memory
usage.

If you aren't certain, an easy way to check is to disable runtime
enhancement via:

 <property name="openjpa.RuntimeUnenhancedClasses" value="unsupported"/>

If your classes are not enhanced, the app will fail with an 
exception to

that effect.

-Jeremy

On Wed, Jan 28, 2009 at 4:39 AM, Adam Hardy 
adam@cyberspaceroad.com

wrote:

 Hi Mike,

thanks for the input.

The child objects are mapped entities and they total around 25 * 
300, so

7500.

This is their structure:

  private Long id;
  private BigDecimal total;
  private DollarReturn dollarReturn;
  private TestAnalysis testAnalysis;

where DollarReturn and TestAnalysis are also mapped entities.
TestAnalysis
will have 7500 of these in its child list. They extend a superclass:

  private Long version;
  private Long ownerId;
  private Date created;
  private Date modified;

For what it's worth, they also have several transient properties.

I didn't get a heap dump before. Now I have changed the architecture a
bit
to try to streamline the process, and the process doesn't crash 
now, it

just
takes a very long time.

Regards
Adam



Michael Dick on 27/01/09 16:31, wrote:

 Hi Adam, just some quick thoughts,

How many child objects are we talking about, and are they entities or
just
objects which are persisted to the database?

If you have heap dumps it might be interesting to see which object(s)
are
taking up the most memory. It sounds like the likely culprit is the
number
of child objects and any OpenJPA objects associated with them.

-mike

On Mon, Jan 26, 2009 at 11:46 AM, Adam Hardy 
adam@cyberspaceroad.com


wrote:

 The webapp I am working on uses OpenJPA (or Hibernate depending 
on bugs


blocking progress) and I have just hit a problem that currently 
looks

like
it will only be solved by violating the current architecture.

I'm hoping I'm wrong and thought I'd ask first.

The scenario is that the user can choose a whole range of financial
instruments to put in a portfolio, and the webapp grabs them all on
submission of the request and builds the object graph.

The Manager class which is also the transactional interface then
creates
an
Analysis entity to hold performance stats, and which may create for
itself a
large number of small child objects representing past performance.

When the Analysis is done, the Manager call finishes and the
transaction
commits (or I commit the transaction in my unit test), and I get an
out-of-memory exception.

Presumably it's all the child object inserts causing the problem.

Obviously I would like to do a flush before I run out of memory, but
the
Analysis entity object has no access to the entity manager

Re: out of memory issues

2009-01-28 Thread Adam Hardy

Yet another question, just in case you weren't entirely sick of me already:

is it the case that the byte code manipulation cannot handle inner classes 
declared on the entity classes?


I have several such inner classes and I get this compile error (when compiling 
after doing the enhancement):


[INFO] Compilation failure
/home/adam/projects/pattern-repo/src/main/java/org/permacode/patternrepo/web/CollatedAnalysisAction.java:[12,59] cannot access org.permacode.patternrepo.domain.entity.TestAnalysis.TestAnalysisCorrelation
class file for org.permacode.patternrepo.domain.entity.TestAnalysis$TestAnalysisCorrelation not found

import org.permacode.patternrepo.domain.entity.TestAnalysis.TestAnalysisCorrelation;



Adam Hardy on 28/01/09 17:44, wrote:
I'm now trying the build-time enhancement. My set-up is not playing well 
with the PCEnhancer. I get this:


158  OpenJPA  INFO   [main] openjpa.Tool - No targets were given. Running on all classes in your persistent classes list, or all metadata files in classpath directories if you have not listed your persistent classes. Use -help to display tool usage information.
Exception in thread "main" java.util.MissingResourceException: org.apache.openjpa.persistence.PersistenceProductDerivation:java.util.MissingResourceException: Persistence provider org.hibernate.ejb.HibernatePersistence specified in persistence unit Hibernate in META-INF/persistence.xml is not a recognized provider.
    at org.apache.openjpa.lib.conf.ProductDerivations.reportErrors(ProductDerivations.java:365)
    at org.apache.openjpa.lib.conf.ProductDerivations.load(ProductDerivations.java:270)
    at org.apache.openjpa.lib.conf.Configurations.populateConfiguration(Configurations.java:344)
    at org.apache.openjpa.enhance.PCEnhancer.run(PCEnhancer.java:4438)


Is there by some remote chance an undocumented config param to set the 
persistence unit I want it to look at in my persistence.xml?


This is a framework I'm working on which is theoretically 
JPA-implementation neutral.




Adam Hardy on 28/01/09 17:04, wrote:
Sorry, didn't mean compile-time enhancement, I was just mindlessly 
parroting the words I was reading :$


I started off using nothing, making no changes. No javaagent, nada.

Now I've upgraded to Java 1.6 and I'm trying to get that working but 
today's not being very productive. However Java 6 class 
retransformation certainly sounds hopeful.


I do see on my workstation that I still get the warning in my test run that

 "This means that your application will be less efficient than it would if you ran the OpenJPA enhancer." (openjpa)

although at least it doesn't tell me what it did with Java 1.5:

 "Additionally, lazy loading will not be available for one-to-one and many-to-one persistent attributes in types using field access; they will be loaded eagerly instead."




Jeremy Bauer on 28/01/09 16:35, wrote:

Adam,

Did you mean runtime enhancement is being used? (Want to make sure I didn't steer you down the wrong path - I don't think so, based on your statement about adding enhancement to the maven build.)

I'm not sure if 1.6 will help, but it may be worth a shot. (Anyone?) Besides using compile-time enhancement you could also try to configure your web app server to use the agent enhancer.

-Jeremy

On Wed, Jan 28, 2009 at 9:51 AM, Adam Hardy adam@cyberspaceroad.com wrote:



Hi Jeremy,

compile-time enhancement is being used.

I figured it would have to be something like that. I just upgraded to JDK 1.6 to see if the enhancement provided that way is better, so fingers crossed.

Otherwise I'll have to build the enhancement stage into the maven build, and from what it looks like, the current way of doing that hasn't been changed since OpenJPA v0.9.6.


regards
Adam


Jeremy Bauer on 28/01/09 14:35, wrote:


Adam,
Are you using compile-time enhancement, or does the web container do enhancement? If not, runtime enhancement will be used, and I've seen cases where non-agent runtime enhancement can significantly increase memory usage.

If you aren't certain, an easy way to check is to disable runtime enhancement via:

  <property name="openjpa.RuntimeUnenhancedClasses" value="unsupported"/>

If your classes are not enhanced, the app will fail with an exception to that effect.

-Jeremy

On Wed, Jan 28, 2009 at 4:39 AM, Adam Hardy adam@cyberspaceroad.com wrote:

 Hi Mike,

thanks for the input.

The child objects are mapped entities and they total around 25 * 300, so 7500.

This is their structure:

  private Long id;
  private BigDecimal total;
  private DollarReturn dollarReturn;
  private TestAnalysis testAnalysis;

where DollarReturn and TestAnalysis are also mapped entities. TestAnalysis will have 7500 of these in its child list. They extend a superclass:

  private Long version;
  private Long ownerId;
  private Date created;
  private Date modified;

For what it's worth, they also have several transient properties.

I didn't get

out of memory issues

2009-01-26 Thread Adam Hardy
The webapp I am working on uses OpenJPA (or Hibernate depending on bugs blocking 
progress) and I have just hit a problem that currently looks like it will only 
be solved by violating the current architecture.


I'm hoping I'm wrong and thought I'd ask first.

The scenario is that the user can choose a whole range of financial instruments 
to put in a portfolio, and the webapp grabs them all on submission of the 
request and builds the object graph.


The Manager class, which is also the transactional interface, then creates an Analysis entity to hold performance stats, which may create for itself a large number of small child objects representing past performance.


When the Analysis is done, the Manager call finishes and the transaction commits 
(or I commit the transaction in my unit test), and I get an out-of-memory 
exception.


Presumably it's all the child object inserts causing the problem.

Obviously I would like to do a flush before I run out of memory, but the 
Analysis entity object has no access to the entity manager. Or at least it 
shouldn't have.


The other problem is that the Analysis entity can't really be saved until the 
child objects are all created, so I would have to think of a dirty work-around 
to allow me to save it first, to allow me to flush the child objects.


Can anybody give some advice?


Much appreciated
Adam
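One common mitigation for the scenario above, assuming the children can be written incrementally, is to flush and clear the persistence context every N children so that it never holds all 7,500 managed instances at once. The sketch below is hypothetical (names and batch size invented): it models only the batching arithmetic, with the flush callback standing in for `entityManager.flush()` followed by `entityManager.clear()`.

```java
public class BatchFlushSketch {
    // Iterates once per child and fires `flush` every batchSize children;
    // returns how many flushes occurred. In a real JPA app the callback
    // would call entityManager.flush() then entityManager.clear() so the
    // persisted children can be garbage-collected.
    static int persistInBatches(int totalChildren, int batchSize, Runnable flush) {
        int flushes = 0;
        for (int i = 1; i <= totalChildren; i++) {
            // ... entityManager.persist(child) would go here ...
            if (i % batchSize == 0) {
                flush.run();
                flushes++;
            }
        }
        return flushes;
    }

    public static void main(String[] args) {
        // 7500 children at batch size 500 -> 15 flushes
        System.out.println(persistInBatches(7500, 500, () -> { /* flush+clear */ }));
    }
}
```

Note that after a `clear()` any entity you still hold (such as the Analysis parent) becomes detached, which is why it generally has to be persisted, or at least flushed, before its children.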






orderBy

2009-01-23 Thread Adam Hardy

Is there any chance that I'll be able to do this in the future (like JPA 2):

  <one-to-many name="weightings" mapped-by="collatedRun">
    <order-by>testAnalysis.marketSystem.systemRun.market.symbol</order-by>
  </one-to-many>


instead of writing a Comparator?
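Until the spec supports nested order-by paths, a Comparator along these lines covers the same ordering. The stub classes below are placeholders that mirror only the property path from the mapping above; the real entities would of course differ.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class WeightingComparatorSketch {
    // Stub classes modelling only the navigation path
    // weighting -> testAnalysis -> marketSystem -> systemRun -> market -> symbol
    static class Market { String symbol; Market(String s) { symbol = s; } }
    static class SystemRun { Market market; SystemRun(Market m) { market = m; } }
    static class MarketSystem { SystemRun systemRun; MarketSystem(SystemRun r) { systemRun = r; } }
    static class TestAnalysis { MarketSystem marketSystem; TestAnalysis(MarketSystem m) { marketSystem = m; } }
    static class Weighting { TestAnalysis testAnalysis; Weighting(TestAnalysis t) { testAnalysis = t; } }

    // Orders weightings by the market symbol at the end of the path.
    static final Comparator<Weighting> BY_MARKET_SYMBOL = new Comparator<Weighting>() {
        public int compare(Weighting a, Weighting b) {
            return symbol(a).compareTo(symbol(b));
        }
        private String symbol(Weighting w) {
            return w.testAnalysis.marketSystem.systemRun.market.symbol;
        }
    };

    static Weighting of(String symbol) {
        return new Weighting(new TestAnalysis(new MarketSystem(new SystemRun(new Market(symbol)))));
    }

    public static void main(String[] args) {
        List<Weighting> ws = new ArrayList<Weighting>();
        ws.add(of("GBP")); ws.add(of("AUD")); ws.add(of("EUR"));
        Collections.sort(ws, BY_MARKET_SYMBOL);
        // first element is now the one with symbol "AUD"
        System.out.println(ws.get(0).testAnalysis.marketSystem.systemRun.market.symbol);
    }
}
```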



entity manager behaviour after rollback

2008-11-14 Thread Adam Hardy
I'm trying to work out what is meant to happen to the persistence context for my 
entity manager when my application throws an exception.


I can't work out any specific details about this from the EJB Persistence spec.

I'm using the extended persistence context with Spring transaction management 
around my business objects.


It appears that a relational integrity exception causes the transaction to roll 
back. At this point, do I have to remove or repair the offending entity beans 
myself? Or is that done automatically?


This is what appears to be happening (automatic) but I wanted to check because I 
don't see any reference in the spec.



Thanks
Adam


Re: JPA 2.0 development plans...

2008-10-31 Thread Adam Hardy
Is there a list of new JPA 2.0 spec features which are already covered by 
existing OpenJPA features?


Such as @Dependent or @OrderColumn ?




On Sep 30, 2008, at 9:10 AM, Kevin Sutter wrote:
Due to the upcoming JPA 2.0 development activities, the OpenJPA svn 
repository needs to make a few changes.  Since the JPA 2.0 changes

could be disruptive to current JPA 1.0 processing (at least during
initial development), I would like to create a 1.3.x service branch.
This 1.3.x branch would indicate the end of new development based on
the JPA 1.0 specification.  This 1.3.x branch could be used for any
necessary support activities (or even a potential release, if
necessary), but the mainline trunk would now become the basis for JPA
2.0 development (2.0.0-SNAPSHOT).

All new development activity would continue in trunk, just like the
past. But, we would also have the ability to start experimenting with
some of the new features that have been identified in the early
drafts of the JPA 2.0 specification.  Granted, we only have the
single Early Draft Review version of the JPA 2.0 spec, but it's at
least a start.  Even if we have to modify a few things along the way,
at least we're getting experience with the new specification.  And,
we can initiate a roadmap for the new features and hopefully
encourage new participation by the diverse OpenJPA community.

We need to determine the best date for this cutover.  Since I'm not
 sure on the current development activities of the OpenJPA 
developers, let's use this thread for the discussion.  Since the end 
of Sept is fast approaching, let's shoot for Friday, Oct 31 as the 
cutover date.


Comments or suggestions are welcome!




many-to-many with extra value

2008-10-23 Thread Adam Hardy
Just reading the EJB3 persistence spec and would like to get some knowledgeable 
confirmation that my understanding is correct:


AFAIK if I have a unidirectional many-to-many relationship from A to B, so A has 
a collection of Bs in my object model, there's no way in the current JPA spec to 
cater for mapping an extra value from the join table onto my B object.


The more I think about how it might be implemented behind the scenes, the more I 
think it is probably rife with impossible requirements, but I thought I'd ask.


Alternatively I guess I could map the join table to a new object C which has the 
property / extra value, and to make coding against it easier, give it 
pass-through getters and setters for the B object's properties.


How have others approached this?

Regards
Adam
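The join-table-as-entity idea from the post above can be sketched in plain Java; the class and field names here are invented for illustration, and the JPA mapping (two many-to-one relations from C plus a column for the extra value) is omitted.

```java
public class JoinEntitySketch {
    // B: the target side of the former many-to-many (hypothetical name).
    static class B {
        private final String name;
        B(String name) { this.name = name; }
        String getName() { return name; }
    }

    // C: represents one join-table row. It carries the extra column and
    // offers pass-through getters so callers rarely need to unwrap B.
    static class C {
        private final B b;
        private final int extraValue; // the extra join-table column
        C(B b, int extraValue) { this.b = b; this.extraValue = extraValue; }
        int getExtraValue() { return extraValue; }
        String getName() { return b.getName(); } // pass-through to B
    }

    public static void main(String[] args) {
        C c = new C(new B("bond"), 42);
        System.out.println(c.getName() + ":" + c.getExtraValue());
    }
}
```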



Re: h2 database with openjpa

2008-06-03 Thread Adam Hardy

Hi David,

in case you are still looking for a reply: I use H2 with OpenJPA and I don't see 
this problem. It's difficult to say what the problem is because the exception 
doesn't mention what operation it is that BasicDataSource doesn't support.


Are you sure your connection is good?

David Hofmann on 20/05/08 16:57, wrote:

I was trying to configure openejb (openjpa as the JPA provider) with
H2 Database. I get the exception shown here.

H2 Version = 1.0.72
OpenJPA/EJB Version = 3.0

persistence.xml configuration:

<persistence-unit name="crmPU" transaction-type="RESOURCE_LOCAL">
  <properties>
    <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema(ForeignKeys=true)"/>
    <property name="openjpa.jdbc.DBDictionary" value="org.apache.openjpa.jdbc.sql.H2Dictionary"/>
    <property name="openjpa.ConnectionURL" value="jdbc:h2:file:c:\dev\db\acercatepy"/>
    <property name="openjpa.ConnectionDriverName" value="org.h2.Driver"/>
    <property name="openjpa.ConnectionUserName" value="sa"/>
    <property name="openjpa.ConnectionPassword" value=""/>
    <property name="openjpa.Log" value="DefaultLevel=TRACE, Tool=TRACE"/>
  </properties>
</persistence-unit>

I have done a lot of searching but I couldn't find the solution.

I will appreciate it a lot if somebody can help show me where I am going wrong. Probably I am not understanding how the configuration part of openejb works.

Thank you very much in advance

Greetings,

P.D.: Sorry about my English, I am still learning it
Exception in thread "main" javax.ejb.EJBException: The bean encountered a non-application exception; nested exception is:
openjpa-1.0.1-r420667:592145 nonfatal general error org.apache.openjpa.persistence.PersistenceException: There were errors initializing your configuration: openjpa-1.0.1-r420667:592145 fatal store error org.apache.openjpa.util.StoreException: Not supported by BasicDataSource
    at org.apache.openjpa.jdbc.schema.DataSourceFactory.installDBDictionary(DataSourceFactory.java:234)
    at org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.getConnectionFactory(JDBCConfigurationImpl.java:709)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.openjpa.lib.conf.ConfigurationImpl.instantiateAll(ConfigurationImpl.java:289)
    at org.apache.openjpa.conf.OpenJPAConfigurationImpl.instantiateAll(OpenJPAConfigurationImpl.java:1463)
    at org.apache.openjpa.kernel.AbstractBrokerFactory.makeReadOnly(AbstractBrokerFactory.java:638)
    at org.apache.openjpa.kernel.AbstractBrokerFactory.newBroker(AbstractBrokerFactory.java:169)
    at org.apache.openjpa.kernel.DelegatingBrokerFactory.newBroker(DelegatingBrokerFactory.java:142)
    at org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:192)
    at org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:56)
    at org.apache.openejb.persistence.JtaEntityManagerRegistry.getEntityManager(JtaEntityManagerRegistry.java:105)
    at org.apache.openejb.persistence.JtaEntityManager.getEntityManager(JtaEntityManager.java:61)
    at org.apache.openejb.persistence.JtaEntityManager.persist(JtaEntityManager.java:97)
    at com.acercatepy.personalsbe.ejb.EJBServerImpl.giveMeResults(EJBServerImpl.java:23)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.openejb.core.interceptor.ReflectionInvocationContext$Invocation.invoke(ReflectionInvocationContext.java:158)
    at org.apache.openejb.core.interceptor.ReflectionInvocationContext.proceed(ReflectionInvocationContext.java:141)
    at org.apache.openejb.core.interceptor.InterceptorStack.invoke(InterceptorStack.java:67)
    at org.apache.openejb.core.stateless.StatelessContainer._invoke(StatelessContainer.java:210)
    at org.apache.openejb.core.stateless.StatelessContainer._invoke(StatelessContainer.java:188)
    at org.apache.openejb.core.stateless.StatelessContainer.invoke(StatelessContainer.java:165)
    at org.apache.openejb.core.ivm.EjbObjectProxyHandler.businessMethod(EjbObjectProxyHandler.java:217)
    at org.apache.openejb.core.ivm.EjbObjectProxyHandler._invoke(EjbObjectProxyHandler.java:77)
    at org.apache.openejb.core.ivm.BaseEjbProxyHandler.invoke(BaseEjbProxyHandler.java:321)
    at org.apache.openejb.util.proxy.Jdk13InvocationHandler.invoke(Jdk13InvocationHandler.java:49)
    at $Proxy19.giveMeResults(Unknown Source)
    at com.acercatepy.personalsbe.test.TestAll.main(TestAll.java:29)
Caused by: java.lang.UnsupportedOperationException: Not supported by

Re: General restrictions

2008-05-07 Thread Adam Hardy

alex_ro_bv on 07/05/08 10:39, wrote:

Hi all, I was wondering if a general restriction is possible for my database using OpenJPA. What I mean is that in my database all my tables have some common fields, for example deletedflag. So I was wondering if there is any way to exclude all these records in my business logic somehow, so that for any query I execute via OpenJPA, a new condition is implied: deletedflag=0. This should be applied to all the joins too. I was hoping for some easy setting in orm.xml or something like that where I could add conditions for all my tables, but couldn't find any. Thank you very much for your response.


No, there is no JPA capability for filters in the Hibernate sense. You could implement GROUP BY ... HAVING in all your queries.


I would use database views.
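Absent a provider-neutral filter mechanism, one portable stopgap besides database views is to centralise the predicate in a query-building helper so the condition lives in one place. A sketch with illustrative names (nothing here comes from the post):

```java
public class SoftDeleteQueries {
    // Appends the soft-delete predicate to a JPQL string, choosing
    // WHERE or AND depending on whether the query already filters.
    static String withLiveFilter(String jpql, String alias) {
        String glue = jpql.toLowerCase().contains(" where ") ? " AND " : " WHERE ";
        return jpql + glue + alias + ".deletedflag = 0";
    }

    public static void main(String[] args) {
        System.out.println(withLiveFilter("SELECT a FROM Account a", "a"));
        System.out.println(withLiveFilter("SELECT a FROM Account a WHERE a.id > 5", "a"));
    }
}
```

This only covers queries your own code builds; it does nothing for relationship traversal, which is where views (or a provider-specific filter) remain the stronger answer.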


Re: maintaining bi-directional relationships

2008-05-07 Thread Adam Hardy

Oh sorry, I should have said, the app must be JPA-compliant / portable.

Michael Vorburger on 07/05/08 17:44, wrote:

Why not use OpenJPA Managed Inverses,
http://openjpa.apache.org/docs/latest/manual/ref_guide_inverses.html ?


-Original Message-
From: Adam Hardy [mailto:[EMAIL PROTECTED] 
Sent: mercredi, 7. mai 2008 17:51

To: users@openjpa.apache.org
Subject: maintaining bi-directional relationships

This is a design question really, sorry it's a little OT.

I assume I have responsibility for maintaining parent - child
relationships reflected by the parent.getChildren collection and
child.getParent entity at the point where the change occurs.

Is there a good pattern to implement changes to both the children
collection on the parent and the parent entity on the child?

One approach: setting field-access on the entity mapping, and adding
code to the setters with some sort of mechanism to stop infinite
looping.
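The guarded-setter approach mentioned above can be sketched as follows; Parent and Child are placeholder names, and the guards are what stop the two mutually-calling methods from recursing forever.

```java
import java.util.ArrayList;
import java.util.List;

public class BidirectionalSketch {
    static class Parent {
        final List<Child> children = new ArrayList<Child>();
        void addChild(Child c) {
            if (!children.contains(c)) {   // guard: already linked -> stop
                children.add(c);
                c.setParent(this);         // keep the inverse side in step
            }
        }
    }
    static class Child {
        Parent parent;
        void setParent(Parent p) {
            if (parent != p) {             // guard: already linked -> stop
                parent = p;
                if (p != null) p.addChild(this);
            }
        }
    }
    public static void main(String[] args) {
        Parent p = new Parent();
        Child c = new Child();
        p.addChild(c); // one call wires both sides, no infinite loop
        System.out.println(c.parent == p && p.children.size() == 1);
    }
}
```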




Re: update always called on commits (still)

2008-05-07 Thread Adam Hardy
A question for the OpenJPA developers: regarding the different types of byte code enhancement and the disparity in functionality between them, does this reflect a separate code base for enhanced versus non-enhanced OpenJPA?




Tedman Leung on 07/05/08 22:01, wrote:
I eventually did compile-time enhancement (as the agent was way too problematic) and yeah, all my problems went away. Thanks for the info.




On Wed, May 07, 2008 at 11:35:55AM -0700, Tedman Leung wrote:
Well according to the documentation unenhanced will become enhanced at 
runtime via Deploy-time enhancement, Java 6 class retransformation, 
Java 5 class redefinition.


I think what I was getting from Michael's posting is that the automatic 
enhancement #3 has issues so right now I'm trying to weigh option #1 and 
option #2.


I originally thought option #2 would be simple but unfortunately it's not, as it requires all the jars to be in the initial classpath, which means I'm now messing with tomcat's original classpath and maintaining a classpath outside of the automatically generated WEB-INF/lib/* ...





Just to clarify, I believe there are three possible enhancement scenarios:

1. build-time enhancement - running 
org.apache.openjpa.enhance.PCEnhancer over your bytecode before you run 
your app.


2. runtime enhancement - giving the -javaagent:openjpa.jar argument to the JVM when you run your app.


3. unenhanced - neither 1 nor 2.

Have you observed this problem in all three scenarios? Just (2) or (3)?




Re: question about extended context and transactions

2008-05-01 Thread Adam Hardy

Adam Hardy on 30/04/08 21:22, wrote:
If I am running JPA in extended persistence context, when I call 
EntityManager.merge() within a transaction and then commit the 
transaction, should it or shouldn't it execute the SQL?


I assumed that it would but I have a situation where it won't unless I 
call EntityManager.flush() before committing the transaction.


I'd really appreciate an answer on this one from anyone - I find the 
documentation either too obtuse or too scant to get myself a firm enough 
understanding of the situation I'm tackling at the moment.





question about extended context and transactions

2008-04-30 Thread Adam Hardy
If I am running JPA in extended persistence context, when I call 
EntityManager.merge() within a transaction and then commit the transaction, 
should it or shouldn't it execute the SQL?


I assumed that it would but I have a situation where it won't unless I call 
EntityManager.flush() before committing the transaction.



Regards
Adam



currencies

2008-04-25 Thread Adam Hardy
From googling and a quick scan of the mailing list archives, it looks as if I 
can't map java.util.Currency directly to a database column using pure JPA.


Unless anyone can tell me I'm wrong, I'm thinking of doing it with my own enum 
of the currencies that my app uses and map it with


<enumerated>

Then the question becomes: what would be more future-compatible - ORDINAL or 
STRING?


I assume STRING because java.util.Currency has no ordinal, but if I use STRING, 
won't JPA save the String TradedCurrency.USD or TradedCurrency.EUR instead of 
just USD or EUR?



Thanks
Adam



Re: currencies

2008-04-25 Thread Adam Hardy
Yes, I just discovered that myself - I was worried because I see a lot of 
TradeCurrency.USD come out in logs and in JSPs where I guess toString() is used.


I figure it's workable this way, although I think custom type converters for 
Java to SQL has got to be near if not at the top of my list of JPA improvements.


Tedman Leung on 25/04/08 15:24, wrote:
when I use enums it just saves the instance name, i.e. USD, not TradeCurrency.USD. I believe it stores the result of Enum.name().
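A quick check of what the STRING strategy actually stores (TradedCurrency here is a stand-in for the app's own enum): EnumType.STRING persists Enum.name(), which is the bare constant name, never the class-qualified form seen in logs.

```java
public class CurrencyEnumDemo {
    enum TradedCurrency { USD, EUR, GBP } // hypothetical app enum

    public static void main(String[] args) {
        // name() is what EnumType.STRING persists: just "USD".
        System.out.println(TradedCurrency.USD.name());     // USD
        // toString() defaults to the same value; the qualified
        // "TradedCurrency.USD" in logs comes from code printing the
        // class name as well, not from the enum itself.
        System.out.println(TradedCurrency.USD.toString()); // USD
    }
}
```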


From googling and a quick scan of the mailing list archives, it looks as if 
I can't map java.util.Currency directly to a database column using pure JPA.


Unless anyone can tell me I'm wrong, I'm thinking of doing it with my own 
enum of the currencies that my app uses and map it with


<enumerated>

Then the question becomes: what would be more future-compatible - ORDINAL 
or STRING?


I assume STRING because java.util.Currency has no ordinal, but if I use 
STRING, won't JPA save the String TradedCurrency.USD or TradedCurrency.EUR 
instead of just USD or EUR?




field strategies / data type handlers making into spec?

2008-04-10 Thread Adam Hardy
Does anyone know what the forecast is for the JPA specification to adopt data type handler configuration to take care of such things as Money(amount, currency)?


If at all?


Thanks
Adam


Re: exception from merging

2008-04-08 Thread Adam Hardy
I have reduced this to a stand-alone unit test and also tested it against 
Toplink and Hibernate, with which it works.


So I'm looking at a pure OpenJPA issue here.

Here is the low-down:

- Parent - child entities mapped using XML as opposed to annotations
- both inherit a mapped superclass
- both have a prepersist and preupdate listener configured
- not using enhancement

This is all I do:

EntityManager entityManager =
db.entityManagerFactory.createEntityManager();
entityManager.getTransaction().begin();
Genus genus = entityManager.find(Genus.class, new Long(1));
entityManager.getTransaction().commit();
entityManager.close();
// now detached
entityManager = db.entityManagerFactory.createEntityManager();
entityManager.getTransaction().begin();
entityManager.merge(genus);

and it throws an exception. If there is a child row in the db, I get the 
exception below.


If there is no child in existence, I get a NullPointerException.

If I enhance the entities, all is hunkydory.

This is using OpenJPA v1.1.0 snapshot from February.

Do you want a bug report filed?

Regards
Adam



Adam Hardy on 07/04/08 16:27, wrote:


I called merge on my parent entity because it is detached.

You can see from the stacktrace that something in merge() is not happening.

$Proxy13 seems to be the name of the parent's child collection property, 
I can see from debugging.


Does anybody recognise the problem?


openjpa-1.1.0-SNAPSHOT-r420667:609825 fatal general error
org.apache.openjpa.persistence.PersistenceException: Unable to create a second class object proxy for final class class $Proxy13.
    at org.apache.openjpa.util.ProxyManagerImpl.assertNotFinal(ProxyManagerImpl.java:555)
    at org.apache.openjpa.util.ProxyManagerImpl.generateProxyCollectionBytecode(ProxyManagerImpl.java:524)
    at org.apache.openjpa.util.ProxyManagerImpl.getFactoryProxyCollection(ProxyManagerImpl.java:373)
    at org.apache.openjpa.util.ProxyManagerImpl.copyCollection(ProxyManagerImpl.java:192)
    at org.apache.openjpa.kernel.AttachStrategy.copyCollection(AttachStrategy.java:342)
    at org.apache.openjpa.kernel.AttachStrategy.attachCollection(AttachStrategy.java:319)
    at org.apache.openjpa.kernel.AttachStrategy.replaceList(AttachStrategy.java:357)
    at org.apache.openjpa.kernel.AttachStrategy.attachField(AttachStrategy.java:222)
    at org.apache.openjpa.kernel.VersionAttachStrategy.attach(VersionAttachStrategy.java:151)
    at org.apache.openjpa.kernel.AttachManager.attach(AttachManager.java:241)
    at org.apache.openjpa.kernel.AttachManager.attach(AttachManager.java:101)
    at org.apache.openjpa.kernel.BrokerImpl.attach(BrokerImpl.java:3196)
    at org.apache.openjpa.kernel.DelegatingBroker.attach(DelegatingBroker.java:1142)
    at org.apache.openjpa.persistence.EntityManagerImpl.merge(EntityManagerImpl.java:736)
    at org.permacode.atomictest.jpa.JpaSpeciesDao.persist(JpaSpeciesDao.java:81)







Re: exception from merging

2008-04-08 Thread Adam Hardy

OK, here it is:

https://issues.apache.org/jira/browse/OPENJPA-560

Hopefully that makes it plain where and when it is happening.

regards
Adam

Michael Dick on 08/04/08 16:17, wrote:

Hi Adam,

Please do open a JIRA. If you wouldn't mind including a testcase that
demonstrates the problem that would be much appreciated too.

-Mike

On Tue, Apr 8, 2008 at 7:16 AM, Adam Hardy [EMAIL PROTECTED]
wrote:


I have reduced this to a stand-alone unit test and also tested it against
Toplink and Hibernate, with which it works.

So I'm looking at a pure OpenJPA issue here.

Here is the low-down:

- Parent - child entities mapped using XML as opposed to annotations
- both inherit a mapped superclass
- both have a prepersist and preupdate listener configured
- not using enhancement

This is all I do:

EntityManager entityManager =
   db.entityManagerFactory.createEntityManager();
entityManager.getTransaction().begin();
Genus genus = entityManager.find(Genus.class, new Long(1));
entityManager.getTransaction().commit();
entityManager.close();
// now detached
entityManager = db.entityManagerFactory.createEntityManager();
entityManager.getTransaction().begin();
entityManager.merge(genus);

and it throws an exception. If there is a child row in the db, I get the
exception below.

If there is no child in existence, I get a NullPointerException.

If I enhance the entities, all is hunkydory.

This is using OpenJPA v1.1.0 snapshot from February.

Do you want a bug report filed?

Regards Adam




Adam Hardy on 07/04/08 16:27, wrote:


I called merge on my parent entity because it is detached.

You can see from the stacktrace that something in merge() is not 
happening.


$Proxy13 seems to be the name of the parent's child collection property, 
I can see from debugging.


Does anybody recognise the problem?


openjpa-1.1.0-SNAPSHOT-r420667:609825 fatal general error
org.apache.openjpa.persistence.PersistenceException: Unable to create a second class object proxy for final class class $Proxy13.
    at org.apache.openjpa.util.ProxyManagerImpl.assertNotFinal(ProxyManagerImpl.java:555)
    at org.apache.openjpa.util.ProxyManagerImpl.generateProxyCollectionBytecode(ProxyManagerImpl.java:524)
    at org.apache.openjpa.util.ProxyManagerImpl.getFactoryProxyCollection(ProxyManagerImpl.java:373)
    at org.apache.openjpa.util.ProxyManagerImpl.copyCollection(ProxyManagerImpl.java:192)
    at org.apache.openjpa.kernel.AttachStrategy.copyCollection(AttachStrategy.java:342)
    at org.apache.openjpa.kernel.AttachStrategy.attachCollection(AttachStrategy.java:319)
    at org.apache.openjpa.kernel.AttachStrategy.replaceList(AttachStrategy.java:357)
    at org.apache.openjpa.kernel.AttachStrategy.attachField(AttachStrategy.java:222)
    at org.apache.openjpa.kernel.VersionAttachStrategy.attach(VersionAttachStrategy.java:151)
    at org.apache.openjpa.kernel.AttachManager.attach(AttachManager.java:241)
    at org.apache.openjpa.kernel.AttachManager.attach(AttachManager.java:101)
    at org.apache.openjpa.kernel.BrokerImpl.attach(BrokerImpl.java:3196)
    at org.apache.openjpa.kernel.DelegatingBroker.attach(DelegatingBroker.java:1142)
    at org.apache.openjpa.persistence.EntityManagerImpl.merge(EntityManagerImpl.java:736)
    at org.permacode.atomictest.jpa.JpaSpeciesDao.persist(JpaSpeciesDao.java:81)











persisting an entity and JPA behaviour with referenced entities

2008-04-07 Thread Adam Hardy
I've got an issue with the persist operation, when I use a detached entity as 
one of the entity's referenced entities.


OpenJPA throws the
org.apache.openjpa.persistence.EntityExistsException: Attempt to persist 
detached object 
[EMAIL PROTECTED].


The situation is this: my MVC layer has received a new entity which it must 
create. The parent entity for this is found in a cache, in a detached state.


What I'd like to know, is why is JPA forcing me to merge this detached entity 
before allowing me to persist the new child?


It means I can't use the cache, or I have to program the DAO to merge all 
referenced entities. This latter option seems like a job that JPA should be 
doing. JPA knows this parent is a detached entity, so why can't it merge the 
managed entity?


I can't see any text in the EJB spec that would mandate this behaviour, yet 
Hibernate does it too.


Regards
Adam


exception from merging

2008-04-07 Thread Adam Hardy


I called merge on my parent entity because it is detached.

You can see from the stacktrace that something in merge() is not happening.

$Proxy13 seems to be the name of the parent's child collection property, I can 
see from debugging.


Does anybody recognise the problem?


openjpa-1.1.0-SNAPSHOT-r420667:609825 fatal general error
org.apache.openjpa.persistence.PersistenceException: Unable to create a second class object proxy for final class class $Proxy13.
    at org.apache.openjpa.util.ProxyManagerImpl.assertNotFinal(ProxyManagerImpl.java:555)
    at org.apache.openjpa.util.ProxyManagerImpl.generateProxyCollectionBytecode(ProxyManagerImpl.java:524)
    at org.apache.openjpa.util.ProxyManagerImpl.getFactoryProxyCollection(ProxyManagerImpl.java:373)
    at org.apache.openjpa.util.ProxyManagerImpl.copyCollection(ProxyManagerImpl.java:192)
    at org.apache.openjpa.kernel.AttachStrategy.copyCollection(AttachStrategy.java:342)
    at org.apache.openjpa.kernel.AttachStrategy.attachCollection(AttachStrategy.java:319)
    at org.apache.openjpa.kernel.AttachStrategy.replaceList(AttachStrategy.java:357)
    at org.apache.openjpa.kernel.AttachStrategy.attachField(AttachStrategy.java:222)
    at org.apache.openjpa.kernel.VersionAttachStrategy.attach(VersionAttachStrategy.java:151)
    at org.apache.openjpa.kernel.AttachManager.attach(AttachManager.java:241)
    at org.apache.openjpa.kernel.AttachManager.attach(AttachManager.java:101)
    at org.apache.openjpa.kernel.BrokerImpl.attach(BrokerImpl.java:3196)
    at org.apache.openjpa.kernel.DelegatingBroker.attach(DelegatingBroker.java:1142)
    at org.apache.openjpa.persistence.EntityManagerImpl.merge(EntityManagerImpl.java:736)
    at org.permacode.atomictest.jpa.JpaSpeciesDao.persist(JpaSpeciesDao.java:81)



Re: persisting an entity and JPA behaviour with referenced entities

2008-04-07 Thread Adam Hardy

So theoretically, a merge will also persist?

Actually I started out this project assuming it would, but because it has to work with TopLink as well as OpenJPA, I stopped using merge() when I came across a bug in the TopLink implementation.


But TopLink aside, thanks for the reminder.



Michael Dick on 07/04/08 17:06, wrote:

It looks like the persist is being cascaded to the detached entity. If
that's the case then we're throwing the exception per these bullets in the
JPA spec :

3.2.1 Persisting an Entity Instance
A new entity instance becomes both managed and persistent by invoking the
persist method on it or
by cascading the persist operation.
The semantics of the persist operation, applied to an entity X are as
follows:
   snip
• If X is a detached object, the EntityExistsException may be thrown
when the persist
operation is invoked, or the EntityExistsException or another
PersistenceException
may be thrown at flush or commit time.
• For all entities Y referenced by a relationship from X, if the
relationship to Y has been annotated
with the cascade element value cascade=PERSIST or cascade=ALL, the persist
operation is applied to Y.

If you were to merge the new entity instead of persisting it then the merge
action would be cascaded to the parent entity. It would become managed, but
that might be one way to resolve the issue you're hitting.

-Mike

On Mon, Apr 7, 2008 at 7:09 AM, Adam Hardy [EMAIL PROTECTED]
wrote:


I've got an issue with the persist operation, when I use a detached entity
as one of the entity's referenced entities.

OpenJPA throws the
org.apache.openjpa.persistence.EntityExistsException: Attempt to persist
detached object
[EMAIL PROTECTED]
.

The situation is this: my MVC layer has received a new entity which it
must create. The parent entity for this is found in a cache, in a detached
state.

What I'd like to know, is why is JPA forcing me to merge this detached
entity before allowing me to persist the new child?

It means I can't use the cache, or I have to program the DAO to merge all 
referenced entities. The latter option seems like a job that JPA should be 
doing: JPA knows this parent is a detached entity, so why can't it merge it 
into a managed entity itself?

I can't see any text in the EJB spec that would mandate this behaviour,
yet Hibernate does it too.




Re: Database columns not used by OpenJPA

2008-03-31 Thread Adam Hardy

David Goodenough on 31/03/08 11:06, wrote:

I have an application for which I would like to use OpenJPA, and it involves
the PostgreSQL tsearch (full text search) facility.

Now obviously I would not expect OpenJPA to understand or use (directly) 
tsearch, so I assume that I can do the actual searches using the facility that

OpenJPA has to use real SQL.

But in order to use tsearch, I need to set up a column of type tsvector, 
which of course is not something that OpenJPA can be expected to understand.

So the question is how do I tell it to ignore it, i.e. to allow it to exist
but not to try to use it.


Co-incidentally I came across a problem with an unmapped property on an entity 
causing OpenJPA to grind to a halt, until I marked it as transient, but I think 
this is the other side of the coin compared to your situation.


You don't actually say what you do or do not want to do with the column.

Presumably you don't have a property on your java entity for the column and you 
are not mapping it?


AFAIK JPA will ignore columns that are not mapped, so you should be OK, but I 
guess you're asking because it's not OK?


Re: Could not locate metadata (orm.xml)

2008-03-31 Thread Adam Hardy

Hi Lars,

that is a weird error that I haven't seen before - it looks like OpenJPA is 
confusing the table name with the class name.


In case it helps, you don't need to reference the orm.xml in the 
persistence.xml. Not sure if it's a problem when you do, try without.


From your email, orm.xml looks like it's in the same directory as 
persistence.xml. Perhaps you should double-check.


my orm.xml header is slightly different:

<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://java.sun.com/xml/ns/persistence/orm 
http://java.sun.com/xml/ns/persistence/orm_1_0.xsd"

  version="1.0">

I do not run the OpenJPA enhancer and you shouldn't need to, but I am using the 
OpenJPA v1.1.0 snapshot, not 1.0.x.





Lars Vogel on 30/03/08 19:29, wrote:

Hi,

Can anyone point me to a working standalone example where an orm.xml file
is used instead of annotations?

I'm able to build a small example with annotations but if I try the same
with orm.xml I receive the following error:

Exception in thread main openjpa-1.0.2-r420667:627158 fatal user error
org.apache.openjpa.persistence.ArgumentException: Could not locate metadata
for the class using alias MYAPPLICATION.PEOPLETABLE. This could mean that
the OpenJPA enhancer or load-time weaver was not run on the type whose alias
is MYAPPLICATION.PEOPLETABLE. Registered alias mappings: {
MYAPPLICATION.PEOPLETABLE=null, Person=[class datamodel.Person]}


I have the following orm.xml

<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence/orm orm_1_0.xsd"
version="1.0">
<entity class="datamodel.Person">
  <table name="MYAPPLICATION.PEOPLETABLE"/>
<attributes>
<id name="id" />
<basic name="firstName" />
<basic name="lastName" />
<transient name="nonsenseField" />
</attributes>
</entity>
</entity-mappings>

I have the following persistence.xml

<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="1.0">

<!--
A persistence unit is a set of listed persistent entities as well
the configuration of an EntityManagerFactory. We configure each
example in a separate persistence-unit.
-->
<persistence-unit name="people" transaction-type="RESOURCE_LOCAL">

<!--
The default provider can be OpenJPA, or some other product.
This element is optional if OpenJPA is the only JPA provider
in the current classloading environment, but can be specified
in cases where there are multiple JPA implementations available.
-->

<provider>
org.apache.openjpa.persistence.PersistenceProviderImpl
</provider>
<mapping-file>META-INF/orm.xml</mapping-file>

<!-- We must enumerate each entity in the persistence unit -->
<class>datamodel.Person</class>

<properties>
<property name="openjpa.ConnectionURL"

value="jdbc:derby:C:/DerbyDatabases/hellojpa-database9;create=true" />
<property name="openjpa.ConnectionDriverName"
value="org.apache.derby.jdbc.EmbeddedDriver" />
<property name="openjpa.ConnectionUserName" value="" />
<property name="openjpa.ConnectionPassword" value="" />

<!--
Tell OpenJPA to automatically create tables in the database
for entities. Note that this should be disabled when
running against a production database, since you probably
don't want to be altering the schema at runtime.
-->
<property name="openjpa.jdbc.SynchronizeMappings"
value="buildSchema" />

</properties>
</persistence-unit>

</persistence>




Re: Database columns not used by OpenJPA

2008-03-31 Thread Adam Hardy

David Goodenough on 31/03/08 13:37, wrote:

On Monday 31 March 2008, Adam Hardy wrote:

David Goodenough on 31/03/08 11:06, wrote:

I have an application for which I would like to use OpenJPA, and it
involves the PostgreSQL tsearch (full text search) facility.

Now obviously I would not expect OpenJPA to understand or use (directly)
tsearch, so I assume that I can do the actual searches using the facility
that OpenJPA has to use real SQL.

But in order to use tsearch, I need to set up a column of type tsvector,
which of course is not something that OpenJPA can be expected to
understand. So the question is how do I tell it to ignore it, i.e. to
allow it to exist but not to try to use it.

Co-incidentally I came across a problem with an unmapped property on an
entity causing OpenJPA to grind to a halt, until I marked it as transient,
but I think this is the other side of the coin compared to your situation.

You don't actually say what you do or do not want to do with the column.

Presumably you don't have a property on your java entity for the column and
you are not mapping it?

AFAIK JPA will ignore columns that are not mapped, so you should be OK, but
I guess you're asking because it's not OK?


Actually that is exactly what I want, I just could not find the place in the
manual that said it.  


I need to ignore the column because I do not need to try to map a column
of type tsvector into a Java type, as I will only use it in real SQL searches. 
The results of these searches only need the normal columns.  

That is fine for normal operations, but is there any way to mark a column as 
one to be retained if I use openjpa.jdbc.SynchronizeMappings? I would not want

to do this normally, just as part of an upgrade process so that the DB is
brought from whatever state it is in to the current schema.


Sorry, I've never used openjpa.jdbc.SynchronizeMappings, so I don't know. Maybe 
someone else here does.


Re: Primary key field generation with postgres

2008-03-27 Thread Adam Hardy

You might want to make that schema.toLowerCase().

Postgres diverges from the JDBC spec by folding everything to lower case, and it 
won't find an upper-case schema.


I raised that as a bug with Postgres and their developers told me it was 
unlikely to be changed any time soon.
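
For what it's worth, the schema-prefixing in the patch below is plain string 
handling, so it can be sketched standalone. This is a hypothetical helper (the 
class and method names are mine, not OpenJPA's) that also folds the schema to 
lower case as suggested:

```java
// Hypothetical sketch, not the actual OpenJPA patch: qualify a generated-key
// sequence name with its schema, lower-casing the schema so it matches the
// identifiers Postgres stores in its catalogs.
public class SequenceNames {
    public static String qualify(String schema, String sequence) {
        if (schema == null || schema.isEmpty()) {
            return sequence; // no schema configured: use the bare sequence name
        }
        return schema.toLowerCase() + "." + sequence;
    }
}
```

Like the patch, this does not quote or escape reserved words in either name.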


roger.keays on 27/03/08 12:15, wrote:

Here is my patch for OpenJPA 1.0.2. You might prefer to move the code to the
superclass though.

Index:
openjpa-jdbc/src/main/java/org/apache/openjpa/jdbc/sql/PostgresDictionary.java
===
---
openjpa-jdbc/src/main/java/org/apache/openjpa/jdbc/sql/PostgresDictionary.java 
(revision 641780)

+++
openjpa-jdbc/src/main/java/org/apache/openjpa/jdbc/sql/PostgresDictionary.java 
(working copy)

@@ -149,6 +149,21 @@
 STORE, VACUUM, VERBOSE, VERSION,
 }));
 }
+
+/**
+ * Prepend schema names to sequence names if there is one. This 
+ * method does not escape reserved words in the schema name or 
+ * sequence name.

+ */
+protected String getGeneratedKeySequenceName(Column col) {
+String sequence = super.getGeneratedKeySequenceName(col);
+String schema = col.getSchemaName();
+if (schema != null && schema.length() > 0) {
+return schema + "." + sequence;
+} else {
+return sequence;
+}
+}
 
 public Date getDate(ResultSet rs, int column)

 throws SQLException {


Marc LaPierre wrote:

It seems like JPA isn't finding your sequence in the ves schema.

Are you sure that your sequence exists in Postgres?
Are you able to run select currval('user_id_seq') from the sql
console?


Marc


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Wednesday, February 13, 2008 8:25 AM
To: users@openjpa.apache.org
Subject: Primary key field generation with postgres

I use Postgres 8.2 with jdbc Type 3 driver.

I have got the following table definition:



create table ves.user

(id serial not null,

 user varchar(20) not null,

 password varchar(20) not null,

 constraint pk_ves_user primary key(id)

);





My PAO Class looks like:



@Entity

@Table(schema="ves", name="user")

public class UserPAO implements Serializable {

@Id

@GeneratedValue(strategy=IDENTITY)

@Column(insertable=false, updatable = false)

private long id;





Trying to insert a new user results in a persistence exception. The
invalid statement is reported

As: select currval('user_id_seq')



But the table is in the schema ves, so the correct name of the
sequence is ves.user_id_seq.

If I put the table into the public schema, and omit the schema=ves
statement in the @Table

annotation, all works.



How can I use automatic key generation with Postgres without putting the
table into the public schema?

Table generation and all other things did not work.




equals and hashCode methods

2008-03-26 Thread Adam Hardy
I put a superclass on all my entities for a couple of general properties that 
they all share, and now I'm considering putting my equals() and hashCode() 
methods into the superclass as well, with reflection to loop over the array of 
child methods, calling whatever POJO getters are present.


The advantage in terms of not needing to maintain the equals, hashCode and 
toString methods on every entity is quite attractive, but I'm worried about


(a) performance
(b) any nasty surprises it might cause in JPA entity management

With (a), performance will take a hit if an entity has large collections of 
one-to-many or many-to-many related entities. I might be able to get around 
that quite easily though.


With (b) it seems more problematic: for instance, calling equals() might get 
stuck in an endless loop if I have any circular relationships in my model, which 
would otherwise be benign if only ever lazily loaded.


Sorry that this isn't directly an OpenJPA question.

This is the kind of thing I'm thinking of:

public boolean equals(Object object) {
if (this == object) return true;
if (object == null) return false;
if (!(object instanceof TradeHistory)) return false;
TradeHistory other = (TradeHistory) object;
boolean equal = false;
Method[] methods = this.getClass().getMethods();
for (int i = 0; i < methods.length; i++) {
Method method = methods[i];
if (method.getName().equalsIgnoreCase(GETCLASS)) continue;
if ((method.getName().startsWith("get"))
 && (method.getParameterTypes().length == 0)) {
try {
Method otherMethod =
other.getClass().getMethod(method.getName(),
new Class[] {});
equal =
isEqual(method.invoke(this, new Object[] {}), otherMethod
.invoke(other, new Object[] {}));
}
catch (Exception e) {
}
}
}
return equal;
}
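
A self-contained variant of the snippet above, with hypothetical names and a 
tiny demo bean; it deliberately ignores the lazy-loading and circular-reference 
concerns discussed here, so it is only a sketch of the reflective comparison 
itself:

```java
import java.lang.reflect.Method;
import java.util.Objects;

// Sketch of reflective getter-based equality, assuming conventional
// no-arg JavaBean getters; not cycle-safe for circular relationships.
public class ReflectiveEquals {

    public static boolean gettersEqual(Object a, Object b) {
        if (a == b) return true;
        if (a == null || b == null || a.getClass() != b.getClass()) return false;
        for (Method m : a.getClass().getMethods()) {
            // skip non-getters, getClass(), and any getter taking arguments
            if (!m.getName().startsWith("get")
                    || m.getName().equals("getClass")
                    || m.getParameterTypes().length != 0) continue;
            try {
                if (!Objects.equals(m.invoke(a), m.invoke(b))) return false;
            } catch (ReflectiveOperationException e) {
                return false; // treat an inaccessible getter as inequality
            }
        }
        return true;
    }

    // tiny demo bean with one conventional getter
    public static class Point {
        private final int x;
        public Point(int x) { this.x = x; }
        public int getX() { return x; }
    }
}
```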


Re: difference between openJPA and Hibernate?

2008-03-24 Thread Adam Hardy

Mick Knutson on 22/03/08 18:52, wrote:

Can someone please point me to a tutorial or something that will help me
understand the difference between openJPA and Hibernate?



Have a look in the archives over the last month. There was a post from Rick 
Hightower asking for comparative opinions about the different JPA packages out 
there. He posted a link to a discussion somewhere else too. You should find 
something useful there, I guess.


Re: Issues trying to run OpenJPA, DBUnit example.

2008-03-21 Thread Adam Hardy

Hi Mick,

From the fact that you don't show any code that instantiates the 
entityManagerFactory variable, I assume you get this NPE because the 
instantiation failed, so the NPE is effectively masking the real exception from 
JPA. You need to turn up the logging to reveal the disappearing stacktrace, or 
find the code that calls Persistence.createEntityManagerFactory() and make 
sure you catch the exception and log the stacktrace.
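
The advice above can be sketched generically (a hypothetical helper with no JPA 
dependency): fail loudly at factory-creation time instead of letting a null 
field surface later as an NPE.

```java
import java.util.function.Supplier;

// Hypothetical sketch: wrap a factory lookup so the real failure is logged
// at creation time rather than masked by a later NullPointerException.
public class SafeFactory {
    public static <T> T createOrFail(Supplier<T> factory, String name) {
        try {
            T instance = factory.get();
            if (instance == null) {
                throw new IllegalStateException(name + " factory returned null");
            }
            return instance;
        } catch (RuntimeException e) {
            // log the real stacktrace here, then rethrow
            System.err.println("Failed to create " + name + ": " + e);
            throw e;
        }
    }
}
```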



Mick Knutson on 20/03/08 22:03, wrote:

I have been trying to run this example:
http://bill.dudney.net/roller/bill/entry/20070428

Now it seems, that when the testNG tests run, I get an NPE here:

*DbUnitTestBase.java
*
@BeforeClass(groups = {"database"})
protected void loadSeedData() throws Exception {
logger.debug("loadSeedData");
IDatabaseConnection dbunitConn = null;
EntityManager em = null;
try {
*em = entityManagerFactory.createEntityManager();*


Here is my error in *testng-results.xml*

<test name="Command line test">
  <class name="net.dudney.jpaund.domain.SimpleTest">
<test-method status="SKIP" signature="initDB()" name="initDB"
is-config="true" duration-ms="0" started-at="2008-03-20T11:51:46Z"
finished-at="2008-03-20T11:51:46Z">
</test-method>
<test-method status="SKIP" signature="testCreateSiteUser()"
name="testCreateSiteUser" duration-ms="0" started-at="2008-03-20T11:51:46Z"
finished-at="2008-03-20T11:51:46Z">
</test-method>
<test-method status="FAIL" signature="loadSeedData()"
name="loadSeedData" is-config="true" duration-ms="0"
started-at="2008-03-20T11:51:46Z" finished-at="2008-03-20T11:51:46Z">
  <exception class="java.lang.NullPointerException">
<full-stacktrace>
  <![CDATA[java.lang.NullPointerException
at net.dudney.jpaund.domain.DbUnitTestBase.loadSeedData(
DbUnitTestBase.java:43)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(
NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(
DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.testng.internal.MethodHelper.invokeMethod(MethodHelper.java:580)
at org.testng.internal.Invoker.invokeConfigurationMethod(Invoker.java
:398)
at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:145)
at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:82)
at org.testng.internal.TestMethodWorker.invokeBeforeClassMethods(
TestMethodWorker.java:166)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:103)
at org.testng.TestRunner.runWorkers(TestRunner.java:689)
at org.testng.TestRunner.privateRun(TestRunner.java:566)
at org.testng.TestRunner.run(TestRunner.java:466)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:301)
at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:296)
at org.testng.SuiteRunner.privateRun(SuiteRunner.java:276)
at org.testng.SuiteRunner.run(SuiteRunner.java:191)
at org.testng.TestNG.createAndRunSuiteRunners(TestNG.java:808)
at org.testng.TestNG.runSuitesLocally(TestNG.java:776)
at org.testng.TestNG.run(TestNG.java:701)
at org.apache.maven.surefire.testng.TestNGExecutor.run(
TestNGExecutor.java:62)
at org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.execute(
TestNGDirectoryTestSuite.java:136)
at org.apache.maven.surefire.Surefire.run(Surefire.java:177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(
NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(
DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(
SurefireBooter.java:338)
at org.apache.maven.surefire.booter.SurefireBooter.main(
SurefireBooter.java:997)
]]>
</full-stacktrace>
  </exception>
</test-method>
  </class>
  <class name="net.dudney.jpaund.dao.jpa.BaseDaoJpaTest">
<test-method status="PASS" signature="testGetUpdateObjects()"
name="testGetUpdateObjects" duration-ms="13078"
started-at="2008-03-20T11:50:42Z" finished-at="2008-03-20T11:50:55Z">
</test-method>
<test-method status="PASS" signature="testQueryBuildingRealClass()"
name="testQueryBuildingRealClass" duration-ms="0"
started-at="2008-03-20T11:50:55Z" finished-at="2008-03-20T11:50:55Z">
</test-method>
<test-method status="PASS"
signature="testQueryBuildingNullParameter()"
name="testQueryBuildingNullParameter" duration-ms="0"
started-at="2008-03-20T11:51:46Z" finished-at="2008-03-20T11:51:46Z">
</test-method>
<test-method status="PASS" signature="testGetObject()"
name="testGetObject" duration-ms="219" started-at="2008-03-20T11:51:45Z"
finished-at="2008-03-20T11:51:46Z">
</test-method>
<test-method status="PASS"
signature="testSaveOrUpdateObjectExistingObject()"

Re: callback on superclass

2008-03-11 Thread Adam Hardy
Unfortunately, I can't get mapped-superclass callbacks to work in Toplink 
Essentials or Hibernate EntityManager.


Funny, considering it must be a fairly common requirement.

Adam Hardy on 06/03/08 18:05, wrote:

I can't get a callback to work on my entity superclass.

I've mapped it like this:

  <mapped-superclass class="org.permacode.atomic.domain.AtomicEntity"
access="FIELD">
<pre-persist method-name="prePersistCallBack" />
<attributes>
  <basic name="ownerId">
<column name="OWNER_ID" />
  </basic>
  <basic name="created">
<column name="CREATED" />
<temporal>DATE</temporal>
  </basic>
  <basic name="modified">
<column name="MODIFIED" />
<temporal>DATE</temporal>
  </basic>
  <version name="version">
<column name="VERSION" />
  </version>
</attributes>
  </mapped-superclass>


The method prePersistCallBack() is on the superclass:

public void prePersistCallBack()
{
this.modified = new Date();
logger.info("doing prePersistCallBack() - " + this + " - modified="
+ this.modified);
}

I see no logging and I see the SQL statement contains the untouched 
modified value.




callback on superclass

2008-03-06 Thread Adam Hardy

I can't get a callback to work on my entity superclass.

I've mapped it like this:

  <mapped-superclass class="org.permacode.atomic.domain.AtomicEntity"
access="FIELD">
<pre-persist method-name="prePersistCallBack" />
<attributes>
  <basic name="ownerId">
<column name="OWNER_ID" />
  </basic>
  <basic name="created">
<column name="CREATED" />
<temporal>DATE</temporal>
  </basic>
  <basic name="modified">
<column name="MODIFIED" />
<temporal>DATE</temporal>
  </basic>
  <version name="version">
<column name="VERSION" />
  </version>
</attributes>
  </mapped-superclass>


The method prePersistCallBack() is on the superclass:

public void prePersistCallBack()
{
this.modified = new Date();
logger.info("doing prePersistCallBack() - " + this + " - modified="
+ this.modified);
}

I see no logging and I see the SQL statement contains the untouched modified 
value.

Is there anything extra in the mapped subclass entities which is needed for 
callbacks?


Thanks
Adam



Re: callback on superclass

2008-03-06 Thread Adam Hardy

I wish it was that simple, but the annotation doesn't work either.

Do you have a mapped-superclass with an event listener?


Shibu Gope on 06/03/08 21:41, wrote:

try @PrePersist annotation

On Thu, Mar 6, 2008 at 1:05 PM, Adam Hardy [EMAIL PROTECTED] wrote:

I can't get a callback to work on my entity superclass.

 I've mapped it like this:

   <mapped-superclass class="org.permacode.atomic.domain.AtomicEntity"
 access="FIELD">
 <pre-persist method-name="prePersistCallBack" />
 <attributes>
   <basic name="ownerId">
 <column name="OWNER_ID" />
   </basic>
   <basic name="created">
 <column name="CREATED" />
 <temporal>DATE</temporal>
   </basic>
   <basic name="modified">
 <column name="MODIFIED" />
 <temporal>DATE</temporal>
   </basic>
   <version name="version">
 <column name="VERSION" />
   </version>
 </attributes>
   </mapped-superclass>


 The method prePersistCallBack() is on the superclass:

 public void prePersistCallBack()
 {
 this.modified = new Date();
 logger.info("doing prePersistCallBack() - " + this + " - modified="
 + this.modified);
 }

 I see no logging and I see the SQL statement contains the untouched modified 
value.

 Is there anything extra in the mapped subclass entities which is needed for
 callbacks?




Re: JPA Spec JAR - RE: multiple persistence-units, each w different provider

2008-03-02 Thread Adam Hardy

Hi Pinaki,

thanks for the damning indictment of Sun's sloppy practices. You could also add 
to that list the fact that it is coded using goto and labels (unless my 
decompiler is playing tricks on me), and is therefore liable to be of similarly 
dubious quality throughout. Hopefully it is just a joke played by a bored Sun 
engineer to distress people like me :O


Issue 2814 and the related 3229 are both marked as resolved for milestone 
9.1pe_b57, and 3229 says that v1.0.1 has been pushed to Maven. I admit complete 
ignorance of the version numbering at Glassfish and how it relates to 
persistence-api.jar. There is one at repo.maven.org under javax.persistence, but 
it is only v1.0 and there's no sign of v1.0.1.


Are you presumably still using your own patch? I'm slightly surprised that 
there's no alternative out there on maven.


Regards
Adam


Pinaki Poddar on 29/02/08 14:22, wrote:

Hi,
   The javax.persistence.Persistence class as supplied in the distributed
version of jpa.jar has more than one shortcoming (in descending order of
criticality):

1. This Persistence class statically caches all the persistence provider
implementations. This static caching of implementation classes breaks when
an application is undeployed and redeployed again in an Application Server
because the classloader of a statically cached version goes out of scope.

2. The error handling in Persistence needs improvement, because if any of the
provider implementations fails to load, the other (possibly valid) providers
do not get a chance to activate themselves.

3. The order in which Persistence class attempts to load the providers (as
in this reported case) is indeterminate and unspecified. 


4. The error reporting when things go bad (often for the uninitiated) needs
to be more informative or user-friendly.

This problem has been encountered [1] and I had supplied a patch to
GlassFish [2] -- however, I do not know when (or whether) this patch will
find its way into the distributed version of jpa.jar.

If there is interest, a patch that addresses the abovementioned issues is
available with me.

Regards --  


[1]
http://dev2dev.bea.com/blog/pinaki.poddar/archive/2007/06/the_promise_of.html
(see to the end of the post)
[2] https://glassfish.dev.java.net/issues/show_bug.cgi?id=2814 (the patch in
Glassfish repository)


Michael Vorburger-4 wrote:


Is there any particular reason why OpenJPA uses the
geronimo-jpa_3.0_spec-1.0.jar instead of the
http://mvnrepository.com/artifact/javax.persistence/persistence-api/1.0
from https://glassfish.dev.java.net/javaee5/persistence/
(https://glassfish.dev.java.net/source/browse/glassfish/persistence-api/)?
Is it an OSS licensing mess; is use of a CDDL API JAR in an APL library a

problem?

BTW: I noticed that the persistence-api-1.0-sources.jar (e.g. from
http://mirrors.ibiblio.org/pub/mirrors/maven2/javax/persistence/persistence-api/1.0/)
has JavaDoc... while the geronimo-jpa_3.0_spec-1.0-sources.jar and
geronimo-jpa_3.0_spec-1.0-javadoc.jar don't.  If you use Source
Attachment in e.g. Eclipse this is handy...

Would there be any risk if locally we overwrote dependencies so that
developers in our org would use OpenJPA with the persistence-api-1.0.jar
instead of the geronimo-jpa_3.0_spec-1.0.jar (in order to have JavaDoc
Help in Eclipse?).  Probably not - why would there be?  Is this allowed
(more out of curiosity), license wise?  Presumably yes?

Regards,
Michael Vorburger


-Original Message-
From: Patrick Linskey [mailto:[EMAIL PROTECTED] 
Sent: vendredi, 29. février 2008 01:42

To: users@openjpa.apache.org
Subject: Re: multiple persistence-units, each w different provider

Hi,

I think that the official copy is the one at Glassfish.

-Patrick

On Thu, Feb 28, 2008 at 2:15 AM, Adam Hardy [EMAIL PROTECTED]
wrote:
I have three persistence-unit nodes in my persistence.xml: one for 
Toplink with  the toplink provider specified, one for Hibernate, and one

for OpenJPA.

 It tests various operations in JPA, configured to use whichever JPA 
supplier I  specify at the time, i.e. I just change the name of the 
persistence-unit I pass  to Persistence.createEntityManagerFactory().


 So I have, for instance at the moment, not only openjpa.jar on the 
classpath,  but also hibernate-entitymanager.jar and

toplink-essentials.jar.

 When running my test, due to the way 
javax.persistence.Persistence.class is  programmed, any of those 
providers may be picked for use, without reference to  the name of the 
persistence-unit name that I specified when calling

 createEntityManagerFactory() - somewhat surprisingly!

 If my memory serves me well, the persistence-api.jar that I am using 
is just the  one that maven downloaded automatically from the global jar

repo.

 I'm tempted to create my own javax.persistence.Persistence to do it 
properly and  position it ahead of the persistence-api.jar on my

classpath.

 This just doesn't seem right. I guess I should be asking at Sun. Was 
it Sun who

Re: multiple persistence-units, each w different provider

2008-03-02 Thread Adam Hardy
"Should" being the operative word here. Actually I tested out the Geronimo 
package org.apache.geronimo.specs.geronimo-jpa_3.0_spec-1.1.1 and it solved the 
problem for me. Thanks.



Kevin Sutter on 29/02/08 16:43, wrote:

Right, and there should be no reason to develop your own version of the
javax.persistence.Persistence.  The one provided by Sun/Glassfish, Geronimo,
etc should all provide the same level of functionality.  (Albeit, as Pinaki
has pointed out in a separate thread, not all of the fixes have made it into
every version.)

Kevin

On Thu, Feb 28, 2008 at 6:41 PM, Patrick Linskey [EMAIL PROTECTED] wrote:


Hi,

I think that the official copy is the one at Glassfish.

-Patrick

On Thu, Feb 28, 2008 at 2:15 AM, Adam Hardy [EMAIL PROTECTED]
wrote:

I have three persistence-unit nodes in my persistence.xml: one for

Toplink with

 the toplink provider specified, one for Hibernate, and one for OpenJPA.

 It tests various operations in JPA, configured to use whichever JPA

supplier I

 specify at the time, i.e. I just change the name of the

persistence-unit I pass

 to Persistence.createEntityManagerFactory().

 So I have, for instance at the moment, not only openjpa.jar on the

classpath,

 but also hibernate-entitymanager.jar and toplink-essentials.jar.

 When running my test, due to the way

javax.persistence.Persistence.class is

 programmed, any of those providers may be picked for use, without

reference to

 the name of the persistence-unit name that I specified when calling
 createEntityManagerFactory() - somewhat surprisingly!

 If my memory serves me well, the persistence-api.jar that I am using is

just the

 one that maven downloaded automatically from the global jar repo.

 I'm tempted to create my own javax.persistence.Persistence to do it

properly and

 position it ahead of the persistence-api.jar on my classpath.

 This just doesn't seem right. I guess I should be asking at Sun. Was it

Sun who

 wrote javax.persistence.Persistence ? Or are there different versions?

Is there

 an 'OpenJPA persistence-api'?

 Has anyone here written their own and made it publicly available?


 Thanks
 Adam





--
Patrick Linskey
202 669 5907







multiple persistence-units, each w different provider

2008-02-28 Thread Adam Hardy
I have three persistence-unit nodes in my persistence.xml: one for Toplink with 
the toplink provider specified, one for Hibernate, and one for OpenJPA.


It tests various operations in JPA, configured to use whichever JPA supplier I 
specify at the time, i.e. I just change the name of the persistence-unit I pass 
to Persistence.createEntityManagerFactory().


So I have, for instance at the moment, not only openjpa.jar on the classpath, 
but also hibernate-entitymanager.jar and toplink-essentials.jar. 	


When running my test, due to the way javax.persistence.Persistence.class is 
programmed, any of those providers may be picked for use, without reference to 
the name of the persistence-unit name that I specified when calling 
createEntityManagerFactory() - somewhat surprisingly!


If my memory serves me well, the persistence-api.jar that I am using is just the 
one that maven downloaded automatically from the global jar repo.


I'm tempted to create my own javax.persistence.Persistence to do it properly and 
position it ahead of the persistence-api.jar on my classpath.


This just doesn't seem right. I guess I should be asking at Sun. Was it Sun who 
wrote javax.persistence.Persistence ? Or are there different versions? Is there 
an 'OpenJPA persistence-api'?


Has anyone here written their own and made it publicly available?


Thanks
Adam



Re: using TableGenerator for 1ary keys

2008-01-17 Thread Adam Hardy

Somebody beat me to Jira and has filed an issue there regarding this.

I tried this in Derby and it happens too, so it's not database-specific.

Adam Hardy on 16/01/08 14:12, wrote:
I use TableGenerator mapping for primary keys on all my entity mappings. 
The XML for the table-generator is in the thread below.


When I set up my app to use H2, the exact problem is as given in the 
exception: the SQL statement generated by OpenJPA for H2 refers to the 
table in the schema as


DEV.DEV.KEY_SEQUENCE

Obviously OpenJPA has prefixed the table name with the schema name twice 
instead of once.


This is only when set up to use H2. When running against MySQL (which 
has no notion of schemas), there is no problem.


I guess an easy further test would be to run the test against a 3rd 
database which does implement schemas. I have already tried Hypersonic - 
but there are other issues with that which prevent testing. I shall try 
derby or postgres soon. Theoretically I may have configured the schema 
in 2 different places in the config, but that situation should be 
catered for, methinks.


Regards
Adam

David Beer on 15/01/08 22:22, wrote:
Can you give a little more detail as to the exact problem with H2, or 
how you managed to solve the issue?


Adam Hardy wrote:
Realised the term is TableGenerator, not KEY_SEQUENCE, for the issue 
that is causing problems here.


I narrowed it down to H2. With Mysql, there is no problem.

Adam Hardy on 15/01/08 11:55, wrote:
Doing a project with no dependency on any one database vendor, I am 
using KEY_SEQUENCE tables to manage my primary keys. This was 
working fine until I introduced a schema name. Now I get the 
following exception:


org.apache.openjpa.lib.jdbc.ReportingSQLException: Table DEV not 
found; SQL statement:
SELECT LAST_KEY FROM DEV.DEV.KEY_SEQUENCE WHERE TABLE_SEQ = ? FOR 
UPDATE [42S02-64] {SELECT LAST_KEY FROM DEV.DEV.KEY_SEQUENCE WHERE 
TABLE_SEQ = ? FOR UPDATE} [code=42102, state=42S02]



My schema name is DEV and OpenJPA has prefixed it onto the table 
name twice! I configure the schema name via the database connection 
parameters, e.g.


openjpa.jdbc.Schema=DEV

This is my KEY_SEQUENCE config if it helps:

  <table-generator name="codeKeySequence" table="KEY_SEQUENCE"
      pk-column-name="TABLE_SEQ" value-column-name="LAST_KEY"
      pk-column-value="CODE">
  </table-generator>
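For comparison, the same generator can be declared with annotations rather than XML. This is a sketch only: the entity class and field are made up for illustration, while the generator attributes are taken from the mapping above.

```java
import javax.persistence.*;

// Annotation equivalent of the table-generator XML mapping (sketch;
// the "Code" entity and "id" field are illustrative assumptions).
@Entity
public class Code {

    @Id
    @TableGenerator(name = "codeKeySequence", table = "KEY_SEQUENCE",
            pkColumnName = "TABLE_SEQ", valueColumnName = "LAST_KEY",
            pkColumnValue = "CODE")
    @GeneratedValue(strategy = GenerationType.TABLE,
            generator = "codeKeySequence")
    private Long id;
}
```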

This is happening in both OpenJPA v 1.0.1 and 1.1.0. I'll have to 
look into this further now to see if it is a problem just with the 
org.apache.openjpa.jdbc.sql.H2Dictionary that I am using.


Does anyone have key sequence tables in a named schema working in 
any database?







Re: using TableGenerator for 1ary keys

2008-01-16 Thread Adam Hardy
I use TableGenerator mapping for primary keys on all my entity mappings. The XML 
for the table-generator is in the thread below.


When I set up my app to use H2, the exact problem is as given in the exception: 
the SQL statement generated by OpenJPA for H2 refers to the table in the schema as


DEV.DEV.KEY_SEQUENCE

Obviously OpenJPA has prefixed the table name with the schema name twice instead 
of once.


This is only when set up to use H2. When running against MySQL (which has no 
notion of schemas), there is no problem.


I guess an easy further test would be to run the test against a third 
database which does implement schemas. I have already tried Hypersonic, 
but there are other issues with that which prevent testing. I shall try 
Derby or Postgres soon. Theoretically I may have configured the schema 
in two different places in the config, but that situation should be 
catered for, methinks.


Regards
Adam

David Beer on 15/01/08 22:22, wrote:
Can you give a little more detail as to the exact problem with H2, or 
how you managed to solve the issue?


Adam Hardy wrote:
Realised the term is TableGenerator, not KEY_SEQUENCE, for the issue 
that is causing problems here.


I narrowed it down to H2. With MySQL, there is no problem.

Adam Hardy on 15/01/08 11:55, wrote:
Doing a project with no dependency on any one database vendor, I am 
using KEY_SEQUENCE tables to manage my primary keys. This was working 
fine until I introduced a schema name. Now I get the following 
exception:


org.apache.openjpa.lib.jdbc.ReportingSQLException: Table DEV not 
found; SQL statement:
SELECT LAST_KEY FROM DEV.DEV.KEY_SEQUENCE WHERE TABLE_SEQ = ? FOR 
UPDATE [42S02-64] {SELECT LAST_KEY FROM DEV.DEV.KEY_SEQUENCE WHERE 
TABLE_SEQ = ? FOR UPDATE} [code=42102, state=42S02]



My schema name is DEV and OpenJPA has prefixed it onto the table name 
twice! I configure the schema name via the database connection 
parameters, e.g.


openjpa.jdbc.Schema=DEV

This is my KEY_SEQUENCE config if it helps:

  <table-generator name="codeKeySequence" table="KEY_SEQUENCE"
      pk-column-name="TABLE_SEQ" value-column-name="LAST_KEY"
      pk-column-value="CODE">
  </table-generator>

This is happening in both OpenJPA v 1.0.1 and 1.1.0. I'll have to 
look into this further now to see if it is a problem just with the 
org.apache.openjpa.jdbc.sql.H2Dictionary that I am using.


Does anyone have key sequence tables in a named schema working in any 
database?




using a KEY_SEQUENCE table

2008-01-15 Thread Adam Hardy
Doing a project with no dependency on any one database vendor, I am using 
KEY_SEQUENCE tables to manage my primary keys. This was working fine until I 
introduced a schema name. Now I get the following exception:


org.apache.openjpa.lib.jdbc.ReportingSQLException: Table DEV not found; SQL 
statement:
SELECT LAST_KEY FROM DEV.DEV.KEY_SEQUENCE WHERE TABLE_SEQ = ? FOR UPDATE 
[42S02-64] {SELECT LAST_KEY FROM DEV.DEV.KEY_SEQUENCE WHERE TABLE_SEQ = ? FOR 
UPDATE} [code=42102, state=42S02]



My schema name is DEV and OpenJPA has prefixed it onto the table name twice! I 
configure the schema name via the database connection parameters, e.g.


openjpa.jdbc.Schema=DEV

This is my KEY_SEQUENCE config if it helps:

  <table-generator name="codeKeySequence" table="KEY_SEQUENCE"
      pk-column-name="TABLE_SEQ" value-column-name="LAST_KEY"
      pk-column-value="CODE">
  </table-generator>

This is happening in both OpenJPA v 1.0.1 and 1.1.0. I'll have to look into this 
further now to see if it is a problem just with the 
org.apache.openjpa.jdbc.sql.H2Dictionary that I am using.


Does anyone have key sequence tables in a named schema working in any database?


Thanks
Adam


Re: using TableGenerator for 1ary keys (was: using a KEY_SEQUENCE table)

2008-01-15 Thread Adam Hardy
Realised the term is TableGenerator, not KEY_SEQUENCE, for the issue that is 
causing problems here.


I narrowed it down to H2. With MySQL, there is no problem.

Adam Hardy on 15/01/08 11:55, wrote:
Doing a project with no dependency on any one database vendor, I am 
using KEY_SEQUENCE tables to manage my primary keys. This was working 
fine until I introduced a schema name. Now I get the following exception:


org.apache.openjpa.lib.jdbc.ReportingSQLException: Table DEV not found; 
SQL statement:
SELECT LAST_KEY FROM DEV.DEV.KEY_SEQUENCE WHERE TABLE_SEQ = ? FOR UPDATE 
[42S02-64] {SELECT LAST_KEY FROM DEV.DEV.KEY_SEQUENCE WHERE TABLE_SEQ = 
? FOR UPDATE} [code=42102, state=42S02]



My schema name is DEV and OpenJPA has prefixed it onto the table name 
twice! I configure the schema name via the database connection 
parameters, e.g.


openjpa.jdbc.Schema=DEV

This is my KEY_SEQUENCE config if it helps:

  <table-generator name="codeKeySequence" table="KEY_SEQUENCE"
      pk-column-name="TABLE_SEQ" value-column-name="LAST_KEY"
      pk-column-value="CODE">
  </table-generator>

This is happening in both OpenJPA v 1.0.1 and 1.1.0. I'll have to look 
into this further now to see if it is a problem just with the 
org.apache.openjpa.jdbc.sql.H2Dictionary that I am using.


Does anyone have key sequence tables in a named schema working in any 
database?



Thanks
Adam





Re: specifying schema in snapshot 1.1.0

2008-01-14 Thread Adam Hardy

With H2 instead of Hypersonic, the schema is recognised.

Prashant, thanks for helping me get to the solution.

I logged this in JIRA (URL below).

regards
Adam

Adam Hardy on 11/01/08 15:51, wrote:

Hi Prashant,

thanks for that. Maybe it's a bug in the Hypersonic dialect file? The 
only other database I've tested against is MySQL, which of course 
doesn't have the concept of schemas at all, so it would never fail there.


I'll see whether I can hook up to h2 as well, and maybe postgres. I 
should be able to get a free dev license for Oracle I think, so I'll see 
what I can see.


Regards
Adam

Prashant Bhat on 11/01/08 14:18, wrote:

Hi Adam,

I've created a simple test project for this, which works with a
schema using OpenJPA's latest snapshot. It is configured to use
Spring 2.5.1 and H2 1.0.64 (I like its web console very much :-),
it runs with Maven, and it needs Java 6 (only because of @Override :-).

OpenJPA's mapping tool doesn't generate the db schema, so I've bundled
the database with the schema already created; you just need to run 'mvn
test' to see it working. By the way, it also contains a somewhat
improved patch from https://issues.apache.org/jira/browse/OPENJPA-332.

Hope this helps,
Prashant

P.S. I'm putting a cc to you, in case the list blocks mails sending 
zip files!



On Jan 11, 2008 8:29 PM, Adam Hardy [EMAIL PROTECTED] wrote:

I cannot see why this is not working for me.

I stripped my project down to the simplest elements possible and still 
had the problem.

I have logged it as a Jira at 
https://issues.apache.org/jira/browse/OPENJPA-483
and included the zipped up project, in case it helps if anyone wants 
to look at it.


Thanks
Adam








Re: specifying schema in snapshot 1.1.0

2008-01-11 Thread Adam Hardy

I cannot see why this is not working for me.

I stripped my project down to the simplest elements possible and still had the 
problem.


I have logged it as a Jira at https://issues.apache.org/jira/browse/OPENJPA-483 
and included the zipped up project, in case it helps if anyone wants to look at it.


Thanks
Adam


Adam Hardy on 09/01/08 12:08, wrote:

That's interesting. Thank you very much for the help.

I ran that verification but the query I'm testing still doesn't contain 
the schema name prefix on the table in the SQL.


I put all the properties in the persistence.xml (and in debugging, I see 
the schema name is added to the metadata - at least in one place, 
although obviously not enough)


<properties>
  <property name="openjpa.jdbc.Schema" value="DEV" />
  <property name="openjpa.ConnectionUserName" value="sa" />
  <property name="openjpa.ConnectionPassword" value="" />
  <property name="openjpa.ConnectionURL"
      value="jdbc:hsqldb:mem:TRADING_CODE" />
  <property name="openjpa.ConnectionDriverName"
      value="org.hsqldb.jdbcDriver" />
  <property name="openjpa.jdbc.SchemaFactory"
      value="native(ForeignKeys=true)" />
  <property name="openjpa.jdbc.DBDictionary"
      value="org.apache.openjpa.jdbc.sql.HSQLDictionary" />

  <property name="openjpa.Log" value="log4j" />
  <property name="openjpa.Id" value="[PatternRepo OpenJPA]" />
</properties>

and instantiated the EntityManagerFactory without any extra properties:

entityManagerFactory =
    Persistence.createEntityManagerFactory(persistenceUnitName);
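As a side note, the standard JPA API also accepts a map of property overrides at creation time, so a setting like the schema can be forced programmatically as well. A sketch, using the unit name from the persistence.xml above:

```java
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Sketch: passing the OpenJPA schema override when creating the EMF,
// instead of (or in addition to) setting it in persistence.xml.
Map<String, Object> props = new HashMap<String, Object>();
props.put("openjpa.jdbc.Schema", "DEV");
EntityManagerFactory emf =
        Persistence.createEntityManagerFactory("PatternRepo", props);
```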


I'm also double-checking that the schema exists during the test, using

metaData = connection.getMetaData();
rs = metaData.getSchemas();
while (rs.next())
{
    logger.debug("schema: " + rs.getString("TABLE_SCHEM"));
}


Prashant Bhat on 09/01/08 10:52, wrote:

Yes, I see the schema name prefixed in the query and of course it is
needed to load from db.

Are you passing those properties to the emf during creation? Instead,
you can (just to verify) try setting them in persistence.xml by adding
a properties element like this:
<persistence-unit name="PatternRepo">
  ...
  <properties>
    <property name="openjpa.jdbc.Schema" value="DEV" />
  </properties>
</persistence-unit>

or if you're using spring framework, then (I externalize these
properties using PropertyPlaceholderConfigurer)

<bean id="entityManagerFactory"
    class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
  <property name="dataSource" ref="dataSource" />
  <property name="jpaVendorAdapter">
    <bean
        class="org.springframework.orm.jpa.vendor.OpenJpaVendorAdapter" />
  </property>
  <property name="jpaPropertyMap">
    <map>
      <entry key="openjpa.jdbc.Schema" value="DEV" />
    </map>
  </property>
</bean>

//
Prashant


On Jan 9, 2008 6:31 PM, Adam Hardy [EMAIL PROTECTED] wrote:
If you see the schema names in the debug logging of query SQL, then 
maybe you can see something in my config that causes my issue - but if 
you aren't meant to see the schema name in the logging, it could be 
something as simple as a database incompatibility.

I checked that the schema exists! It does.

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="1.0"
    xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
        http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
  <persistence-unit name="PatternRepo">
    <description>Pattern Repo JPA Config</description>
    <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
    <mapping-file>org/permacode/patternrepo/orm/Category.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/Market.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/Code.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/TestResult.xml</mapping-file>
    <mapping-file>org/permacode/patternrepo/orm/TradingParam.xml</mapping-file>
  </persistence-unit>
</persistence>

orm.xml:
<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence/orm
        http://java.sun.com/xml/ns/persistence/orm_1_0.xsd"
    version="1.0">
  <persistence-unit-metadata>
    <xml-mapping-metadata-complete />
    <persistence-unit-defaults>
      <schema>DEV</schema>
    </persistence-unit-defaults>
  </persistence-unit-metadata>
</entity-mappings>

the properties I set in code:
openjpa.ConnectionUserName=sa
openjpa.ConnectionPassword=
openjpa.ConnectionURL=jdbc:hsqldb:mem:TRADING_CODE
openjpa.ConnectionDriverName=org.hsqldb.jdbcDriver
openjpa.jdbc.Schema=DEV
openjpa.jdbc.SchemaFactory=native(ForeignKeys=true)
openjpa.jdbc.DBDictionary=org.apache.openjpa.jdbc.sql.HSQLDictionary
openjpa.Log=log4j
openjpa.Id=[PatternRepo OpenJPA]


The mappings won't help but this is it:

openjpa.MetaDataFactory:jpa(Resources=org/permacode

Re: specifying schema in snapshot 1.1.0

2008-01-11 Thread Adam Hardy

Hi Prashant,

thanks for that. Maybe it's a bug in the Hypersonic dialect file? The 
only other database I've tested against is MySQL, which of course 
doesn't have the concept of schemas at all, so it would never fail there.


I'll see whether I can hook up to h2 as well, and maybe postgres. I should be 
able to get a free dev license for Oracle I think, so I'll see what I can see.


Regards
Adam

Prashant Bhat on 11/01/08 14:18, wrote:

Hi Adam,

I've created a simple test project for this, which works with a
schema using OpenJPA's latest snapshot. It is configured to use
Spring 2.5.1 and H2 1.0.64 (I like its web console very much :-),
it runs with Maven, and it needs Java 6 (only because of @Override :-).

OpenJPA's mapping tool doesn't generate the db schema, so I've bundled
the database with the schema already created; you just need to run 'mvn
test' to see it working. By the way, it also contains a somewhat
improved patch from https://issues.apache.org/jira/browse/OPENJPA-332.

Hope this helps,
Prashant

P.S. I'm putting a cc to you, in case the list blocks mails sending zip files!


On Jan 11, 2008 8:29 PM, Adam Hardy [EMAIL PROTECTED] wrote:

I cannot see why this is not working for me.

I stripped my project down to the simplest elements possible and still had the
problem.

I have logged it as a Jira at https://issues.apache.org/jira/browse/OPENJPA-483
and included the zipped up project, in case it helps if anyone wants to look at 
it.

Thanks
Adam




