RE: Getting all the ClassMetaDatas

2007-01-02 Thread Shay Banon

Compass provides two main features with JPA: mirroring and indexing.
Mirroring propagates changes made through the JPA API into the search engine
(through lifecycle listeners), and indexing lets you automatically index
your whole database using the classes that are both JPA entities and
Searchable. The indexing process needs to fetch, or intersect with, the
current set of persistent classes.

The indexing process fetches all the indexable entities and then iterates
over them (in parallel) in order to index them into the search engine. So I
am guessing that if classes are introduced to JPA at runtime, the user would
need to pre-register them with OpenJPA (when using the OpenJPA plugin to
locate persistent entities) in one of the ways that OpenJPA provides.

If only annotations are used, the user could use the default entity locator
that comes with Compass, which basically checks for the @Entity annotation.
The main drawback is that it does not support XML or other mechanisms for
introducing new mappings for classes.
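
A minimal sketch of what such an annotation-based locator boils down to (the
class and method names here are made up for illustration, not the actual
Compass API):

import javax.persistence.Entity;

// Hypothetical illustration: test whether a candidate class is marked
// as a JPA entity via the @Entity annotation. Entities mapped purely
// in XML are missed, which is the drawback mentioned above.
public class AnnotationEntityLocator {
    public boolean isSearchableEntity(Class<?> cls) {
        return cls.isAnnotationPresent(Entity.class);
    }
}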

-Shay


Patrick Linskey wrote:
 
 Is there any reason why you need to eagerly get information about
 classes to process? In general, as you've noticed, OpenJPA does allow
 dynamic registration of persistent types. One possibility would be to
 declare that in order to use Compass searching with OpenJPA, one must
 provide a static list of classes (or tell OpenJPA to compute a static
 list of classes), using one of the options that Marc pointed out
 earlier. Alternately, you could potentially just register the right type
 of listener with OpenJPA and do whatever initialization is necessary
 lazily as new classes are encountered via the callbacks.
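 
 One way to wire up that lazy approach, sketched against OpenJPA's
 PCRegistry callback (the listener interface and method name are taken from
 OpenJPA internals and should be treated as unverified):
 
 import org.apache.openjpa.enhance.PCRegistry;
 
 // Sketch: react to persistent classes as OpenJPA registers them,
 // instead of requiring a static class list up-front.
 PCRegistry.addRegisterClassListener(new PCRegistry.RegisterClassListener() {
     public void register(Class cls) {
         // lazily initialize search metadata for cls here
     }
 });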
 
 -Patrick
 
 -- 
 Patrick Linskey
 BEA Systems, Inc. 
 
 
 -Original Message-
 From: Shay Banon [mailto:[EMAIL PROTECTED] 
 Sent: Monday, January 01, 2007 1:11 PM
 To: open-jpa-dev@incubator.apache.org
 Subject: Getting all the ClassMetaDatas
 
 
 Hi,
 
 First, I hope that this is the correct forum for posting questions, so
 sorry if it isn't.
 
 I have an external list of classes that I would like to match against the
 persistent classes that are defined/identified by OpenJPA. I would really
 like to get the ClassMetaData for each one, since it has a lot of
 information that I could use. This intersection happens after the
 EntityManagerFactory has been created.
 
 I have tried using:
 
 ClassMetaData[] classMetaDatas =
 emf.getConfiguration().getMetaDataRepositoryInstance().getMetaDatas();
 
 But it seems like the metadata repository and ClassMetaData information are
 lazily loaded (i.e., when some operation is performed on a Class, the
 relevant metadata is fetched if not found in the cache). So what I get is
 an empty array (even though I can see that OpenJPA identified the classes).
 
 I wonder how I would be able to get all the class metadata?
 
 Something that I was thinking about: since I have the list of classes that
 I would like to check for persistence, I could call
 getMetaData(Class cls, ClassLoader envLoader, boolean mustExist) with the
 thread context class loader and false for mustExist. I am guessing that it
 will load the ClassMetaData if not found. My main problem here is that
 OpenJPA might be configured with a different class loader (though it
 defaults to the thread context one).
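 
 A sketch of that intersection loop, under the assumption that
 getMetaData(Class, ClassLoader, boolean) behaves as described (loading
 lazily, and returning null rather than throwing when mustExist is false):
 
 MetaDataRepository repo =
     emf.getConfiguration().getMetaDataRepositoryInstance();
 ClassLoader loader = Thread.currentThread().getContextClassLoader();
 for (Class cls : externalClasses) {   // externalClasses: my own list
     ClassMetaData meta = repo.getMetaData(cls, loader, false);
     if (meta != null) {
         // cls is persistent; use its ClassMetaData
     }
 }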
 
 Any suggestions?
 
 p.s.
 
 I am the author of Compass, so once I have this nailed down, 
 we will have
 Search capabilities to OpenJPA ;)
 
 
 
 
 




Re: Getting all the ClassMetaDatas

2007-01-02 Thread Shay Banon

I tried to open the entity manager before I get the ClassMetaData, but I
still get an empty array. Here is what I do:

OpenJPAEntityManagerFactory emf =
OpenJPAPersistence.cast(entityManagerFactory);
EntityManager entityManager = emf.createEntityManager();
entityManager.close();

ClassMetaData[] classMetaDatas =
emf.getConfiguration().getMetaDataRepositoryInstance().getMetaDatas();

I do enumerate the classes in my persistence context, and I can see in the
logging that OpenJPA parses the classes.


Marc Prud'hommeaux wrote:
 
 Shay-
 
 Have you already obtained an EM from the EMF before you make this call?
 If you try to get the metadatas after calling emf.getEntityManager(), do
 you still see an empty list?
 
 Also, note that unless you enumerate the classes in your persistence.xml
 file (in the <class> elements), the only way the system will be able to
 know about your classes before they are lazily evaluated is if you enable
 one of the scanning features (e.g., by packaging all your classes in a jar
 and specifying the <jar-file> element in the persistence.xml, which will
 be automatically scanned for persistent classes).
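 
 For reference, a minimal persistence.xml along those lines (the unit name
 and paths are made up):
 
 <persistence-unit name="example">
   <!-- explicit enumeration -->
   <class>com.example.Agent</class>
   <!-- or: package entities in a jar and let it be scanned -->
   <jar-file>lib/entities.jar</jar-file>
 </persistence-unit>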
 
 You might want to enable verbose logging and watch to make sure the class
 metadatas are registered before you try to get the list from the
 repository.
 
 
 
 On Jan 1, 2007, at 4:11 PM, Shay Banon wrote:
 

  [...]

 
 
 




RE: Getting all the ClassMetaDatas

2007-01-02 Thread Patrick Linskey
 -Original Message-
 From: Patrick Linskey 
 Sent: Tuesday, January 02, 2007 1:44 AM
 To: open-jpa-dev@incubator.apache.org
 Subject: RE: Getting all the ClassMetaDatas
 
 You may also be interested in the StateManager.getDirty() method, which
 returns a BitSet corresponding to the entries in
 StateManager.getMetaData().getFields(). The BitSet identifies which
 fields in a given object are modified.
 
 On top of that, you could also take advantage of
 StateManager.getFlushed(), which returns another BitSet indicating which
 fields have already been flushed. Combining the two, you can compute
 which fields are dirty and unflushed; in a pre-flush callback, these are
 the fields that have been mutated since the last time flush() was
 invoked (directly or indirectly).

Correction: both of those methods are in OpenJPAStateManager, not
StateManager. Sorry for any confusion.
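
Combining the two might look like this (a sketch; sm is assumed to be an
OpenJPAStateManager obtained in a callback):

import java.util.BitSet;
import org.apache.openjpa.kernel.OpenJPAStateManager;

// Fields dirtied since the last flush: dirty AND NOT flushed.
BitSet pending = (BitSet) sm.getDirty().clone();
pending.andNot(sm.getFlushed());
// Set bits index into sm.getMetaData().getFields().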

-Patrick


[jira] Updated: (OPENJPA-91) java.lang.VerifyError on websphere after application reload

2007-01-02 Thread Anders Monrad (JIRA)

 [ 
http://issues.apache.org/jira/browse/OPENJPA-91?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anders Monrad updated OPENJPA-91:
-


I am using the type 4 driver, so there should not be any problems with native 
code. I tried to add the libraries as a shared library in WAS 6.1 but had the 
same issue.

Now I added the OpenJPA libraries to the was61\lib\ext directory and restarted 
the server. This seems to be working!

 java.lang.VerifyError on websphere after application reload
 ---

 Key: OPENJPA-91
 URL: http://issues.apache.org/jira/browse/OPENJPA-91
 Project: OpenJPA
  Issue Type: Bug
 Environment: Using OpenJPA (openjpa-all-0.9.6-incubating.jar) in 
 Rational Developer 7 ( Websphere 6.1 test environment ) connected to Oracle 
 9.2 database.
 OS: WinXP SP2
Reporter: Anders Monrad
Priority: Minor

 Hi ..
 Not sure if this is a bug or just the way WebSphere reacts to OpenJPA. 
 I have a small test program using OpenJPA against an Oracle database. I am 
 running this program in the WebSphere 6.1 test environment included with 
 Rational Developer 7. This is all working just fine. But when I make changes 
 to some resource in the application, the changes are automatically published 
 to the test environment and the app is restarted. After this I get the 
 exception below whenever I try to access an EntityManager. 
 If I restart the entire server, the app runs fine again. So I guess 
 this is related to restarting the application.
 Caused by: java.lang.VerifyError: class loading constraint violated (class: 
 org/apache/openjpa/kernel/BrokerImpl method: 
 newQueryImpl(Ljava/lang/String;Lorg/apache/openjpa/kernel/StoreQuery;)Lorg/apache/openjpa/kernel/QueryImpl;)
  at pc: 0
   at java.lang.J9VMInternals.verifyImpl(Native Method)
   at java.lang.J9VMInternals.verify(J9VMInternals.java:59)
   at java.lang.J9VMInternals.initialize(J9VMInternals.java:120)
   at java.lang.Class.forNameImpl(Native Method)
   at java.lang.Class.forName(Class.java:131)
   at 
 org.apache.openjpa.conf.OpenJPAConfigurationImpl.class$(OpenJPAConfigurationImpl.java:65)
   at 
 org.apache.openjpa.conf.OpenJPAConfigurationImpl.<init>(OpenJPAConfigurationImpl.java:182)
   at 
 org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.<init>(JDBCConfigurationImpl.java:110)
   at 
 org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.<init>(JDBCConfigurationImpl.java:100)
   at 
 org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.<init>(JDBCConfigurationImpl.java:91)
   at 
 org.apache.openjpa.jdbc.kernel.JDBCBrokerFactory.newInstance(JDBCBrokerFactory.java:55)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:615)
   at org.apache.openjpa.kernel.Bootstrap.invokeFactory(Bootstrap.java:117)
   at 
 org.apache.openjpa.kernel.Bootstrap.newBrokerFactory(Bootstrap.java:57)
   at 
 org.apache.openjpa.persistence.PersistenceProviderImpl.createEntityManagerFactory(PersistenceProviderImpl.java:70)
   at 
 org.apache.openjpa.persistence.PersistenceProviderImpl.createEntityManagerFactory(PersistenceProviderImpl.java:78)
   at 
 javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:83)
   at 
 javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:60)
   at 
 util.EntityManagerFactoryHelper.getEntityManagerFactory(EntityManagerFactoryHelper.java:22)





Perform automatic drop and create db schema

2007-01-02 Thread Shay Banon

I am trying to figure out how to configure OpenJPA to perform a drop and then
a create of the db schema. I got as far as:

<property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema" />

Which causes the schema to be created, but does not drop it when the EMF
closes (or when a new EMF starts).

The main reason I want it is for simple tests, where each test works
against a fresh copy of the database.

I tried things like buildSchema(SchemaTool=drop) and things of that
nature, but have not managed to find the correct configuration (I don't have
much time to dive into the OpenJPA code, sorry about that).

Cheers,
Shay



Re: Perform automatic drop and create db schema

2007-01-02 Thread Marc Prud'hommeaux

Shay-

Unfortunately, we don't have any automatic drop-table feature, but I  
agree it would be handy (you might want to make a JIRA report with  
the suggestion).


The only other recourse, I think, would be to just manually delete  
the database files before running your tests.



On Jan 2, 2007, at 3:34 PM, Shay Banon wrote:



[...]





Re: Perform automatic drop and create db schema

2007-01-02 Thread Marc Prud'hommeaux

Robert-

I completely agree. We usually just build all the tables once and then try
to make sure all the objects are deleted at the end of each test, but you
are correct that this is sometimes more cumbersome than it could be. An easy
drop-then-create option would simplify this, although some databases are so
notoriously slow with schema interrogation and manipulation that doing it
for each test might wind up being prohibitively slow.




On Jan 2, 2007, at 3:44 PM, robert burrell donkin wrote:


On 1/2/07, Marc Prud'hommeaux [EMAIL PROTECTED] wrote:

Shay-

Unfortunately, we don't have any automatic drop-table feature, but I
agree it would be handy (you might want to make a JIRA report with
the suggestion).

The only other recourse, I think, would be to just manually delete
the database files before running your tests.


support for easy integration testing is one area where i think many
JDO implementations could really improve

it's vital to start with a known database state and clean up after
each integration test. this isn't as easy as it should be when you
have a complex object-relational mapper with extensive caching. a set
of side doors for integration testing would really help productivity.

- robert




Re: JPQLExpressionBuilder uses wrong classloader

2007-01-02 Thread Marc Prud'hommeaux

Dain-

I assume you are specifying the ClassLoader by using your own subclass of
PersistenceUnitInfoImpl. OpenJPA should be using your ClassLoader, although
if the same class name is available in both your classloader as well as the
system classloader, then I think the results are undefined.

Is it possible to check to see if your ClassLoader is used if the class to
be loaded is *not* available in the system classloader?
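
For context, supplying the loader through such a subclass would look roughly
like this (getClassLoader() is part of the standard
javax.persistence.spi.PersistenceUnitInfo contract; the subclassing of
OpenJPA's PersistenceUnitInfoImpl is assumed from the discussion, not
verified):

import org.apache.openjpa.persistence.PersistenceUnitInfoImpl;

public class MyUnitInfo extends PersistenceUnitInfoImpl {
    private final ClassLoader loader;

    public MyUnitInfo(ClassLoader loader) {
        this.loader = loader;
    }

    // OpenJPA is expected to resolve entity classes through this loader.
    public ClassLoader getClassLoader() {
        return loader;
    }
}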




On Dec 27, 2006, at 8:38 PM, Dain Sundstrom wrote:

Also, it appears that the Broker may be loading classes from the thread
context class loader when persist is called.


-dain

On Dec 27, 2006, at 5:34 PM, Dain Sundstrom wrote:

I've been working on getting JPA runtime enhancement working and have run
into a problem where OpenJPA is loading my classes from the wrong class
loader. When I create a persistence unit info, it supplies a specific class
loader for OpenJPA to use when resolving application classes. When I execute
a query, the JPQLExpressionBuilder simply uses getClass().getClassLoader()
to load classes. In my case this is the system class loader, which contains
unenhanced classes, and not the class loader for the persistence unit, which
has properly enhanced classes.

It is my understanding that OpenJPA should use the class loader obtained
from the PersistenceUnitInfo for all class loading. Is this correct?


If that is correct, is this a bug in OpenJPA?

-dain






Re: Perform automatic drop and create db schema

2007-01-02 Thread Craig L Russell

For What It's Worth:

+1 on the drop-tables feature for OpenJPA. But I would caution  
against using it on each test.


Sadly, my experience is that drop-create-tables is 99.9% of the time  
taken in a typical test.


The JDO TCK runs hundreds of tests and we drop-create tables only on  
demand. The drop-create step takes several minutes compared to a few  
seconds to actually run the tests.


After several years of doing this kind of work, I've concluded that  
the best practical strategy (we tried beating up the database vendors  
to make drop-create as fast as insert/delete rows, to no avail) is to  
write your tests such that at the beginning of the test, you create  
your test data and at the end of the test, you delete the test data,  
leaving the database in an empty state.


JUnit facilitates this by providing a setUp and tearDown. We create  
the test data in setUp and delete it in tearDown. Of course, the  
tearDown might fail, leaving the data in an unpredictable state, but  
it does work 99.9% of the time. That's why we have a common tearDown  
that is very carefully implemented to catch exceptions, retry, etc.
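
A skeleton of that pattern (a sketch only; the data helpers are
placeholders):

import junit.framework.TestCase;

public abstract class PersistentDataTest extends TestCase {

    protected void setUp() throws Exception {
        createTestData();          // insert known rows
    }

    protected void tearDown() throws Exception {
        try {
            deleteTestData();      // leave the database empty
        } catch (Exception e) {
            // catch, log, and retry carefully, as described above
        }
    }

    protected abstract void createTestData();
    protected abstract void deleteTestData();
}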


Craig

On Jan 2, 2007, at 12:52 PM, Marc Prud'hommeaux wrote:


[...]




Craig Russell
Architect, Sun Java Enterprise System http://java.sun.com/products/jdo
408 276-5638 mailto:[EMAIL PROTECTED]
P.S. A good JDO? O, Gasp!





Re: Perform automatic drop and create db schema

2007-01-02 Thread Abe White
Unfortunately, we don't have any automatic drop-table feature, but  
I agree it would be handy (you might want to make a JIRA report  
with the suggestion).


Note that the SynchronizeMappings property allows you to use all  
the arguments of the mappingtool.  So you can try something like:


buildSchema(SchemaAction=refresh, DropTables=true)

Theoretically, that will drop unused columns and tables while adding  
any new columns and tables needed for your mappings.  If you try it,  
let us know how it works out. 
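
Wired into persistence.xml, that would presumably look like:

<property name="openjpa.jdbc.SynchronizeMappings"
          value="buildSchema(SchemaAction=refresh,DropTables=true)" />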


Re: Perform automatic drop and create db schema

2007-01-02 Thread Marc Prud'hommeaux


Personally, I like to put both the data cleanup and data  
initialization in the setUp() stage. It's sometimes a bit slower, but  
if there is faulty cleanup logic in a previous test's tearDown(), it  
may affect some random test down the road, and it can sometimes be  
very difficult to track down.


TCKs that come out of Sun are especially notorious for this problem :)



On Jan 2, 2007, at 4:07 PM, Craig L Russell wrote:


[...]





Re: Perform automatic drop and create db schema

2007-01-02 Thread robert burrell donkin

On 1/2/07, Marc Prud'hommeaux [EMAIL PROTECTED] wrote:


Personally, I like to put both the data cleanup and data
initialization in the setUp() stage. It's sometimes a bit slower, but
if there is faulty cleanup logic in a previous test's tearDown(), it
may affect some random test down the road, and it can sometimes be
very difficult to track down.


i found that data cleaning in both setUp() and tearDown() is
necessary (at least for standard commercial development)

- robert


Re: Perform automatic drop and create db schema

2007-01-02 Thread robert burrell donkin

On 1/2/07, Craig L Russell [EMAIL PROTECTED] wrote:

For What It's Worth:

+1 on the drop-tables feature for OpenJPA. But I would caution
against using it on each test.

Sadly, my experience is that drop-create-tables is 99.9% of the time
taken in a typical test.

The JDO TCK runs hundreds of tests and we drop-create tables only on
demand. The drop-create step takes several minutes compared to a few
seconds to actually run the tests.


yeh - dropping then recreating isn't very efficient but is effective

i've found that it's hard to educate developers unfamiliar with
automated testing. creating good integration tests is important but
takes a long while. too often neglected due to time pressure. i
suspect that tool developers could do more to help.

for example, IMHO containers should ship with integrated code coverage
tools. there are good enough open source ones but since they are not
bundled with containers they are not used as widely as they should be
in commercial development work.


After several years of doing this kind of work, I've concluded that
the best practical strategy (we tried beating up the database vendors
to make drop-create as fast as insert/delete rows, to no avail) is to
write your tests such that at the beginning of the test, you create
your test data and at the end of the test, you delete the test data,
leaving the database in an empty state.


+1

but this is where a side door would be of most use. sophisticated
object relational layers generally make it relatively slow and
unnatural to just delete everything in a table. which is as it should
be. it'd just be cool if the tool developers also created
testing-only side doors.

i have an idea that this is all reasonably easily doable but isn't
well known or packaged up into tools which are easy to learn.

it would be very cool to be able to start a recording tool in setup to
intercept and record every create, update, delete in the data access
layer then in tearDown just ask the data access layer to undo
everything that was done.

it would also be very cool to have a complete dump and replace
facility for black-box-in-container functional testing. in setup, just
push a load of data as xml. the data access layer deletes data from
all the tables specified and inserts the given data.

easy, dynamic flushing of all object caches would also be useful.

(sadly, i'm really interested in meta-data ATM, both email and source
auditing so there's not much chance of hacking together something
which demonstrates what i mean any time soon...)

- robert


[jira] Commented: (OPENJPA-94) Allow MappingTool and persistence.xml to support drop-create for database schema

2007-01-02 Thread Abe White (JIRA)

[ 
http://issues.apache.org/jira/browse/OPENJPA-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12461839
 ] 

Abe White commented on OPENJPA-94:
--

Note that the SynchronizeMappings property allows you to use all the 
arguments of the mappingtool.  So you can try something like:

buildSchema(SchemaAction=refresh, DropTables=true)

Theoretically, that will drop unused columns and tables while adding any new 
columns and tables needed for your mappings. 

 Allow MappingTool and persistence.xml to support drop-create for database 
 schema
 

 Key: OPENJPA-94
 URL: http://issues.apache.org/jira/browse/OPENJPA-94
 Project: OpenJPA
  Issue Type: New Feature
Reporter: Shay Banon

 Currently, in the persistence context, one can define:
 <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema" />
 Which causes OpenJPA to build the database schema based on the mapping 
 defined. Currently, there is no way to tell it to drop tables if they 
 exist before creating the database schema. This is very useful for tests 
 that drop (if they exist) and create new tables for each test.





Re: Perform automatic drop and create db schema

2007-01-02 Thread Shay Banon

It is one of the first things I tried; I got this exception:

Caused by: org.apache.openjpa.lib.util.ParseException: There was an error
while setting up the configuration plugin option SynchronizeMappings. The
plugin was of type org.apache.openjpa.jdbc.meta.MappingTool. Setter
methods for the following plugin properties were not available in that type:
[DropTables]. Possible plugin properties are: [ACTIONS, ACTION_ADD,
ACTION_BUILD_SCHEMA, ACTION_DROP, ACTION_EXPORT, ACTION_IMPORT,
ACTION_REFRESH, ACTION_VALIDATE, DropUnusedComponents, ForeignKeys,
IgnoreErrors, Indexes, MODE_MAPPING, MODE_MAPPING_INIT, MODE_META,
MODE_NONE, MODE_QUERY, MappingWriter, MetaDataFile, PrimaryKeys, ReadSchema,
Repository, SCHEMA_ACTION_NONE, SchemaAction, SchemaGroup, SchemaTool,
SchemaWriter, Sequences].
Ensure that your plugin configuration string uses key values that correspond
to setter methods in the plugin class.
at
org.apache.openjpa.lib.conf.Configurations.configureInstance(Configurations.java:352)
at
org.apache.openjpa.lib.conf.Configurations.configureInstance(Configurations.java:280)
at
org.apache.openjpa.jdbc.kernel.JDBCBrokerFactory.synchronizeMappings(JDBCBrokerFactory.java:153)
at
org.apache.openjpa.jdbc.kernel.JDBCBrokerFactory.newBrokerImpl(JDBCBrokerFactory.java:127)
at
org.apache.openjpa.kernel.AbstractBrokerFactory.newBroker(AbstractBrokerFactory.java:164)
... 17 more


I also tried different combinations but to no avail...



Abe White wrote:
 
 Unfortunately, we don't have any automatic drop-table feature, but  
 I agree it would be handy (you might want to make a JIRA report  
 with the suggestion).
 
 Note that the SynchronizeMappings property allows you to use all  
 the arguments of the mappingtool.  So you can try something like:
 
 buildSchema(SchemaAction=refresh, DropTables=true)
 
 Theoretically, that will drop unused columns and tables while adding  
 any new columns and tables needed for your mappings.  If you try it,  
 let us know how it works out. 
 
 




Re: Perform automatic drop and create db schema

2007-01-02 Thread Abe White
Caused by: org.apache.openjpa.lib.util.ParseException: There was an error
while setting up the configuration plugin option SynchronizeMappings. The
plugin was of type org.apache.openjpa.jdbc.meta.MappingTool. Setter
methods for the following plugin properties were not available in that
type: [DropTables].


Try it without the DropTables=true.  It won't drop unused tables, but  
it should still drop unused columns.  If that works, it should be a  
pretty minor fix to get DropTables working.  (It might work already  
if you change it to SchemaTool.DropTables=true, but it won't be  
deterministic so I'd rather leave it aside for now.) 
 


Re: Perform automatic drop and create db schema

2007-01-02 Thread Shay Banon

The way I usually do things is by starting a transaction and simply rolling
it back when my test finishes (if it wasn't committed/rolled back during the
test method). This usually covers 90% of an application's integration
testing needs. In my case, I have simple tests and I am running them against
an in-memory HSQL database, so the costs are very small. I need this feature
since I also test the transaction integration, and that requires a more
complex scenario than the test scenario I described in the beginning. I
don't want to pollute my code with JDBC code that could be avoided.
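
The rollback pattern, as a sketch using the standard JPA EntityTransaction
API (em is an EntityManager created per test):

protected void setUp() {
    em.getTransaction().begin();
}

protected void tearDown() {
    // roll back unless the test already ended the transaction itself
    if (em.getTransaction().isActive()) {
        em.getTransaction().rollback();
    }
}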

As developers of tools/libraries/frameworks, we need to give developers all
the options and educate them about the benefits/drawbacks of each one. I am
glad for the responses in this thread! I have been burned by other libraries
that are pretty nasty in their responses (but I won't name names :) ).


robert burrell donkin-2 wrote:
 
 On 1/2/07, Marc Prud'hommeaux [EMAIL PROTECTED] wrote:

 Personally, I like to put both the data cleanup and data
 initialization in the setUp() stage. It's sometimes a bit slower, but
 if there is faulty cleanup logic in a previous test's tearDown(), it
 may affect some random test down the road, and it can sometimes be
 very difficult to track down.
 
 i found that data cleaning in both setUp() and tearDown(), is
 necessary (at least for standard commercial development)
 
 - robert
 
 




Re: Perform automatic drop and create db schema

2007-01-02 Thread Abe White
Just using refresh does not clean up the data in the database (I get unique
constraint violations). Just for kicks I tried SchemaTool.DropTables=true;
it did pass the configuration phase, but it still did not clean the
data/schema.


None of the options I mentioned are meant to clean up the data.  Just  
to drop unused schema components.



RE: Getting all the ClassMetaDatas

2007-01-02 Thread Shay Banon

Regarding dirty fields, Compass does not make use of them. When an object
changes, it must be completely reindexed by Compass (or deleted). In my
listeners, I simply get the source object and perform the appropriate
operation on Compass. It would be nice if Compass supported dirty fields,
but it is not simple at all to support (mainly because of Lucene). It is
really fast though, especially because of how Compass handles transactions
(explained in the next paragraph).

Compass extends Lucene core classes to add 2PC transaction support. By
default, the transactional data is stored in memory and flushed to the
index during commit. Compass also supports storing the transactional data
on the file system, which basically allows for much longer-running
transactions. This is both a configuration setting and a runtime setting.
Note as well that the indexing operation (different from the mirroring one)
uses a different transaction isolation called batch insert, which has no
problem performing a long-running *fresh* indexing process.

On top of the Lucene extension, Compass integrates nicely with different
transaction managers, namely JTA (both JTA synchronization and XA) and
Spring's PlatformTransactionManager. The only soft point is when using
OpenJPA in resource-local transaction mode without any transaction manager.
Compass could (as you suggested) integrate its transaction management with
OpenJPA in such cases. I will look into it after I get the first integration
stuff working.

Last, regarding savepoints: Compass does not support savepoints currently,
though with the current transaction architecture they could be easily added.
The main point (as you mentioned) is integrating them with other
savepoint-enabled transaction strategies.

Cheers,
Shay


Patrick Linskey wrote:
 
 You may also be interested in the StateManager.getDirty() method, which
 returns a BitSet corresponding to the entries in
 StateManager.getMetaData().getFields(). The BitSet identifies which
 fields in a given object are modified.
 
 On top of that, you could also take advantage of
 StateManager.getFlushed(), which returns another BitSet indicating which
 fields have already been flushed. Combining the two, you can compute
 which fields are dirty and unflushed; in a pre-flush callback, these are
 the fields that have been mutated since the last time flush() was
 invoked (directly or indirectly).
 
 Speaking of incremental flushing, is Compass transactional? IOW, is it
 possible to periodically (at flush() time) update Compass with
 mutations, and then only make the changes visible outside the current
 transactional scope at commit time? If so, it'd be interesting to also
 explore how we could hook up OpenJPA savepoints (when available). If
 not, then we should make sure we figure out what the memory implications
 are of using Compass + OpenJPA incremental flushes + large transactions.
 OpenJPA has features designed for optimizing memory handling in large
 transactions; Compass/OpenJPA work could probably dovetail nicely into
 some or all of these existing integration points.
 
 -Patrick
 
 -- 
 Patrick Linskey
 BEA Systems, Inc. 
 
 
 -Original Message-
 From: Shay Banon [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, January 02, 2007 12:22 AM
 To: open-jpa-dev@incubator.apache.org
 Subject: RE: Getting all the ClassMetaDatas
 
 
 [...]

RE: Perform automatic drop and create db schema

2007-01-02 Thread Shay Banon

I think so as well :)


Patrick Linskey wrote:
 
 I think that Abe and Shay are talking about slightly different features.
 
 -Patrick
 
 -- 
 Patrick Linskey
 BEA Systems, Inc. 
 
 
 -Original Message-
 From: Abe White 
 Sent: Tuesday, January 02, 2007 2:04 PM
 To: open-jpa-dev@incubator.apache.org
 Subject: Re: Perform automatic drop and create db schema
 
  Just using refresh does not clean up the data in the database
  (getting unique constraint violations). Just for kicks I tried
  SchemaTool.DropTables=true; it did pass the configuration phase,
  but it still did not clean the data/schema.
 
 None of the options I mentioned are meant to clean up the data. Just
 to drop unused schema components.
 
 
 




[jira] Commented: (OPENJPA-94) Allow MappingTool and persistence.xml to support drop-create for database schema

2007-01-02 Thread Patrick Linskey (JIRA)

[ 
http://issues.apache.org/jira/browse/OPENJPA-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12461857
 ] 

Patrick Linskey commented on OPENJPA-94:


One downside of dropping and re-creating schemas is the latency of the 
operation. I think that it'd be more useful to have an option to automatically 
delete all records from all mapped tables. This option could potentially also 
work lazily if a full class list is not available up-front -- OpenJPA could 
issue a delete statement when a new ClassMapping is first initialized.

 Allow MappingTool and persistence.xml to support drop-create for database 
 schema
 

 Key: OPENJPA-94
 URL: http://issues.apache.org/jira/browse/OPENJPA-94
 Project: OpenJPA
  Issue Type: New Feature
Reporter: Shay Banon

 Currently, in the persistence context, one can define:
 <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema" />
 Which causes OpenJPA to build the database schema based on the mapping 
 defined. Currently, there is no way to tell it to drop tables if they 
 exist before creating the database schema. This is very useful for tests 
 that drop (if they exist) and create new tables for each test.





RE: Perform automatic drop and create db schema

2007-01-02 Thread Shay Banon

Automatically cleaning the data without dropping the tables makes even more
sense. That would be a really cool feature.


Patrick Linskey wrote:
 
 IMO, a more valuable option would be a way to delete all records in all
 mapped tables, rather than doing unnecessary schema interrogation.
 
 Additionally, note that with JPA, deleting all records during setUp() is
 as easy as em.createQuery("delete from Employee").executeUpdate();
 
 -Patrick
 
 -- 
 Patrick Linskey
 BEA Systems, Inc. 
 
 
 -Original Message-
 From: robert burrell donkin [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, January 02, 2007 1:39 PM
 To: open-jpa-dev@incubator.apache.org
 Subject: Re: Perform automatic drop and create db schema
 
 On 1/2/07, Craig L Russell [EMAIL PROTECTED] wrote:
 [...]
 
 
 




RE: Getting all the ClassMetaDatas

2007-01-02 Thread Shay Banon

I still need to provide a classloader for this method. Another thing: I get
only the class names, whereas I need the ClassMetaData in order to check
whether a class extends another mapped class (in such cases I exclude it
from the indexing process, since the select on the base class will return
the derived classes as well).

I was wondering if maybe I should work the same way the MappingTool works
(since it needs to get all the ClassMappings as well). I had a quick look at
the code, and it does some stuff that I am not sure I should do. What do you
say?


Patrick Linskey wrote:
 
 What happens if you use MetaDataRepository.getPersistentTypeNames()
 instead?
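 
 For reference, a sketch of that suggestion (the exact signature of
 getPersistentTypeNames is assumed here and worth double-checking against
 the OpenJPA source):
 
 MetaDataRepository repo =
     emf.getConfiguration().getMetaDataRepositoryInstance();
 // assumed signature: (boolean devpath, ClassLoader envLoader)
 Collection typeNames = repo.getPersistentTypeNames(false, loader);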
 
 -Patrick
 
 -- 
 Patrick Linskey
 BEA Systems, Inc. 
 
 
 -Original Message-
 From: Shay Banon [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, January 02, 2007 12:34 AM
 To: open-jpa-dev@incubator.apache.org
 Subject: Re: Getting all the ClassMetaDatas
 
 
 [...]


@IdClass annotation for id field of type byte[]

2007-01-02 Thread Kevin Sutter

Hi,
Some experimenting with the @IdClass support is producing a strange
exception message when attempting to map an id field of type byte[].
According to the OpenJPA documentation, we need to use an Identity Class to
use byte[] as the id field type.  Something like this:

@Entity
@IdClass(jpa.classes.Guid.class)
@Table(name="AGENT", schema="CDB")
public class Agent {

    @Id
    @Column(name="ME_GUID")
    private byte[] guid;
...

The Guid class has also been created with a single instance variable of type
byte[]:

public class Guid implements Serializable {
   private byte[] guid;
...
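
As an aside, the JPA spec requires an id class to implement equals() and
hashCode(); for a byte[] field that would presumably need to be value-based,
along these lines:

public boolean equals(Object o) {
    return o instanceof Guid
        && java.util.Arrays.equals(guid, ((Guid) o).guid);
}

public int hashCode() {
    return java.util.Arrays.hashCode(guid);
}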

But, during the loading of the database, I am getting the following error...

org.apache.openjpa.util.MetaDataException: You cannot join on column 
AGENT.ME_GUID.  It is not managed by a mapping that supports joins

First off, the exception is confusing since I don't believe I am attempting
to do a join.  The guid column is in the same table as the Agent.

Also, this exception is supposedly only being produced with Oracle, not
DB2.  (I have not been able to verify that yet.)  This would seem to
indicate that it's dictionary-specific, but I'm not seeing anything there
yet...

I am in the process of validating the problem, but I thought I would drop a
line to the team to see if it rings any bells...

Thanks,
Kevin


Re: @IdClass annotation for id field of type byte[]

2007-01-02 Thread Marc Prud'hommeaux

Kevin-

Also, this exception is supposedly only being produced with Oracle, not
DB2.  (I have not been able to verify that yet.)  This would seem to
indicate that it's dictionary-specific, but I'm not seeing anything there
yet...


Does Oracle even support blob primary keys? My recollection is that  
it didn't...


I suspect that the problem might be that since Oracle has a number of  
problems with in-line blobs in statements, we frequently issue a  
separate statement to load and store blobs from and to rows, but if  
it is the primary key, then we might be conflicting with that. Can  
you post the complete stack trace?





On Jan 2, 2007, at 6:03 PM, Kevin Sutter wrote:


[...]




Re: @IdClass annotation for id field of type byte[]

2007-01-02 Thread Dain Sundstrom
Can you have a Java field of type byte[] that maps to a NUMERIC (or,
heck, a varchar) in the db?  I'm guessing that Kevin's guid is a fixed
128-bit number.  If it is and he can map it to a non-blob type, it
should be possible to join with any database system.


-dain

Re: @IdClass annotation for id field of type byte[]

2007-01-02 Thread Igor Fedorenko
You can use RAW(16) to store GUIDs in Oracle. This datatype is
allowed in primary keys.
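
For example, something like the following might pin the column type (a
sketch only; the columnDefinition mapping is an assumption, and whether
OpenJPA then treats the column as joinable rather than as a blob is
exactly what would need verifying):

@Entity
@IdClass(jpa.classes.Guid.class)
@Table(name = "AGENT", schema = "CDB")
public class Agent {

   @Id
   // Assumption: force the DDL type to RAW(16) instead of Oracle's default BLOB.
   @Column(name = "ME_GUID", columnDefinition = "RAW(16)", length = 16)
   private byte[] guid;
...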


--
Regards,
Igor

Re: @IdClass annotation for id field of type byte[]

2007-01-02 Thread Marc Prud'hommeaux


Interesting ... sounds like a legit bug, then (although it bears  
noting that byte[] primary keys aren't actually allowed by the JPA  
spec, as per section 2.1.4 ... support for them is an OpenJPA  
extension).


My guess is that this only affects Oracle, due to our special  
handling of blobs. It'd be interesting to see if any other databases  
that support byte[] primary keys exhibit this problem.




RE: @IdClass annotation for id field of type byte[]

2007-01-02 Thread Patrick Linskey
You could do this with an @Externalizer that converts the byte[] into a
long or a string or what have you, and a @Factory that reverses it.
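
A sketch of what that might look like, storing the guid as a hex string.
GuidUtil and its methods are hypothetical helpers, the exact
method-resolution rules for @Externalizer/@Factory should be checked
against the OpenJPA reference guide, and whether an externalized field
can also serve as the @Id is worth verifying:

import org.apache.openjpa.persistence.Externalizer;
import org.apache.openjpa.persistence.Factory;

@Entity
public class Agent {

   @Id
   @Externalizer("jpa.classes.GuidUtil.toHex")   // byte[] -> String in the db
   @Factory("jpa.classes.GuidUtil.fromHex")      // String -> byte[] on load
   @Column(name = "ME_GUID")
   private byte[] guid;
...

public class GuidUtil {
   public static String toHex(byte[] guid) {
       StringBuilder sb = new StringBuilder(guid.length * 2);
       for (byte b : guid)
           sb.append(String.format("%02x", b & 0xff));
       return sb.toString();
   }

   public static byte[] fromHex(String hex) {
       byte[] out = new byte[hex.length() / 2];
       for (int i = 0; i < out.length; i++)
           out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
       return out;
   }
}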

-Patrick

-- 
Patrick Linskey
BEA Systems, Inc. 




RE: Perform automatic drop and create db schema

2007-01-02 Thread Patrick Linskey
 -Original Message-
 From: Shay Banon [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, January 02, 2007 2:33 PM
 To: open-jpa-dev@incubator.apache.org
 Subject: RE: Perform automatic drop and create db schema
 
 
 Automatically cleaning the data without dropping the tables makes even
 more sense. That would be a really cool feature.

Deciding that two is a quorum, and needing something to do on my flight
to Salt Lake City, I implemented a new SchemaTool action called
'deleteTableContents' that does what you'd expect, more-or-less.

Along the way, I made it possible to specify multiple SchemaTool actions
via a comma-separated list.

I've done some basic testing of it; more testing (especially around the
scope of the classes that the operations happen on) would probably be a
good thing.

You can try it out like so:

<property name="openjpa.jdbc.SynchronizeMappings"
    value="buildSchema,deleteTableContents" />

or:

<property name="openjpa.jdbc.SynchronizeMappings"
    value="refresh,deleteTableContents" />
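
For test bootstrap code, the same setting can presumably also be passed
programmatically (the persistence-unit name here is made up):

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

...
Map props = new HashMap();
// Build the schema if needed, then clear all table contents before each run.
props.put("openjpa.jdbc.SynchronizeMappings", "buildSchema,deleteTableContents");
EntityManagerFactory emf = Persistence.createEntityManagerFactory("test-pu", props);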

-Patrick


[jira] Commented: (OPENJPA-94) Allow MappingTool and persistence.xml to support drop-create for database schema

2007-01-02 Thread Patrick Linskey (JIRA)

[ 
https://issues.apache.org/jira/browse/OPENJPA-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12461899
 ] 

Patrick Linskey commented on OPENJPA-94:


I added an optimization for MySQL with r492032, but disabled it by default 
since http://dev.mysql.com/doc/refman/5.0/en/delete.html mentions that it may 
fail when using InnoDB with delete constraints.

 Allow MappingTool and persistence.xml to support drop-create for database 
 schema
 

 Key: OPENJPA-94
 URL: https://issues.apache.org/jira/browse/OPENJPA-94
 Project: OpenJPA
  Issue Type: New Feature
Reporter: Shay Banon

 Currently, in the persistence context, one can define:
 <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema" />
 which causes OpenJPA to build the database schema based on the mappings 
 defined. Currently, there is no way to tell it to drop tables, if they 
 exist, before creating the database schema. This is very useful for tests 
 that drop (if they exist) and create new tables for each test.





BrokerImpl using thread class loader?

2007-01-02 Thread Dain Sundstrom
The BrokerImpl class initializes its _loader to
Thread.currentThread().getContextClassLoader() when constructed (i.e.,
when an EM is constructed).  This class loader is used while loading the
mappings file, which causes the entity classes to be loaded from the
thread context class loader instead of the class loader specified in the
PersistenceUnit.  Is this expected behavior?


In the meantime, I'll code around it.
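
The workaround sketch, for reference (persistenceUnitLoader stands in for
whatever class loader the persistence unit actually specifies):

// Temporarily make the persistence unit's loader the thread context
// loader so BrokerImpl picks it up, then restore the original.
ClassLoader oldLoader = Thread.currentThread().getContextClassLoader();
Thread.currentThread().setContextClassLoader(persistenceUnitLoader);
try {
    EntityManager em = emf.createEntityManager();
    // ... work with the EntityManager ...
} finally {
    Thread.currentThread().setContextClassLoader(oldLoader);
}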

-dain