Re: Using DDL generation in a Java EE environment?

2007-04-02 Thread Marina Vatkina

Marc,

All my experiments are just that - experiments. My goal is to enable DDL 
generation in GlassFish when the persistence provider is set to that of the 
OpenJPA and the user-visible behavior is similar to the one we have for CMP 
(i.e. our internal code) and for TopLink Essentials as the persistence provider.


For now I'm exploring my options.

I'm using the example (I started with the EE version, and have now switched to the SE 
version) from the GlassFish persistence example page 
(https://glassfish.dev.java.net/javaee5/persistence/persistence-example.html). 
The example has two entities, Customer and Order, and a one-to-many relationship between 
them. Of course, I'm slightly modifying persistence.xml to make it work with 
OpenJPA.
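For concreteness, the provider switch mentioned above might look roughly like this in persistence.xml. This is a sketch only: the unit name is made up, and the provider class name should be checked against the OpenJPA release in use.

```xml
<!-- Sketch of the persistence.xml tweak described above (assumed names). -->
<persistence-unit name="pu1">
  <!-- Switch the provider from TopLink Essentials to OpenJPA -->
  <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
  <class>Customer</class>
  <class>Order</class>
  <properties>
    <!-- connection settings elided; fill in for your database -->
  </properties>
</persistence-unit>
```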


In one of my experiments I created a customer table with columns A and B, with A 
being a PK, before running the mapping tool. The mapping tool's results were a bit 
surprising ;).


thanks,
-marina

Marc Prud'hommeaux wrote:


On Mar 30, 2007, at 4:17 PM, Marina Vatkina wrote:


Marc,

I'd rather have provider-neutral code in GlassFish ;(.



Well, until the JPA spec (or some other spec) provides details on how 
schema creation and/or migration should work, I don't think it is 
realistic to expect every vendor to behave in exactly the same way 
w.r.t. how their schema manipulation tools work. We're always open to 
specific suggestions for how to improve how ours operate, though.



The problem with adding new columns to the existing table in my 
example below is a) having a PK in the table that is not mapped to 
any field of the entity, and b) not making the entity's id a PK (I 
didn't get any warning about not being able to add or create a PK - 
does OpenJPA suppress them?).



I don't really understand. Are you saying you want to map to a table 
that has a primary key, but you don't want to map any field in your 
entity to that primary key? If that is the case, how do you expect 
primary key generation to work?


I wouldn't be surprised, however, if we don't actually validate that  
what you declare to be a primary key in your entity is actually  defined 
as a primary key in the database. We don't require that the  column 
actually be declared to be a primary key in order for OpenJPA  to work 
correctly (although I can't envision any reason why you  wouldn't want 
to make it a proper primary key).




thanks,
-marina

Marc Prud'hommeaux wrote:


Marina-
The problem is that OpenJPA just ignores extra, unmapped columns. 
Since we don't require that you map all of the columns of a 
database table to an entity, tables can exist that have unmapped 
columns. By default, we tend to err on the side of caution, so we 
never drop tables or columns. The deleteTableContents flag merely 
deletes all the rows in a table; it doesn't actually drop the table.
We don't have any options for asserting that the table is mapped 
completely. That might be a nice enhancement, and would allow 
OpenJPA to warn when it sees an existing table with unmapped columns.
You could manually drop the tables using the mappingtool by 
specifying the schemaAction argument to drop, but there's no 
way to do it automatically using SynchronizeMappings. Note that 
there is nothing preventing you from manually invoking the 
MappingTool class from any startup or glue code that you want.
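As a rough sketch of that manual drop-then-recreate sequence from the command line. The class name and the schemaAction argument come from this thread; the exact flag spelling and the entity file arguments (Customer.java, Order.java are placeholders here) should be verified against the OpenJPA docs for the version in use:

```shell
# Sketch only: run the mapping tool twice with different schemaAction values,
# first dropping the tables for the mapped classes, then re-creating them.
java org.apache.openjpa.jdbc.meta.MappingTool -schemaAction drop Customer.java Order.java
java org.apache.openjpa.jdbc.meta.MappingTool -schemaAction build Customer.java Order.java
```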

On Mar 29, 2007, at 4:18 PM, Marina Vatkina wrote:


Marc, Patrick,

I didn't look into the file story yet, but what I've seen as the   
result of using


  <property name="openjpa.jdbc.SynchronizeMappings"
            value="buildSchema(SchemaAction='add,deleteTableContents')"/>


looks surprising: if there is an entity Foo with persistent 
fields 'x' and 'y' and a table FOO already exists in 
the database with columns A and B (there are no fields 'a' and 'b' 
in the entity), the table is not recreated; instead, the columns X and 
Y are added to the table FOO. The 'deleteTableContents' doesn't 
affect this behavior.


Is this the expected behavior?

What should I use to either create the table properly or get a 
message that such a table already exists (and, as in my case, doesn't 
match the entity)?


thanks,
-marina

Marina Vatkina wrote:

Then I'll first start with an easier task - check what happens  in  
EE if entities are not explicitly listed in the  persistence.xml  
file :).

thanks,
-marina
Marc Prud'hommeaux wrote:


Marina-

Let me give it a try. What would the persistence.xml property 
look like to generate a .sql file?






Actually, I just took a look at this, and it looks like it isn't 
possible to use the SynchronizeMappings property to 
automatically output a sql file. The reason is that the 
property takes a standard OpenJPA plugin string that configures 
an instance of MappingTool, but the MappingTool class doesn't 
have a setter for the SQL file to write out to.


So I think your only recourse would be to write your own adapter 
to do this that manually 

Re: Using DDL generation in a Java EE environment?

2007-04-01 Thread Marc Prud'hommeaux

Marina-

The -sql flag merely says that OpenJPA should write the SQL to an 
external file. It still needs to connect to the database in order to 
see which tables currently exist, so it can determine if it needs to 
create new tables or columns.


If you just want a fresh database view for the mapping tool, such 
that the mapping tool thinks that the database has no schema defined, 
then you can specify the -SchemaFactory flag to be 
file(my-schema.xml), where the my-schema.xml file is a schema definition file 
(see the docs for the format) that contains no tables or columns. This 
should also prevent OpenJPA from having to connect to the database in 
order to read the columns and tables.
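For example, an intentionally empty schema file of the sort described above might look like this. This is a sketch: the root element name is an assumption and should be checked against the schema-definition format in the OpenJPA docs:

```xml
<!-- my-schema.xml: an empty schema definition, so the mapping tool
     sees a database with no existing tables (element name assumed;
     see the docs for the exact format). -->
<schemas>
</schemas>
```

Presumably the property form of the same flag would be setting openjpa.jdbc.SchemaFactory to file(my-schema.xml) in persistence.xml.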





On Mar 30, 2007, at 4:58 PM, Marina Vatkina wrote:


Marc,

I'm trying to run MappingTool to look at the -sql option, but I can't 
make it work with a PU without connecting to the database (my 
persistence.xml has jta-data-source), and I can't find in the 
docs how to specify the DBDictionary without persistence.xml.


thanks,
-marina

Marc Prud'hommeaux wrote:

Marina-
The problem is that OpenJPA just ignores extra, unmapped columns. 
Since we don't require that you map all of the columns of a 
database table to an entity, tables can exist that have unmapped 
columns. By default, we tend to err on the side of caution, so we 
never drop tables or columns. The deleteTableContents flag 
merely deletes all the rows in a table; it doesn't actually drop 
the table.
We don't have any options for asserting that the table is mapped 
completely. That might be a nice enhancement, and would allow 
OpenJPA to warn when it sees an existing table with unmapped columns.
You could manually drop the tables using the mappingtool by 
specifying the schemaAction argument to drop, but there's no 
way to do it automatically using SynchronizeMappings. Note 
that there is nothing preventing you from manually invoking the 
MappingTool class from any startup or glue code that you want.

On Mar 29, 2007, at 4:18 PM, Marina Vatkina wrote:

Marc, Patrick,

I didn't look into the file story yet, but what I've seen as the   
result of using


  <property name="openjpa.jdbc.SynchronizeMappings"
            value="buildSchema(SchemaAction='add,deleteTableContents')"/>


looks surprising: if there is an entity Foo with persistent 
fields 'x' and 'y' and a table FOO already exists in 
the database with columns A and B (there are no fields 'a' and 
'b' in the entity), the table is not recreated; instead, the columns 
X and Y are added to the table FOO. The 'deleteTableContents' 
doesn't affect this behavior.


Is this the expected behavior?

What should I use to either create the table properly or get a 
message that such a table already exists (and, as in my case, doesn't 
match the entity)?


thanks,
-marina

Marina Vatkina wrote:

Then I'll first start with an easier task - check what happens  
in  EE if entities are not explicitly listed in the  
persistence.xml  file :).

thanks,
-marina
Marc Prud'hommeaux wrote:


Marina-

Let me give it a try. What would the persistence.xml property 
look like to generate a .sql file?





Actually, I just took a look at this, and it looks like it 
isn't possible to use the SynchronizeMappings property to 
automatically output a sql file. The reason is that the 
property takes a standard OpenJPA plugin string that 
configures an instance of MappingTool, but the MappingTool 
class doesn't have a setter for the SQL file to write out to.


So I think your only recourse would be to write your own 
adapter to do this that manually creates a MappingTool 
instance and runs it with the correct flags for outputting a 
sql file. Take a look at the javadocs for the MappingTool to 
get started, and let us know if you have any questions about 
proceeding.




On Mar 20, 2007, at 4:59 PM, Marina Vatkina wrote:


Marc,

Marc Prud'hommeaux wrote:


Marina-

They do in SE, but as there is no requirement to do it in   
EE,   people try to reduce the amount of typing ;).




Hmm ... we might not actually require it in EE, since we do
examine  the ejb jar to look for persistent classes. I'm not   
sure  though.
You should test with both listing them and not listing them.   
I'd  be  interested to know if it works without.





Let me give it a try. What would the persistence.xml property 
look like to generate a .sql file? Where will it be placed in 
an EE environment? Does it use the name as-is or prepend it with 
some path?


thanks.


On Mar 20, 2007, at 4:19 PM, Marina Vatkina wrote:


Marc,

Marc Prud'hommeaux wrote:


Marina-
On Mar 20, 2007, at 4:02 PM, Marina Vatkina wrote:


Marc,

Thanks for the pointers. Can you please answer the   
following  set  of  questions?


1. The doc requires that "In order to enable automatic 
runtime mapping, you must first list all your 
persistent classes." Is this true for the EE case also?





Yes. 

Re: Using DDL generation in a Java EE environment?

2007-04-01 Thread Marc Prud'hommeaux


On Mar 30, 2007, at 4:17 PM, Marina Vatkina wrote:


Marc,

I'd rather have provider-neutral code in GlassFish ;(.


Well, until the JPA spec (or some other spec) provides details on how 
schema creation and/or migration should work, I don't think it is 
realistic to expect every vendor to behave in exactly the same way 
w.r.t. how their schema manipulation tools work. We're always open to 
specific suggestions for how to improve how ours operate, though.



The problem with adding new columns to the existing table in my 
example below is a) having a PK in the table that is not mapped 
to any field of the entity, and b) not making the entity's id a PK (I 
didn't get any warning about not being able to add or create a PK - 
does OpenJPA suppress them?).


I don't really understand. Are you saying you want to map to a table 
that has a primary key, but you don't want to map any field in your 
entity to that primary key? If that is the case, how do you expect 
primary key generation to work?


I wouldn't be surprised, however, if we don't actually validate that  
what you declare to be a primary key in your entity is actually  
defined as a primary key in the database. We don't require that the  
column actually be declared to be a primary key in order for OpenJPA  
to work correctly (although I can't envision any reason why you  
wouldn't want to make it a proper primary key).




thanks,
-marina

Marc Prud'hommeaux wrote:

Marina-
The problem is that OpenJPA just ignores extra, unmapped columns. 
Since we don't require that you map all of the columns of a 
database table to an entity, tables can exist that have unmapped 
columns. By default, we tend to err on the side of caution, so we 
never drop tables or columns. The deleteTableContents flag 
merely deletes all the rows in a table; it doesn't actually drop 
the table.
We don't have any options for asserting that the table is mapped 
completely. That might be a nice enhancement, and would allow 
OpenJPA to warn when it sees an existing table with unmapped columns.
You could manually drop the tables using the mappingtool by 
specifying the schemaAction argument to drop, but there's no 
way to do it automatically using SynchronizeMappings. Note 
that there is nothing preventing you from manually invoking the 
MappingTool class from any startup or glue code that you want.

On Mar 29, 2007, at 4:18 PM, Marina Vatkina wrote:

Marc, Patrick,

I didn't look into the file story yet, but what I've seen as the   
result of using


  <property name="openjpa.jdbc.SynchronizeMappings"
            value="buildSchema(SchemaAction='add,deleteTableContents')"/>


looks surprising: if there is an entity Foo with persistent 
fields 'x' and 'y' and a table FOO already exists in 
the database with columns A and B (there are no fields 'a' and 
'b' in the entity), the table is not recreated; instead, the columns 
X and Y are added to the table FOO. The 'deleteTableContents' 
doesn't affect this behavior.


Is this the expected behavior?

What should I use to either create the table properly or get a 
message that such a table already exists (and, as in my case, doesn't 
match the entity)?


thanks,
-marina

Marina Vatkina wrote:

Then I'll first start with an easier task - check what happens  
in  EE if entities are not explicitly listed in the  
persistence.xml  file :).

thanks,
-marina
Marc Prud'hommeaux wrote:


Marina-

Let me give it a try. What would the persistence.xml property 
look like to generate a .sql file?





Actually, I just took a look at this, and it looks like it 
isn't possible to use the SynchronizeMappings property to 
automatically output a sql file. The reason is that the 
property takes a standard OpenJPA plugin string that 
configures an instance of MappingTool, but the MappingTool 
class doesn't have a setter for the SQL file to write out to.


So I think your only recourse would be to write your own 
adapter to do this that manually creates a MappingTool 
instance and runs it with the correct flags for outputting a 
sql file. Take a look at the javadocs for the MappingTool to 
get started, and let us know if you have any questions about 
proceeding.




On Mar 20, 2007, at 4:59 PM, Marina Vatkina wrote:


Marc,

Marc Prud'hommeaux wrote:


Marina-

They do in SE, but as there is no requirement to do it in   
EE,   people try to reduce the amount of typing ;).




Hmm ... we might not actually require it in EE, since we do
examine  the ejb jar to look for persistent classes. I'm not   
sure  though.
You should test with both listing them and not listing them.   
I'd  be  interested to know if it works without.





Let me give it a try. What would the persistence.xml property 
look like to generate a .sql file? Where will it be placed in 
an EE environment? Does it use the name as-is or prepend it with 
some path?


thanks.


On Mar 20, 2007, at 4:19 PM, 

Re: Using DDL generation in a Java EE environment?

2007-03-30 Thread Marina Vatkina

Marc,

I'd rather have provider-neutral code in GlassFish ;(.

The problem with adding new columns to the existing table in my example below is 
a) having a PK in the table that is not mapped to any field of the entity, and 
b) not making the entity's id a PK (I didn't get any warning about not being able 
to add or create a PK - does OpenJPA suppress them?).


thanks,
-marina

Marc Prud'hommeaux wrote:

Marina-

The problem is that OpenJPA just ignores extra, unmapped columns. Since 
we don't require that you map all of the columns of a database table to 
an entity, tables can exist that have unmapped columns. By default, we 
tend to err on the side of caution, so we never drop tables or columns. 
The deleteTableContents flag merely deletes all the rows in a table; 
it doesn't actually drop the table.


We don't have any options for asserting that the table is mapped 
completely. That might be a nice enhancement, and would allow OpenJPA 
to warn when it sees an existing table with unmapped columns.


You could manually drop the tables using the mappingtool by specifying 
the schemaAction argument to drop, but there's no way to do it 
automatically using SynchronizeMappings. Note that there is nothing 
preventing you from manually invoking the MappingTool class from any 
startup or glue code that you want.




On Mar 29, 2007, at 4:18 PM, Marina Vatkina wrote:


Marc, Patrick,

I didn't look into the file story yet, but what I've seen as the  
result of using


  <property name="openjpa.jdbc.SynchronizeMappings"
            value="buildSchema(SchemaAction='add,deleteTableContents')"/>


looks surprising: if there is an entity Foo with persistent 
fields 'x' and 'y' and a table FOO already exists in the database 
with columns A and B (there are no fields 'a' and 'b' in the entity), 
the table is not recreated; instead, the columns X and Y are added to the 
table FOO. The 'deleteTableContents' doesn't affect this behavior.


Is this the expected behavior?

What should I use to either create the table properly or get a 
message that such a table already exists (and, as in my case, doesn't 
match the entity)?


thanks,
-marina

Marina Vatkina wrote:

Then I'll first start with an easier task - check what happens in  EE 
if entities are not explicitly listed in the persistence.xml  file :).

thanks,
-marina
Marc Prud'hommeaux wrote:


Marina-

Let me give it a try. What would the persistence.xml property 
look like to generate a .sql file?





Actually, I just took a look at this, and it looks like it isn't 
possible to use the SynchronizeMappings property to 
automatically output a sql file. The reason is that the property 
takes a standard OpenJPA plugin string that configures an 
instance of MappingTool, but the MappingTool class doesn't have a 
setter for the SQL file to write out to.


So I think your only recourse would be to write your own adapter 
to do this that manually creates a MappingTool instance and runs 
it with the correct flags for outputting a sql file. Take a look 
at the javadocs for the MappingTool to get started, and let us 
know if you have any questions about proceeding.




On Mar 20, 2007, at 4:59 PM, Marina Vatkina wrote:


Marc,

Marc Prud'hommeaux wrote:


Marina-

They do in SE, but as there is no requirement to do it in  EE,   
people try to reduce the amount of typing ;).




Hmm ... we might not actually require it in EE, since we do   
examine  the ejb jar to look for persistent classes. I'm not  
sure  though.
You should test with both listing them and not listing them.  I'd  
be  interested to know if it works without.





Let me give it a try. What would the persistence.xml property 
look like to generate a .sql file? Where will it be placed in an EE 
environment? Does it use the name as-is or prepend it with 
some path?


thanks.


On Mar 20, 2007, at 4:19 PM, Marina Vatkina wrote:


Marc,

Marc Prud'hommeaux wrote:


Marina-
On Mar 20, 2007, at 4:02 PM, Marina Vatkina wrote:


Marc,

Thanks for the pointers. Can you please answer the  following  
set  of  questions?


1. The doc requires that "In order to enable automatic 
runtime mapping, you must first list all your persistent 
classes." Is this true for the EE case also?





Yes. People usually list them all in the <class> tags in the 
persistence.xml file.






They do in SE, but as there is no requirement to do it in  EE,   
people try to reduce the amount of typing ;).


If OpenJPA can identify all entities in EE world, why can't  it  
do  the same for the schema generation?


I'll check the rest.

thanks,
-marina

2. Section 1.2, "Generating DDL SQL", talks about .sql files, 
but what I am looking for are jdbc files, i.e. files with 
lines that can be used directly as java.sql statements to 
be executed against the database.





The output should be sufficient. Try it out and see if the   
format  is  something you can use.


3. Is there a document 

Re: Using DDL generation in a Java EE environment?

2007-03-30 Thread Marina Vatkina

Marc,

I'm trying to run MappingTool to look at the -sql option, but I can't make it 
work with a PU without connecting to the database (my persistence.xml has 
jta-data-source), and I can't find in the docs how to specify the DBDictionary 
without persistence.xml.


thanks,
-marina

Marc Prud'hommeaux wrote:

Marina-

The problem is that OpenJPA just ignores extra, unmapped columns. Since 
we don't require that you map all of the columns of a database table to 
an entity, tables can exist that have unmapped columns. By default, we 
tend to err on the side of caution, so we never drop tables or columns. 
The deleteTableContents flag merely deletes all the rows in a table; 
it doesn't actually drop the table.


We don't have any options for asserting that the table is mapped 
completely. That might be a nice enhancement, and would allow OpenJPA 
to warn when it sees an existing table with unmapped columns.


You could manually drop the tables using the mappingtool by specifying 
the schemaAction argument to drop, but there's no way to do it 
automatically using SynchronizeMappings. Note that there is nothing 
preventing you from manually invoking the MappingTool class from any 
startup or glue code that you want.




On Mar 29, 2007, at 4:18 PM, Marina Vatkina wrote:


Marc, Patrick,

I didn't look into the file story yet, but what I've seen as the  
result of using


  <property name="openjpa.jdbc.SynchronizeMappings"
            value="buildSchema(SchemaAction='add,deleteTableContents')"/>


looks surprising: if there is an entity Foo with persistent 
fields 'x' and 'y' and a table FOO already exists in the database 
with columns A and B (there are no fields 'a' and 'b' in the entity), 
the table is not recreated; instead, the columns X and Y are added to the 
table FOO. The 'deleteTableContents' doesn't affect this behavior.


Is this the expected behavior?

What should I use to either create the table properly or get a 
message that such a table already exists (and, as in my case, doesn't 
match the entity)?


thanks,
-marina

Marina Vatkina wrote:

Then I'll first start with an easier task - check what happens in  EE 
if entities are not explicitly listed in the persistence.xml  file :).

thanks,
-marina
Marc Prud'hommeaux wrote:


Marina-

Let me give it a try. What would the persistence.xml property 
look like to generate a .sql file?





Actually, I just took a look at this, and it looks like it isn't 
possible to use the SynchronizeMappings property to 
automatically output a sql file. The reason is that the property 
takes a standard OpenJPA plugin string that configures an 
instance of MappingTool, but the MappingTool class doesn't have a 
setter for the SQL file to write out to.


So I think your only recourse would be to write your own adapter 
to do this that manually creates a MappingTool instance and runs 
it with the correct flags for outputting a sql file. Take a look 
at the javadocs for the MappingTool to get started, and let us 
know if you have any questions about proceeding.




On Mar 20, 2007, at 4:59 PM, Marina Vatkina wrote:


Marc,

Marc Prud'hommeaux wrote:


Marina-

They do in SE, but as there is no requirement to do it in  EE,   
people try to reduce the amount of typing ;).




Hmm ... we might not actually require it in EE, since we do   
examine  the ejb jar to look for persistent classes. I'm not  
sure  though.
You should test with both listing them and not listing them.  I'd  
be  interested to know if it works without.





Let me give it a try. What would the persistence.xml property 
look like to generate a .sql file? Where will it be placed in an EE 
environment? Does it use the name as-is or prepend it with 
some path?


thanks.


On Mar 20, 2007, at 4:19 PM, Marina Vatkina wrote:


Marc,

Marc Prud'hommeaux wrote:


Marina-
On Mar 20, 2007, at 4:02 PM, Marina Vatkina wrote:


Marc,

Thanks for the pointers. Can you please answer the  following  
set  of  questions?


1. The doc requires that "In order to enable automatic 
runtime mapping, you must first list all your persistent 
classes." Is this true for the EE case also?





Yes. People usually list them all in the <class> tags in the 
persistence.xml file.






They do in SE, but as there is no requirement to do it in  EE,   
people try to reduce the amount of typing ;).


If OpenJPA can identify all entities in EE world, why can't  it  
do  the same for the schema generation?


I'll check the rest.

thanks,
-marina

2. Section 1.2, "Generating DDL SQL", talks about .sql files, 
but what I am looking for are jdbc files, i.e. files with 
lines that can be used directly as java.sql statements to 
be executed against the database.





The output should be sufficient. Try it out and see if the   
format  is  something you can use.


3. Is there a document that describes all possible values  
for   the  openjpa.jdbc.SynchronizeMappings property?






Re: Using DDL generation in a Java EE environment?

2007-03-20 Thread Marc Prud'hommeaux

Marina-

On Mar 20, 2007, at 4:02 PM, Marina Vatkina wrote:


Marc,

Thanks for the pointers. Can you please answer the following set of  
questions?


1. The doc requires that "In order to enable automatic runtime 
mapping, you must first list all your persistent classes." Is this 
true for the EE case also?


Yes. People usually list them all in the <class> tags in the 
persistence.xml file.



2. Section 1.2, "Generating DDL SQL", talks about .sql files, but 
what I am looking for are jdbc files, i.e. files with lines 
that can be used directly as java.sql statements to be executed 
against the database.


The output should be sufficient. Try it out and see if the format is  
something you can use.



3. Is there a document that describes all possible values for the  
openjpa.jdbc.SynchronizeMappings property?


Unfortunately, no. Basically, the setting of the 
SynchronizeMappings property will be of the form 
action(Bean1=value1,Bean2=value2), where the bean values are those 
listed in org.apache.openjpa.jdbc.meta.MappingTool (whose javadoc you 
can see at http://incubator.apache.org/openjpa/docs/latest/javadoc/org/apache/openjpa/jdbc/meta/MappingTool.html).
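To illustrate that plugin-string form, here is a small, self-contained parser sketch. It is purely illustrative and is not OpenJPA's actual implementation; it just shows how a string like buildSchema(SchemaAction='add,deleteTableContents') splits into a default action plus bean properties, with single-quoted values allowed to contain commas:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative stand-in for OpenJPA's "action(Bean1=value1,Bean2=value2)"
// plugin-string form described above. NOT the real OpenJPA parser.
public class PluginString {

    // The part before the first '(' is the default action, e.g. "buildSchema".
    public static String action(String plugin) {
        int paren = plugin.indexOf('(');
        return (paren < 0 ? plugin : plugin.substring(0, paren)).trim();
    }

    // Bean properties inside the parentheses; single-quoted values may
    // themselves contain commas, e.g. SchemaAction='add,deleteTableContents'.
    public static Map<String, String> properties(String plugin) {
        Map<String, String> props = new LinkedHashMap<>();
        int open = plugin.indexOf('(');
        int close = plugin.lastIndexOf(')');
        if (open < 0 || close <= open)
            return props;
        String body = plugin.substring(open + 1, close);
        int start = 0;
        boolean quoted = false;
        for (int i = 0; i <= body.length(); i++) {
            // split on commas that are not inside single quotes
            if (i == body.length() || (body.charAt(i) == ',' && !quoted)) {
                String pair = body.substring(start, i).trim();
                int eq = pair.indexOf('=');
                if (eq > 0) {
                    String val = pair.substring(eq + 1).trim();
                    if (val.length() >= 2 && val.startsWith("'") && val.endsWith("'"))
                        val = val.substring(1, val.length() - 1);
                    props.put(pair.substring(0, eq).trim(), val);
                }
                start = i + 1;
            } else if (body.charAt(i) == '\'') {
                quoted = !quoted;
            }
        }
        return props;
    }

    public static void main(String[] args) {
        String plugin = "buildSchema(SchemaAction='add,deleteTableContents')";
        System.out.println(action(plugin));      // buildSchema
        System.out.println(properties(plugin));  // {SchemaAction=add,deleteTableContents}
    }
}
```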





thank you,
-marina

Marc Prud'hommeaux wrote:

Marina-
On Mar 15, 2007, at 5:01 PM, Marina Vatkina wrote:

Hi,

I am part of the GlassFish persistence team and was wondering 
how OpenJPA supports JPA auto DDL generation (we call it 
java2db) in a Java EE application server.


Our application server supports java2db by creating two sets of 
files for each PU: a ...dropDDL.jdbc and a ...createDDL.jdbc 
file on deploy (i.e. before the application is actually loaded 
into the container), then executing the 'create' file as the last 
step in deployment, and the 'drop' file on undeploy or as the 1st step 
in redeploy. This allows us to drop tables created by the 
previous deploy operation.


This approach is done for both the CMP and the default JPA 
provider. It would be nice to add java2db support for OpenJPA as 
well, and I'm wondering if we need to do anything special, or whether 
it'll all work just by itself?
We do have support for runtime creation of the schema via the 
openjpa.jdbc.SynchronizeMappings property. It is described at:
  http://incubator.apache.org/openjpa/docs/latest/manual/manual.html#ref_guide_mapping_synch
The property can be configured to run the mappingtool (also 
described in the documentation) at runtime against all the 
registered persistent classes.

Here are my 1st set of questions:

1. Which API would trigger the process, assuming the correct  
values  are specified in the persistence.xml file? Is it:

a) provider.createContainerEntityManagerFactory(...)? or
b) the 1st call to emf.createEntityManager() in this VM?
c) something else?

b

2. How would a user drop the tables in such an environment?
I don't think it can be used to automatically drop then create 
tables. The mappingtool can be executed manually twice, the 
first time to drop all the tables, and the second time to 
re-create them, but I don't think it can be automatically done at 
runtime with the SynchronizeMappings property.
3. If the answer to either 1a or 1b is yes, how does the code   
distinguish between the server startup time and the application   
being loaded for the 1st time?
That is one of the reasons why we think it would be inadvisable  
to  automatically drop tables at runtime :)
4. Is there a mode that allows creating a file with the jdbc   
statements to create or drop the tables and constraints?

Yes. See:
  http://incubator.apache.org/openjpa/docs/latest/manual/manual.html#ref_guide_ddl_examples

thank you,
-marina







Re: Using DDL generation in a Java EE environment?

2007-03-20 Thread Marina Vatkina

Marc,

Marc Prud'hommeaux wrote:

Marina-

On Mar 20, 2007, at 4:02 PM, Marina Vatkina wrote:


Marc,

Thanks for the pointers. Can you please answer the following set of  
questions?


1. The doc requires that "In order to enable automatic runtime 
mapping, you must first list all your persistent classes." Is this 
true for the EE case also?



Yes. People usually list them all in the <class> tags in the 
persistence.xml file.


They do in SE, but as there is no requirement to do it in EE, people try to 
reduce the amount of typing ;).


If OpenJPA can identify all entities in EE world, why can't it do the same for 
the schema generation?


I'll check the rest.

thanks,
-marina



2. Section 1.2, "Generating DDL SQL", talks about .sql files, but what 
I am looking for are jdbc files, i.e. files with lines that can 
be used directly as java.sql statements to be executed against the database.



The output should be sufficient. Try it out and see if the format is  
something you can use.



3. Is there a document that describes all possible values for the  
openjpa.jdbc.SynchronizeMappings property?



Unfortunately, no. Basically, the setting of the SynchronizeMappings 
property will be of the form action(Bean1=value1,Bean2=value2), where 
the bean values are those listed in 
org.apache.openjpa.jdbc.meta.MappingTool (whose javadoc you can see at 
http://incubator.apache.org/openjpa/docs/latest/javadoc/org/apache/openjpa/jdbc/meta/MappingTool.html).





thank you,
-marina

Marc Prud'hommeaux wrote:


Marina-
On Mar 15, 2007, at 5:01 PM, Marina Vatkina wrote:


Hi,

I am part of the GlassFish persistence team and was wondering how 
OpenJPA supports JPA auto DDL generation (we call it 
java2db) in a Java EE application server.


Our application server supports java2db by creating two sets of 
files for each PU: a ...dropDDL.jdbc and a ...createDDL.jdbc file 
on deploy (i.e. before the application is actually loaded into 
the container), then executing the 'create' file as the last step 
in deployment, and the 'drop' file on undeploy or as the 1st step in 
redeploy. This allows us to drop tables created by the previous 
deploy operation.


This approach is done for both the CMP and the default JPA 
provider. It would be nice to add java2db support for OpenJPA as 
well, and I'm wondering if we need to do anything special, or whether 
it'll all work just by itself?


We do have support for runtime creation of the schema via the 
openjpa.jdbc.SynchronizeMappings property. It is described at:
  http://incubator.apache.org/openjpa/docs/latest/manual/manual.html#ref_guide_mapping_synch
The property can be configured to run the mappingtool (also 
described in the documentation) at runtime against all the 
registered persistent classes.



Here are my 1st set of questions:

1. Which API would trigger the process, assuming the correct  
values  are specified in the persistence.xml file? Is it:

a) provider.createContainerEntityManagerFactory(...)? or
b) the 1st call to emf.createEntityManager() in this VM?
c) something else?


b


2. How would a user drop the tables in such environment?


I don't think it can be used to automatically drop then create
tables. The mappingtool can be executed manually twice, the first
time to drop all the tables, and the second time to re-create them,
but I don't think it can be automatically done at runtime with the
SynchronizeMappings property.


3. If the answer to either 1a or 1b is yes, how does the code   
distinguish between the server startup time and the application   
being loaded for the 1st time?


That is one of the reasons why we think it would be inadvisable  to  
automatically drop tables at runtime :)


4. Is there a mode that allows creating a file with the jdbc   
statements to create or drop the tables and constraints?


Yes. See:
  http://incubator.apache.org/openjpa/docs/latest/manual/  
manual.html#ref_guide_ddl_examples



thank you,
-marina









Re: Using DDL generation in a Java EE environment?

2007-03-20 Thread Marc Prud'hommeaux

Marina-

They do in SE, but as there is no requirement to do it in EE,  
people try to reduce the amount of typing ;).


Hmm ... we might not actually require it in EE, since we do examine  
the ejb jar to look for persistent classes. I'm not sure though.


You should test with both listing them and not listing them. I'd be  
interested to know if it works without.




On Mar 20, 2007, at 4:19 PM, Marina Vatkina wrote:


Marc,

Marc Prud'hommeaux wrote:

Marina-
On Mar 20, 2007, at 4:02 PM, Marina Vatkina wrote:

Marc,

Thanks for the pointers. Can you please answer the following set  
of  questions?


1. The doc requires that "In order to enable automatic runtime
mapping, you must first list all your persistent classes." Is this
true for the EE case also?
Yes. People usually list them all in the class tags in the   
persistence.xml file.


They do in SE, but as there is no requirement to do it in EE,  
people try to reduce the amount of typing ;).


If OpenJPA can identify all entities in EE world, why can't it do  
the same for the schema generation?


I'll check the rest.

thanks,
-marina
2. Section 1.2, "Generating DDL SQL", talks about .sql files, but
what I am looking for are jdbc files, i.e. files with the lines
that can be used directly as java.sql statements to be executed
against the database.
The output should be sufficient. Try it out and see if the format  
is  something you can use.
3. Is there a document that describes all possible values for  
the  openjpa.jdbc.SynchronizeMappings property?
Unfortunately, no. Basically, the setting of the
SynchronizeMappings property will be of the form
action(Bean1=value1,Bean2=value2), where the bean values are those
listed in org.apache.openjpa.jdbc.meta.MappingTool (whose javadoc
you can see at
http://incubator.apache.org/openjpa/docs/latest/javadoc/org/apache/openjpa/jdbc/meta/MappingTool.html ).

thank you,
-marina

Marc Prud'hommeaux wrote:


Marina-
On Mar 15, 2007, at 5:01 PM, Marina Vatkina wrote:


Hi,

I am part of the GlassFish persistence team and was wondering   
how  does OpenJPA support JPA auto DDL generation (we call it   
java2db)  in a Java EE application server.


Our application server supports java2db via creating two sets  
of   files for each PU: a ...dropDDL.jdbc and  
a ...createDDL.jdbc  file  on deploy (i.e. before the  
application  is actually loaded  into the  container) and then  
executing 'create' file as the last  step in  deployment, and  
'drop' file on undeploy or the 1st step  in  redeploy. This  
allows us to drop tables created by the  previous  deploy  
operation.


This approach is done for both, the CMP and the default JPA
provider. It would be nice to add java2db support for OpenJPA  
as   well, and I'm wondering if we need to do anything special,  
or  it'll  all work just by itself?


We do have support for runtime creation of the schema via the
openjpa.jdbc.SynchronizeMappings property. It is described at:
  http://incubator.apache.org/openjpa/docs/latest/manual/   
manual.html#ref_guide_mapping_synch
The property can be configured to run the mappingtool (also   
described  in the documentation) at runtime against all the   
registered  persistent classes.



Here are my 1st set of questions:

1. Which API would trigger the process, assuming the correct   
values  are specified in the persistence.xml file? Is it:

a) provider.createContainerEntityManagerFactory(...)? or
b) the 1st call to emf.createEntityManager() in this VM?
c) something else?


b


2. How would a user drop the tables in such environment?


I don't think it can be used to automatically drop then create
tables. The mappingtool can be executed manually twice, the first
time to drop all the tables, and the second time to re-create them,
but I don't think it can be automatically done at runtime with the
SynchronizeMappings property.


3. If the answer to either 1a or 1b is yes, how does the code
distinguish between the server startup time and the  
application   being loaded for the 1st time?


That is one of the reasons why we think it would be inadvisable   
to  automatically drop tables at runtime :)


4. Is there a mode that allows creating a file with the jdbc
statements to create or drop the tables and constraints?


Yes. See:
  http://incubator.apache.org/openjpa/docs/latest/manual/   
manual.html#ref_guide_ddl_examples



thank you,
-marina









RE: Using DDL generation in a Java EE environment?

2007-03-20 Thread Pinaki Poddar
  They do in SE, but as there is no requirement to do it in EE, people 
 try to reduce the amount of typing ;).

In EE, persistent classes can be specified:
a) explicitly, via <class> elements
b) via one or more <jar-file> elements
c) via one or more <mapping-file> elements
d) by leaving everything unspecified, in which case OpenJPA will scan
for @Entity-annotated classes in the deployed unit
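In persistence.xml terms (element names per the JPA 1.0 schema; the
class and file names below are invented for illustration), those
options look like:

```xml
<persistence-unit name="pu">
  <!-- a) explicit listing -->
  <class>com.example.Customer</class>
  <class>com.example.Order</class>
  <!-- b) a jar to scan for entities -->
  <jar-file>entities.jar</jar-file>
  <!-- c) an ORM mapping file -->
  <mapping-file>META-INF/orm.xml</mapping-file>
  <!-- d) or omit all of the above and let OpenJPA scan the
       deployed unit for @Entity-annotated classes -->
</persistence-unit>
```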


Pinaki Poddar
BEA Systems
415.402.7317  


-Original Message-
From: Marc Prud'hommeaux [mailto:[EMAIL PROTECTED] On Behalf Of
Marc Prud'hommeaux
Sent: Tuesday, March 20, 2007 6:22 PM
To: open-jpa-dev@incubator.apache.org
Subject: Re: Using DDL generation in a Java EE environment?

Marina-

 They do in SE, but as there is no requirement to do it in EE, people 
 try to reduce the amount of typing ;).

Hmm ... we might not actually require it in EE, since we do examine the
ejb jar to look for persistent classes. I'm not sure though.

You should test with both listing them and not listing them. I'd be
interested to know if it works without.



On Mar 20, 2007, at 4:19 PM, Marina Vatkina wrote:

 Marc,

 Marc Prud'hommeaux wrote:
 Marina-
 On Mar 20, 2007, at 4:02 PM, Marina Vatkina wrote:
 Marc,

 Thanks for the pointers. Can you please answer the following set of

 questions?

 1. The doc requires that In order to enable automatic runtime   
 mapping, you must first list all your persistent classes. Is this  
 true for EE case also?
 Yes. People usually list them all in the class tags in the   
 persistence.xml file.

 They do in SE, but as there is no requirement to do it in EE, people 
 try to reduce the amount of typing ;).

 If OpenJPA can identify all entities in EE world, why can't it do the 
 same for the schema generation?

 I'll check the rest.

 thanks,
 -marina
 2. Section 1.2.Generating DDL SQL talks about .sql files, but   
 what I am looking for are jdbc files, i.e. files with the lines  
 that can be used directly as java.sql statements to be executed  
 against database.
 The output should be sufficient. Try it out and see if the format is

 something you can use.
 3. Is there a document that describes all possible values for the  
 openjpa.jdbc.SynchronizeMappings property?
 Unfortunately, no. Basically, the setting of the
 SynchronizeMappings property will be of the form
 action(Bean1=value1,Bean2=value2), where the bean values are those
 listed in org.apache.openjpa.jdbc.meta.MappingTool (whose javadoc
 you can see at
 http://incubator.apache.org/openjpa/docs/latest/javadoc/org/apache/openjpa/jdbc/meta/MappingTool.html ).
 thank you,
 -marina

 Marc Prud'hommeaux wrote:

 Marina-
 On Mar 15, 2007, at 5:01 PM, Marina Vatkina wrote:

 Hi,

 I am part of the GlassFish persistence team and was wondering   
 how  does OpenJPA support JPA auto DDL generation (we call it   
 java2db)  in a Java EE application server.

 Our application server supports java2db via creating two sets  
 of   files for each PU: a ...dropDDL.jdbc and  
 a ...createDDL.jdbc  file  on deploy (i.e. before the application

 is actually loaded  into the  container) and then executing 
 'create' file as the last  step in  deployment, and 'drop' file on

 undeploy or the 1st step  in  redeploy. This allows us to drop 
 tables created by the  previous  deploy operation.

 This approach is done for both, the CMP and the default JPA
 provider. It would be nice to add java2db support for OpenJPA  
 as   well, and I'm wondering if we need to do anything special,  
 or  it'll  all work just by itself?

 We do have support for runtime creation of the schema via the
 openjpa.jdbc.SynchronizeMappings property. It is described at:
   http://incubator.apache.org/openjpa/docs/latest/manual/   
 manual.html#ref_guide_mapping_synch
 The property can be configured to run the mappingtool (also   
 described  in the documentation) at runtime against all the   
 registered  persistent classes.

 Here are my 1st set of questions:

 1. Which API would trigger the process, assuming the correct   
 values  are specified in the persistence.xml file? Is it:
 a) provider.createContainerEntityManagerFactory(...)? or
 b) the 1st call to emf.createEntityManager() in this VM?
 c) something else?

 b

 2. How would a user drop the tables in such environment?

 I don't think it can be used to automatically drop then create
 tables. The mappingtool can be executed manually twice, the first
 time to drop all the tables, and the second time to re-create them,
 but I don't think it can be automatically done at runtime with the
 SynchronizeMappings property.

 3. If the answer to either 1a or 1b is yes, how does the code
 distinguish between the server startup time and the  
 application   being loaded for the 1st time?

 That is one of the reasons why we think it would be inadvisable   
 to  automatically drop tables at runtime :)

 4. Is there a mode that allows creating a file with the jdbc
 statements to create or drop the tables and constraints?

 Yes. See

Re: Using DDL generation in a Java EE environment?

2007-03-20 Thread Marina Vatkina

Marc,

Marc Prud'hommeaux wrote:

Marina-

They do in SE, but as there is no requirement to do it in EE,  people 
try to reduce the amount of typing ;).



Hmm ... we might not actually require it in EE, since we do examine  the 
ejb jar to look for persistent classes. I'm not sure though.


You should test with both listing them and not listing them. I'd be  
interested to know if it works without.


Let me give it a try. What would the persistence.xml property look like to
generate a .sql file? Where will it be placed in an EE environment? Does it use
the name as-is or prepend it with some path?


thanks.





On Mar 20, 2007, at 4:19 PM, Marina Vatkina wrote:


Marc,

Marc Prud'hommeaux wrote:


Marina-
On Mar 20, 2007, at 4:02 PM, Marina Vatkina wrote:


Marc,

Thanks for the pointers. Can you please answer the following set  
of  questions?


1. The doc requires that In order to enable automatic runtime   
mapping, you must first list all your persistent classes. Is  this  
true for EE case also?


Yes. People usually list them all in the class tags in the   
persistence.xml file.



They do in SE, but as there is no requirement to do it in EE,  people 
try to reduce the amount of typing ;).


If OpenJPA can identify all entities in EE world, why can't it do  the 
same for the schema generation?


I'll check the rest.

thanks,
-marina

2. Section 1.2, "Generating DDL SQL", talks about .sql files, but
what I am looking for are jdbc files, i.e. files with the lines
that can be used directly as java.sql statements to be executed
against the database.


The output should be sufficient. Try it out and see if the format  
is  something you can use.


3. Is there a document that describes all possible values for  the  
openjpa.jdbc.SynchronizeMappings property?


Unfortunately, no. Basically, the setting of the
SynchronizeMappings property will be of the form
action(Bean1=value1,Bean2=value2), where the bean values are those
listed in org.apache.openjpa.jdbc.meta.MappingTool (whose javadoc
you can see at
http://incubator.apache.org/openjpa/docs/latest/javadoc/org/apache/openjpa/jdbc/meta/MappingTool.html ).



thank you,
-marina

Marc Prud'hommeaux wrote:


Marina-
On Mar 15, 2007, at 5:01 PM, Marina Vatkina wrote:


Hi,

I am part of the GlassFish persistence team and was wondering   
how  does OpenJPA support JPA auto DDL generation (we call it   
java2db)  in a Java EE application server.


Our application server supports java2db via creating two sets  
of   files for each PU: a ...dropDDL.jdbc and  a 
...createDDL.jdbc  file  on deploy (i.e. before the  application  
is actually loaded  into the  container) and then  executing 
'create' file as the last  step in  deployment, and  'drop' file 
on undeploy or the 1st step  in  redeploy. This  allows us to drop 
tables created by the  previous  deploy  operation.


This approach is done for both, the CMP and the default JPA
provider. It would be nice to add java2db support for OpenJPA  
as   well, and I'm wondering if we need to do anything special,  
or  it'll  all work just by itself?



We do have support for runtime creation of the schema via the
openjpa.jdbc.SynchronizeMappings property. It is described at:
  http://incubator.apache.org/openjpa/docs/latest/manual/   
manual.html#ref_guide_mapping_synch
The property can be configured to run the mappingtool (also   
described  in the documentation) at runtime against all the   
registered  persistent classes.



Here are my 1st set of questions:

1. Which API would trigger the process, assuming the correct   
values  are specified in the persistence.xml file? Is it:

a) provider.createContainerEntityManagerFactory(...)? or
b) the 1st call to emf.createEntityManager() in this VM?
c) something else?



b


2. How would a user drop the tables in such environment?



I don't think it can be used to automatically drop then create
tables. The mappingtool can be executed manually twice, the first
time to drop all the tables, and the second time to re-create them,
but I don't think it can be automatically done at runtime with the
SynchronizeMappings property.


3. If the answer to either 1a or 1b is yes, how does the code
distinguish between the server startup time and the  application   
being loaded for the 1st time?



That is one of the reasons why we think it would be inadvisable   
to  automatically drop tables at runtime :)


4. Is there a mode that allows creating a file with the jdbc
statements to create or drop the tables and constraints?



Yes. See:
  http://incubator.apache.org/openjpa/docs/latest/manual/   
manual.html#ref_guide_ddl_examples



thank you,
-marina











Re: Using DDL generation in a Java EE environment?

2007-03-20 Thread Marina Vatkina
Then I'll first start with an easier task - check what happens in EE if entities 
are not explicitly listed in the persistence.xml file :).


thanks,
-marina

Marc Prud'hommeaux wrote:

Marina-

Let me give it a try. How would the persistence.xml property look  
like to generate .sql file?



Actually, I just took a look at this, and it looks like it isn't
possible to use the SynchronizeMappings property to automatically
output a sql file. The reason is that the property takes a standard
OpenJPA plugin string that configures an instance of MappingTool, but
the MappingTool class doesn't have a setter for the SQL file to write
out to.


So I think your only recourse would be to write your own adapter for
this that manually creates a MappingTool instance and runs it with
the correct flags for outputting a sql file. Take a look at the
javadocs for MappingTool to get started, and let us know if you have
any questions about proceeding.




On Mar 20, 2007, at 4:59 PM, Marina Vatkina wrote:


Marc,

Marc Prud'hommeaux wrote:


Marina-

They do in SE, but as there is no requirement to do it in EE,   
people try to reduce the amount of typing ;).


Hmm ... we might not actually require it in EE, since we do  examine  
the ejb jar to look for persistent classes. I'm not sure  though.
You should test with both listing them and not listing them. I'd  be  
interested to know if it works without.



Let me give it a try. What would the persistence.xml property look
like to generate a .sql file? Where will it be placed in an EE
environment? Does it use the name as-is or prepend it with some
path?


thanks.


On Mar 20, 2007, at 4:19 PM, Marina Vatkina wrote:


Marc,

Marc Prud'hommeaux wrote:


Marina-
On Mar 20, 2007, at 4:02 PM, Marina Vatkina wrote:


Marc,

Thanks for the pointers. Can you please answer the following  set  
of  questions?


1. The doc requires that In order to enable automatic  runtime   
mapping, you must first list all your persistent  classes. Is  
this  true for EE case also?



Yes. People usually list them all in the class tags in the
persistence.xml file.




They do in SE, but as there is no requirement to do it in EE,   
people try to reduce the amount of typing ;).


If OpenJPA can identify all entities in EE world, why can't it  do  
the same for the schema generation?


I'll check the rest.

thanks,
-marina

2. Section 1.2.Generating DDL SQL talks about .sql files,  but   
what I am looking for are jdbc files, i.e. files with  the  
lines  that can be used directly as java.sql statements to  be  
executed  against database.



The output should be sufficient. Try it out and see if the  format  
is  something you can use.


3. Is there a document that describes all possible values for   
the  openjpa.jdbc.SynchronizeMappings property?



Unfortunately, no. Basically, the setting of the
SynchronizeMappings property will be of the form
action(Bean1=value1,Bean2=value2), where the bean values are those
listed in org.apache.openjpa.jdbc.meta.MappingTool (whose javadoc
you can see at
http://incubator.apache.org/openjpa/docs/latest/javadoc/org/apache/openjpa/jdbc/meta/MappingTool.html ).



thank you,
-marina

Marc Prud'hommeaux wrote:


Marina-
On Mar 15, 2007, at 5:01 PM, Marina Vatkina wrote:


Hi,

I am part of the GlassFish persistence team and was  wondering   
how  does OpenJPA support JPA auto DDL generation  (we call it   
java2db)  in a Java EE application server.


Our application server supports java2db via creating two  sets  
of   files for each PU: a ...dropDDL.jdbc and   a 
...createDDL.jdbc  file  on deploy (i.e. before the   
application  is actually loaded  into the  container) and  then  
executing 'create' file as the last  step in   deployment, and  
'drop' file on undeploy or the 1st step  in   redeploy. This  
allows us to drop tables created by the   previous  deploy  
operation.


This approach is done for both, the CMP and the default  JPA
provider. It would be nice to add java2db support for  OpenJPA  
as   well, and I'm wondering if we need to do  anything 
special,  or  it'll  all work just by itself?




We do have support for runtime creation of the schema via  the
openjpa.jdbc.SynchronizeMappings property. It is  described at:
  http://incubator.apache.org/openjpa/docs/latest/manual/
manual.html#ref_guide_mapping_synch
The property can be configured to run the mappingtool (also
described  in the documentation) at runtime against all the
registered  persistent classes.



Here are my 1st set of questions:

1. Which API would trigger the process, assuming the  correct   
values  are specified in the persistence.xml file?  Is it:

a) provider.createContainerEntityManagerFactory(...)? or
b) the 1st call to emf.createEntityManager() in this VM?
c) something else?




b


2. How would a user drop the tables in such environment?




I don't think it can be used to automatically drop then  
create

RE: Using DDL generation in a Java EE environment?

2007-03-18 Thread Patrick Linskey
Hi,

I'm typing this while offline, so I don't have access to the OpenJPA docs
URL and can't include any links. Much of this is discussed in the
documentation, however.

 be nice to add java2db support for OpenJPA as well, and I'm 
 wondering if we need 
 to do anything special, or it'll all work just by itself?

OpenJPA does already have features that generally do what you're
mentioning.

 1. Which API would trigger the process, assuming the correct 
 values are 
 specified in the persistence.xml file? Is it:
 a) provider.createContainerEntityManagerFactory(...)? or
 b) the 1st call to emf.createEntityManager() in this VM?
 c) something else?

When using the openjpa.jdbc.SynchronizeMappings property in the
persistence.xml file, I believe that it's the first call to
emf.createEntityManager().  You can also directly interact with the
MappingTool and SchemaTool programmatically.

 2. How would a user drop the tables in such environment?

The MappingTool and SchemaTool provide table drop capabilities. However,
why do you want to drop the tables in such an environment? Typically,
I've found that what people want is to clean out their tables so that at
the beginning of a test run, they're working with empty tables. OpenJPA
supports an option to automatically synchronize the database tables with
what's in the current mappings, and then issue a DELETE statement
against each table. In a test environment, this is often much faster
than doing schema mutation. Additionally, it is more common to have
permission to delete all rows in the database than to do schema
manipulation. See https://issues.apache.org/jira/browse/OPENJPA-94 for
details about how to do this.
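For reference, the option discussed in OPENJPA-94 is configured through
the same SynchronizeMappings property; the exact value below is my
reading of that issue and should be checked against it:

```xml
<!-- add any missing schema, then issue DELETE against each mapped table -->
<property name="openjpa.jdbc.SynchronizeMappings"
          value="buildSchema(SchemaAction='add,deleteTableContents')"/>
```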

 3. If the answer to either 1a or 1b is yes, how does the code 
 distinguish 
 between the server startup time and the application being 
 loaded for the 1st time?

It doesn't.

 4. Is there a mode that allows creating a file with the jdbc 
 statements to 
 create or drop the tables and constraints?

Yes.

-Patrick

-- 
Patrick Linskey
BEA Systems, Inc. 

___
Notice:  This email message, together with any attachments, may contain
information  of  BEA Systems,  Inc.,  its subsidiaries  and  affiliated
entities,  that may be confidential,  proprietary,  copyrighted  and/or
legally privileged, and is intended solely for the use of the individual
or entity named in this message. If you are not the intended recipient,
and have received this message in error, please immediately return this
by email and then delete it. 

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, March 15, 2007 5:01 PM
 To: open-jpa-dev@incubator.apache.org
 Subject: Using DDL generation in a Java EE environment?
 
 Hi,
 
 I am part of the GlassFish persistence team and was wondering 
 how does OpenJPA 
 support JPA auto DDL generation (we call it java2db) in a 
 Java EE application 
 server.
 
 Our application server supports java2db via creating two sets 
 of files for each 
 PU: a ...dropDDL.jdbc and a ...createDDL.jdbc file on deploy 
 (i.e. before the 
 application  is actually loaded into the container) and then 
 executing 'create' 
 file as the last step in deployment, and 'drop' file on 
 undeploy or the 1st step 
 in redeploy. This allows us to drop tables created by the 
 previous deploy operation.
 
 This approach is done for both, the CMP and the default JPA 
 provider. It would 
 be nice to add java2db support for OpenJPA as well, and I'm 
 wondering if we need 
 to do anything special, or it'll all work just by itself?
 
 Here are my 1st set of questions:
 
 1. Which API would trigger the process, assuming the correct 
 values are 
 specified in the persistence.xml file? Is it:
 a) provider.createContainerEntityManagerFactory(...)? or
 b) the 1st call to emf.createEntityManager() in this VM?
 c) something else?
 
 2. How would a user drop the tables in such environment?
 
 3. If the answer to either 1a or 1b is yes, how does the code 
 distinguish 
 between the server startup time and the application being 
 loaded for the 1st time?
 
 4. Is there a mode that allows creating a file with the jdbc 
 statements to 
 create or drop the tables and constraints?
 
 thank you,
 -marina
 
 


Re: Using DDL generation in a Java EE environment?

2007-03-16 Thread Marc Prud'hommeaux

Marina-

On Mar 15, 2007, at 5:01 PM, Marina Vatkina wrote:


Hi,

I am part of the GlassFish persistence team and was wondering how  
does OpenJPA support JPA auto DDL generation (we call it java2db)  
in a Java EE application server.


Our application server supports java2db via creating two sets of  
files for each PU: a ...dropDDL.jdbc and a ...createDDL.jdbc file  
on deploy (i.e. before the application  is actually loaded into the  
container) and then executing 'create' file as the last step in  
deployment, and 'drop' file on undeploy or the 1st step in  
redeploy. This allows us to drop tables created by the previous  
deploy operation.


This approach is done for both, the CMP and the default JPA  
provider. It would be nice to add java2db support for OpenJPA as  
well, and I'm wondering if we need to do anything special, or it'll  
all work just by itself?


We do have support for runtime creation of the schema via the  
openjpa.jdbc.SynchronizeMappings property. It is described at:


  http://incubator.apache.org/openjpa/docs/latest/manual/ 
manual.html#ref_guide_mapping_synch


The property can be configured to run the mappingtool (also described  
in the documentation) at runtime against all the registered  
persistent classes.




Here are my 1st set of questions:

1. Which API would trigger the process, assuming the correct values  
are specified in the persistence.xml file? Is it:

a) provider.createContainerEntityManagerFactory(...)? or
b) the 1st call to emf.createEntityManager() in this VM?
c) something else?


b



2. How would a user drop the tables in such environment?


I don't think it can be used to automatically drop then create  
tables. The mappingtool can be executed manually twice, the first  
time to drop all the tables, and the second time to re-create them,  
but I don't think it can be automatically done at runtime with the  
SynchronizeMappings property.



3. If the answer to either 1a or 1b is yes, how does the code  
distinguish between the server startup time and the application  
being loaded for the 1st time?


That is one of the reasons why we think it would be inadvisable to  
automatically drop tables at runtime :)



4. Is there a mode that allows creating a file with the jdbc  
statements to create or drop the tables and constraints?


Yes. See:

  http://incubator.apache.org/openjpa/docs/latest/manual/ 
manual.html#ref_guide_ddl_examples




thank you,
-marina