Hello all --
For those of you who don't know me, my name is Abe White, I'm a BEA
employee (by way of SolarMetric), and I expect that I'll be spending
most of my time working on OpenJPA once we separate out our
proprietary bits and get it off the ground.
As you've been discussing, one
Doug Lea's original concurrency utils (http://g.oswego.edu/dl/classes/EDU/oswego/cs/dl/util/concurrent/intro.html) are very good in jdk 1.4/1.3.
Doug seems to recommend using the backport instead of his libs on the
linked page, though.
ConfigurationProviders don't know anything about PersistenceProviders,
of course, and need to be constructible via a no-args constructor. But
maybe the PersistenceProviderImpl could populate the
ConfigurationProviderImpl with information about which subclass of
PersistenceProviderImpl it should
So that's basically it for combining ConfigurationProviders and
ProductDerivations into a single service. In a followup email I'll
address how we can give ProductDerivations the ability to extend
the EMF, etc.
Adding the ability to extend EMFs though a ProductDerivation is
actually
I agree. Since it sounds like Abe is working out the initial
re-organization of the ConfigurationProvider and ProductDerivation,
I will
wait for those changes before continuing with the OpenJPA derivative
implementation. Thanks.
I'm not currently working on those changes. If no one else
- You mention in several places about separating away the notion of
specs and stores. In a general sense, I understand what these
are. But,
can you elaborate on how these types are used in the
ConfigurationProvider
and ProductDerivation interfaces?
What I meant was that the
- Is there a plan to migrate this stuff do a different location?
(either http://wiki.apache.org/incubator/openjpa/ or http://cwiki.apache.org/confluence/display/openjpa/Index)
- Is cwiki.apache.org preferred to wiki.apache.org?
- There are certain resources that have bad links to
Since the subclass data is in the same table as the base table being queried, shouldn't this data be retrieved with a single select automatically?
Yes, it should be fetched by default. Unless of course all your
subclass fields set their fetch mode to LAZY or are to-many relations
that
That's a lousy error message, but I'm not sure what is causing it.
Do you have all of the openjpa-*.jar files in your CLASSPATH? It
almost sounds like one of the META-INF/services/ files that reports
the available metadata factories is not being found. How did you
create the jars? Did you
Sounds like a good plan. New or updated tests would be good to
demonstrate
that a fix works. Even if we would provide a working testcase with
the fix,
that would be a good way to populate our test bucket.
Concerning all of the query related JIRA reports that have been
opened, do
we have
I'd like to apply for permission to resolve OpenJPA JIRA issues.
Anyone know what steps I need to take?
___
Notice: This email message, together with any attachments, may contain
information of BEA Systems, Inc., its
But that does not seem to work. Is this a code bug or is a
document error?
Or an error in my understanding of FetchGroup depth ?
Both a documentation error and a misunderstanding. I'll try to
update the docs later today. In the meantime, I don't know what you
mean when you say it does
- When you delete a parent object and the operation cascades to
children, the object-level operation order is delete parent, then
delete children.
In my experience, the cascade should delete the children first.
This solves 99% of the cascade delete issues.
It seems to me you'd just
I think that cascade delete is most commonly used where there is a
one-to-possibly-zero relationship (with a [zero or one or many]-to-
one on the other side). Thus, the other side has the foreign key,
and the side with the cascade delete definition is the side with
the existence that
mvn site should build the docbook files, although it is a tad
slow and untested (and network-intensive, since it dynamically
downloads all the stylesheets).
you have to run mvn install first.
@Entity
@FetchGroup(name="detail", attributes={
    @FetchAttribute(name="emps", recursionDepth=2)})
public class DeptBean {
    @OneToMany
    Collection<EmpBean> emps;
    ...
@Entity
@FetchGroup(name="detail", attributes={
    @FetchAttribute(name="tasks", recursionDepth=2)})
public class EmpBean {
    ...
    @OneToMany
I'm trying to set up a really simple example inside geronimo with
container managed transactions/persistence contexts. When I create
and persist an object the TableJDBCSeq is trying to commit the
connection before closing it: since it's in a JTA tx of course this
fails.
With regard to your JIRA suggestion, are you saying there is no way
using these properties to choose between e.g. TableJDBCSeq and
ClassTableJDBCSeq or the Oracle sequence? I'm happy to
file lots of jiras but I'd prefer to be a little more sure of what
the problem is :-)
No, I
If I define a cascade persist field in an EntityX and that field is
declared to be of type EntityA but at runtime contains a non-Entity
subclass of the EntityA, (e.g. NonEntityB extends EntityA), what
happens at commit?
IMO the commit should fail with an appropriate exception.
+1
You're definitely missing the JDBC bits of OpenJPA, or at least the
JDBCProductDerivation isn't being found. If you get the latest
version, you can invoke the
org.apache.openjpa.conf.ProductDerivations class's main method to
print details about derivation loading to System.out. You might
+/- 0 Neutral. Do whatever.
I thought we (OpenJPA) supported the mapping of Collections of
embedded
types.
OpenJPA never supported this. It was accidentally left in from the
Kodo docs.
Does anyone mind if I move this class from the
org.apache.openjpa.util package to the org.apache.openjpa.ee
package? It's a very EE-specific class, and in my mind is not a
general utility other parts of the system will ever use. I'd even
consider removing the class altogether and just
+0.5 for moving the logic in to WASManagedRuntime.main(). Go ahead
and move
it unless someone objects, there's no real need for another class.
I went ahead and did this. I also moved WASManagedRuntime's caching
logic to its endConfiguration() callback to avoid the threading
issues that
The situation I'm looking at right now, for one: I'm seeing an
optimistic lock exception, and the OpenJPA exception translation is
obscuring the full stack, so I just get the stack from the point of
translation, rather than from the point of origin.
Then the solution is to fix the exception
What is this 'inner exception' that you speak of? The underlying
exception is set to the cause, but the cause is not being printed.
IOW, I get all the information I need when I do an e.printStackTrace()
OK, that's how it should be then. I don't want to deviate from
Java's standard
AbstractPCData needs to be modified to special-case primitive arrays
to just copy the array and cache it as an opaque value, rather than
using a List where each element is a potentially separately-cached
instance. Roger, I'd encourage you to file a JIRA issue for this
problem.
p.s. In
@Strategy allows the specification of a custom class name in string
form. However, class names inside quotes don't refactor all that
well. I
think we should add an @StrategyClass annotation that allows
specification of a strongly-typed Class instance. What say ye?
I'm not voting either way;
Our internal testing framework, in this case, is vanilla JUnit. Seems
like that might be a use case that we should care about, no?
IMO, you don't subject all your users to nonstandard behavior (in
this case, adding a stack to an exception's toString) to satisfy one
braindead testing harness
Can anybody tell me if it is possible to stream binary data from
the database
using JPA? I noticed in the spec, support for java.sql.Blob was
removed in
one of the revisions. OpenJPA, however, seems to have a few classes
related
to streaming binary data.
OpenJPA does not have any
I investigated a bit further and found that if I change
PersistenceProviderImpl.ClassTransformerImpl to use a
JDBCConfigurationImpl instead of OpenJPAConfigurationImpl then the
problem goes away. I'm reasonably certain that there is no need to
use a JDBCConfigurationImpl here in the class
I did a little playing around; in my non-exhaustive testing it seemed to call classForName each time you accessed a field on an Entity.
Select p.firstName from Person p where p.id = 1 called
QueryImpl.classForName twice.
QueryImpl.classForName uses the MethodDataRepository method I
Geronimo seems to be running into this problem also. When is this serp binary going to be published properly? Making sure that an artifact is publicly available before using it is a pretty good idea :-)
Sorry about the serp confusion, guys. As you may have seen, the
dependency has
The merge operation takes as a parameter a detached instance, not a
new instance.
TopLink Essentials might not be able to tell the difference between
a detached and a new instance. So it might work in that environment.
But the behavior you're describing is not portable.
That said, you
I tried the 0.9.7 release. orm.xml looks much better. It even
resolved my issues with one-to-many relations.
It's not usable as is, but it's a lot closer. I have to manually
tweak the results for the following items. I don't know if all of
them are possible - or even desirable to address,
I just wanted to make sure everyone sees this, because I bet many of
us don't have our config file set up correctly (I know I didn't):
Probably the offender's local svn file settings are wrong and not as
recommended at
http://www.apache.org/dev/svn-eol-style.txt
thanks
david jencks
1. You must use a new EntityManager for each test.
2. If you have the DataCache enabled, you'll have to clear it. You
can do that through:
((OpenJPAEntityManagerFactory) emf).getStoreCache().evictAll();
3. There is no way to clear the cache on cluster node B from cluster
node A; you have
Why not add a way to clear the cluster cache?
Any developer is free to do so.
1. Loaded instances last stored with a null or empty collection/map
are restored with an empty collection/map, period. You can ignore
all the talk about null indicators.
2. Instances you construct yourself will maintain their null vs.
empty field values at least until persist. Beyond that
OneToMany mappings default to using a join table unless you name the
inverse field with mapped-by.
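The difference looks roughly like this (entity and field names are invented for illustration):

```java
// Sketch of the two mappings described above. Without mapped-by,
// OpenJPA defaults to a join table for the collection; naming the
// inverse field maps it through the child's foreign key instead.
import java.util.Collection;
import javax.persistence.Entity;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

@Entity
public class Dept {
    // Default: mapped with a DEPT_EMPS join table
    @OneToMany
    private Collection<Emp> contractors;

    // Inverse named: mapped through Emp's DEPT_ID foreign key column
    @OneToMany(mappedBy = "dept")
    private Collection<Emp> emps;
}

@Entity
class Emp {
    @ManyToOne
    private Dept dept;   // owns the foreign key for the mapped-by side
}
```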
Unfortunately, we don't have any automatic drop-table feature, but
I agree it would be handy (you might want to make a JIRA report
with the suggestion).
Note that the SynchronizeMappings property allows you to use all
the arguments of the mappingtool. So you can try something like:
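The elided example presumably resembled the commonly documented usage, passing a mapping-tool action with arguments as the plugin value; treat the exact action and arguments here as an assumption:

```java
// Sketch: SynchronizeMappings takes a mapping-tool action plus its
// arguments in plugin syntax. buildSchema(ForeignKeys=true) is the
// value the OpenJPA docs usually show; adjust to taste.
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

Map<String, String> props = new HashMap<String, String>();
props.put("openjpa.jdbc.SynchronizeMappings",
    "buildSchema(ForeignKeys=true)");
EntityManagerFactory emf =
    Persistence.createEntityManagerFactory("my-pu", props);
```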
Caused by: org.apache.openjpa.lib.util.ParseException: There was an error while setting up the configuration plugin option SynchronizeMappings. The plugin was of type org.apache.openjpa.jdbc.meta.MappingTool. Setter methods for the following plugin properties were not available in that
Just using refresh does not clean up the data in the database
(getting Unique
constraints violations). Just for kicks I tried
SchemaTool.DropTables=true,
it did pass the configuration phase, but it still did not clean the
data/schema.
None of the options I mentioned are meant to clean up
I don't agree with this implementation. It doesn't leave any room
for customization through MappingDefaults, it ties the ClassMapping
to the XMLSchemaParser (?!), and it's totally different than our
mapping of indexes, foreign keys, and primary keys, the other
supported constraint types.
I've been fighting for some time now with my OpenJPA configuration
and just
discovered why. It seems that you *either* consider the
persistence.xml file
*or* the map passed as parameter of
Persistence.createEntityManagerFactory.
If you look at PersistenceProductDerivation.load(String rsrc,
I'm able to reproduce the ConnectionDriverName problem. I'll have
more info in a bit.
OK, the problem is that we're only paying attention to openjpa.*
property keys with String values when you bootstrap through
Persistence. I have no idea why, and I'll change it momentarily.
But for now, you can work around the problem for your DataSource
using the
OK, the problem is that we're only paying attention to openjpa.*
property keys with String values when you bootstrap through
Persistence. I have no idea why, and I'll change it momentarily.
Actually I now see why, and I might not be able to fix it before I
leave work today. For anyone
Is there a reason why @ForeignKey is not allowed for @ManyToMany?
Because the field value is a collection, not a reference. You want
to use @ElementForeignKey.
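In annotation form that comes out roughly as follows (entity and field names invented):

```java
// Sketch: @ForeignKey applies to a single-valued reference, so for the
// elements of a collection OpenJPA's @ElementForeignKey is used instead,
// constraining the element columns of the join table.
import java.util.Collection;
import javax.persistence.Entity;
import javax.persistence.ManyToMany;
import org.apache.openjpa.persistence.jdbc.ElementForeignKey;

@Entity
public class Project {
    @ManyToMany
    @ElementForeignKey   // FK on the join table's element columns
    private Collection<Employee> members;
}
```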
I think it could be nicer and a bit easier if OpenJPA was
automatically
eliminating the first _ from attribute names to build its default column names, don't you think?
The default column names are mandated by the JPA specification. And
unless you're using OpenJPA-specific mappings, you
Try performing a persistence operation (persisting an entity,
changing a persistent entity, deleting an entity, etc) before setting
the savepoint. It could just be a bug with empty savepoints in
optimistic transactions (which aren't too useful anyway).
You'll also want to set ConnectionRetainMode to transaction, which
will
ensure that OpenJPA has a single connection for the duration of the
transaction.
Performing a persistence operation and then setting a savepoint is
enough to ensure you keep the same connection, regardless of the
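The workaround sketched above might look like this in code (a hedged sketch: entity name and savepoint plumbing are assumptions; a SavepointManager must be configured):

```java
// Dirty the persistence context with a real operation before setting
// the savepoint, so the savepoint isn't "empty".
import javax.persistence.EntityManager;
import org.apache.openjpa.persistence.OpenJPAEntityManager;
import org.apache.openjpa.persistence.OpenJPAPersistence;

OpenJPAEntityManager oem = OpenJPAPersistence.cast(em);
oem.getTransaction().begin();
Widget w = oem.find(Widget.class, 1L);
w.setName("dirtied first");             // persistence operation first
oem.setSavepoint("beforeRiskyWork");    // now the savepoint is non-empty
try {
    w.setName("risky change");
    oem.releaseSavepoint("beforeRiskyWork");
} catch (RuntimeException re) {
    oem.rollbackToSavepoint("beforeRiskyWork");
}
oem.getTransaction().commit();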
OpenJPA does not log exceptions that are thrown to your code.
Since the time that we implemented that, we added logic (the
RuntimeExceptionTranslator interface and its implementors) that takes
care of translating from internal exception types to spec-defined
exception types across the board. We certainly could move the logic
for
logging at the trace
Is this the way it is supposed to work?
Yes.
Basically, if we have dirtied the Persistence Context, then do a
flush()
followed by the detachAllInternal(). I don't think the clear()
should be
doing this flush() operation. Any disagreement?
I agree. But note that just removing the flush call won't work for a
couple of reasons: it's
This sounds like something that was fixed a while ago. What version
of OpenJPA are you using?
Sadly, not currently. OpenJPA domain model extensions are not
currently
available in XML form. See OPENJPA-87 for a related issue.
While this is true, I believe if you just don't declare any @Id
fields we'll probably default to datastore identity automatically.
I upgraded to openjpa-0.9.7-incubating-SNAPSHOT and still get the
same
results. By the way, thanks for your help so far.
I get table and column names when I run the tool. Are you sure you
aren't looking at old output? Can someone else attempt to reproduce
this problem?
I think if a user throws an exception in a callback outside of a
commit operation, we should simply rethrow it to the user after we
clean up our internal state. Presumably, the specific runtime
exception has meaning to the user's code, and we don't add much
value in wrapping the exception.
==
--- incubator/openjpa/trunk/openjpa-kernel/src/main/java/org/apache/openjpa/ee/JNDIManagedRuntime.java (original)
+++ incubator/openjpa/trunk/openjpa-kernel/src/main/java/org/apache/
Thanks for the feedback. I know that we have been running with
both the IBM
and Sun JDK's, but let me get together with my performance team and
discuss
your concerns. We will try to provide more concrete data on these
updates.
Cool. Maybe I'll learn something about weak refs and the
does openJPA provide the strategy to re-generate another value for
the PK
or do we have to manually deal with this case?
http://incubator.apache.org/openjpa/docs/latest/manual/manual.html#ref_guide_sequence
I doubt anyone has done much performance analysis of stock OpenJPA,
especially without caching. BEA's performance efforts obviously
focus on Kodo instead. Which is why we all appreciate the
performance work you're doing now, even if we (well, mostly just me)
might harp on the details.
We'd also have to set the LockManager property to pessimistic to get
database locks. And just to build on what Patrick is saying: OpenJPA
can do locking within optimistic transactions on individual
instances, but you have to set a lock level on the FetchPlan in code,
which I don't think
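Setting a lock level on the FetchPlan in code looks roughly like this (hedged sketch: the exact FetchPlan method names have varied across OpenJPA versions, and the entity is invented):

```java
// Acquire database locks inside an optimistic transaction by raising
// the read lock mode on the fetch plan; combine with
// LockManager=pessimistic and ConnectionRetainMode=transaction in config.
import javax.persistence.LockModeType;
import org.apache.openjpa.persistence.OpenJPAEntityManager;
import org.apache.openjpa.persistence.OpenJPAPersistence;

OpenJPAEntityManager oem = OpenJPAPersistence.cast(em);
oem.getTransaction().begin();
oem.getFetchPlan().setReadLockMode(LockModeType.WRITE);
Account acct = oem.find(Account.class, 42L);  // issued with a lock
// ... modify acct ...
oem.getTransaction().commit();
```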
Secondly, are we parsing the XML file multiple times? The only
way for the first warning to be fired is if OpenJPA has read my
entity mappings, but the exception that is thrown later is a SAX
exception which implies that OpenJPA is reading the file again.
Sounds like an inefficient startup
This change is causing problems in an application server
environment. I
pulled in this change this morning, re-built, and tried our perf
benchmark
(SunOne with OpenJPA) and I get the following exception:
Sorry about that. I'll revert it ASAP. I have a vague notion of
what might be
3. OpenJPA does support a means of passing Oracle hints along
through to
the DBDictionary. Should we be trying to reuse some of the
capabilities
here?
+1
4. In the following snippets, I'd rather if we used 'Integer.valueOf
(1)'
or, better yet, a symbolic constant, instead of creating new
Okay, I think we need to back out these last two changes and revert
back to
revision 509885. Dave needs to go back to the drawing board for
this db2
optimization change, probably create a JIRA report for this proposed
change, and use the design discussion associated with the JIRA
process
What's the word on this issue?
Last record I have is:
Okay, I think we need to back out these last two changes and revert
back to
revision 509885. Dave needs to go back to the drawing board for
this db2
optimization change, probably create a JIRA report for this proposed
change, and use
If you set the version field of an instance to a non-default value,
OpenJPA assumes the instance was detached, or that you're actively
trying to make it behave as a detached instance.
Thanks, Abe. This explanation helps a great deal. Should we
update the
documentation with some of this information?
As far as I can tell the documentation on cascade=DELETE and the
documentation on the Dependent metadata extension already contains
everything I said. Feel free to change
Can I map a many to one association through a join table
(association table)?
You can't use a join table, but you can put your many-one foreign key
column(s) in a secondary table, which amounts to the same thing.
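A sketch of that secondary-table arrangement (all table, column, and entity names here are invented):

```java
// The many-one foreign key lives in a secondary table, which then
// plays the role of an association table.
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import javax.persistence.PrimaryKeyJoinColumn;
import javax.persistence.SecondaryTable;

@Entity
@SecondaryTable(name = "ORDER_CUSTOMER",
    pkJoinColumns = @PrimaryKeyJoinColumn(name = "ORDER_ID"))
public class Order {
    @Id
    private long id;

    @ManyToOne
    @JoinColumn(name = "CUSTOMER_ID", table = "ORDER_CUSTOMER")
    private Customer customer;   // FK column kept in ORDER_CUSTOMER
}
```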
EM.flush() doesn't help. :-(
On 3/16/07, Patrick Linskey [EMAIL PROTECTED] wrote:
Have you tried incrementally calling EM.flush() during the course of
your transaction? Also, for deleting that many records, you may
want to
look at using JPQL's bulk delete APIs.
Do you have the data cache
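Both suggestions from the reply can be sketched like this (entity name, query, and batch size are invented for illustration):

```java
// Two ways to keep memory bounded when deleting many rows:
// periodic flush/clear, or a single JPQL bulk delete.
em.getTransaction().begin();

// Option 1: incremental flush + clear every N removals
int i = 0;
for (LogEntry entry : staleEntries) {
    em.remove(entry);
    if (++i % 100 == 0) {
        em.flush();
        em.clear();   // detach flushed instances to free memory
    }
}

// Option 2: one bulk delete statement (bypasses the persistence context)
em.createQuery("DELETE FROM LogEntry e WHERE e.created < :cutoff")
  .setParameter("cutoff", cutoff)
  .executeUpdate();

em.getTransaction().commit();
```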
AFAICT, the problem is that the openjpa.enhance.PCRegistry class uses static
uses static
fields to store Meta information. When the second instance is
loaded, the
PCRegistry has been initialized, but doesn't contain that instance's
subclasses and an exception is thrown
The PCRegistry has to use static
I've committed an equivalent patch to 0.9.7 in SVN revision 522623.
Can you confirm whether this fixes your problem and, if so, close any
CR you might have opened on this?
This works for me :) Here's a patch for 0.9.6 source. I've gone for
the
simplest solution, but it might be improved
I am trying to use MappingTool with a JDO file named package.jdo to
generate the package-mysql.orm and create the tables in my database. The orm file is created successfully with one exception: the table name is not what I expected with a subsequent call to the MappingTool.
The mapping
the book field in the PageId class must be of type String to match
the primary key field type of Book
Was I mistaken when I understood you to say that Page could have a
PK field of type Book?
No, you weren't mistaken. But as I said, the book field in the
Page*Id* class must match the
Note that I don't think OpenJPA reads index names from the existing
schema by default, so if you're adding fields or classes these might
be plain old naming conflicts due to truncation based on database
name length limits. There is a readSchema option on the mapping tool
that forces it to
Hibernate and TopLink will fail once you get rid of the if
statement. It's because according to the spec (and my
understanding) merge should raise an exception when the versioned
property is outdated.
No, the exception can be deferred until flush/commit. Read section
3.2.4.1.
I see several possible problems with this commit. Are there tests
for this functionality checked in?
1. The property and logic for using the DefaultSchemaName are defined
in MappingInfo, but the default schema name is only ever set into
ClassMappingInfos. Not FieldMappingInfos,
This is strange. Autoboxing turned off somehow? I must be doing
something wrong.
...
/Users/clr/openjpa/openjpa/trunk/openjpa-jdbc/src/main/java/org/apache/openjpa/jdbc/sql/DB2Dictionary.java:[242,31] incompatible types
Stuff in kernel needs to be 1.4-compatible.
The DB2Dictionary class doesn't compile with 1.4 due to autoboxing.
Can you please fix this?
And I'd like to see all those hints defined as static constants on
the dictionary class and named for DB2 (if they're that specific) and
capitalized while you're at it -- see
A pluggable exception factory (in open-jpa) might make this approach
a little easier
See DBDictionary.newStoreException(...).
Comments inline...
+/**
+ * <p>The isolation level for queries issued to the database. This overrides
+ * the persistence-unit-wide <code>openjpa.jdbc.TransactionIsolation</code>
+ * value.</p>
+ *
+ * <p>Must be one of {@link Connection#TRANSACTION_NONE},
+
Why is this setting called IsolationLevel where our global setting
is called TransactionIsolation? Shouldn't this local setting just
be called Isolation for consistency? Same with the
FetchPlan facade.
Personally, I feel that 'IsolationLevel' is a better-known term for the concept. I'm
I think that JDBCFetchPlan should take a Java 5 enum, and
JDBCFetchConfiguration should use the Connection values.
Certainly JDBCFetchConfiguration should use the Connection values. I
personally have never had a problem with symbolic constants for
settings, but enums for the FetchPlan are
Went ahead and restored the previous behavior where the QueryImpl
itself checks for non-uniqueness and throws the expected exception.
That breaks the single result optimization that was added for
OPENJPA-168 when getSingleResult() is called. There was a reason we
moved the validation to
I'm trying to use the @Externalizer annotation but have problems
with the resulting type of the DB-field - it's always a byte-array.
You shouldn't need the @Type annotation -- the type will be inferred
from the return type of the externalizer method. Are you dropping
the database table in
I can't reproduce this. When I leave out the @Persistent annotation
on the field or replace it with @Basic, I do indeed end up with a
binary column. But if I correctly include the @Persistent annotation
along with the @Externalizer, I get a varchar column. Can you narrow
down your
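The working combination described in the reply looks roughly like this (hedged sketch: the URL field and the factory method are illustrative, patterned after the externalization examples in the OpenJPA docs):

```java
// @Persistent together with @Externalizer should yield a varchar
// column rather than a byte-array/BLOB column.
import java.net.URL;
import javax.persistence.Entity;
import org.apache.openjpa.persistence.Externalizer;
import org.apache.openjpa.persistence.Factory;
import org.apache.openjpa.persistence.Persistent;

@Entity
public class Site {
    @Persistent                  // required alongside @Externalizer
    @Externalizer("toString")    // store the URL as its string form
    @Factory("Site.toURL")       // hypothetical factory for re-loading
    private URL homepage;

    public static URL toURL(String s) throws Exception {
        return new URL(s);
    }
}
```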
OpenJPA expands the available ordering syntax. See ??? in the
Reference
Guide for details.
I'm assuming this should be referring to our Large Result Set
capabilities.
LRS capabilities don't affect OrderBy. The note should be removed --
OpenJPA does not expand the OrderBy
I believe that OpenJPA will allow float pk fields, it's just that
there's no built-in id class for them (see util.LongId, ShortId,
etc). So you'd have to either add a built-in id class and alter
OpenJPA's internal code to use it appropriately, or create your own
id class and use it via
Although the spec clearly recommends against the use of floating
points,
floats are a primitive type (or the Float wrapper) and need to be
allowed.
With no special AllowStupidApproximatePrimaryKeys flag. :-)
Am I trying to read too much into the spec or Dain's request? This
seems
Generally in favor of including this performance patch with the
release. Just a few questions:
1. How good is the patch? Has it been put through whatever extensive unit tests anyone has?
As others have said, it does pass the OpenJPA test suite, but
unfortunately that isn't saying
I did notice that my lazy scenarios are almost 50% slower now, but
looking at the sql dumps it appears that we were fetching eagerly
even in those scenarios and this (or another JIRA?) seems to have
fixed that functional error. Does that seem like something your
changes would resolve?
It sounds like you're never starting or committing a transaction.
You're persisting outside of a transaction, which just means that the
object will be cached until the next transaction, waiting for a flush/
commit to be inserted into the database.
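The missing transaction demarcation would look something like this (sketch; the entity is a placeholder):

```java
// Wrap the persist in a transaction so the INSERT is actually flushed
// to the database at commit, rather than sitting in the context.
import javax.persistence.EntityManager;

EntityManager em = emf.createEntityManager();
em.getTransaction().begin();
em.persist(newThing);          // queued in the persistence context...
em.getTransaction().commit();  // ...and inserted here
em.close();
```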
Any thoughts about this patch? It changes how @Embeddable and
@MappedSuperclass classes are enhanced.
I'd be a little nervous about NPEs registering a null alias, but it
seems that it's been tested so apparently that's fine. The only
thing that might be missing is to put an if (alias !=