Thanks for the response... you added some interesting points. I knew somewhat that the OJB cache only worked with QueryByIdentity(), but the gains are still significant for us. Our object model has some rather deep and wide trees (for better or worse), and the gain on object materialization of the hanging objects is a real positive. Maybe I am overstating its benefit, but I do see a significant real-world difference with the cache on versus off -- however, I am not above attributing that to bad design on my part in the object model.
As for OTM -- I will look into that. However, I don't see any documentation, or for that matter any mention of it, on the OJB site. Am I missing something?

-Andrew

-----Original Message-----
From: Brian McCallister [mailto:[EMAIL PROTECTED]]
Sent: Friday, October 17, 2003 9:47 AM
To: OJB Users List
Subject: Re: Best practice for using ODMG with EJB? (Cache also)

Depending on your deployment timeline, you might want to look into the OTM for this type of deployment. It provides the high-level functionality you are looking for from ODMG, but it also knows how to play very nicely with JTA transactions -- a big plus in EJB containers. The OTM is the least mature of OJB's APIs, however. That said, it is my favorite by a long shot.

An important note about OJB's cache -- the only query type that reads entirely from the cache, rather than hitting the database, is QueryByIdentity. The cache is primarily used to avoid object materialization and to maintain reference integrity. Think for a moment about the query "select products from org.apache.ojb.tutorials.Product where cost > 10.0". That query has to be executed against the database, because the cache cannot know whether it holds all of the objects that satisfy the criteria. Objects already in the cache won't be re-materialized, at least, but the query itself still needs to run against the database.

It *is* possible to get major caching benefits from OJB, but it needs to be done above the query execution level. As a great many queries are against a unique id ("select product from org.apache.ojb.tutorials.Product where id=$1"), you can optimize these a great deal by providing a hardcoded query against the primary keys in your query encapsulator: use an LRUMap to look for a cached identity, and query by that identity if you find one.
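[Editor's note: the identity-cache-above-the-query-layer idea described above can be sketched as self-contained Java. This is an illustration, not OJB code -- the class names (IdentityCacheSketch, Product) and the fake loadFromDatabase() call are invented for the sketch, and java.util.LinkedHashMap in access order stands in for commons-collections' LRUMap.]

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: lookups by primary key can be answered from a small LRU
// identity cache; a predicate query like "cost > 10.0" cannot, because
// the cache never knows whether it holds every matching object.
public class IdentityCacheSketch {

    static class Product {
        final Integer id;
        final double cost;
        Product(Integer id, double cost) { this.id = id; this.cost = cost; }
    }

    // LinkedHashMap with accessOrder=true behaves as a simple LRU map,
    // standing in for commons-collections' LRUMap.
    private final Map<Integer, Product> cache =
        new LinkedHashMap<Integer, Product>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<Integer, Product> e) {
                return size() > 100; // hypothetical capacity
            }
        };

    private int databaseHits = 0; // counts simulated round trips

    // Stand-in for the real broker/database call.
    private Product loadFromDatabase(Integer id) {
        databaseHits++;
        return new Product(id, id * 5.0); // fabricated data for the sketch
    }

    public Product findById(Integer id) {
        Product p = cache.get(id);
        if (p == null) {              // miss: one trip to the "database"
            p = loadFromDatabase(id);
            cache.put(id, p);
        }
        return p;                     // hit: served from the cache
    }

    public int getDatabaseHits() { return databaseHits; }

    public static void main(String[] args) {
        IdentityCacheSketch repo = new IdentityCacheSketch();
        repo.findById(1);
        repo.findById(1); // second lookup is served from the cache
        System.out.println(repo.getDatabaseHits()); // prints 1
    }
}
```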
For example:

    public class Product {
        public Integer id;
    }

    public class ProductRepository {
        // from commons-collections; bounds the size of the identity cache
        private LRUMap cache = new LRUMap();

        // this uses a single PK, but you can handle multiple PKs
        // with a multi-map type structure
        public Product findById(Integer id) {
            ObjectIdentity oid = (ObjectIdentity) cache.get(id);
            if (oid != null) {
                // cache hit: use QueryByIdentity
            } else {
                // cache miss: query with a where clause,
                // then add the key to this.cache
            }
            // return the materialized Product
        }
    }

-Brian

On Friday, October 17, 2003, at 09:24 AM, Clute, Andrew wrote:

> I currently have our application running using OJB. I am using the PB
> interface because it was the easiest to prototype and get up and
> running.
>
> We have a Struts application that calls a collection of EJB services
> for retrieving specific object-trees that the web app needs, along
> with Add/Update/Delete methods on the EJBs. One of my main selling
> points for convincing the team to move away from PHP to Java/J2EE was
> the strengths of O/R tools like OJB, specifically the cache -- I think
> it is a strong seller, especially in an 80% read-only application.
>
> So, to facilitate that, I constructed a Façade wrapper around the
> PersistenceBroker (so, if I wanted to, I could swap it for ODMG/JDO),
> and it seems to work well. I have deployed our 'Core' application as
> a collection of EJBs that make use of OJB under the hood, and then
> our web application as a separate war file. But, because they are in
> the same container (JBoss), it makes use of the Local versus Remote
> interfaces -- which is desired. However, when using the cache and the
> local interface, any manipulation done by the web application on its
> objects is manipulating the object in the cache.
>
> I always thought of the cache as a 'clean' representation of what was
> in the database -- so in all of my retrieve methods in my EJBs, I
> return clones of the DataObjects.
> This allows the client applications to manipulate them without
> affecting the cached objects, and to send them back for committing,
> which also updates the cache.
>
> But because the PB API is not a full persistence API, I am starting
> to hit the issues that APIs like ODMG fix (deleted objects in
> collections, object locking, etc.) -- and want to get a feel for how
> best to use something like ODMG in my situation.
>
> My goals are:
>
> 1) To have a centralized application that handles all database and
> service level transactions. It would hand out objects from the cache
> (preferably clones) and receive objects to store. We only have one
> client application that would be using this, but down the road we
> will have many more.
> 2) Move to an ODMG-like API that can manage locking and so on, to
> free me from having to manage object locking, deletion, etc.
> 3) For Goal 1 to make use of the cache -- most of our applications
> are read-only, so it makes sense to make heavy use of the cache --
> but at the same time we do have update scenarios that I would like
> to be 'atomic'.
>
> Is there a pattern that facilitates these goals?
>
> Thanks!
>
> -Andrew
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
