Nobody else seems to have an idea, so we will do some bla bla ;-)

Armin:


Thanks... this seems pretty complicated, and a little risky. I think if anything I'd rather make alternative objects that are used only for this batch data load, and use the more limited class-descriptors for them. That seems safer, and I don't mind having a separate API for processing the batch files.

I'm thinking about actually even implementing stored procedures and "storing" the data pretty much as it comes in, letting the database do the heavy lifting. If I do this, I could even invalidate the cache per ID after each write and minimize the risk of dirty reads. I just haven't learned how to do stored procedures in OJB yet!
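For what it's worth, one possible shape for this, sketched against OJB's PersistenceBroker API: borrow the broker's JDBC connection, call the procedure, then evict the affected org from the object cache. This is only a sketch - the procedure name update_org_agent, the Org class, and the column types are made-up illustrations, and the Identity constructor used here should be checked against the OJB version in use.

```java
import java.sql.CallableStatement;
import java.sql.Connection;

import org.apache.ojb.broker.Identity;
import org.apache.ojb.broker.PersistenceBroker;
import org.apache.ojb.broker.PersistenceBrokerFactory;

public class BatchLoad
{
    // Sketch only: update_org_agent and Org are hypothetical names.
    public void storeMapping(String agentId, Integer orgId) throws Exception
    {
        PersistenceBroker broker = PersistenceBrokerFactory.defaultPersistenceBroker();
        try
        {
            // let the database do the heavy lifting via a stored procedure
            Connection con = broker.serviceConnectionManager().getConnection();
            CallableStatement cs = con.prepareCall("{call update_org_agent(?, ?)}");
            cs.setString(1, agentId);
            cs.setInt(2, orgId.intValue());
            cs.execute();
            cs.close();

            // invalidate the cached org by ID so later reads aren't dirty
            // (Identity construction varies between OJB releases - verify)
            Identity oid = new Identity(Org.class, Org.class, new Object[] { orgId });
            broker.serviceObjectCache().remove(oid);
        }
        finally
        {
            broker.close();
        }
    }
}
```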

Thanks for taking the time. I wish we could get our integration partners to use a more modern message-oriented data update approach, but until they do, we're stuck with batch processing. I'm not sure I can justify how or why OJB would or should make this easier for me, even though I wish it would :-)

Joe


I don't see any easy way to realize this completely within OJB. You can use the MetadataManager "per thread metadata mode" to deal with different repository files: the "normal" repository file, plus an additional "shrunk" repository you load in which all "big" fields are removed from the class-descriptors. In your case, an org class-descriptor with only the PK and FK fields.

See the MetadataManager javadoc for the details:
http://db.apache.org/ojb/api/org/apache/ojb/broker/metadata/MetadataManager.html
It is now possible to use a different object metadata profile (the normal or the shrunk repository) for each thread by calling

    MM.loadProfile("shrunk");

The next call to PBF.createPersistenceBroker then returns a PB instance that deals only with the limited 'org' version.
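The profile setup this describes might be sketched as follows. This is a sketch only - the file name repository_shrunk.xml and the profile keys "normal"/"shrunk" are assumptions, and the exact MetadataManager method signatures should be checked against the javadoc linked above.

```java
import org.apache.ojb.broker.metadata.DescriptorRepository;
import org.apache.ojb.broker.metadata.MetadataManager;

public class ProfileSetup
{
    public static void registerProfiles()
    {
        MetadataManager mm = MetadataManager.getInstance();
        // allow each thread to switch to its own metadata profile
        mm.setEnablePerThreadChanges(true);
        // the default repository is already loaded; read the shrunk one
        // from a second file (name is an assumption)
        DescriptorRepository shrunk =
            mm.readDescriptorRepository("repository_shrunk.xml");
        mm.addProfile("shrunk", shrunk);
        mm.addProfile("normal", mm.getGlobalRepository());
    }
}
```

After this runs once at startup, each worker thread can call MM.loadProfile("shrunk") before obtaining its broker, while the web-facing threads keep the normal profile.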
The only problem is the cache (I don't have a smart solution for this problem - any proposals are welcome ;-)). If you use a shared cache (e.g. ObjectCacheDefaultImpl), other threads using the "normal" repository may get "dirty reads" of shrunk org objects. To avoid this you can specify a local cache for the org class at class-descriptor level (e.g. ObjectCachePerBrokerImpl - not really good performance), or enable the 'descriptorBasedCaches' property in OJB.properties. OJB then uses a separate cache for each class-descriptor that defines one, based on its ObjectCacheDescriptor instance (the object-cache element within the class-descriptor). This means that if you declare an object-cache within the class-descriptor of 'org', you will end up with different cache instances for each profile ("normal", "shrunk").
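As a sketch, a class-descriptor carrying its own object-cache element might look like this in the repository file - the class, table, and column names here are placeholders, not taken from Joe's actual model:

```xml
<!-- sketch: per-descriptor cache, honored when 'descriptorBasedCaches'
     is enabled in OJB.properties; names are illustrative -->
<class-descriptor class="com.example.Org" table="ORG">
    <object-cache class="org.apache.ojb.broker.cache.ObjectCacheDefaultImpl"/>
    <field-descriptor name="id" column="ORG_ID"
                      jdbc-type="INTEGER" primarykey="true"/>
</class-descriptor>
```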


Don't know if this is helpful ;-)

regards,
Armin

Joe Germuska wrote:

I have a process that requires an extreme amount of overhead to manage with OJB, but would be very simple to do in straight SQL. I'm hoping some OJB specialists can help me figure out how to tune this process.

For an application, we have a number of users who can log in to operate on behalf of their organizations. In the object model, there are two relevant objects: the organization (org for short) and the org-agent. The org is a fairly rich object, while the org-agent is merely the relationship between the agent's ID (a string) and the org itself.

I periodically get a data feed which essentially just maps agent ids to organization ids. In SQL, I'd just manage a single two-column table directly, and the process would be done in a matter of minutes (about five minutes using a script I've written). However, in OJB, I seem to be dragging these heavy "org" objects around with everything I want to do, and it makes the process take more like five hours.

I'm thinking that I could just run the direct-SQL script instead of using OJB, and then signal the application that its cache of these objects may be invalid. This is probably safe, in a practical sense, since the process is much more likely to insert new objects than change old ones, but I can't be certain of that.

The alternative would be to figure out a way to get OJB to accept updates to objects that may or may not have been loaded yet, without performing a full load for each. I'm not totally clear on how OJB functions here either, but I think that if I make a new object with the same ID (primary key) as one for which OJB already has a cached instance, then OJB isn't really happy. Maybe I'm writing my tests wrong, but a few things I did to try to test this seem to bear it out.

Any advice or insight into how to achieve this best would be greatly appreciated...

Thanks
    Joe






--
Joe Germuska
[EMAIL PROTECTED]
http://blog.germuska.com

"Imagine if every Thursday your shoes exploded if you tied them the usual way. This happens to us all the time with computers, and nobody thinks of complaining."
-- Jef Raskin


---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


