Sale, Doug wrote:
> <convinced> :) </convinced>
> 
> but isn't there some way to map multiple results to a single object?  

Of course there is. In O/R speak it is called "uniquing". Basically, 
each object is identified by its "id", which is an encapsulation of its 
primary key values. This "id" serves as a key to look up the object in 
an in-memory cache. Every time a new fetch is done and the O/R layer 
reads the ResultSet, it uses the cached object whenever the ResultSet 
row's primary key values match the id of an already existing object. In 
some cases the fetched row may even be thrown away altogether if a 
matching object already exists, which improves performance.
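
To make the uniquing idea concrete, here is a minimal sketch of such a 
lookup. Everything in it - the IdentityMap and RowMapper names, the 
method signatures - is hypothetical and only illustrates the technique, 
not Cayenne's (or any JDO implementation's) actual API:

import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical identity map illustrating "uniquing": at most one
// in-memory object per id, where the id wraps a row's primary key values.
public class IdentityMap {

    // id (list of primary key values) -> already materialized object
    private final Map byId = new HashMap();

    // Returns the cached object for this row's primary key if one exists,
    // otherwise materializes a new object from the row and caches it.
    public Object objectForRow(ResultSet row, String[] pkColumns, RowMapper mapper)
            throws SQLException {

        Object[] pk = new Object[pkColumns.length];
        for (int i = 0; i < pkColumns.length; i++) {
            pk[i] = row.getObject(pkColumns[i]);
        }
        List id = Arrays.asList(pk);

        Object cached = byId.get(id);
        if (cached != null) {
            // Uniquing: the freshly fetched row is effectively thrown away.
            return cached;
        }

        Object fresh = mapper.materialize(row);
        byId.put(id, fresh);
        return fresh;
    }

    // Hypothetical callback that turns one ResultSet row into an object.
    public interface RowMapper {
        Object materialize(ResultSet row) throws SQLException;
    }
}

The important property is that two fetches returning the same row hand 
back the same object instance.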


> can't an object be backed by a result set, so all results aren't in memory?

In most cases you would want to close the ResultSet to free database 
resources while still keeping your objects. It looks like the JDBC 
people realize that too. In particular, JDBC 3.0 (part of J2SE 1.4) 
introduced a disconnected RowSet. As far as cleaning out extra objects 
goes, cache management is a topic of its own....
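
As a rough sketch of that pattern in plain JDBC (the ARTIST table, its 
columns and the Artist class are made-up names for illustration), the 
objects are materialized while the ResultSet is open and the JDBC 
resources are released immediately afterwards:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class ArtistLoader {

    // Reads all rows into plain objects, then releases the JDBC resources.
    // The returned list lives on after the ResultSet is closed.
    public static List loadArtists(Connection con) throws SQLException {
        List artists = new ArrayList();
        Statement st = con.createStatement();
        try {
            ResultSet rs = st.executeQuery("SELECT ARTIST_ID, NAME FROM ARTIST");
            try {
                while (rs.next()) {
                    artists.add(new Artist(rs.getInt("ARTIST_ID"), rs.getString("NAME")));
                }
            } finally {
                rs.close();   // free the cursor and any server-side resources
            }
        } finally {
            st.close();
        }
        return artists;
    }

    // Minimal value object, fully detached from the database.
    static class Artist {
        final int id;
        final String name;

        Artist(int id, String name) {
            this.id = id;
            this.name = name;
        }
    }
}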


~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-
- Andrei (a.k.a. Andrus) Adamchik
Home of Cayenne - O/R Persistence Framework
http://objectstyle.org/cayenne/
email: andrus-jk at objectstyle dot org


> 
>>-----Original Message-----
>>From: Andrus Adamchik [mailto:[EMAIL PROTECTED]]
>>Sent: Wednesday, April 03, 2002 10:09 PM
>>To: Jakarta General List
>>Subject: Re: Open Source JDO Implementation??
>>
>>
>>Actually this is not that simple. I don't remember how JDO handles 
>>it, but here is the general problem description. Within any O/R layer 
>>you would need to maintain a balance between two things - fresh data 
>>and caching for performance. Let's take a look at two extremes:
>>
>>
>>1. Object data is never cached.
>>
>>Every time you need to read an object, a query needs to be issued to 
>>the database. This is bad for two reasons: speed (since database 
>>queries are relatively slow) and excessive object creation. Every time 
>>somebody does a fetch from a table with 10000 rows, 10000 objects will 
>>be created. The next second the same user hits the "search again" 
>>button and gets a new 10000 objects again. This new batch is logically 
>>the same set of objects, since they represent the same database rows. 
>>But since we had no cache, we have to create them again, all 10000 of 
>>them. Very slow....
>>
>>2. Objects are always cached.
>>
>>There is an application-wide cache. Every time you need an object 
>>with a certain id, it is first looked up in the cache, and only if it 
>>is not there is it fetched from the database. Good for performance, 
>>but bad for everything else. The problems with this are a large memory 
>>footprint (eventually you can suck the whole database into the cache) 
>>and stale data. Stale data is the worse of the two, especially when 
>>the database is being updated externally via SQL or from other 
>>applications (which is the case in 99% of real-life scenarios).
>>
>>
>>
>>A balanced O/R layer would use some combination of the two approaches 
>>above. It would maintain the cache, expire it when needed (a very 
>>non-trivial task, in fact), and also give the user a way to explicitly 
>>bypass the cache. IMHO if this problem (caching vs. fresh data) is 
>>solved properly, the O/R way would rule the world :-). The state of 
>>the art is pretty close now, via various optimization techniques, 
>>etc., but there is still a lot of mileage to cover.
>>
>
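
To make the "balanced" approach described above a bit more concrete, 
here is a toy cache with a time-based expiration and an explicit way to 
bypass it. All of it - the class names, the expiration policy, the 
Loader callback - is just an assumption for illustration, not how 
Cayenne or any JDO implementation actually works:

import java.util.HashMap;
import java.util.Map;

// Toy object cache combining the two extremes: entries are reused while
// they are fresh enough, expire after a timeout, and the caller can
// always force a trip to the database.
public class BalancedCache {

    private static class Entry {
        final Object object;
        final long loadedAt;

        Entry(Object object, long loadedAt) {
            this.object = object;
            this.loadedAt = loadedAt;
        }
    }

    // Hypothetical callback standing in for "fetch this id from the database".
    public interface Loader {
        Object load(Object id);
    }

    private final Map entries = new HashMap();
    private final long maxAgeMillis;

    public BalancedCache(long maxAgeMillis) {
        this.maxAgeMillis = maxAgeMillis;
    }

    // Returns a cached object if it exists and has not expired; otherwise
    // reloads it. Passing bypassCache = true always goes to the database,
    // which is the "let the user ask for fresh data" escape hatch.
    public synchronized Object get(Object id, Loader loader, boolean bypassCache) {
        long now = System.currentTimeMillis();
        Entry entry = (Entry) entries.get(id);

        boolean expired = entry == null || now - entry.loadedAt > maxAgeMillis;
        if (bypassCache || expired) {
            Object fresh = loader.load(id);
            entries.put(id, new Entry(fresh, now));
            return fresh;
        }
        return entry.object;
    }
}

In practice the hard part is the expiration policy itself; a fixed 
timeout is only the simplest possible stand-in.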


--
To unsubscribe, e-mail:   <mailto:[EMAIL PROTECTED]>
For additional commands, e-mail: <mailto:[EMAIL PROTECTED]>
