Dave,
It depends. If it's a to-one relationship, and either:
a) the object is already in your current editing context from your fetch, or
b) the object is in a different editing context, but the database snapshot is recent enough for your editing context to accept it,
then yes, there will be no database activity. This is because the object can be found or created purely based on the global ID, which can be constructed because a to-one relationship requires you to know the entire primary key.
However, if the snapshot is old, your editing context might refuse the snapshot and request a fresh one (this depends on the context's fetch timestamp and the default fetch timestamp lag).
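The freshness decision Ken describes can be sketched in plain Java. This is an illustration only, not EOF source; the class and method names are invented, and the one-hour default lag is an assumption about EOF's default fetch timestamp lag:

```java
// Illustrative sketch only -- NOT EOF source code. Models how an editing
// context decides whether to reuse a cached row snapshot or refetch it.
public class SnapshotFreshness {
    // Assumed default lag (EOF's default is believed to be one hour).
    static final long DEFAULT_TIMESTAMP_LAG_MS = 3_600_000L;

    // A context accepts any snapshot recorded at or after its fetch
    // timestamp; older snapshots are refused, triggering a fresh fetch.
    static boolean accepts(long snapshotRecordedAtMs, long contextFetchTimestampMs) {
        return snapshotRecordedAtMs >= contextFetchTimestampMs;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // A context created with the default lag tolerates snapshots up to 1h old.
        long contextTimestamp = now - DEFAULT_TIMESTAMP_LAG_MS;
        System.out.println(accepts(now - 60_000, contextTimestamp));                    // fresh: true
        System.out.println(accepts(now - 2 * DEFAULT_TIMESTAMP_LAG_MS, contextTimestamp)); // stale: false
    }
}
```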
Lastly, to-many faults will always require a trip to the database, since the set information is not known. You can always improve this by setting the to-many relationship to batch fault. Don't forget though, if only one instance of a to-many fault exists at a particular time in the context, no amount of batch faulting will help you. Another way to improve this is to have a method in your class that determines the to-many grouping via an in-memory qualification (if you already have a group of objects you know are the superset). This of course gets dicey, because you're doing EOF's job...but some circumstances might warrant it.
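The in-memory qualification idea above can be sketched as follows. This is plain Java standing in for EOF's in-memory qualification (in real code you would filter objects already registered in the editing context with an EOQualifier); the `Reading` record and `wellId` grouping key are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: derive a to-many group from a superset of
// objects we already hold in memory, avoiding a database round trip
// for the fault. Doing EOF's job by hand, as Ken warns, gets dicey.
public class InMemoryGrouping {
    record Reading(String wellId, double value) {}

    // Returns the members of the to-many group for one well, selected
    // from a superset we know contains them all.
    static List<Reading> readingsForWell(List<Reading> superset, String wellId) {
        List<Reading> group = new ArrayList<>();
        for (Reading r : superset) {
            if (r.wellId().equals(wellId)) group.add(r);
        }
        return group;
    }

    public static void main(String[] args) {
        List<Reading> all = List.of(
            new Reading("W-1", 10.0), new Reading("W-2", 7.5), new Reading("W-1", 11.2));
        System.out.println(readingsForWell(all, "W-1").size()); // 2
    }
}
```

The caveat in the text applies: this only stays correct if the superset really is a superset, i.e. every member of the group has already been fetched.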
To answer your original question, fetches from fetch specs or faulting go through the same plumbing.
Ken

On Jan 18, 2006, at 12:56 PM, Dave Rathnow wrote:

Is there a difference in the way EOF caches an object if it is fetched using an EOFetchSpecification or if it is retrieved by traversing a relationship? If I do a bulk fetch using a fetchSpec and then access these objects via a relationship from a related object, will EOF use the cached objects or fetch them from the database?

Thanks,
Dave.

Dave,
What you're doing is OK, and pretty typical (at least for me). I try to limit the destruction of snapshots to the destruction of an editing context, so I don't have objects lying around that have not been garbage collected but whose snapshots have been removed from the database context.
In one situation, I would have a singleton manage the 'ref data' context (which stored often-used reference data). When a 'transaction' editing context was destroyed, it would invalidate all the EOs in THAT editing context that did not have a companion in the ref editing context (managed by the singleton). In that manner, we would hold on to ref data indefinitely, but only hold on to transactional data for the life of the context.
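Ken's singleton pattern can be sketched in plain Java. This is an illustration under stated assumptions, not his actual code: global IDs are modeled as strings, and the real implementation would invalidate objects through the editing context rather than just compute a list:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch of the pattern: a singleton owns the long-lived
// "ref data" cache; when a transactional context is torn down, anything
// it holds WITHOUT a companion in the ref cache is marked for
// invalidation, so reference data survives indefinitely while
// transactional snapshots live only as long as their context.
public class RefDataManager {
    private static final RefDataManager INSTANCE = new RefDataManager();
    private final Set<String> refGlobalIDs = new HashSet<>();

    static RefDataManager sharedInstance() { return INSTANCE; }

    void registerRefData(String globalID) { refGlobalIDs.add(globalID); }

    // IDs whose snapshots should be invalidated when a transactional
    // editing context is destroyed.
    List<String> idsToInvalidate(List<String> transactionContextIDs) {
        List<String> doomed = new ArrayList<>();
        for (String gid : transactionContextIDs) {
            if (!refGlobalIDs.contains(gid)) doomed.add(gid);
        }
        return doomed;
    }
}
```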
Ken

On Jan 17, 2006, at 12:07 PM, Dave Rathnow wrote:
We have an application that processes well data that is reported at regular and irregular reporting periods. The bulk of the data arrives in the morning within a two-hour window. The application is headless; that is, it has no UI. When we first wrote the program we were having problems with running out of memory, and the simplest solution we found was to periodically dispose of the editing context and then nuke all snapshots from the EODatabase object. This solved the memory problems, but of course we lost the advantage of any EOF caching. Unfortunately, we are now at a point where we are falling too far behind during busy periods, so we have to revisit the problem and find a better solution.

This application is one part of a bigger system, and its primary job is to process incoming data according to a set of business rules. The processing involves fetching objects from the DB and creating other objects that are then stored in the DB and consumed by other applications in the system. Most of the fetched objects should stay in memory, since they will likely be used in the future to process data from the same location. Other objects could probably be nuked after a period of time. The newly created objects are not required and could be released as soon as they are saved to the DB.

I've been playing around with different optimization strategies, including batch fetching, but the one that seems to work best is to save all inserted objects in my EC before I call saveChanges and then nuke the snapshots from the EODatabase after saveChanges has finished. My overall memory requirements have increased, but they seem to level out at an acceptable level.

So here are my questions:
1. Does anyone have some ideas how to handle this type of caching requirement? Is there another/better approach or different solution?
2. Assuming what I'm doing is an acceptable approach, can I control how long objects are cached by EOF, and which objects are cached?
3. Does nuking snapshots in the way I'm doing it have any bad side effects?

Thanks,
Dave.