On May 6, 2011, at 8:53 PM, Bill Curtis wrote:

> I think I have figured out what's going on here.
> 
> request-1:
> 
> 1.1) query runs, model object gets loaded
> 1.2) model object gets stuffed in session.identity map
> 1.3) model object gets stuffed in secondary beaker cache
> 1.4) deferred field gets loaded
> 1.5) model object gets updated in session identity map
> 1.6) somehow secondary cache gets updated
> 
> request-2:
> 
> 2.1) query runs, model object gets loaded FROM secondary cache
> 2.2) model object gets stuffed in session.identity map
> 2.3) deferred field does not need to be loaded, b/c it was in secondary 
> beaker cache
> 
> Periodically, the cache times out, and the request-1 behavior repeats.
> 
> I'm a little unclear exactly when and how step 1.6 is happening.  If anyone 
> has thoughts, I'd be happy to hear them.  My beaker-cache code is largely 
> derived from the example here, if anyone else is using it:

If a query is run with the beaker cache switch on, a row representing the 
object in question is loaded, and that row matches the object already in the 
identity map.  The beaker caching query then caches the results of that load, 
including your object that came from the identity map.
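
The interaction can be illustrated with a toy version of the caching-query pattern (all names here are hypothetical, not SQLAlchemy or Beaker API): a second request whose key is already in the cache never goes back to the database, so a deferred column that happened to be populated by cache time comes along for free.

```python
# Toy sketch of a beaker-style caching query (hypothetical names, not
# the SQLAlchemy/Beaker API): results are cached under a key derived
# from the query, so a cache hit skips the database entirely.

cache = {}          # stands in for the Beaker cache region
db_calls = []       # records which requests actually hit the "database"

def run_query(key, load_from_db):
    """Return cached results if present; otherwise load and cache them."""
    if key in cache:
        return cache[key]           # request-2 path: steps 2.1-2.3
    rows = load_from_db()           # request-1 path: step 1.1
    cache[key] = rows               # step 1.3: results go to the cache
    return rows

def load_user_1():
    db_calls.append("SELECT ... WHERE id=1")
    # in the scenario above, the deferred column is loaded later in
    # request-1 (step 1.4), so the cached object ends up carrying it
    return [{"id": 1, "name": "ed", "deferred_blob": "big value"}]

first = run_query("user:1", load_user_1)    # request-1: hits the db
second = run_query("user:1", load_user_1)   # request-2: served from cache
```

Only the first call touches the database; the second gets the fully-populated object straight from the cache, which is why the deferred load never fires on request-2.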

I don't understand the ultimate issue, unless it's that you're getting the 
wrong data back.  If it's just that the data is being cached instead of 
loading deferred, then yes, that's just the caching query happening.   It 
would need to be more careful about the state it's placing in the cache - 
like, when the object is serialized for caching, have it expire those 
attributes you don't want in the cache.
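
A minimal sketch of that idea, in plain Python with hypothetical names (a real version would hook the caching query's serializer or the object's pickling protocol): strip the attributes you don't want before serializing, so they come back "unloaded" and get re-fetched like a deferred column.

```python
import pickle

# Hypothetical sketch: before an object is serialized for the cache,
# drop the attributes you don't want cached, so they are absent when
# the object comes back out and must be re-loaded on access.

DEFERRED = {"big_blob"}     # attribute names to keep out of the cache

class User:
    def __init__(self, id, name, big_blob=None):
        self.id, self.name, self.big_blob = id, name, big_blob

def serialize_for_cache(obj):
    # copy the instance state minus the deferred attributes
    state = {k: v for k, v in obj.__dict__.items() if k not in DEFERRED}
    clone = User.__new__(User)      # skip __init__; restore trimmed state
    clone.__dict__.update(state)
    return pickle.dumps(clone)

cached = pickle.loads(serialize_for_cache(User(1, "ed", big_blob="huge")))
# 'big_blob' is now absent from the cached copy entirely
```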

> 
> FWIW, my higher-level concerns are around how to find and invalidate objects 
> in the secondary cache, once they have become dirty.

The way the example works right now, you need to create a Query that 
represents the same cache key as the one you'd like to clear out, and 
invalidate it.  If you're looking to locate objects based on identity, you'd 
probably want to change the scheme in which data is cached - probably cache 
result sets which serialize only a global identifier for each object, then 
store each object individually under those individual identifiers.   It would 
be slower on a get, since it means a get for the result plus a get for each 
member, but it would allow any identity to be cleared globally.    The get 
for each member could be ameliorated by the fact that they'd be found in the 
identity map first, before going out to the cache.
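
The identity-keyed scheme described above can be sketched in a few lines of plain Python (all names hypothetical): result sets cache only identifiers, each object is cached separately under its own identifier, and invalidating one identifier clears it from every result set at once.

```python
# Sketch of an identity-keyed two-level cache (hypothetical names):
# result sets hold identifiers only; objects live under their own keys.

result_cache = {}   # query cache key -> list of object identifiers
object_cache = {}   # identifier -> object state

def cache_results(query_key, objects):
    result_cache[query_key] = [obj["id"] for obj in objects]
    for obj in objects:
        object_cache[obj["id"]] = obj

def get_results(query_key, load_object):
    """One get for the result list, then one get per member - the part
    an identity-map check in front of the cache would ameliorate."""
    ids = result_cache.get(query_key)
    if ids is None:
        return None
    return [object_cache.get(i) or load_object(i) for i in ids]

def invalidate(identifier):
    # clearing one identity affects every result set that refers to it
    object_cache.pop(identifier, None)

cache_results("all_users", [{"id": 1, "name": "ed"},
                            {"id": 2, "name": "wendy"}])
invalidate(2)
# member 2 is gone from the object cache, so it gets re-loaded
reloaded = get_results("all_users", lambda i: {"id": i, "name": "fresh"})
```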

It's not at all simple, and it's why 2nd level caching is not a built-in 
feature!   Too many ways to do it.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to [email protected].
