On Aug 30, 2013, at 9:25 AM, Torsten Landschoff <[email protected]> wrote:
> Hi *,
>
> I am trying to cache SQLAlchemy queries in memory for a rich client
> application. To invalidate the cache for changes seen in the database, I am
> trying to drop in-memory instances that have been changed or deleted.
>
> This requires comparing the identity of the deleted objects with in-memory
> objects. I tried using identity_key for this and failed, because it tries to
> reload from the database, and I expire the instances when I am told they had
> some changes.
>
> The attached IPython notebook shows the behaviour. Short summary:
>
> Reloads expired state (potential ObjectDeletedError):
>   identity_key(instance=instance)
>   mapper.identity_key_from_instance(instance)
>   mapper.primary_key_from_instance(instance)
>
> Uses old primary key (no reload, no ObjectDeletedError):
>   object_state(user).identity_key
>   object_state(user).identity
>   object_state(user).key
>
> The main reason why I care is that identity_key may generate database queries
> which kill any performance improvement of my query cache.
>
> I think this should be documented in SQLAlchemy; I did not expect those
> functions to ever raise an exception.

Well, those are old functions, and they should document that what you usually
want is just inspect(obj).key if you already have an object. I added
http://www.sqlalchemy.org/trac/ticket/2816 for that.

Just to verify, state.key does what you want, right?
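
For reference, a minimal sketch of the difference (the User model, the
in-memory SQLite engine and the 1.4-style imports are illustrative, not taken
from the notebook): inspect(obj).key just reads the identity key already
recorded on the instance's state and emits no SQL, whereas
identity_key(instance=obj) goes through the primary key attributes, which on
an expired instance can trigger the reload / ObjectDeletedError described
above (that was the behaviour in the 0.8-era versions discussed here; details
may differ in later releases).

from sqlalchemy import Column, Integer, String, create_engine, inspect
from sqlalchemy.orm import Session, declarative_base
from sqlalchemy.orm.util import identity_key

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
user = User(name="torsten")
session.add(user)
session.commit()  # expire_on_commit=True by default, so the instance is now expired

# Reads the identity key stored on the instance's state; no SQL is emitted,
# so this is safe to use for cache lookups/invalidation.
print(inspect(user).key)  # e.g. (<class 'User'>, (1,)); exact tuple shape varies by version

# Accesses the primary key attributes, so an expired instance may be
# refreshed from the database first (and ObjectDeletedError raised if the
# row has been deleted in the meantime).
print(identity_key(instance=user))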