That's actually not quite caching. That's just optimizing memory
usage.

Here's my proposal:

* Create a custom ModelBase, CachedModelBase -- this would rely on
CACHE_BACKEND to serve get() requests (primary-key lookups) using a
method similar to #17's optimization
* CachedModelBase overrides save/delete to handle invalidation
* CachedModelBase overrides objects, setting it to a CacheManager, and
adds a no_cache manager, which is the uncached default manager
* CachedOptions would also extend the Options meta class, providing
additional options for setting defaults such as cache_prefix and
cache_timers (timeouts for get, filter, and all requests)
* Possibly some "smart" detection along the lines of: "I've cached 50
entries here and they requested 10, so pull the cache of 50 and return 10"
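To make the first three bullets concrete, here's a rough sketch of the
cached-get / invalidate-on-save-and-delete flow. This is not real Django
API: a plain dict stands in for CACHE_BACKEND, the no_cache "manager" is
just a dict, and the names (CachedModelBase, CacheManager, cache_prefix)
simply follow the proposal above.

```python
CACHE = {}  # stand-in for the configured CACHE_BACKEND


class CacheManager:
    """get() by primary key goes through the cache first (the #17-style path)."""

    def __init__(self, model):
        self.model = model

    def _key(self, pk):
        return "%s:%s" % (self.model.cache_prefix, pk)

    def get(self, pk):
        key = self._key(pk)
        if key in CACHE:
            return CACHE[key]
        obj = self.model.no_cache[pk]  # fall through to the uncached store
        CACHE[key] = obj               # populate the cache on a miss
        return obj


class CachedModelBase:
    cache_prefix = "article"  # would come from CachedOptions in practice
    no_cache = {}             # stand-in for the default (uncached) manager

    def __init__(self, pk, title):
        self.pk, self.title = pk, title
        type(self).no_cache[pk] = self

    def save(self):
        # write-through: refresh the cached copy on save
        CACHE["%s:%s" % (self.cache_prefix, self.pk)] = self

    def delete(self):
        # invalidate on delete
        CACHE.pop("%s:%s" % (self.cache_prefix, self.pk), None)
        type(self).no_cache.pop(self.pk, None)


# objects is the cache-aware manager; no_cache remains the plain path
CachedModelBase.objects = CacheManager(CachedModelBase)
```

Usage would look like Django's normal manager API: a first
`CachedModelBase.objects.get(1)` misses and populates the cache,
subsequent gets are served from it, and save()/delete() keep the cached
copy consistent.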

On Sep 15, 12:17 pm, Philippe Raoult <[EMAIL PROTECTED]>
wrote:
> I'm too wasted to write a coherent answer, but I'd like to point out
> that http://code.djangoproject.com/ticket/17
> has a patch for caching that has been refactored during the sprint.
>
> Regards,
> Philippe


--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Django developers" group.
To post to this group, send email to django-developers@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/django-developers?hl=en
-~----------~----~----~----~------~----~------~--~---