Hi all, long-time user, first-time poster here. Thank you all for Django!
The current local-memory cache backend (locmem) in Django uses a pseudo-random culling strategy. Rather than evicting entries at random, an OrderedDict can be used to implement a least-recently-used (LRU) eviction policy. The standard library's functools.lru_cache already takes this approach, and Python 3's OrderedDict.move_to_end and OrderedDict.popitem methods make the implementation straightforward.

I have created an example set of changes at https://github.com/grantjenks/django/tree/ticket_28977 in commit https://github.com/grantjenks/django/commit/b06574f6713d4b7d367d7a11e0268fb62f5fd1d1

Is there a consensus as to the value of these changes?

Sincerely,
Grant Jenks

--
You received this message because you are subscribed to the Google Groups "Django developers (Contributions to Django itself)" group.
To view this discussion on the web visit https://groups.google.com/d/msgid/django-developers/1f9d2225-18e8-46f3-9311-b07177c4baca%40googlegroups.com.
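[Editor's note: the LRU approach described above can be sketched with an OrderedDict as below. This is a minimal illustration of the technique, not the actual patch from the linked branch; the class name and max_entries parameter are assumptions for the example.]

```python
from collections import OrderedDict


class LRUCache:
    """Minimal LRU cache sketch built on OrderedDict.

    Illustrative only -- not Django's locmem backend. Recency is
    tracked by the dict's insertion order: move_to_end() marks a key
    as most recently used, and popitem(last=False) evicts the least
    recently used key once the cache exceeds max_entries.
    """

    def __init__(self, max_entries=3):
        self.max_entries = max_entries
        self._data = OrderedDict()

    def set(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)  # mark as most recently used
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # evict least recently used

    def get(self, key, default=None):
        try:
            value = self._data[key]
        except KeyError:
            return default
        self._data.move_to_end(key)  # a cache hit refreshes recency
        return value


cache = LRUCache(max_entries=3)
for k in "abc":
    cache.set(k, k.upper())
cache.get("a")       # "a" becomes most recently used
cache.set("d", "D")  # capacity exceeded: evicts "b", not "a"
print(cache.get("b"))  # None -- "b" was the least recently used
print(cache.get("a"))  # "A" -- "a" survived the cull
```

Unlike pseudo-random culling, every eviction here removes the entry that has gone longest without access, which is why a hit must call move_to_end as well as a set.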