On Jun 15, 2008, at 12:20 PM, zooko wrote:

So, it would be easy to change those two behaviors in order to use this implementation. There are actually three implementations in that file, including one that is asymptotically O(1) for all operations (using a doubly-linked list woven into the values of the dict) and one that uses a Python list to hold the order, which is faster for small enough dicts.

P.S. I didn't mean to fall for the common misunderstanding that hash table operations are O(1). What I should have written is that my ordered dict technique *adds* only O(1) time to the time of the dict on which it is built.
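
For illustration, here is a minimal sketch of the doubly-linked-list technique (a simplified sketch of my own, not the actual code in that file): each value in the dict is a [prev_key, next_key, value] cell, so the insertion order is threaded through the values and every operation adds only O(1) work on top of the underlying dict. The class name and details below are invented for the sketch.

    # Sketch only: each dict value is a [prev_key, next_key, value] cell.
    # The insertion order is a doubly-linked list threaded through the
    # values, so every operation adds only O(1) work on top of the dict.

    _SENTINEL = object()  # marks both ends of the linked list

    class LinkedListOrderedDict:
        def __init__(self):
            self._d = {}              # key -> [prev_key, next_key, value]
            self._first = _SENTINEL   # key of the oldest entry
            self._last = _SENTINEL    # key of the newest entry

        def __setitem__(self, key, value):
            if key in self._d:
                self._d[key][2] = value   # overwrite keeps the old position
                return
            if self._last is _SENTINEL:
                self._first = key         # first entry ever
            else:
                self._d[self._last][1] = key
            self._d[key] = [self._last, _SENTINEL, value]
            self._last = key

        def __getitem__(self, key):
            return self._d[key][2]

        def __delitem__(self, key):
            # Unlink the cell in O(1) by patching its neighbors.
            prev_key, next_key, _ = self._d.pop(key)
            if prev_key is _SENTINEL:
                self._first = next_key
            else:
                self._d[prev_key][1] = next_key
            if next_key is _SENTINEL:
                self._last = prev_key
            else:
                self._d[next_key][0] = prev_key

        def __contains__(self, key):
            return key in self._d

        def __len__(self):
            return len(self._d)

        def __iter__(self):
            key = self._first
            while key is not _SENTINEL:   # walk oldest-to-newest
                yield key
                key = self._d[key][1]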

As to the question of how important or common this data structure is, I have to admit that while I implemented this one and used it several times (exclusively for LRU caching), I currently don't use it for anything. Nowadays I try to avoid transparent caching (the sort of thing LRU caches are often used for) in favor of explicit management of the resource.
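
To make the LRU connection concrete, here is a hypothetical cache built on the sketch above (again my own illustration, not code from the file): a hit deletes and reinserts the key, both O(1), so the least-recently-used entry is always at the front and can be evicted in O(1) when the cache is full.

    # Hypothetical LRU cache built on the ordered-dict sketch above.
    class LRUCache:
        def __init__(self, capacity):
            self._capacity = capacity
            self._od = LinkedListOrderedDict()

        def get(self, key, default=None):
            try:
                value = self._od[key]
            except KeyError:
                return default
            del self._od[key]       # move-to-end: delete, then
            self._od[key] = value   # reinsert -- both O(1)
            return value

        def put(self, key, value):
            if key in self._od:
                del self._od[key]   # refresh the key's position
            elif len(self._od) >= self._capacity:
                oldest = next(iter(self._od))  # least-recently-used key
                del self._od[oldest]
            self._od[key] = value

    cache = LRUCache(2)
    cache.put('a', 1)
    cache.put('b', 2)
    cache.get('a')       # touching 'a' makes 'b' the LRU entry
    cache.put('c', 3)    # evicts 'b'
    assert cache.get('b') is None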

Regards,

Zooko
