On 4/23/07, Daniel Stutzbach <[EMAIL PROTECTED]> wrote:
> If a user creates a BList from whole cloth (e.g., BList(iterable)),
> they will get a best-case BList. This would probably be the common
> case for users who don't plan to modify their list much.
>
> If a user is doing lots of inserts and deletes, the BList's nodes
> will typically be three-quarters full, translating into 33% extra
> memory above a regular Python list. However (and this is a big
> however!), these users will greatly appreciate the BList's additional
> speed for these operations.
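(If I'm reading the 33% figure right, it's just the fill ratio: with
nodes on average three-quarters full, the structure allocates about
1 / 0.75 = 1.33 slots per element, i.e. roughly 33% extra, ignoring
per-node bookkeeping.)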
For contrast, the existing list seems to over-allocate by 12.5%, for an
average of 6.25% wasted. However, when deleting it only resizes when it
drops below half full, leaving it with a worst case of 100%, the same as
a BList.

The usage patterns can make a big difference there, though. A list's
worst case is only reached if you have a spike, then drop to just above
half full. A BList's worst case is reached through random slicing.

Daniel, does repeated getitem/setitem (as in random.shuffle()) increase
memory usage, or does it have no effect? A random.shuffle() benchmark
might be a better demonstration of the overall costs.

--
Adam Olsen, aka Rhamphoryncus
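P.S. To make that concrete, here's roughly the kind of shuffle benchmark
I mean. Only the plain-list half is real; with a new enough Python,
sys.getsizeof on a list reflects the allocated slots, so it should show
whether shuffling changes the allocation. The BList half is a
placeholder -- I'm assuming the prototype can be imported and has some
hook for reporting its allocation (the names below are made up; adjust
to whatever the real API is).

    import random
    import sys
    import timeit

    N = 100000
    items = list(range(N))

    before = sys.getsizeof(items)   # counts allocated slots, not just len()
    random.shuffle(items)           # pure getitem/setitem traffic
    after = sys.getsizeof(items)
    print("list: %d -> %d bytes after one shuffle" % (before, after))

    secs = timeit.timeit(lambda: random.shuffle(items), number=10)
    print("list: %.3f s for 10 shuffles" % secs)

    # Hypothetical BList side -- uncomment once the prototype is installed.
    # The memory-reporting call is invented; the prototype may need to grow
    # an equivalent hook for this comparison to work.
    # from blist import BList
    # bitems = BList(range(N))
    # print("BList: %d bytes before shuffle" % bitems._memory_usage())
    # random.shuffle(bitems)
    # print("BList: %d bytes after shuffle" % bitems._memory_usage())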