Following Oscar's comment, it's also an O(n) vs. O(1) tradeoff: a list is smaller, but a membership test has to scan it element by element, whereas a set or dict answers the same test with a (roughly) constant-time hash lookup. The extra memory buys speed. --wesley
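P.S. A quick illustrative sketch of that time tradeoff using timeit (absolute numbers will vary by machine and Python version; only the relative difference matters):

import timeit

# Build a 10000-element list and an equivalent set; look up the last
# element, which is the worst case for the list scan.
setup = "data_list = list(range(10000)); data_set = set(data_list)"

print(timeit.timeit("9999 in data_list", setup=setup, number=1000))  # O(n): scans the whole list
print(timeit.timeit("9999 in data_set", setup=setup, number=1000))   # O(1) on average: hash lookup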
On Tue, May 7, 2013 at 12:09 PM, Oscar Benjamin <oscar.j.benja...@gmail.com> wrote:
> On 7 May 2013 19:09, Bjorn Madsen <bjorn.h.mad...@googlemail.com> wrote:
> > Hi, Thanks for the quick response.
> >
> > "Being curious" I actually expected something like this:
> >>>> L={x:None for x in range(10000)}
> >>>> sys.getsizeof(L)
> > 196660
> >>>>
> >
> > That is why I wondered why 6 times is a lot given that a dict can do the
> > same at 3/4 of the mem-footprint.
> > Just got surprised about the overhead as "sets" seem more lightweight
> > than dictionaries.
>
> I think that the 3/4 is pretty much random. sets and dicts resize
> periodically depending on how they are used.
>
> > additional overhead must have something to do with set-specific operations,
> > I imagine such as:
> > set1 - set2
> > set1 | set2
> > set1 & set2
> > set1 <= set2
> > ....
>
> I don't think so. Here are the results on a 64-bit Linux machine (Python 2.7):
>
> >>> import sys
> >>> sys.getsizeof([x for x in range(10000)])
> 87632
> >>> sys.getsizeof({x for x in range(10000)})
> 524520
> >>> sys.getsizeof({x: None for x in range(10000)})
> 786712
>
> So in this case the set ends up being 2/3 the size of the dict. As I
> said before, these results will not be consistent from one computer to
> another.
>
>
> Oscar

--
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
"A computer never does what you want... only what you tell it."
    +wesley chun : wescpy at gmail : @wescpy
    Python training & consulting : http://CyberwebConsulting.com
    "Core Python" books : http://CorePython.com
    Python blog: http://wescpy.blogspot.com
_______________________________________________
Tutor maillist - Tutor@python.org
To unsubscribe or change subscription options:
http://mail.python.org/mailman/listinfo/tutor