Seth Bromberger added the comment:

>I'm just pointing out that if he thinks 56 bytes per object is too large, he's 
>going to be disappointed with what Python has to offer. General purpose 
>user-defined Python objects don't optimize for low memory usage, and even 
>__slots__ only gets you so far.

"He" thinks that 1300% overhead is a bit too much for a simple object that can 
be represented by a 32-bit number, and "he" has been using python for several 
years and understands, generally, what the language has to offer (though not 
nearly so well as some of the respondents here). It may be in the roundoff at n 
< 1e5, but when you start using these objects for real-world analysis, the 
overhead becomes problematic. I note with some amusement that the overhead is 
several orders of magnitude greater than those of the protocols these objects 
represent :)
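For what it's worth, the kind of measurement behind that overhead figure can be sketched with `sys.getsizeof` (a rough sketch only: the class names here are illustrative, the reported sizes are CPython- and version-specific, and `getsizeof` on an instance does not include its `__dict__`, which must be counted separately):

```python
import sys

class PlainAddr:
    # Ordinary class: each instance carries a per-instance __dict__.
    def __init__(self, addr):
        self.addr = addr

class SlottedAddr:
    # __slots__ suppresses the per-instance __dict__, trading
    # flexibility for a smaller fixed-size instance.
    __slots__ = ("addr",)
    def __init__(self, addr):
        self.addr = addr

plain = PlainAddr(0xC0A80001)
slotted = SlottedAddr(0xC0A80001)

# Instance alone, then the dict it drags along.
print(sys.getsizeof(plain), sys.getsizeof(plain.__dict__))
# Slotted instance: no __dict__ at all.
print(sys.getsizeof(slotted))
```

Even the slotted version is many times larger than the 4 bytes the underlying value needs, which is the gap being discussed above.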

In any case - I have no issues with the decision not to make these objects 
memory-efficient in the stdlib, nor with closing this out. Rolling my own 
package appears to be the best solution.

Thanks for the discussion.

----------
resolution:  -> wont fix
status: open -> closed

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue23103>
_______________________________________