Antoine Pitrou <pit...@free.fr> added the comment:

When compiling a source file to bytecode, Python first builds a syntax tree in memory. It is very likely that the memory consumption you observe is due to the size of the syntax tree. It is also unlikely that anyone other than you will want to modify the parsing code to accommodate such an extreme usage scenario :-)
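To see why the tree gets so large, note that every element of a literal list becomes its own AST node. A minimal sketch (Python 3, standard `ast` module; the list size here is deliberately small for illustration):

```python
import ast

# Build a source string containing a list literal of 1000 integers.
source = "x = [" + ", ".join(str(i) for i in range(1000)) + "]"
tree = ast.parse(source)

# The literal becomes a single List node holding one node per element,
# so a multi-million-element literal means millions of AST nodes.
list_node = tree.body[0].value
print(type(list_node).__name__)   # List
print(len(list_node.elts))        # 1000
```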
For persistence of large data structures, I suggest using cPickle or a similar mechanism. You can even embed the pickles in literal strings if you still need your sessions to be Python source code:

>>> import cPickle
>>> f = open("test.py", "w")
>>> f.write("import cPickle\n")
>>> f.write("x = cPickle.loads(%s)" % repr(cPickle.dumps(range(5000000), protocol=-1)))
>>> f.close()
>>> import test
>>> len(test.x)
5000000

----------
nosy: +pitrou

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue5557>
_______________________________________