Nadeem Vawda added the comment:

I agree that being able to limit output size is useful and desirable, but I'm not keen on copying the max_length/unconsumed_tail approach used by zlib's decompressor class. It feels awkward to use, and it complicates the implementation of the existing decompress() method, which is already unwieldy enough.
As an alternative, I propose a thin wrapper around the underlying C API:

    def decompress_into(self, src, dst, src_start=0, dst_start=0): ...

This would store the decompressed data in a caller-provided bytearray, and return a pair of integers indicating the end points of the consumed and produced data in the respective buffers. The implementation should be extremely simple - it would not need to do any memory allocation or reference management. I think it could also be useful for optimizing the implementations of BZ2File and LZMAFile. I plan to write a prototype and run some benchmarks some time in the next few weeks.

(Aside: if implemented for zlib, this could also be a nicer solution (I think) for the problem raised in issue 5804.)

----------
stage:  -> needs patch

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue15955>
_______________________________________
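To make the proposed calling convention concrete, here is a rough pure-Python sketch of the decompress_into() semantics, layered on top of zlib's existing decompressobj (an assumption for illustration only - the proposal targets bz2/lzma and would call the C API directly, avoiding the intermediate copies this sketch makes):

```python
import zlib

def decompress_into(d, src, dst, src_start=0, dst_start=0):
    """Sketch of the proposed semantics: decompress from src[src_start:]
    into the caller-provided bytearray dst starting at dst_start, and
    return (src_end, dst_end), the end points of the consumed and
    produced data in the respective buffers."""
    room = len(dst) - dst_start
    # max_length caps the output; input zlib could not yet consume is
    # left in d.unconsumed_tail (a suffix of the slice we passed in).
    out = d.decompress(src[src_start:], room)
    dst[dst_start:dst_start + len(out)] = out
    consumed = (len(src) - src_start) - len(d.unconsumed_tail)
    return src_start + consumed, dst_start + len(out)

# Usage: stream a payload through a fixed 32-byte output buffer.
data = zlib.compress(b"hello world " * 20)
d = zlib.decompressobj()
dst = bytearray(32)
result = bytearray()
src_end = 0
while not d.eof:
    src_end, dst_end = decompress_into(d, data, dst, src_start=src_end)
    result += dst[:dst_end]

assert result == b"hello world " * 20
```

The caller owns both buffers and drives the loop, so the wrapper itself allocates nothing - which is the point of the proposal.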