Ben Cipollini added the comment:

@Martin: yes, that reproduces the problem.

>I think the ideal fix would be to cap the limit at 2**32 - 1 in the zlib 
>library.

If this cap is implemented, is there a workaround that lets us efficiently open 
gzipped files over 4 GB? Such files exist in high-priority, government-funded 
datasets, and neuroinformatics in Python needs a way to open them efficiently.
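
For reference, the kind of workaround we can fall back to is reading in bounded 
chunks, so that no single call asks zlib for more than 2**32 - 1 bytes. A rough 
sketch (the chunk size and filename are placeholders, not our actual code):

    import gzip

    CHUNK = 64 * 1024 * 1024  # 64 MiB per read, well under the 2**32 - 1 cap

    def read_large_gzip(path):
        # Accumulate the decompressed stream via bounded reads instead of a
        # single fh.read() that would request more than 4 GB at once.
        parts = []
        with gzip.open(path, "rb") as fh:
            while True:
                block = fh.read(CHUNK)
                if not block:
                    break
                parts.append(block)
        return b"".join(parts)

    data = read_large_gzip("subject_scan.nii.gz")  # placeholder filename

That runs, but joining the parts roughly doubles peak memory, which is exactly 
the cost we are trying to avoid on multi-gigabyte images.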

>another option would be to cap the limit in the gzip library

Again, these limitations are new in Python 3.5 and currently block us from 
using Python 3.5 for neuroinformatics work. Is there any chance of reverting 
this new behavior, or is there a known memory-efficient workaround?
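
If reverting is off the table, is the expectation that callers drop down to 
zlib.decompressobj and stream the data themselves? A sketch of what I mean 
(chunk sizes are arbitrary; 32 + MAX_WBITS just enables gzip-header detection):

    import zlib

    CHUNK_IN = 16 * 1024 * 1024    # compressed bytes fed to zlib per call
    CHUNK_OUT = 64 * 1024 * 1024   # cap on decompressed bytes per call

    def iter_decompressed(path):
        # 32 + MAX_WBITS tells zlib to detect and skip the gzip header.
        d = zlib.decompressobj(wbits=32 + zlib.MAX_WBITS)
        with open(path, "rb") as fh:
            while True:
                raw = fh.read(CHUNK_IN)
                if not raw:
                    break
                out = d.decompress(raw, CHUNK_OUT)
                if out:
                    yield out
                # Drain any input zlib held back because of the max_length cap.
                while d.unconsumed_tail:
                    out = d.decompress(d.unconsumed_tail, CHUNK_OUT)
                    if out:
                        yield out
        tail = d.flush()
        if tail:
            yield tail

That keeps memory bounded, but it is a lot of boilerplate for something that 
gzip.GzipFile.read() handled for us before 3.5.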

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25626>
_______________________________________