Martin Panter added the comment:

## BufferedReader.peek() ##
See <https://bugs.python.org/issue5811#msg233750>; in short, my concern is 
that the documentation says “the number of bytes returned may be less or more 
than requested”, without any mention of EOF or other conditions.
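To illustrate the ambiguity (a minimal sketch; the buffer_size value is 
arbitrary):

```python
import io

# peek() returns whatever is buffered, which can be more bytes than asked
# for -- and an empty result is the only hint of EOF, although the
# documentation does not actually promise that.
raw = io.BytesIO(b"abcdef")
reader = io.BufferedReader(raw, buffer_size=4)

first = reader.peek(2)   # may return more than 2 bytes
reader.read()            # exhaust the stream
at_eof = reader.peek(2)  # empty -- but only by convention, not by contract
```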

## Buffer sizing ##
In the code review, Nikolaus raised the idea of allowing a custom “buffer_size” 
parameter for the BufferedReader. I think this would need a bit of 
consideration about how it should work:

1. Should it be a direct wrapper around BufferedReader(buffer_size=...)?
2. Should it also support an unbuffered reader mode like open(buffering=0), 
socket.makefile(buffering=0), and subprocess.Popen(bufsize=0)?
3. Should there be any consideration for buffering in write mode (mirroring the 
equivalent open(), etc. parameters)?

## Common raw decompression stream class ##
Having a common base class mapping the generic decompressor object API to the 
RawIOBase API is a good thing. I will try to make one in my next patch 
iteration. In the meantime, here are a couple of issues to consider:

* What module should it reside in? Perhaps “gzip”, because it is the most 
commonly used of the three decompressors? Perhaps “io”, as the most relevant 
common dependency? Or a brand-new internal module?
* The zlib.decompressobj() API is slightly different: it has the 
“unconsumed_tail” attribute and a flush() method, and decompress(max_length=0) 
does not mean “return zero bytes” (Issue 23200).
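A small demonstration of the zlib difference, using only documented 
zlib.decompressobj() behaviour:

```python
import zlib

comp = zlib.compress(b"x" * 1000)

# max_length=0 means *unlimited* output, not zero bytes (Issue 23200):
d = zlib.decompressobj()
assert len(d.decompress(comp, 0)) == 1000

# With a nonzero cap, the leftover *input* lands in unconsumed_tail and must
# be fed back in by the caller, whereas bz2/lzma buffer pending input
# internally:
d = zlib.decompressobj()
part = d.decompress(comp, 10)
rest = d.decompress(d.unconsumed_tail)
```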

In Issue 23528, Nikolaus also pointed out that “GzipFile would need to 
additionally overwrite read() and write() in order to handle the CRC and gzip 
header.”

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue23529>
_______________________________________