[issue4428] make io.BufferedWriter observe max_buffer_size limits

2009-10-09 Thread Antoine Pitrou

Antoine Pitrou pit...@free.fr added the comment:

max_buffer_size is no longer used, so this issue is obsolete ;)

--
resolution:  -> out of date
status: open -> closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue4428
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue4428] make io.BufferedWriter observe max_buffer_size limits

2009-01-31 Thread Gregory P. Smith

Changes by Gregory P. Smith g...@krypto.org:


Added file: http://bugs.python.org/file12914/issue4428-io-bufwrite-gps05.diff




[issue4428] make io.BufferedWriter observe max_buffer_size limits

2009-01-31 Thread Gregory P. Smith

Gregory P. Smith g...@krypto.org added the comment:

I've uploaded a new patch set with more extensive unit tests.  It also
handles the case of writing array.array objects (or anything with a
memoryview itemsize > 1).  The previous code would buffer by item rather
than by byte.  It has been updated in codereview as well.
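As an illustrative aside (my sketch, not part of the patch): the item-versus-byte distinction is visible directly on a memoryview over an array.array. Note that memoryview.cast() used here arrived later, in Python 3.3:

```python
import array

a = array.array('i', [1, 2, 3, 4])   # items are multi-byte signed ints
m = memoryview(a)

# Slicing the raw memoryview counts *items*, not bytes:
assert len(m[:2]) == 2               # two ints, not two bytes

# Casting to unsigned bytes ('B') yields a byte-oriented view, which is
# what a byte buffer such as BufferedWriter needs to slice on:
b = m.cast('B')
assert len(b) == len(a) * a.itemsize
```

Slicing the item view is effectively what the previous code did, which is why it buffered per item rather than per byte.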




[issue4428] make io.BufferedWriter observe max_buffer_size limits

2009-01-31 Thread Gregory P. Smith

Gregory P. Smith g...@krypto.org added the comment:

FWIW, I decided Guido and Antoine were right and took out the support
for input that does not support len(), to keep things a bit simpler.




[issue5011] issue4428 - make io.BufferedWriter observe max_buffer_size limits

2009-01-20 Thread Antoine Pitrou

New submission from Antoine Pitrou pit...@free.fr:

http://codereview.appspot.com/12470/diff/1/2
File Lib/io.py (right):

http://codereview.appspot.com/12470/diff/1/2#newcode1055
Line 1055: # b is an iterable of ints, it won't always support len().
There is no reason for write() to accept arbitrary iterable of ints,
only bytes-like and buffer-like objects. It will make the code simpler.

http://codereview.appspot.com/12470/diff/1/2#newcode1060
Line 1060: # No buffer API?  Make intermediate slice copies instead.
Objects without the buffer API shouldn't be supported at all.

http://codereview.appspot.com/12470/diff/1/2#newcode1066
Line 1066: while chunk and len(self._write_buf) > self.buffer_size:
What if buffer_size == max_buffer_size? Is everything still written ok?

http://codereview.appspot.com/12470/diff/1/2#newcode1070
Line 1070: written += e.characters_written
e.characters_written can include bytes which were already part of the
buffer before write() was called, but the newly raised BlockingIOError
should only count those bytes which were part of the object passed to
write().
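A hypothetical sketch of the accounting being asked for here (the helper name and parameters are mine, purely for illustration):

```python
def callers_bytes_written(chars_flushed, pre_existing):
    """Bytes of the *caller's* data consumed by a partial flush.

    chars_flushed: e.characters_written from the BlockingIOError,
        counting bytes drained from the internal buffer.
    pre_existing: bytes already sitting in the buffer before this
        write() call; those must not be billed to the caller.
    """
    return max(0, chars_flushed - pre_existing)

# 10 bytes flushed, but 6 of them predate this write(): report only 4.
assert callers_bytes_written(10, 6) == 4
# Flush stalled before reaching any of the caller's bytes: report 0.
assert callers_bytes_written(4, 6) == 0
```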

http://codereview.appspot.com/12470/diff/1/3
File Lib/test/test_io.py (right):

http://codereview.appspot.com/12470/diff/1/3#newcode496
Line 496: def testWriteNoLengthIterable(self):
This shouldn't work at all. If it works right now, it is only a
side-effect of the implementation.
(it won't work with FileIO, for example)

http://codereview.appspot.com/12470

--
messages: 80242
nosy: gregory.p.smith, pitrou
severity: normal
status: open
title: issue4428 - make io.BufferedWriter observe max_buffer_size limits

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5011
___



[issue5011] issue4428 - make io.BufferedWriter observe max_buffer_size limits

2009-01-20 Thread Gregory P. Smith

Gregory P. Smith g...@krypto.org added the comment:

Reviewers: Antoine Pitrou,

Message:
Just responding to your comments on the support for generators and for
inputs that don't support the buffer API.

I'll get to the other comments in the code soon with new unit tests for
those cases.

http://codereview.appspot.com/12470/diff/1/2
File Lib/io.py (right):

http://codereview.appspot.com/12470/diff/1/2#newcode1055
Line 1055: # b is an iterable of ints, it won't always support len().
On 2009/01/20 11:12:47, Antoine Pitrou wrote:
> There is no reason for write() to accept arbitrary iterable of ints,
> only bytes-like and buffer-like objects. It will make the code simpler.


Agreed.  But since I want to merge this into release30-maint doing that
sounds like a behavior change.  I'd be fine with removing it for the
3.1/2.7 version of this code (though I hope people will be using the C
implementation instead).

http://codereview.appspot.com/12470/diff/1/2#newcode1060
Line 1060: # No buffer API?  Make intermediate slice copies instead.
On 2009/01/20 11:12:47, Antoine Pitrou wrote:
> Objects without the buffer API shouldn't be supported at all.

Same reason as above.

http://codereview.appspot.com/12470/diff/1/3
File Lib/test/test_io.py (right):

http://codereview.appspot.com/12470/diff/1/3#newcode496
Line 496: def testWriteNoLengthIterable(self):
On 2009/01/20 11:12:47, Antoine Pitrou wrote:
> This shouldn't work at all. If it works right now, it is only a
> side-effect of the implementation.
> (it won't work with FileIO, for example)


Hmm, in that case it might not be too large of a thing to break when
merged into release30-maint.  I'll leave that up to the release manager
& BDFL.  My gut feeling for a release branch change is to say we have to
live with this being supported for now.

Description:
http://bugs.python.org/issue4428 - patch gps04.

Please review this at http://codereview.appspot.com/12470

Affected files:
   Lib/io.py
   Lib/test/test_io.py

Index: Lib/io.py
===================================================================
--- Lib/io.py   (revision 68796)
+++ Lib/io.py   (working copy)
@@ -1047,11 +1047,42 @@
                     self._flush_unlocked()
                 except BlockingIOError as e:
                     # We can't accept anything else.
-                    # XXX Why not just let the exception pass through?
+                    # Reraise this with 0 in the written field as none of the
+                    # data passed to this call has been written.
                     raise BlockingIOError(e.errno, e.strerror, 0)
             before = len(self._write_buf)
-            self._write_buf.extend(b)
-            written = len(self._write_buf) - before
+            bytes_to_consume = self.max_buffer_size - before
+            # b is an iterable of ints, it won't always support len().
+            if hasattr(b, '__len__') and len(b) > bytes_to_consume:
+                try:
+                    chunk = memoryview(b)[:bytes_to_consume]
+                except TypeError:
+                    # No buffer API?  Make intermediate slice copies instead.
+                    chunk = b[:bytes_to_consume]
+                # Loop over the data, flushing it to the underlying raw IO
+                # stream in self.max_buffer_size chunks.
+                written = 0
+                self._write_buf.extend(chunk)
+                while chunk and len(self._write_buf) > self.buffer_size:
+                    try:
+                        self._flush_unlocked()
+                    except BlockingIOError as e:
+                        written += e.characters_written
+                        raise BlockingIOError(e.errno, e.strerror, written)
+                    written += len(chunk)
+                    assert not self._write_buf, "_write_buf should be empty"
+                    if isinstance(chunk, memoryview):
+                        chunk = memoryview(b)[written:
+                                              written + self.max_buffer_size]
+                    else:
+                        chunk = b[written:written + self.max_buffer_size]
+                    self._write_buf.extend(chunk)
+            else:
+                # This could go beyond self.max_buffer_size as we don't know
+                # the length of b.  The alternative of iterating over it one
+                # byte at a time in python would be slow.
+                self._write_buf.extend(b)
+                written = len(self._write_buf) - before
             if len(self._write_buf) > self.buffer_size:
                 try:
                     self._flush_unlocked()
Index: Lib/test/test_io.py
===================================================================
--- Lib/test/test_io.py (revision 68796)
+++ Lib/test/test_io.py (working copy)
@@ -479,6 +479,33 @@

         self.assertEquals(b"abcdefghijkl", writer._write_stack[0])

+    def testWriteMaxBufferSize(self):
+        writer = MockRawIO()
+        bufio = 

[issue4428] make io.BufferedWriter observe max_buffer_size limits

2009-01-20 Thread Antoine Pitrou

Antoine Pitrou pit...@free.fr added the comment:

Hi!

> that sounds like a behavior change.  I'd be fine with removing it for
> the 3.1/2.7 version of this code (though I hope people will be using
> the C implementation instead).

Well, either it's supported and it will have to go through a deprecation
phase, or it's unsupported and it can be ripped out right now...

I don't think it should be supported at all, given that the semantics of
writing an iterable of ints are totally non-obvious. Reading both the
PEP and the docstrings in io.py, I only see mentions of bytes and
buffer, not of an iterable of ints. Perhaps Guido should pronounce.

(do you know of any code relying on this behaviour? the C version
obviously does not support it and all regression tests pass fine, except
for an SSL bug I filed)
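A side note on the semantics question: a caller holding an iterable of ints can always convert it explicitly, which keeps write() restricted to bytes-like objects. A minimal sketch:

```python
import io

data_iter = iter([72, 105])      # an arbitrary iterable of ints, no len()
buf = io.BytesIO()

# Explicit conversion at the call site; write() itself only ever
# sees a real bytes object:
buf.write(bytes(data_iter))
assert buf.getvalue() == b'Hi'
```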

http://codereview.appspot.com/12470

--
nosy: +pitrou
title: io.BufferedWriter does not observe buffer size limits -> make
io.BufferedWriter observe max_buffer_size limits




[issue4428] make io.BufferedWriter observe max_buffer_size limits

2009-01-20 Thread Guido van Rossum

Guido van Rossum gu...@python.org added the comment:

@Gregory, that sounds like an odd enough use case to skip.  However, you
might want to look for __length_hint__ before giving up?

OTOH, unless the use case is real, why not support it but make it slow?
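For reference, the __length_hint__ protocol Guido mentions was later given a public entry point, operator.length_hint() (Python 3.4). A minimal sketch of an iterable that offers a size estimate without supporting len():

```python
import operator

class Stream:
    """Iterable with no __len__, but a size estimate via __length_hint__."""
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return iter(range(self.n))
    def __length_hint__(self):
        return self.n

s = Stream(5)
assert not hasattr(s, '__len__')
assert operator.length_hint(s) == 5   # falls back to __length_hint__
assert operator.length_hint(object()) == 0   # no len, no hint: default 0
```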
