New submission from Robert Collins:

The io library rejects unbuffered text I/O, but this is not documented - and in
fact it can be manually worked around:
    binstdout = io.open(sys.stdout.fileno(), 'wb', 0)
    sys.stdout = io.TextIOWrapper(binstdout, encoding=sys.stdout.encoding)
will get a sys.stdout that is unbuffered.
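
For reference, a minimal reproduction of the rejection the title describes:
    import io
    import sys
    io.open(sys.stdout.fileno(), 'wt', 0)
    # raises ValueError: can't have unbuffered text I/O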

Note that writing to a pipe doesn't really need to care about buffering anyway:
if the user writes 300 characters, the codec will encode them as a single block
and the I/O performed will be a single write.

This test script:
import sys
import io
stream = io.TextIOWrapper(io.open(sys.stdout.fileno(), 'wb', 0),
                          encoding='utf8')
for r in range(10):
    stream.write(u'\u1234' * 500)

When run under strace -c it performs exactly 10 writes, so the performance is
predictable. IMO it doesn't make sense to prohibit unbuffered text write I/O.
Readers may be another matter, but they don't suffer the same latency issues.
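
A rough way to observe the raw write calls without strace is a counting wrapper
(just a sketch; CountingRaw is a made-up helper and the count you see may vary
with the text layer's chunking):

import io

class CountingRaw(io.RawIOBase):
    # Hypothetical helper: counts write() calls that reach the raw layer.
    def __init__(self):
        self.write_count = 0
    def writable(self):
        return True
    def write(self, b):
        self.write_count += 1
        return len(b)

raw = CountingRaw()
stream = io.TextIOWrapper(raw, encoding='utf8')
for r in range(10):
    stream.write(u'\u1234' * 500)
stream.flush()
print(raw.write_count)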

----------
messages: 184025
nosy: rbcollins
priority: normal
severity: normal
status: open
title: ValueError: can't have unbuffered text I/O for io.open(1, 'wt', 0)
type: behavior
versions: Python 2.6, Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python 3.4, Python 3.5

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue17404>
_______________________________________