New submission from Paul Sokolovsky:

This issue was brought up in a somewhat sporadic manner on the python-tulip mailing list, hence this ticket. The discussion on the ML:
https://groups.google.com/d/msg/python-tulip/JA0-FC_pliA/knMvVGxp2WsJ (all other messages below are threaded from this)
https://groups.google.com/d/msg/python-tulip/JA0-FC_pliA/lGqT54yupOIJ
https://groups.google.com/d/msg/python-tulip/JA0-FC_pliA/U0NBC1jLGSgJ
https://groups.google.com/d/msg/python-tulip/JA0-FC_pliA/zIx59jj8krsJ
https://groups.google.com/d/msg/python-tulip/JA0-FC_pliA/zSpjGKv23ioJ
https://groups.google.com/d/msg/python-tulip/JA0-FC_pliA/3mfGI8HIe_gJ
https://groups.google.com/d/msg/python-tulip/JA0-FC_pliA/rM4fyA9qlY4J

Summary of arguments:

1. This would make such an async_write() (a tentative name) symmetrical in usage with the read() method (i.e. it would be a coroutine, used with "yield from"/"await"), which would certainly reduce user confusion and help novices learn and use asyncio.

2. The write() method is described (by transitively referring to WriteTransport.write()) as: "This method does not block; it buffers the data and arranges for it to be sent out asynchronously." Such a description implies a requirement for unlimited data buffering: fed 1TB of data, it must still buffer it. Buffering of that size can't and won't work in practice - it will only lead to excessive swapping and/or termination due to out-of-memory conditions. Thus, providing only a synchronous high-level write operation goes against basic system reliability/security principles.

3. The whole concept of a synchronous write in an asynchronous I/O framework stems from: 1) the way it was done in some pre-existing Python async I/O frameworks ("pre-existing" meaning brought up on older versions of Python and based on the concepts available at that time; many people use the word "legacy" in such contexts); 2) PEP 3153, which essentially captures the ideas used in those pre-existing frameworks. PEP 3153 was rejected; it also contains some "interesting" claims like: "Considered API alternatives - Generators as producers - [...] nobody produced actually working code demonstrating how they could be used." That wasn't true at the time the PEP was written (http://www.dabeaz.com/generators/ , 2008, 2009), and asyncio is actually *the* framework which uses generators as producers.

asyncio also took the very honorable step of uniting the generator/coroutine and Transport paradigms - note that, as PEP 3153 shows, Transport proponents contrasted Transports with coroutine-based designs. But asyncio also blocked (in both senses) high-level I/O on the Transport paradigm. What I'm arguing is not that Transports are good or bad, but that there should be a way to consistently use the coroutine paradigm for I/O in asyncio - for people who may appreciate it. This would also enable alternative implementations of asyncio subsets without the Transport layer, with smaller code size, and thus more suitable for constrained environments.

The proposed change is to add the following to the asyncio.StreamWriter implementation:

    @coroutine
    def async_write(self, data):
        self.write(data)

I.e. the default implementation is just a coroutine version of the synchronous write() method. The messages linked above discuss alternative implementations (which are really interesting for complete alternative implementations of asyncio). The above change is implemented in MicroPython's uasyncio package, an asyncio subset for memory-constrained systems.

Thanks for your consideration!

----------
components: asyncio
messages: 245336
nosy: gvanrossum, haypo, pfalcon, yselivanov
priority: normal
severity: normal
status: open
title: Please add async write method to asyncio.StreamWriter
versions: Python 3.5

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue24449>
_______________________________________
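P.S. To make the proposal concrete, here is a minimal, self-contained sketch of the intended usage. BufferWriter is a hypothetical stand-in for asyncio.StreamWriter (it is not part of asyncio); the example is written with async/await syntax for brevity - the generator form with @coroutine, as in the proposal above, is equivalent.

```python
import asyncio

class BufferWriter:
    # Hypothetical stand-in for asyncio.StreamWriter: buffers writes
    # into a bytearray instead of a real transport.
    def __init__(self):
        self.buffer = bytearray()

    def write(self, data):
        # Existing synchronous API: buffers without limit, never blocks.
        self.buffer.extend(data)

    async def async_write(self, data):
        # Proposed coroutine API: the default just delegates to write(),
        # but an alternative implementation is free to suspend here to
        # apply flow control / backpressure.
        self.write(data)

async def main(writer):
    # Usage is symmetrical with read(): every write is awaited.
    await writer.async_write(b"hello ")
    await writer.async_write(b"world")

writer = BufferWriter()
asyncio.run(main(writer))
print(bytes(writer.buffer))  # b'hello world'
```

(asyncio.run() is used only to drive the sketch; any way of running the coroutine works. The point is that the caller's code looks identical whether async_write() buffers immediately or suspends for flow control.)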