On 10 Dec, 05:25 pm, [email protected] wrote:
On Fri, Dec 9, 2011 at 7:31 PM, Itamar Turner-Trauring
<[email protected]> wrote:
What's the best solution? Applying the patch attached to this ticket,
moving to a producer/consumer approach, or something else?

The patch will just delay the problem... you're creating a huge number of
strings, faster than the transport can write them out. The solution is
indeed to use the consumer API to pause creation of more data until the
transport clears its buffers.

Right, I will try the consumer API.
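
Something like this, I suppose? Just a rough sketch to check my
understanding (the class names and the event generator here are
placeholders, not from the ticket):

from zope.interface import implementer
from twisted.internet.interfaces import IPushProducer
from twisted.internet.protocol import Protocol


@implementer(IPushProducer)
class EventSender(object):
    """Generates event strings, but only while the transport allows it."""

    def __init__(self, protocol, events):
        self._protocol = protocol
        self._events = iter(events)
        self._paused = False

    def pauseProducing(self):
        # Called by the transport when its buffer fills up.
        self._paused = True

    def resumeProducing(self):
        # Called by the transport once its buffer has drained.
        self._paused = False
        self._sendSome()

    def stopProducing(self):
        self._paused = True

    def _sendSome(self):
        # Keep writing until the transport pauses us or we run out of events.
        while not self._paused:
            try:
                event = next(self._events)
            except StopIteration:
                self._protocol.transport.unregisterProducer()
                break
            self._protocol.transport.write(event)


class EventProtocol(Protocol):
    def connectionMade(self):
        # Illustrative event stream; the real one comes from my application.
        events = ("event %d\r\n" % (i,) for i in range(1000000))
        producer = EventSender(self, events)
        # streaming=True registers this as a push producer, so the transport
        # will call pauseProducing()/resumeProducing() on it as needed.
        self.transport.registerProducer(producer, True)
        producer.resumeProducing()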

But I have one last question:

In my previous example, the memory usage grows until a MemoryError is raised.

But another scenario is when my "event send loop" iterates a large number
of times (not enough to raise an exception, though) and then stops.

I expected that when the loop ends, all the strings would be flushed
and, as a consequence, the memory usage of the process would return to
a normal level. But this does not happen... Is that normal?

Data may or may not be put onto the network at the moment you direct a transport to write it. It's up to the particular transport implementation to decide on its buffering logic, including whether data is sent immediately when a write() call is made or only later, after control returns to the event loop.

As of Twisted 11.1, the POSIX-based reactor implementations all buffer data until control is returned to the event loop. This has been the case for some time, but not _all_ time, and it may change in the future.
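
To make the difference concrete, here's a contrived sketch (the protocol
classes and counts are made up for illustration; they're not from your
code):

from twisted.internet import task
from twisted.internet.protocol import Protocol


class NaiveSender(Protocol):
    def connectionMade(self):
        # Nothing reaches the network inside this loop: each write() just
        # appends to an in-memory buffer, and the socket is only serviced
        # after connectionMade() returns to the event loop, so memory grows
        # with every iteration.
        for i in range(1000000):
            self.transport.write("event %d\r\n" % (i,))


class CooperativeSender(Protocol):
    def connectionMade(self):
        # Yielding between writes returns control to the reactor regularly,
        # so the transport gets a chance to drain its buffer as it goes.
        # (You still want the producer/consumer API to actually pause when
        # the peer reads slowly; this only avoids doing all the work in a
        # single reactor iteration.)
        def writes():
            for i in range(1000000):
                self.transport.write("event %d\r\n" % (i,))
                yield
        task.cooperate(writes())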

Jean-Paul

_______________________________________________
Twisted-Python mailing list
[email protected]
http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python
