[EMAIL PROTECTED] writes:

>> I just saw this blog post on Planet Python:
>>
>>   
>> http://oubiwann.blogspot.com/2008/06/async-batching-with-twisted-walkthrough.html
>>
>> Examples 7 and 8 seem interesting for us: they deal with how one
>> can limit the number of outstanding requests. I hope we might be
>> able to use this to fight the memory usage problems discussed here:
>>
>>   http://article.gmane.org/gmane.comp.cryptography.viff.devel/256
>
> I've just gone over it lightly, and it seems like it's exactly the
> right tool for the job. I'll let you know how my progress goes.

Great, I'll look forward to it!

We are in a bit of tension here: on one hand we really want the
fine-grained concurrency we get by just building a big execution tree
and starting as much network communication as possible, but on the
other hand we really don't want to hold a huge tree of Deferreds
(Shares) in memory...

I hope we can find a good and simple solution that will allow us to
keep enough of the tree in memory to gain the advantages of running
things in parallel, while at the same time keeping memory usage at an
acceptable level.
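To make the throttling idea concrete, here is a minimal sketch of the
"limit outstanding requests" pattern the blog post describes. It uses a
plain semaphore with stdlib asyncio so it runs standalone; in Twisted
the analogous tool is defer.DeferredSemaphore. The fake_request
coroutine, the limit of 3, and the stats counters are made up for
illustration — they are not VIFF code.

```python
import asyncio

async def fake_request(i, sem, stats):
    # Acquire the semaphore before doing the "network" work, so at
    # most `limit` requests are in flight at any moment.  Everything
    # else waits here instead of materializing all at once.
    async with sem:
        stats["active"] += 1
        stats["peak"] = max(stats["peak"], stats["active"])
        await asyncio.sleep(0.001)   # stand-in for real network I/O
        stats["active"] -= 1
        return i

async def run_all(n, limit):
    sem = asyncio.Semaphore(limit)
    stats = {"active": 0, "peak": 0}
    # gather() still gives us the full result list in submission
    # order, but the semaphore caps how many tasks run concurrently.
    results = await asyncio.gather(
        *(fake_request(i, sem, stats) for i in range(n)))
    return results, stats["peak"]

results, peak = asyncio.run(run_all(20, 3))
print(results)   # all 20 results, in submission order
print(peak)      # never exceeds the limit of 3
```

The appeal for our memory problem is that the semaphore bounds how
much of the execution tree is "live" at a time, while still keeping
`limit` operations running in parallel — which is roughly the
trade-off described above.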

-- 
Martin Geisler
_______________________________________________
viff-devel mailing list (http://viff.dk/)
[email protected]
http://lists.viff.dk/listinfo.cgi/viff-devel-viff.dk