Steve Steiner (listsin) wrote:

[...]

> I expect the usual flurry of "you must post your exact code or we
> can't help you at all, moron" posts, but...

I'll try to restrain myself ;)

> In spite of my not having posted specific code, could someone with
> some actual experience in this please give me a clue, within an order
> of magnitude, how many deferreds might start to cause real trouble?

None.  Deferreds aren't the problem; they're just Python objects, and
you can have *millions* of them without great difficulty.  They are a
symptom, not a cause.  The problem is more likely in the underlying
operations that the Deferreds are linked to.  My top two guesses, in
order, are:

1) the web server failing to cope gracefully with thousands of
   concurrent requests, or
2) the number of sockets hitting a system limit (the number of FDs you
   can pass to select(), the process's maximum file descriptor limit,
   something like that).

For the second one, assuming you're on Linux, you may benefit from a
trivial change: use the epoll reactor rather than the default one.

For the first one, you're at the mercy of the web server.  IIRC the
HTTP RFCs say that clients SHOULD use no more than two concurrent
connections to a server.  Regardless, you're unlikely to get much
performance benefit from hammering a server with 1000 concurrent
requests rather than something much smaller, like 5 or 10.  So I'd use
a DeferredSemaphore, or perhaps look into Cooperator, and not worry
about solving the mystery of how to make thousands of concurrent
requests work.

Of course, if you give more specific info about what your code does
and how it fails, I might be able to give more specific advice... ;)

-Andrew.

_______________________________________________
Twisted-Python mailing list
Twisted-Python@twistedmatrix.com
http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python