On 10/6/05, Tom Metro <[EMAIL PROTECTED]> wrote:
> Uri Guttman wrote:
> > ...i recommend an event loop server which is stable, faster and
> > easier to code for in most situations.
>
> Unless POE does a really good job of hiding the details of non-blocking
> IO, I'd find it hard to believe that an event loop server is easier to code.
>
> Once you learn a small bit of threading API, a multi-threaded server is
> conceptually quite simple, and most of the code looks as it would for a
> single-threaded, blocking-IO application.
>
> > Ben Tilly writes:
> > > I don't think an event loop would help, because a computationally slow
> > > procedure or one that makes a further RPC call would still block other
> > > clients.
> >
> > You can do this with an event loop and multiple processes.
> >
> > The RPC server doesn't make RPC calls. Instead it sends a message to
> > a child process that makes the RPC call. The child process then sends
> > a message back to the RPC server when it has the answer. The RPC
> > server can now use a select loop to cycle through getting RPC
> > requests, forwarding them to children, getting responses, and writing
> > back to the clients.
>
> I'm not sure I see the win in using a mixed model of event loops and
> multiple processes, aside from avoiding forking/threading inefficiencies
> at the expense of greater code complexity.
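[For concreteness, the dispatch pattern quoted above -- a select loop in the
parent, blocking RPC calls confined to child processes -- might look like the
following sketch. It is in Python rather than Perl (the Perl analogue would
use fork, socketpair, and IO::Select or POE), and all names, including the
stand-in slow_rpc call, are illustrative, not from the thread.]

```python
import os
import selectors
import socket

def slow_rpc(request: bytes) -> bytes:
    # Stand-in for a blocking downstream RPC call.
    return b"result:" + request

def worker(conn: socket.socket) -> None:
    # Child: block on requests, make the blocking call, send the answer back.
    while True:
        req = conn.recv(1024)
        if not req:          # EOF: parent closed its end, so exit
            break
        conn.sendall(slow_rpc(req))

def dispatch(requests):
    """Parent: fan requests out to children, collect replies via select."""
    workers = []
    for _ in range(2):
        parent_end, child_end = socket.socketpair()
        pid = os.fork()
        if pid == 0:                       # child process
            parent_end.close()
            for w in workers:              # drop inherited fds of siblings
                w.close()
            worker(child_end)
            os._exit(0)
        child_end.close()
        workers.append(parent_end)

    sel = selectors.DefaultSelector()
    for conn in workers:
        sel.register(conn, selectors.EVENT_READ)

    # Round-robin the requests to the children; the parent never blocks
    # on the slow call itself, only on select().
    for i, req in enumerate(requests):
        workers[i % len(workers)].sendall(req)

    replies = []
    while len(replies) < len(requests):
        for key, _ in sel.select():
            replies.append(key.fileobj.recv(1024))

    for conn in workers:
        conn.close()                       # workers see EOF and exit
    for _ in workers:
        os.wait()
    return replies
```

This relies on os.fork, so it is a Unix-only sketch; a production version
would also need request framing and worker restart logic.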
I said that you can do this. Not that I'd recommend doing this. :-)

> Shouldn't it be possible to use non-blocking IO in the main process to
> talk to other RPC servers, rather than splitting that off to a child
> process?

Yes. But unless you implement everything yourself, you'll find that many
convenient pieces (e.g. database drivers) issue blocking calls.

> As for dealing with something "computationally slow," I assume POE and
> other similar event loop implementations support a mechanism for
> yielding to the event loop. If so, then as long as your computationally
> slow chunk of code can be broken up, this too should be doable in the
> main process.

Yes. But now you have the problem that with cooperative multitasking you
can only take advantage of one CPU. And multiple CPUs are the direction
that computers are heading: single-CPU machines these days even like to
emulate 2 CPUs. Given current trends in chip design, expect to see more
of this strategy going forward, not less, so a cooperatively tasked
program will be able to extract ever-diminishing fractions of the
available CPU performance.

Therefore multiple threads or multiple processes can be a win over event
loops. In fact, given that OS designers have noticed that communication
between threads is faster if all threads are on one CPU, multiple
processes potentially scale a lot better on one machine than multiple
threads. (And multiple processes let you split things across a server
farm for real scalability.)

Cheers,
Ben

_______________________________________________
Boston-pm mailing list
[email protected]
http://mail.pm.org/mailman/listinfo/boston-pm
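[The "break the slow chunk up and yield to the event loop" idea discussed
above can be sketched as follows -- in Python's asyncio rather than POE, with
all names illustrative. A long computation yields between chunks so that
other tasks keep running; note that everything still shares one CPU, which is
exactly Ben's objection.]

```python
import asyncio

async def slow_sum(n: int, chunk: int = 1000) -> int:
    # Break the computation into chunks, yielding to the event loop
    # between chunks so other tasks are not starved.
    total = 0
    for start in range(0, n, chunk):
        total += sum(range(start, min(start + chunk, n)))
        await asyncio.sleep(0)   # yield control back to the loop
    return total

async def main():
    # The heartbeat task keeps running interleaved with the slow task,
    # which it could not do if slow_sum never yielded.
    ticks = []

    async def heartbeat():
        for _ in range(5):
            ticks.append("tick")
            await asyncio.sleep(0)

    result, _ = await asyncio.gather(slow_sum(10_000), heartbeat())
    return result, ticks
```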

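[Ben's closing point -- that multiple processes, unlike a cooperatively
tasked program, can use every CPU on the machine -- can be illustrated with a
small Python multiprocessing sketch (the Perl equivalent would fork worker
processes by hand); the names here are illustrative, not from the thread.]

```python
from multiprocessing import Pool

def busy(n: int) -> int:
    # CPU-bound work; each call can run on a separate core.
    return sum(i * i for i in range(n))

def parallel_sums(sizes):
    # Pool() starts one worker process per CPU by default, giving true
    # parallelism, unlike a single-process event loop.
    with Pool() as pool:
        return pool.map(busy, sizes)
```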
