If it were me starting from scratch (and I do have experience building fast servers that handle hundreds of simultaneous client connections with limited resources), I would have one thread per blocking resource, driven by work queues. What the viewer sees as a single operation becomes a "workflow" along a series of these queues, each thread doing its part like an assembly line. The only thing that ever blocks is a hardware interface.
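To make the assembly-line idea concrete, here is a minimal sketch in Go rather than the project's own language, with goroutines and channels standing in for threads and work queues. The stage names (parse, disk) and the request type are purely illustrative, not anything from OpenSim:

```go
package main

import "fmt"

// A request carries its own state through the pipeline instead of
// living on any one thread's call stack.
type request struct {
	user string
	data string
}

// stage starts one worker that owns its input queue: it drains the
// queue, does its piece of the work, and forwards the request to the
// next queue, closing it when its own input runs dry.
func stage(name string, in <-chan request, out chan<- request) {
	go func() {
		for req := range in {
			req.data += "->" + name // stand-in for real work
			out <- req
		}
		close(out)
	}()
}

// runPipeline pushes requests through parse -> disk and collects the
// finished results in arrival order (one worker per stage keeps the
// queues FIFO end to end).
func runPipeline(reqs []request) []string {
	parseQ := make(chan request, len(reqs))
	diskQ := make(chan request, len(reqs))
	sendQ := make(chan request, len(reqs))

	stage("parse", parseQ, diskQ)
	stage("disk", diskQ, sendQ)

	for _, r := range reqs {
		parseQ <- r
	}
	close(parseQ)

	var out []string
	for req := range sendQ {
		out = append(out, fmt.Sprintf("%s: %s", req.user, req.data))
	}
	return out
}

func main() {
	results := runPipeline([]request{
		{user: "viewer-1", data: "login"},
		{user: "viewer-2", data: "fetch-asset"},
	})
	for _, r := range results {
		fmt.Println(r)
	}
}
```

Note that no stage ever waits on another stage's work, only on its own queue, which is exactly the "only hardware blocks" property described above.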
So, for example, there only needs to be one listening thread per UDP port, not one per viewer: a network interface can only deliver one packet at a time, and the sending IP address on the packet keys you to the correct user context to match it up with.

A disk interface, on the other hand, works better with multiple requests outstanding at once, so that the kernel's seek optimization has something to work on; you would have perhaps 5 threads round-robin handling disk I/O.

The per-user context is where you keep track of the state of everything the viewer is doing at once, rather than spreading that information across the call stacks of threads. As soon as the processing of an input packet needs to do something that might block, the request is put on the input queue of another thread that handles the blocking operation.

It takes a bit to get used to programming like this, but I can report that the scalability results are quite amazing. It also reduces the need for locks, since the work queues are mostly the only things touched by more than one thread.

_______________________________________________
Opensim-dev mailing list
[email protected]
https://lists.berlios.de/mailman/listinfo/opensim-dev
