Mojito Sorbet Wrote:
> The per-user context is where you keep track of the state of all the
> things that the viewer is doing at once, rather than spreading this
> information all over the call stacks of threads.

From what I have measured, this spreading of data over the call stacks of 
hundreds of threads severely limits scaling the number of concurrent clients. 
UDP packets all come in on a single port and are put into incoming queues for 
each client view. The client view UDP threads (one per client) then dequeue 
each packet and make the appropriate calls into the scene. When the scene has 
updates, it puts them into the outbound queue of every client view, and 
finally the client view thread comes around and sends the packets out to 
clients. Inbound processing on the scene happens on the individual client 
view thread, but this work is often small, and most of the time is spent 
switching threads in and out. 
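A minimal sketch of that per-client design, one inbound queue and one dedicated thread per client view, might look like the following. All names (ClientView, the "scene-call" marker) are illustrative, not OpenSim's actual classes:

```python
import queue
import threading

class ClientView:
    """Hypothetical per-client view: its own queue and its own thread."""
    def __init__(self, client_id):
        self.client_id = client_id
        self.inbound = queue.Queue()
        self.processed = []
        # One thread per client -- the pattern criticized above.
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        # Dequeue inbound packets and make the call into the scene.
        while True:
            packet = self.inbound.get()
            if packet is None:  # shutdown sentinel
                break
            self.processed.append(("scene-call", packet))

def demo():
    views = {i: ClientView(i) for i in range(3)}
    # The single UDP listener would route each packet to the right queue.
    for i, view in views.items():
        view.inbound.put(f"packet-for-{i}")
    for view in views.values():
        view.inbound.put(None)
        view.thread.join()
    return {i: v.processed for i, v in views.items()}
```

With 300 clients this creates 300 such threads, most of them idle, which is exactly the context-switching cost measured above.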

I tested a change where I eliminated the outbound queue and had the main 
thread call directly into the send-packet routine. As the number of active 
agents increases, outbound traffic grows far faster than inbound, since each 
scene update fans out to every client view. This change alone allowed me to 
scale the number of active viewers on a single region, using TestClient, to 
more than 200. I also tried eliminating the inbound queues and making the 
calls into the scene directly. As Tom pointed out, that approach lets a 
single client stall the entire packet processing thread when a complex 
operation comes along. 

I think it would be ideal, as I believe Mojito described, to have a small 
number of threads handling outbound client packet sending, each blocking on 
its packet queue for work. For inbound traffic, simple operations should be 
called directly on the scene, and more complex operations should be put on a 
queue for a set of slower processing threads. Having 300 threads (one per 
client) is too many, but one was not enough. We could certainly start with 
one outbound UDP thread for all clients and grow the pool as the number of 
clients grows. Something on the order of one outbound thread per 50 clients 
and one inbound thread per 5-10 clients would be about ideal, in my 
estimation. 
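A rough sketch of that inbound split, simple operations handled inline, complex ones handed to a small shared pool of slower workers. The worker count and the is_complex predicate are assumptions for illustration, not anything in OpenSim:

```python
import queue
import threading

def _make_pool(num_workers, work_queue, results, lock):
    """Start a small fixed pool of slow-path workers draining one shared queue."""
    def worker():
        while True:
            item = work_queue.get()
            if item is None:  # shutdown sentinel
                break
            with lock:
                results.append(("slow-path", item))
            work_queue.task_done()
    threads = [threading.Thread(target=worker, daemon=True)
               for _ in range(num_workers)]
    for t in threads:
        t.start()
    return threads

def process_inbound(packets, is_complex, num_workers=2):
    results, lock = [], threading.Lock()
    slow_queue = queue.Queue()
    threads = _make_pool(num_workers, slow_queue, results, lock)
    for p in packets:
        if is_complex(p):
            slow_queue.put(p)            # hand off to the slow pool
        else:
            with lock:
                results.append(("fast-path", p))  # call the scene directly
    slow_queue.join()                    # wait for queued work to drain
    for _ in threads:
        slow_queue.put(None)
    for t in threads:
        t.join()
    return results
```

The key property is that a complex packet from one client occupies a slow-path worker, not the listener, so other clients' simple packets keep flowing.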

Dan lake
Software Engineer
Visual Applications Research
Intel Labs

-----Original Message-----
From: [email protected] 
[mailto:[email protected]] On Behalf Of Mojito Sorbet
Sent: Friday, July 31, 2009 9:05 AM
To: [email protected]
Subject: Re: [Opensim-dev] Threads, threads, threads and more threads

If it was me starting from scratch (and I do have experience building
fast servers that handle hundreds of simultaneous client connections in
limited resources), I would have one thread per blocking resource,
driven by work queues.  What the viewer might see as a single operation
turns into a "workflow" along a series of these queues, each thread
doing its part like an assembly line.  The only thing that blocks is a
hardware interface.

So for example, there only needs to be one listening thread per UDP
port, not one per viewer.  A network interface can only deliver one
packet at a time, and the sending IP address on the packet keys it to
the correct user context.  A disk interface, on the other hand, works
better with multiple requests outstanding at once, so that the kernel's
seek optimization has something to work with; you might have perhaps
five threads round-robin handling disk I/O.
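The single-listener idea reduces to a dispatch table keyed by sender address. A minimal sketch, with a made-up UserContext and fake datagrams standing in for the socket:

```python
class UserContext:
    """Per-user state that would otherwise live on a thread's call stack."""
    def __init__(self, addr):
        self.addr = addr
        self.packets = []

def dispatch(datagrams, contexts):
    # Single listener loop: look up (or create) the context for the
    # sender and attach the packet to it, instead of spawning a thread
    # per viewer.
    for addr, payload in datagrams:
        ctx = contexts.setdefault(addr, UserContext(addr))
        ctx.packets.append(payload)
    return contexts
```

One thread runs this loop per UDP port; everything stateful lives in the contexts, not on the thread.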

The per-user context is where you keep track of the state of all the
things that the viewer is doing at once, rather than spreading this
information all over the call stacks of threads.

As soon as the processing of an input packet needs to do something that
might block, the request is put on the input queue of another thread
that handles the blocking operation.

It takes a bit to get used to programming like this, but I can report
that the performance results are quite amazing regarding scalability.
It also reduces the need for locks, since it is mostly just the work
queues that are touched by more than one thread.
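The assembly-line idea can be sketched as a chain of stages, each a single thread fed by its own queue; a stage that would block (disk I/O, say) simply lives behind its own queue. Stage names and transforms here are illustrative only:

```python
import queue
import threading

def stage(in_q, out_q, transform):
    """One pipeline stage: a single thread draining in_q into out_q."""
    def run():
        while True:
            item = in_q.get()
            if item is None:          # propagate the shutdown sentinel
                out_q.put(None)
                break
            out_q.put(transform(item))
    t = threading.Thread(target=run, daemon=True)
    t.start()
    return t

def pipeline(requests):
    parse_q, io_q, done_q = queue.Queue(), queue.Queue(), queue.Queue()
    t1 = stage(parse_q, io_q, lambda r: r + ":parsed")
    # The second stage stands in for a blocking resource such as disk.
    t2 = stage(io_q, done_q, lambda r: r + ":written")
    for r in requests:
        parse_q.put(r)
    parse_q.put(None)
    results = []
    while (item := done_q.get()) is not None:
        results.append(item)
    t1.join()
    t2.join()
    return results
```

Note the locking point made above: the queues are the only objects touched by more than one thread, so the stage bodies themselves need no locks.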

_______________________________________________
Opensim-dev mailing list
[email protected]
https://lists.berlios.de/mailman/listinfo/opensim-dev
