When messages arrive on a client connection, ServerRpcProvider will spawn a
separate thread for each incoming message. This means that if the client
sends 100 messages in quick succession, the server will end up spawning
close to 100 threads. This poses two potential problems.

The first is a potential denial of service condition. If a client generates
and sends a continuous stream of 1 character deltas, and if the client can
do this faster than the server can process each delta, then the number of
threads in the server will eventually max out. I've observed the thread
count spiking when a client sends a burst of messages, but I haven't tested
the DoS theory yet.

The second problem is less obvious and doesn't affect anyone using the
server with the supplied clients. But, if you choose to use the server for
other purposes, like myself, then there is a problem. In my case wavelets
contain references to content in other wavelets. And, if the
reference-creation delta arrives at another client before the data in the
referenced wavelet has been added, then the client generates an error. I could
potentially solve this by adding a bunch of code to make the reference not
"appear" until the referenced data arrives, but this was an undesired
complication in my case. I initially solved the problem by adding a
SingleThreadExecutor to ServerRpcProvider.Connection and using that executor
to execute the controllers. This ensured that all deltas generated by a
single client would arrive in the same order at all other clients (local and
remote). A further simplification of this solution is to have the
Connection execute the controller directly. Each connection gets its own
thread, so it's simplest to just have that thread process the message
before fetching the next from the socket.
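To make the executor idea concrete, here's a minimal sketch of the
per-connection approach. The Connection class and method names here are my
own stand-ins, not the actual ServerRpcProvider API -- the point is just
that one single-threaded executor per connection serializes that client's
messages in arrival order:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConnectionOrderingSketch {

    /** Hypothetical stand-in for ServerRpcProvider.Connection. */
    static class Connection {
        // One executor per connection: messages from this client are
        // queued and run one at a time, in arrival order.
        private final ExecutorService executor =
                Executors.newSingleThreadExecutor();

        void onMessage(Runnable controllerCall) {
            executor.execute(controllerCall);
        }

        void shutdown() throws InterruptedException {
            executor.shutdown();
            executor.awaitTermination(5, TimeUnit.SECONDS);
        }
    }

    /** Feeds n messages through one connection; returns processing order. */
    static List<Integer> processBurst(int n) throws InterruptedException {
        Connection conn = new Connection();
        // Touched only by the executor thread; read after awaitTermination,
        // which establishes the necessary happens-before edge.
        List<Integer> processed = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            final int seq = i;
            conn.onMessage(() -> processed.add(seq));
        }
        conn.shutdown();
        return processed;
    }

    public static void main(String[] args) throws InterruptedException {
        List<Integer> order = processBurst(100);
        System.out.println("processed " + order.size()
                + " messages, first=" + order.get(0)
                + " last=" + order.get(99));
    }
}
```

Different connections still run concurrently; only messages within one
connection are serialized, which is exactly the ordering guarantee I needed.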

In the case where all the messages are destined for a single wavelet, there
is nothing to be gained by processing each delta in a separate thread; it
just adds thread-creation and context-switch overhead. In the case where
each delta is destined for a separate wavelet, there is nothing to be
gained once the number of threads significantly exceeds the number of
cores on the machine. So, changing the server so that all messages from a
single client are processed by a single thread would not degrade server
concurrency across connections, only within a single connection. A more
complete solution would be to place all messages into a single
LinkedBlockingQueue (with a reasonable capacity limit) and have that queue
processed by a thread pool. Or, better still, replace the
thread-per-connection model with a Netty-based implementation serviced by a
thread pool.
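A rough sketch of the bounded-queue variant (again, names are mine, not
existing server code). The capacity limit turns a message flood into
back-pressure on the producer rather than unbounded thread growth. One
caveat worth noting: a shared pool by itself does not keep one client's
deltas in order -- that still needs per-client serialization on top:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedQueueSketch {

    /** Pushes n messages through a bounded queue and fixed pool;
        returns how many were handled. */
    static int drain(int n) throws InterruptedException {
        // Bounded queue: when full, submission blocks the producer
        // instead of the server spawning more threads.
        BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>(64);
        ExecutorService pool = new ThreadPoolExecutor(
                4, 4,                       // fixed size, near the core count
                0L, TimeUnit.MILLISECONDS,
                queue,
                // On overflow the submitting thread runs the task itself,
                // which throttles a fast client.
                new ThreadPoolExecutor.CallerRunsPolicy());

        AtomicInteger handled = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            pool.execute(handled::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return handled.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("handled " + drain(1000) + " messages");
    }
}
```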

I like the Netty based solution the best because it provides the maximum
amount of concurrency while still ensuring deltas from a single client are
not re-ordered in the server before delivery.

Any thoughts?

-Tad

-- 
You received this message because you are subscribed to the Google Groups "Wave 
Protocol" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/wave-protocol?hl=en.
