>
> How many sessions did you create in your load test?

One, but we're streaming very small protocol payloads and then
performing tasks that require more I/O and processing, so the
IoHandlers are proportionally much heavier than the decoder, causing a
backlog once lots of messages start arriving.  In our use case it's
not inconceivable that we'd receive 20,000 messages in a few seconds.
The protocol decoder thread is very lightweight and is able to keep up
with the incoming messages.  Also, it's a requirement that all the
messages are pushed over the same port/session, as the devices that
connect to us can only push messages over one session.

>
>> From the looks of it, the orderedthreadpool is basically parallelizing
>> the processing, so it's probably not much different than having a
>> thread pool with one thread.  The only overhead might be the fact that
>> tasks can be submitted simultaneously and the get/put thread pool
>> cycle is parallel, though the processing is not.
>
> OrderedThreadPool is about serializing the events PER SESSION.
> This means that the order of the messageReceived events _per session_
> is guaranteed to be preserved.
> But events from multiple sessions are not serialized.
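For anyone following along, the per-session guarantee described above can be sketched with a JDK-only example.  This is not MINA's actual OrderedThreadPoolExecutor implementation (which multiplexes sessions over one shared, bounded pool); the class and names below are hypothetical, and it only illustrates the semantics: events for the same session run in submission order, while different sessions may run in parallel.

```java
import java.util.*;
import java.util.concurrent.*;

// Hypothetical sketch of "ordered per session" semantics (not MINA's code):
// each session key gets its own single-threaded lane, so events for one
// session are serialized while separate sessions proceed independently.
public class PerSessionOrdering {
    // One single-threaded lane per session; lanes run concurrently with
    // each other.  (A real implementation would share a bounded pool
    // instead of leaking one thread per session, as MINA does.)
    private final ConcurrentHashMap<String, ExecutorService> lanes =
            new ConcurrentHashMap<>();

    public void submit(String sessionId, Runnable event) {
        lanes.computeIfAbsent(sessionId, id -> Executors.newSingleThreadExecutor())
             .submit(event);
    }

    public void shutdown() throws InterruptedException {
        for (ExecutorService lane : lanes.values()) {
            lane.shutdown();
            lane.awaitTermination(5, TimeUnit.SECONDS);
        }
    }

    public static void main(String[] args) throws Exception {
        PerSessionOrdering pool = new PerSessionOrdering();
        List<Integer> seen = Collections.synchronizedList(new ArrayList<>());
        for (int i = 0; i < 100; i++) {
            final int n = i;
            pool.submit("session-A", () -> seen.add(n)); // same session: ordered
        }
        pool.shutdown();
        // messageReceived order for session-A is preserved.
        for (int i = 0; i < 100; i++) {
            if (seen.get(i) != i) throw new AssertionError("out of order at " + i);
        }
        System.out.println("ordered events: " + seen.size());
    }
}
```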

Yes, I see that now.  Though I now see some benefit to the thread
model in the 1.* series of MINA, it's still not clear to me why the
restriction of only having multithreading per session matters.  I
don't see a huge benefit to pushing a thread pool executor into the
pipeline before the IoHandler, since the events will still be
processed serially, and the benefit of spinning off a different thread
for that, as opposed to processing in the same thread as the decoder,
is subjective.  Though the decoder can go back to processing, the
pipeline will never get ahead of the IoHandler thread.  The only
benefit I see is snappier responses to the connected client.  Either
way, I'm sure there are some apps (probably more than I think) that
would benefit from such a model, but in our case we wanted the
protocol codec thread not to get too far ahead of the IoHandlers, so
that model wasn't beneficial for us.
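The kind of throttling we wanted can be sketched with plain java.util.concurrent (this is only an illustration of the idea, not what we or MINA actually ship): a bounded queue plus CallerRunsPolicy makes the submitting (decoder) thread execute the overflow task itself once the handlers fall behind, so the decoder can never run far ahead of the IoHandler work.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: backpressure from slow handlers to a fast decoder.
// With a tiny queue and CallerRunsPolicy, overflow events run on the
// decoder thread itself, which naturally throttles decoding.
public class DecoderBackpressure {
    public static void main(String[] args) throws Exception {
        AtomicInteger done = new AtomicInteger();
        AtomicInteger ranOnCaller = new AtomicInteger();
        String decoderThread = Thread.currentThread().getName();

        ThreadPoolExecutor handlers = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),               // at most one event waiting
                new ThreadPoolExecutor.CallerRunsPolicy()); // overflow runs on the decoder

        for (int i = 0; i < 10; i++) {                     // "decoder" producing events fast
            handlers.execute(() -> {
                if (Thread.currentThread().getName().equals(decoderThread)) {
                    ranOnCaller.incrementAndGet();         // decoder throttled itself
                }
                try { Thread.sleep(20); } catch (InterruptedException ignored) { }
                done.incrementAndGet();                    // simulated heavy IoHandler work
            });
        }
        handlers.shutdown();
        handlers.awaitTermination(10, TimeUnit.SECONDS);

        if (done.get() != 10) throw new AssertionError("lost events");
        if (ranOnCaller.get() == 0) throw new AssertionError("decoder was never throttled");
        System.out.println("handled=" + done.get()
                + " onDecoderThread=" + ranOnCaller.get());
    }
}
```

The trade-off is that the decoder stalls while it runs an overflow event, which is exactly the "don't get too far ahead" behavior, at the cost of read throughput during bursts.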

Thanks for the help.  We've migrated to 2.0 and it works great.  We
went from processing 6 requests per second in the IoHandler to
150-200, depending on I/O, so we're good for now; the remaining
latency is mostly in the IoHandler's DB I/O time, which we can't
control.

Ilya
