Hi,

Just wanted to share an issue I'm facing for which I can't find any obvious
solutions.

I have a proxy based on MINA. It has both a NioSocketAcceptor and a
NioSocketConnector (both TCP). The flow is as follows:

1. Users arrive through the Acceptor
2. Their messages are processed by a filter chain and modified slightly
(this takes 5-10 milliseconds)
3. The messages are forwarded to their destination through the Connector.

The Connector session for a user is created immediately after the
Acceptor's (with very high probability, the Connector's sessionId = the
Acceptor's sessionId + 1). A minimal sketch of this setup follows.
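
Something along these lines (the host, port, and attribute name are
hypothetical placeholders, not from my real code, and I've left out error
handling and the race between connect completion and the first user
message):

    import java.net.InetSocketAddress;

    import org.apache.mina.core.future.ConnectFuture;
    import org.apache.mina.core.future.IoFutureListener;
    import org.apache.mina.core.service.IoHandlerAdapter;
    import org.apache.mina.core.session.IoSession;
    import org.apache.mina.transport.socket.nio.NioSocketAcceptor;
    import org.apache.mina.transport.socket.nio.NioSocketConnector;

    public class ProxySketch {
        private static final String PEER = "peer"; // links the two sessions

        public static void main(String[] args) throws Exception {
            // Default constructors, so each service gets its own
            // SimpleIoProcessorPool
            final NioSocketAcceptor acceptor = new NioSocketAcceptor();
            final NioSocketConnector connector = new NioSocketConnector();

            connector.setHandler(new IoHandlerAdapter() {
                @Override
                public void messageReceived(IoSession destSession, Object message) {
                    // A response from the destination goes back to the user
                    ((IoSession) destSession.getAttribute(PEER)).write(message);
                }
            });

            acceptor.setHandler(new IoHandlerAdapter() {
                @Override
                public void sessionOpened(final IoSession userSession) {
                    // The outbound session is opened right away, which is why
                    // its sessionId is usually the Acceptor session's id + 1
                    ConnectFuture cf = connector.connect(
                            new InetSocketAddress("destination.example.com", 9999));
                    cf.addListener(new IoFutureListener<ConnectFuture>() {
                        public void operationComplete(ConnectFuture future) {
                            IoSession destSession = future.getSession();
                            userSession.setAttribute(PEER, destSession);
                            destSession.setAttribute(PEER, userSession);
                        }
                    });
                }

                @Override
                public void messageReceived(IoSession userSession, Object message) {
                    // By this point the filter chain has already modified the
                    // message; forward it to the destination
                    ((IoSession) userSession.getAttribute(PEER)).write(message);
                }
            });

            acceptor.bind(new InetSocketAddress(8888));
        }
    }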

I use the default constructor for both the Acceptor and the Connector. If
I've read the source correctly, this means that each of them uses a
SimpleIoProcessorPool as its processor. This pool creates as many threads
as there are CPUs, plus one, and pins each session to one of these threads.
The thread a new session gets depends on a calculation done on its
sessionId.
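
Paraphrasing what I believe that selection does (this is my reading of the
2.0.2 source distilled into a standalone toy, not the actual code; the name
slotFor is mine):

    public class ProcessorSelectionSketch {
        // Default pool size in SimpleIoProcessorPool: number of CPUs + 1
        static final int POOL_SIZE =
                Runtime.getRuntime().availableProcessors() + 1;

        // A session is pinned to a pool slot by a modulo over its id
        static int slotFor(long sessionId) {
            return (int) Math.abs(sessionId) % POOL_SIZE;
        }

        public static void main(String[] args) {
            long acceptorId = 4;               // example ids
            long connectorId = acceptorId + 1; // the usual pattern I see
            System.out.println("acceptor  -> slot " + slotFor(acceptorId));
            System.out.println("connector -> slot " + slotFor(connectorId));
        }
    }

Note that the Acceptor and the Connector each get their own pool, so the
user's session and the destination's session are served by different
NioProcessor instances regardless of the slot arithmetic.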

Under normal load everything works as expected. The logs clearly show that
NioProcessor-1 (to pick an example name for the thread assigned to the
Acceptor's session) always processes every message from the user, and
NioProcessor-2, the thread assigned to the Connector's session, does the
same with every response. Message order is maintained.
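
For reference, this is the kind of diagnostic filter I use to see which
thread handles each message (a sketch; the class name is mine):

    import org.apache.mina.core.filterchain.IoFilterAdapter;
    import org.apache.mina.core.session.IoSession;

    public class ThreadLoggingFilter extends IoFilterAdapter {
        @Override
        public void messageReceived(NextFilter nextFilter, IoSession session,
                Object message) throws Exception {
            // Under normal load this prints the same NioProcessor-N for a
            // given session every time
            System.out.println("session " + session.getId()
                    + " handled by " + Thread.currentThread().getName());
            nextFilter.messageReceived(session, message);
        }
    }

It is installed on both services with something like
acceptor.getFilterChain().addLast("thread-log", new ThreadLoggingFilter());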

However, when activity picks up on the Acceptor's side and messages start
to queue up, the Connector's session's processor will "switch sides" and
start processing messages from the user. NioProcessor-1 and NioProcessor-2
will then share the load, with the inevitable consequence that at some
point the message order is lost.

This is currently happening on MINA 2.0.2.

Has anyone on this list encountered such an issue? Could it be something in
the way my application uses MINA, something I should be taking care of when
running two IoServices at the same time?

Thanks,
Guillermo
