Hi.
We are still having problems with this.
We have noticed that, although we are creating a number of IoProcessors
on the IoConnector side equal to the number of CPUs + 1, only one
SocketConnectorIoProcessor thread is ever created, because we only have
a single bind on this side. If I debug the code, I can see that the
correct number of processors is being created, but the others never
appear to be used. Is this normal, and is there any way to force the
other IoProcessors into action?
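For reference, the connector side is set up roughly like this (a
simplified sketch rather than our actual code, assuming the MINA 1.x
two-argument SocketConnector constructor; the names are illustrative):

import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

import org.apache.mina.transport.socket.nio.SocketConnector;

public class ConnectorSetupSketch {
    public static SocketConnector createConnector() {
        // N CPUs + 1 I/O processors, backed by a shared thread pool.
        int processorCount = Runtime.getRuntime().availableProcessors() + 1;
        Executor ioExecutor = Executors.newCachedThreadPool();

        // Several SocketConnectorIoProcessor instances are created up front,
        // but each session is handled by a single processor, so our one
        // outgoing connection only ever exercises one processor thread.
        return new SocketConnector(processorCount, ioExecutor);
    }
}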
Also, under certain circumstances we have observed that the outgoing
side, the IoConnector side, will get "stuck" on a single thread and
cause a severe drop in performance. We have tested the software using
the pre-release version of MINA 1.1.1 and the thread problem does not
appear to occur. We are continuing to test this. Are there any
thread-management-related bug fixes in MINA 1.1.1? Is it due to become
stable in the near future?
Any other help or suggestions would be appreciated.
Paddy
Paddy O'Neill wrote:
Hi Trustin
I have made the changes you suggested to the Spring configuration and
it hasn't made any obvious difference. We do not call future.join(),
but I will look at implementing the IoFutureListener that you suggested.
We believe that we are now able to replicate the problem in-house. We
have noticed that the outgoing connection only ever works on one
thread at a time. It will switch threads every so often but never
does anything concurrently. The inbound thread pool is actively
working concurrently. Is this because we only have one
IoConnectorIoProcessor thread? We create a pool of Executors equal to
the number of CPUs + 1, but because we only have one bind on the
outgoing side, only one of these Executors gets used. If we put
traffic through the system in both directions, the outgoing pool will
lock onto one thread and constantly hammer it. This is what is causing
the backlog and the degradation in performance. Any suggestions would
be gratefully received.
Sun JVM Version 1.5.05b, Mina 1.1.0.
Paddy
Trustin Lee wrote:
On 6/13/07, Paddy O'Neill <[EMAIL PROTECTED]> wrote:
Hi.
We are having a strange problem with a production server. We have set
up MINA with two thread pools, one for incoming and one for outgoing
connections. There will typically be multiple incoming connections
(around 300) and one outgoing connection. Traffic flows both ways
through the server.
In the production environment, we are seeing that on the outgoing
connection a single thread is kept busy for long periods of time before
another thread does any work, usually seconds, but at times this can
extend to several minutes. This behaviour is causing major performance
degradation on the outgoing connection. We have built a test server
which is as close to the production server as possible (same OS, JVM
version, patches, etc.) and we are unable to replicate this behaviour
in-house.
Do you call WriteFuture.join() after writing something in your
IoHandler implementation? If you need to do something after the write
operation is complete, please try replacing join() with an
IoFutureListener to make it fully asynchronous. No further events,
such as messageReceived, will be processed until your current handler
method returns.
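For example, something along these lines (just a sketch against the
1.x API; the handler and the post-write work are placeholders):

import org.apache.mina.common.IoFuture;
import org.apache.mina.common.IoFutureListener;
import org.apache.mina.common.IoHandlerAdapter;
import org.apache.mina.common.IoSession;
import org.apache.mina.common.WriteFuture;

public class NonBlockingWriteHandler extends IoHandlerAdapter {
    public void messageReceived(IoSession session, Object message) {
        // Instead of:
        //   session.write(reply).join();   // blocks the I/O processor thread
        // register a listener and return immediately:
        WriteFuture future = session.write(message);
        future.addListener(new IoFutureListener() {
            public void operationComplete(IoFuture f) {
                // Invoked once the write has completed; do any post-write
                // work here instead of after join().
            }
        });
    }
}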
We would be grateful if you could have a look at the following Spring
configuration to see if there are any obvious mistakes or, failing
that, if anyone has seen similar behaviour and could point to a
possible cause.
Except for the following two problems, I don't see anything wrong with
the settings.
1) Please place the protocol codec filter before the executor filter.
2) Please set the thread model to MANUAL if you added an executor
filter with a thread pool.
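In plain Java the corrected setup would look roughly like this (a
sketch only, with placeholder names; your Spring configuration should
wire up the equivalent):

import java.util.concurrent.Executors;

import org.apache.mina.common.ThreadModel;
import org.apache.mina.filter.codec.ProtocolCodecFactory;
import org.apache.mina.filter.codec.ProtocolCodecFilter;
import org.apache.mina.filter.executor.ExecutorFilter;
import org.apache.mina.transport.socket.nio.SocketConnectorConfig;

public class ConnectorConfigSketch {
    public static SocketConnectorConfig createConfig(ProtocolCodecFactory codecFactory) {
        SocketConnectorConfig config = new SocketConnectorConfig();

        // 1) Codec filter first, so complete decoded messages are handed
        //    to the executor's thread pool.
        config.getFilterChain().addLast("codec",
                new ProtocolCodecFilter(codecFactory));
        config.getFilterChain().addLast("threadPool",
                new ExecutorFilter(Executors.newCachedThreadPool()));

        // 2) MANUAL thread model, so MINA does not add another thread pool
        //    on top of the ExecutorFilter above.
        config.setThreadModel(ThreadModel.MANUAL);

        return config;
    }
}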
On a separate note, we are using the sessionIdle event in MINA to
determine whether external systems are still connected, and to unbind
if they are idle for more than the configured amount of time. If we
transmit traffic to the external system, does this count as traffic
for the sessionIdle event and cause it to reset?
Yes. The idle status is cleared when data is written out to the socket
channel.
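If what you really want to detect is the remote side going quiet
regardless of what you send, you could key the check on READER_IDLE,
which (if I remember the 1.x idle bookkeeping correctly) is driven
only by incoming data. A rough sketch with illustrative values:

import org.apache.mina.common.IdleStatus;
import org.apache.mina.common.IoHandlerAdapter;
import org.apache.mina.common.IoSession;

public class IdleDisconnectHandler extends IoHandlerAdapter {
    private static final int IDLE_SECONDS = 60; // illustrative value

    public void sessionCreated(IoSession session) {
        // Fires only when nothing has been read for IDLE_SECONDS,
        // so your own outbound traffic does not reset it.
        session.setIdleTime(IdleStatus.READER_IDLE, IDLE_SECONDS);
    }

    public void sessionIdle(IoSession session, IdleStatus status) {
        if (status == IdleStatus.READER_IDLE) {
            session.close(); // or unbind, as in your setup
        }
    }
}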
HTH,
Trustin