Hello all,

I am trying to optimize a client-server application that we develop for one 
of our clients. We use MINA 1.0 for both the client and the server, and I am 
unsure how many worker/processing threads the server should use to service 
incoming requests. Basically, MINA is used "only" for the most basic kind of 
communication, i.e. reading and writing objects via serialization from/to the 
network. After an object has been received, it is put into an internal queue 
of a communication service and processed by worker threads outside of MINA's 
control (i.e. without using the filter chain or anything like that). So we 
really only need MINA to read and write data.
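
To give a bit of context, this is essentially what our session handler does 
on the receiving side (simplified; the class and field names here are only 
illustrative, not our real code):

--- start snip
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import org.apache.mina.common.IoHandlerAdapter;
import org.apache.mina.common.IoSession;

// Simplified/illustrative: received objects are only queued here; our own
// worker threads take them from the queue and do the real processing,
// completely outside of MINA's filter chain.
public class ReceivingHandler extends IoHandlerAdapter {

    private final BlockingQueue<Object> incoming = new LinkedBlockingQueue<Object>();

    public void messageReceived(IoSession session, Object message) {
        // hand the deserialized object over to the communication service
        incoming.offer(message);
    }

    public BlockingQueue<Object> getIncomingQueue() {
        return incoming;
    }
}
--- end snip
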
I experimented with Frederic's suggestions from the thread "[MINA 1.x] 
IoThreadPoolFilter & ProtocolThreadPoolFilter". He wrote:

--- start quote
It is only my own experience, but here is my way to set
thread pools for the several parts of Mina in 1.0:

For SocketAcceptor (or IoAcceptor) :
    Executor executor1 = Executors.newFixedThreadPool(nbThread);
    SocketAcceptor acceptor = new SocketAcceptor(nbThread, executor1);

My feeling is that it shouldn't be a newCachedThreadPool here, since
it seems to be related to the number of SocketIoProcessors that
the SocketAcceptor will use.
By default (new SocketAcceptor() without arguments), it uses
a value of 16 threads and SocketIoProcessors.
--- end quote

What I do not understand is: why do I have to specify a thread count twice, 
once when creating the fixed thread pool and once in the SocketAcceptor 
constructor? Frederic uses the same value in both places, but how are the two 
related? Or is the first argument of the SocketAcceptor constructor just a 
hint about how many (core) threads the executor will provide?
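
My current working assumption, which may very well be wrong, is that the 
first constructor argument is the number of SocketIoProcessor instances the 
acceptor creates, and the executor merely supplies the threads they run on, 
so the pool should have at least that many threads:

--- start snip
// My assumption (please correct me if I am wrong): the first argument is the
// number of SocketIoProcessor instances, and the Executor only provides the
// threads those processors run on, so the pool should be at least that large.
int ioProcessors = 4;                                        // example value only
Executor executor = Executors.newFixedThreadPool(ioProcessors);
SocketAcceptor acceptor = new SocketAcceptor(ioProcessors, executor);
--- end snip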

The server is a dual-processor Opteron machine, and I am wondering how many 
threads I should specify for the executor and the socket acceptor - should 
the number match the processor count, or should it be larger? With 
nbThread = 2 the system showed some strange behavior: one client worked 
normally, but when a second client connected it had to wait a minute or 
longer before its request was even passed through; with nbThread = 8 or 
higher this no longer occurred.
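
Right now I am only guessing at these numbers. If the rule of thumb is to tie 
the IO processor count to the number of CPUs, perhaps with some headroom for 
the SSL work, I would expect something like the following to be reasonable - 
but this is pure guesswork on my part:

--- start snip
// Pure guesswork: derive the IO processor count from the CPU count,
// with some headroom because every connection also does SSL work.
int cpus = Runtime.getRuntime().availableProcessors();      // 2 on our Opteron box
int ioProcessors = cpus * 4;                                 // e.g. 8 on this machine
Executor executor = Executors.newFixedThreadPool(ioProcessors);
SocketAcceptor acceptor = new SocketAcceptor(ioProcessors, executor);
--- end snip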

By the way, all of the communication is SSL encrypted. My setup of the 
server looks like this:

--- start snip
// SSL filter with client authentication
SSLFilter sslFilter = new SSLFilter(this.sslContextFactory.getServerContext());
sslFilter.setNeedClientAuth(true);
sslFilter.setWantClientAuth(true);

IoAcceptorConfig config = new SocketAcceptorConfig();
DefaultIoFilterChainBuilder chain = config.getFilterChain();

// 8 IO processors, backed by a fixed pool of 8 threads
Executor executor = Executors.newFixedThreadPool(8);
acceptor = new SocketAcceptor(8, executor);

chain.addLast("sslFilter", sslFilter);

// our codec, which serializes/deserializes the transferred objects
ProtocolCodecFactory codec = new SimpleFactory();
chain.addLast("protocolFilter", new ProtocolCodecFilter(codec));

acceptor.bind(this.socketAddress, new GenericSessionHandler(this), config);
--- end snip

Is this configuration correct so far, or might I run into serious performance 
problems once more than 30 clients connect simultaneously (we estimate about 
50 simultaneous connections at peak times)?

Thanks in advance,

Sven


