Hello everyone,

I'm benchmarking a distributed application developed with MINA 2.0.4.

I use a simple text protocol (TextLineCodecFactory over TCP) to transmit 
JSON-formatted messages (using Jackson 1.9.0) with a payload size of 472 bytes 
(that is the length of the String I write to the MINA session).
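
For context, the sending side is set up roughly like this (host, port, and the 
payload map are placeholders; the real code serializes an event object into a 
~472-character JSON String):

    import java.net.InetSocketAddress;
    import java.nio.charset.Charset;
    import java.util.Collections;

    import org.apache.mina.core.future.ConnectFuture;
    import org.apache.mina.core.service.IoHandlerAdapter;
    import org.apache.mina.core.session.IoSession;
    import org.apache.mina.filter.codec.ProtocolCodecFilter;
    import org.apache.mina.filter.codec.textline.TextLineCodecFactory;
    import org.apache.mina.transport.socket.nio.NioSocketConnector;
    import org.codehaus.jackson.map.ObjectMapper;

    public class SenderSketch {
        public static void main(String[] args) throws Exception {
            NioSocketConnector connector = new NioSocketConnector();
            // text-line codec: one delimiter-terminated String per message
            connector.getFilterChain().addLast("codec",
                    new ProtocolCodecFilter(new TextLineCodecFactory(Charset.forName("UTF-8"))));
            connector.setHandler(new IoHandlerAdapter()); // no-op handler on the sending side

            ConnectFuture future = connector.connect(new InetSocketAddress("peer-host", 12345));
            future.awaitUninterruptibly();
            IoSession session = future.getSession();

            // Jackson 1.9 (org.codehaus.jackson); the real event yields ~472 chars
            ObjectMapper mapper = new ObjectMapper();
            String json = mapper.writeValueAsString(Collections.singletonMap("event", "payload"));
            session.write(json).awaitUninterruptibly(); // the codec appends the line delimiter

            session.close(true);
            connector.dispose();
        }
    }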

Two applications running on different machines and communicating with this 
protocol over Gigabit Ethernet achieve a maximum stable throughput of 36,000 
events/second (measured by my benchmarking component).

Assuming 20 bytes for IP and another 20 bytes for TCP headers, that is 512 
bytes per message, or roughly 17.57 MBytes/second.
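
Spelling out the arithmetic:

    472 (payload) + 20 (TCP) + 20 (IP) = 512 bytes per message
    512 bytes x 36,000 msg/s = 18,432,000 bytes/s ≈ 17.57 MBytes/s ≈ 147 Mbit/s

That is under 15% of the link's nominal 1,000 Mbit/s.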

I was expecting *more* from our Gigabit switch (and this is without any other 
traffic on the network), so I'm looking for ways to improve performance. The 
machines have plenty of power (8-core Xeons) and RAM (24 GB each).

I have used VisualVM to profile the applications, and most of the time is 
spent in NioProcessor.select(), followed (at quite a distance) by 
NioProcessor.write().

Should I take this to mean that there's nothing left for me to do, since MINA 
itself is taking most of the processing time?

I haven't customized any aspect of MINA (buffer sizes, thread pools, things 
like that...); the sketch below shows the kind of knobs I mean.
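
To be concrete, I assume the tuning would look roughly like this (untested, 
and the values are placeholders, not recommendations):

    import org.apache.mina.filter.executor.ExecutorFilter;
    import org.apache.mina.transport.socket.SocketSessionConfig;
    import org.apache.mina.transport.socket.nio.NioSocketConnector;

    public class TuningSketch {
        public static void main(String[] args) {
            // explicit number of NioProcessors (I/O threads) instead of the default
            NioSocketConnector connector = new NioSocketConnector(4);

            SocketSessionConfig cfg = connector.getSessionConfig();
            cfg.setTcpNoDelay(true);             // disable Nagle for small messages
            cfg.setSendBufferSize(64 * 1024);    // placeholder socket buffer sizes
            cfg.setReceiveBufferSize(64 * 1024);

            // move application work off the I/O threads
            connector.getFilterChain().addLast("executor", new ExecutorFilter());

            connector.dispose();
        }
    }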

Any hints would be GREATLY appreciated.

I guess I'm using very small packets, which has an impact on the maximum 
achievable bandwidth. Maybe using a message size closer to the MTU of 1500 
bytes would bring the total bandwidth up, at the cost of a lower event rate?
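
Back-of-the-envelope, if per-message overhead is the bottleneck and the rate 
stayed anywhere near ~36,000 messages/second:

    1500 (MTU) - 40 (TCP/IP headers) = 1460 bytes of payload per packet
    1460 bytes x 36,000 msg/s = 52,560,000 bytes/s ≈ 50 MBytes/s ≈ 420 Mbit/s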

Zigor.
