Robert,

Thanks for sharing the information. It is very helpful, as I am
currently tuning an application I developed using MINA that is not
matching the performance of the older, non-NIO-based version of the
same application.

I want to see if your test results apply to my application.

I am already trying option 1 - "don't use direct buffers".

What is the easiest way to disable byte buffer pooling if I want to
run a quick test of option 2 - "don't attempt to pool buffers"?
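
From a quick look at the Javadoc, I am guessing something like the
following would cover both options (just a sketch against what I believe
is the org.apache.mina.common.ByteBuffer API - please correct me if this
is not the intended way):

    import org.apache.mina.common.ByteBuffer;
    import org.apache.mina.common.SimpleByteBufferAllocator;

    public class AllocatorConfig {
        public static void main(String[] args) {
            // Option 2: swap the default pooled allocator for the simple
            // one, which hands every allocation straight to the JVM.
            ByteBuffer.setAllocator(new SimpleByteBufferAllocator());

            // Option 1: ask MINA for heap buffers instead of direct ones.
            ByteBuffer.setUseDirectBuffers(false);

            // ... then create and start the acceptor/connector as usual.
        }
    }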

thanks,
Srikanth

On 4/28/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

Hi all,

Some of you may recall I have been doing some performance testing of MINA on
relatively powerful boxes connected with Gb Ethernet. We have also been
testing with different JVMs. The results are very interesting and I wanted
to get some feedback and opinions on them.

We had found, when we moved to Gb Ethernet, that JRockit significantly
outperformed the Sun JVM (including the latest 1.6 beta) and that we saw
almost no improvement on the Sun JVM when moving from 100Mb to Gb. On our
tests JRockit was at least two to three times faster than the Sun JVM.

During the analysis we hit a problem on all JVMs where we were running out
of native memory (i.e. we were allocating too many direct buffers). This
turned out to be a memory leak in our application, but while investigating
it I modified MINA so that it would always allocate heap buffers (never
direct). All the documentation indicates that you should use direct buffers
for I/O, but when we used heap buffers we found that performance improved
significantly! On one of our tests the performance doubled on JRockit, and
on others it increased by at least 20%.
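
For anyone unfamiliar with the distinction, at the plain java.nio level it
comes down to which factory method is called - MINA's allocator makes the
same choice internally. A minimal illustration (the class name is mine):

    import java.nio.ByteBuffer;

    public class BufferKinds {
        public static void main(String[] args) {
            // Heap buffer: backed by a Java byte[], managed by the GC.
            ByteBuffer heap = ByteBuffer.allocate(32 * 1024);

            // Direct buffer: backed by native memory outside the Java
            // heap. The docs recommend these for I/O, but our numbers
            // came out the other way.
            ByteBuffer direct = ByteBuffer.allocateDirect(32 * 1024);

            System.out.println(heap.isDirect());   // prints false
            System.out.println(direct.isDirect()); // prints true
        }
    }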

We also found that, when using heap buffers, the pooled allocator was
slower than the simple allocator - i.e. letting the JVM allocate buffers
was more efficient. [As a side issue, I am about to raise an issue to add
dispose() to the allocator interface - currently, even if you replace the
allocator, the pooled allocator's expirer thread keeps running.]
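
To make that proposal concrete, what I have in mind is roughly the
following (a hypothetical sketch - the existing method signatures here
are approximate, and dispose() is the proposed addition):

    public interface ByteBufferAllocator {
        ByteBuffer allocate(int capacity, boolean direct);
        ByteBuffer wrap(java.nio.ByteBuffer nioBuffer);

        // Proposed: called when the allocator is replaced, so that an
        // implementation like the pooled one can stop its expirer
        // thread instead of leaving it running forever.
        void dispose();
    }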

I should point out that we found 32k send and receive socket buffers
offered a significant improvement over, for example, 8k.
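
These are the standard TCP socket buffer options; a minimal sketch with
plain java.net sockets (the port number here is made up) looks like:

    import java.net.InetSocketAddress;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class SocketBuffers {
        public static void main(String[] args) throws Exception {
            // Server side: set the receive buffer before bind() so that
            // sockets accepted from this listener inherit it.
            ServerSocket server = new ServerSocket();
            server.setReceiveBufferSize(32 * 1024);
            server.bind(new InetSocketAddress(8080));

            // Client side: both directions can be tuned per socket.
            Socket client = new Socket();
            client.setSendBufferSize(32 * 1024);
            client.setReceiveBufferSize(32 * 1024);
        }
    }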

What was the impact on the Sun JVM? Well, the Sun JVM suffers an as yet
unexplained warm-up period in which the first batch of traffic in our test
would take seven times longer than subsequent batches. I am working with
Sun to analyse this further. Ignoring this warm-up (which is not GC
related, nor, we think, JIT related), the Sun 1.6 JVM outperforms the
JRockit JVM - this is the first time we have seen that. We tried things
like pre-allocating a large number of buffers to see if it was an
allocation problem, but this made no difference. Sun 1.5 showed the same
characteristics, but its initial spike was not as extreme. The IBM JDK 1.5
had an even larger spike than Sun 1.6.

So to summarise, our tests would indicate:

1) don't use direct buffers
2) don't attempt to pool buffers
3) depending on your application, the Sun JVM will outperform JRockit on
longer-running tests, but if performance on shorter tests or at startup
matters then JRockit is better

Our hardware was HP DL585 machines with four dual-core Opterons, connected
by Gb Ethernet. The OS was Red Hat Enterprise Linux 4 (2.6 kernel).

Comments most welcome! If our analysis with Sun turns up anything to
explain the initial spike in Sun JVM performance, I'll let you know.

Robert

