Bob Scheifler wrote:
I've just recently been testing on Solaris 10 Update 4 and JDK1.6
between two machines (2 socket x 2 core Opteron 2.2GHz) connected
via a gigabit network with jumbo frames (mtu 9000).  By setting
the rations to 0 (infinite) and the fragment sizes to 8k,
JERI TCP performance without NIO slightly exceeds JRMP performance.

With the same systems, but going across a 100Mb network with mtu 1500,
the default JERI TCP settings perform only slightly worse than JRMP,
and with NIO enabled it's nearly the same as JRMP.  Setting the rations
to 0 and increasing the fragment sizes helps very little, as the
network link is already pretty much saturated.

- Bob

I'm pleased to hear you got Jini ERI to the same level as JRMP;
unfortunately I'm not able to get near that on my configuration.

What throughput did you get on the 1Gb network and on the 100Mb one?
Also, did you test with an mtu of 1500 for gigabit ethernet? The numbers
I get when running everything on the host itself, skipping the datalink
layer, are beyond the theoretical throughput of a 100Mb network (100
Mb/s is at most ~12.5 MB/s on the wire), so I can understand that for
such a configuration there is not much point in fiddling with these
parameters.

I also conducted some tests with varying ration and fragment size (NIO
enabled) using a payload of only 512 bytes. In that case I see a very
small degradation with larger fragment sizes, although with the test
making 100000 calls there is a bit more variation in the results.
Still, I was pleasantly surprised to get ~0.34 ms/call and a throughput
of ~1.5 MB/sec. For these tests it didn't really matter which version
of the JVM I was running.
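
In case anyone wants to reproduce this, the sketch below shows the
shape of such a round-trip test. It is only a sketch: the Echo
interface and the counts are illustrative, not my actual harness. NIO
is enabled via the documented com.sun.jini.jeri.tcp.useNIO system
property.

import java.rmi.Remote;
import java.rmi.RemoteException;
import net.jini.jeri.BasicILFactory;
import net.jini.jeri.BasicJeriExporter;
import net.jini.jeri.tcp.TcpServerEndpoint;

public class JeriBench {

    // Illustrative remote interface; any small round-trip call will do.
    public interface Echo extends Remote {
        byte[] echo(byte[] payload) throws RemoteException;
    }

    static class EchoImpl implements Echo {
        public byte[] echo(byte[] payload) { return payload; }
    }

    public static void main(String[] args) throws Exception {
        // Enable the NIO-based TCP transport (documented Jini 2.x
        // property); must be set before the first endpoint is created.
        System.setProperty("com.sun.jini.jeri.tcp.useNIO", "true");

        // Export over Jini ERI on an anonymous TCP port.
        BasicJeriExporter exporter = new BasicJeriExporter(
                TcpServerEndpoint.getInstance(0), new BasicILFactory());
        EchoImpl impl = new EchoImpl();
        Echo proxy = (Echo) exporter.export(impl);

        byte[] payload = new byte[512];
        for (int i = 0; i < 1000; i++) proxy.echo(payload);  // JIT warm-up

        int calls = 100000;
        long start = System.nanoTime();
        for (int i = 0; i < calls; i++) proxy.echo(payload);
        long elapsed = System.nanoTime() - start;

        System.out.printf("%.4f ms/call%n", elapsed / 1e6 / calls);
        exporter.unexport(true);
    }
}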

Then I decided to run some tests with JRMP, and it turned out to be
three times as fast (~0.11 ms/call) for that type of call. This is what
I expected, though, given the large number of (expensive) security
checks and privileged operations performed at various places in the
invocation and dispatch layers.
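
For completeness, the JRMP side of the comparison would be the plain
RMI equivalent of the sketch above:

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

public class JrmpBench {

    public interface Echo extends Remote {
        byte[] echo(byte[] payload) throws RemoteException;
    }

    static class EchoImpl implements Echo {
        public byte[] echo(byte[] payload) { return payload; }
    }

    public static void main(String[] args) throws Exception {
        EchoImpl impl = new EchoImpl();
        // Export over JRMP on an anonymous port; with no pregenerated
        // stub class the JDK returns a dynamic proxy implementing Echo
        // (JDK 5+ behaviour).
        Echo proxy = (Echo) UnicastRemoteObject.exportObject(impl, 0);

        byte[] payload = new byte[512];
        for (int i = 0; i < 1000; i++) proxy.echo(payload);  // JIT warm-up

        int calls = 100000;
        long start = System.nanoTime();
        for (int i = 0; i < calls; i++) proxy.echo(payload);
        long elapsed = System.nanoTime() - start;

        System.out.printf("%.4f ms/call%n", elapsed / 1e6 / calls);
        UnicastRemoteObject.unexportObject(impl, true);
    }
}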
--
Mark

