Bob Scheifler (JIRA) wrote:
make JERI MUX protocol impl maxFragmentSize configurable
--------------------------------------------------------

                 Key: RIVER-280
                 URL: https://issues.apache.org/jira/browse/RIVER-280
             Project: River
          Issue Type: Improvement
          Components: net_jini_jeri
    Affects Versions: jtsk_2.1
            Reporter: Bob Scheifler
            Priority: Minor


In the current JERI MUX implementation, maxFragmentSize is hardcoded
to 1024 in the MuxClient and MuxServer constructors.  It would be
useful to make it configurable via system properties, similar to
initialInboundRation.  Applications doing large data transfers can
get better performance with larger fragments.
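For illustration, a minimal sketch of what the system-property lookup
could look like. The property name used here is purely an assumption
for the example; it is not an existing River property, and the real
patch would presumably follow whatever naming the
initialInboundRation property uses.

```java
// Hypothetical sketch of a configurable maxFragmentSize.
// The property name below is an assumption, not an existing
// River/Jini property.
public class MuxConfig {
    // current hardcoded default in MuxClient/MuxServer
    private static final int DEFAULT_MAX_FRAGMENT_SIZE = 1024;

    static int getMaxFragmentSize() {
        String prop = System.getProperty(
            "org.apache.river.jeri.mux.maxFragmentSize");
        if (prop == null) {
            return DEFAULT_MAX_FRAGMENT_SIZE;
        }
        try {
            int size = Integer.parseInt(prop);
            // ignore nonsensical values and keep the default
            return (size > 0) ? size : DEFAULT_MAX_FRAGMENT_SIZE;
        } catch (NumberFormatException e) {
            return DEFAULT_MAX_FRAGMENT_SIZE;
        }
    }

    public static void main(String[] args) {
        // unset: falls back to the hardcoded default
        System.out.println(MuxConfig.getMaxFragmentSize()); // prints 1024
        // set, e.g. via -D...maxFragmentSize=16384 on the command line
        System.setProperty(
            "org.apache.river.jeri.mux.maxFragmentSize", "16384");
        System.out.println(MuxConfig.getMaxFragmentSize()); // prints 16384
    }
}
```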


Hi Bob,

I noticed this issue. Today I found some time to perform some tests
myself with the code as listed in
https://issues.apache.org/jira/browse/RIVER-281 with NIO enabled.

Tests were conducted on Windows XP SP2 with a Pentium Mobile 1.7 GHz,
using Sun JDK 1.4.2_16, and during the tests I saw no excessive full
GCs (max around 0.03 sec).

Below is a matrix of some settings for the ration (client and server)
and the maxFragmentSize, together with the measured throughput.
Settings are in KBytes, while the average throughput is rounded to
MBytes/sec.

         |             ration
fragment |---------------------------------
 size    |     4      16       32      124
---------|---------------------------------
    1     |     9      13       13       13
   2     |    12      16       18       18
   4     |    14      19       21       21
  16     |    13      22       25       26
  32     |    12      22       25       26

I also ran the test with the Http(Server)Endpoint, and the throughput
was only 13 MBytes/sec, which equals the throughput with the default
settings for the Tcp(Server)Endpoint. You indicated it should perform
more in line with JRMP, but I'm not experiencing that.

I also ran the test with JRMP, and there I got a throughput of 38
MBytes/sec, which is still much better than the 26 MBytes/sec I was
able to get from the Tcp(Server)Endpoint.

I also ran some of the tests with JDK 1.5.0; there, JRMP jumped from
38 to 44 MBytes/sec. The Http(Server)Endpoint test went from 13 to 31
MBytes/sec, while the tests with the Tcp(Server)Endpoint showed no
improvement at all. So with JDK 1.5.0 the Http(Server)Endpoint does
indeed perform more in line with JRMP.

Based on the above, it looks like ration settings above 32 KBytes
don't have much effect; most of the gain comes from increasing the
fragment size.
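One plausible intuition for why the fragment size dominates: every
fragment carries per-message protocol and dispatch overhead, so a
larger fragment size cuts the number of messages needed for a given
payload. A back-of-the-envelope sketch (the payload size is just an
example, not a claim about the mux internals):

```java
// Sketch: number of mux fragments needed to move a 1 MB payload
// at various fragment sizes, to illustrate how per-message
// overhead shrinks as the fragment size grows.
public class FragmentMath {
    // ceiling division: fragments needed for payloadBytes
    static long fragments(long payloadBytes, int fragmentSize) {
        return (payloadBytes + fragmentSize - 1) / fragmentSize;
    }

    public static void main(String[] args) {
        long payload = 1L << 20; // 1 MB example payload
        for (int kb : new int[] {1, 4, 16, 32}) {
            int frag = kb * 1024;
            System.out.println(kb + " KB fragments: "
                + fragments(payload, frag) + " messages");
        }
        // 1 KB  -> 1024 messages
        // 32 KB -> 32 messages
    }
}
```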

As I have no clue about the internals of the Mux code: what would be
the (negative) consequences if we changed the default maxFragmentSize
from 1024 bytes to 4 KBytes or more? I realize that more direct memory
will be consumed per session, but would it also have consequences for
other calls (with less data) relative to calls with lots of data?
--
Mark
