Hi Lécharny:

Thanks for your wonderful reply.

I would like to describe my test scenario in more detail.

The performance test scenario:

The service host:
Linux sgp242 2.6.16.60-0.21-smp #1 SMP Tue May 6 12:41:02 UTC 2008 x86_64 
x86_64 x86_64 GNU/Linux
Memory: 24597956k  CPU: 8

JDK VERSION:
java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)

MINA VERSION:
mina-2.0.4

Test Case:
1) The client test suite and the server test suite are installed on two
different servers in the same network segment;
2) The client is developed with Java NIO; each time it sends a 200-byte
message to the server, messages are delimited by '|', and the server peer
counts the number of received messages;
3) Since we mock a long-lived connection protocol, the client test suite does
not close the connection; the connection stays open until the end of the test;
4) The number of client test suites is 30;
5) We run the test for 5 minutes and then calculate the average throughput.

Test result:
After we changed the receive buffer into a cached (reused) buffer, throughput
improved by 23.3% in this test case.
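
For clarity, the "cached receive buffer" idea looks roughly like the sketch
below in plain Java NIO (the class name and buffer size are illustrative, not
the actual gateway code): one buffer is kept per connection and reused for
every read instead of allocating a new one each time.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

// Conceptual sketch only: a single ByteBuffer is kept per connection and
// reused for every read, instead of allocating a fresh buffer per read.
final class CachedReceiveBuffer {

    private static final int READ_BUFFER_SIZE = 2048; // illustrative size

    // In the real test this buffer would live as a per-session attribute;
    // here it is simply a field of the per-connection object.
    private final ByteBuffer readBuffer = ByteBuffer.allocate(READ_BUFFER_SIZE);

    int readOnce(SocketChannel channel) throws IOException {
        readBuffer.clear();                 // reuse the same buffer
        int n = channel.read(readBuffer);
        if (n > 0) {
            readBuffer.flip();
            // ... hand readBuffer to the decoder, which counts the
            // '|'-delimited messages ...
        }
        return n;
    }
}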

In order to measure the overhead of ByteBuffer creation, I ran the following
comparison test:

Test host:
Windows XP SP3, Intel E2180 @ 2.0 GHz, Memory: 2.0 GB

JDK VERSION:
java version "1.6.0_29"
Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
Java HotSpot(TM) Client VM (build 20.4-b02, mixed mode, sharing)

Test Case:
We put 500 bytes into a ByteBuffer in a loop:
Case 1: on every iteration we create a new ByteBuffer and put the test bytes
into it;
Case 2: we create one cached ByteBuffer and, on every iteration, clear it
before putting in the new test bytes.
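
A minimal sketch of the two cases (class and method names are mine, not the
actual test code):

import java.nio.ByteBuffer;

// Case 1 allocates a new ByteBuffer on every iteration; Case 2 clears and
// reuses one cached ByteBuffer.  Each case runs for a fixed time and the
// number of completed iterations is counted.
public class ByteBufferAllocationTest {

    private static final byte[] PAYLOAD = new byte[500];
    private static final long DURATION_MS = 10000L; // 10 seconds per case

    static long runCase1() {
        long end = System.currentTimeMillis() + DURATION_MS;
        long count = 0;
        while (System.currentTimeMillis() < end) {
            ByteBuffer buf = ByteBuffer.allocate(512); // new buffer every time
            buf.put(PAYLOAD);
            count++;
        }
        return count;
    }

    static long runCase2() {
        ByteBuffer cached = ByteBuffer.allocate(512);  // one cached buffer
        long end = System.currentTimeMillis() + DURATION_MS;
        long count = 0;
        while (System.currentTimeMillis() < end) {
            cached.clear();                            // reuse the same buffer
            cached.put(PAYLOAD);
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println("Case 1 (new buffer each time): " + runCase1());
        System.out.println("Case 2 (cached buffer):        " + runCase2());
    }
}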

We ran each case for 10 seconds and counted the number of executions; the
results were:
Case 1: 1001224
Case 2: 5563833

The test results show that frequently creating byte buffers hurts performance.

In fact, when we deal with large streams, such as MMS attachments, each
message may be around 100 KB; on a high-load service this frequent buffer
creation becomes even more costly.

If it is a short-lived connection protocol (one connection, one session), the
performance difference is small.


>> it's just that we have to carefully evaluate the advantage we will get
>> from applying them compared to what we may lose on the other side...

I am very sorry, but I do not see the advantage of creating a new byte buffer
for each message (which may be a sticky packet or a half packet).
         
-----Original Message-----
From: Emmanuel Lécharny [mailto:[email protected]] 
Sent: July 12, 2012 21:24
To: [email protected]
Subject: Re: Some suggestions for the receive buffer and half package processing

On 7/12/12 11:21 AM, Lilinfeng wrote:
> Hi:
Hi !
>
> Good afternoon.
>
> I am a system designer at HUAWEI.
>
> For the last 3 years, I have been using Java NIO to develop high-performance
> service gateways,
> like SMS/MMS/WAP PUSH gateways.
>
> This year, I did a technology selection among the mature industry NIO
> frameworks, such as Mina/Netty/NioFramework/xSocket, etc.
> I found that Mina has the most outstanding architecture.
>
> After reading the source code and running performance tests, I found some
> optimization points.
>
> Listed as follows:
>
> The receiver buffer:
>
> When we do a read operation, we create a new ByteBuffer every time to read
> bytes from the SocketChannel. For long-lived connection protocols,
> such as SMPP, frequently creating and destroying byte arrays reduces
> performance. In fact, we can create a cached buffer in the session
> as an attribute. According to our performance testing, this improves
> performance by over 20% (long-lived connection protocols).

This is not obvious. I mean, yes, possibly, when doing some specific 
tests you will demonstrate that it can save some CPU, but on the other 
hand, you may lose something else: for instance, keeping a ByteBuffer 
in a cache means each session will keep a ByteBuffer until the session 
is closed. If you have tens of thousands of sessions, with a 10k buffer for 
instance, that means you'll keep 100 MB of memory allocated just for that 
purpose. Not sure that it's sustainable in the long term.

I would say that we have to balance the advantage such a solution brings 
with the drawbacks that come with it.
>
> Decoding half packages (partial frames):
>
> If the receive buffer holds only a part of a message, we use the
> CumulativeProtocolDecoder class to solve it, and the leftover buffer is
> cached in the session. If half packages occur in large numbers, this leads to
> frequently creating and destroying byte buffers, which is not the best choice.
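
(For reference, a typical CumulativeProtocolDecoder for the '|'-delimited test
protocol might look roughly like the sketch below; the class name and the
string conversion are illustrative, not the actual gateway code.)

import org.apache.mina.core.buffer.IoBuffer;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.codec.CumulativeProtocolDecoder;
import org.apache.mina.filter.codec.ProtocolDecoderOutput;

// Illustrative sketch: the base class accumulates the leftover bytes for us;
// doDecode() returns false whenever only half a package is available.
public class DelimiterCumulativeDecoder extends CumulativeProtocolDecoder {

    @Override
    protected boolean doDecode(IoSession session, IoBuffer in, ProtocolDecoderOutput out)
            throws Exception {
        int start = in.position();
        while (in.hasRemaining()) {
            if (in.get() == '|') {
                int end = in.position();           // one byte past the '|'
                byte[] body = new byte[end - start - 1];
                in.position(start);
                in.get(body);                      // the message body
                in.get();                          // skip the '|'
                out.write(new String(body, "US-ASCII"));
                return true;                       // one message decoded
            }
        }
        in.position(start);                        // half package: rewind and
        return false;                              // wait for more bytes
    }
}
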
You can work around this problem by using a different decoder. The 
CumulativeProtocolDecoder is handy, as it does the job for you, but in 
some cases it's not efficient. In LDAP (the ApacheDS project), we don't use 
the CumulativeProtocolDecoder; we process the bytes as they arrive, 
and if the message is not complete, then we stop decoding until we get 
some more bytes. We don't keep the ByteBuffer, we don't expand it, we 
just construct the resulting message while decoding the bytes.

It's a bit more complex, but it gives excellent results.
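
For illustration, a "decode as the bytes arrive" approach for the
'|'-delimited test protocol could look like the sketch below (this is not the
ApacheDS LDAP decoder, just a minimal example of keeping the decoder's partial
state in the session instead of keeping the buffer):

import org.apache.mina.core.buffer.IoBuffer;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.codec.ProtocolDecoderAdapter;
import org.apache.mina.filter.codec.ProtocolDecoderOutput;

// Illustrative sketch: every received byte is consumed immediately; the
// partially built message (a StringBuilder) is the only state kept in the
// session.  The IoBuffer itself is never retained.
public class IncrementalDelimiterDecoder extends ProtocolDecoderAdapter {

    private static final String STATE_KEY =
            IncrementalDelimiterDecoder.class.getName() + ".state";

    @Override
    public void decode(IoSession session, IoBuffer in, ProtocolDecoderOutput out)
            throws Exception {
        StringBuilder current = (StringBuilder) session.getAttribute(STATE_KEY);
        if (current == null) {
            current = new StringBuilder();
            session.setAttribute(STATE_KEY, current);
        }
        while (in.hasRemaining()) {
            byte b = in.get();
            if (b == '|') {                    // end of one message
                out.write(current.toString());
                current.setLength(0);          // start building the next one
            } else {
                current.append((char) b);
            }
        }
        // Bytes of an incomplete message simply stay in 'current' until the
        // next read delivers the rest.
    }
}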


>
> In fact, before each decoding loop we mark the IoBuffer to record the
> position of the first package; when the decoder returns false, we reset the
> IoBuffer to the original position and then return to receive the next package.
> This reduces the number of IoBuffer creations and simplifies the process.
But you will have to copy the first part of the message somewhere in 
memory. Likely in a byte[]. You are swapping the creation of a 
ByteBuffer for the creation of a temporary storage...
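
To make the trade-off concrete, a rough sketch of the mark/reset idea under
discussion might look like this (the decoder interface is hypothetical, and
MINA's actual processor code works differently):

import org.apache.mina.core.buffer.IoBuffer;

// Sketch of the proposed loop: mark the start of each package before decoding;
// if the decoder reports a half package, reset to the mark so the partial bytes
// stay in the (cached) receive buffer for the next read.
final class MarkResetDecodeLoop {

    interface PackageDecoder {
        // returns false when the buffer holds only part of a package
        boolean decodeOne(IoBuffer in);
    }

    void decodeAll(IoBuffer receiveBuffer, PackageDecoder decoder) {
        while (receiveBuffer.hasRemaining()) {
            receiveBuffer.mark();                 // remember the package start
            if (!decoder.decodeOne(receiveBuffer)) {
                receiveBuffer.reset();            // half package: rewind and
                break;                            // wait for the next read
            }
        }
        // compact() moves the leftover bytes to the front so the next
        // SocketChannel read appends after them.
        receiveBuffer.compact();
    }
}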

I don't know which version of MINA you are using, nor the JVM version 
you are running your test on, but one thing to recall is that with the 
latest JVMs, creating an object is an extremely cheap operation, 
something probably cheaper than any other mechanism you may think of to 
avoid creating such objects.

I'm not saying that to put down your ideas, some of them may be 
interesting (I'm specifically thinking about caching *at least* a 
limited-size ByteBuffer within every session, so that you spare the BB 
creation for most of the received data), it's just that we have to 
carefully evaluate the advantage we will get from applying them compared 
to what we may lose on the other side...

Many thanks for those suggestions, and let's discuss them a bit further !

-- 
Regards,
Cordialement,
Emmanuel Lécharny
www.iktek.com
