Hi again,

I spent some time searching the internet regarding this issue... 
And I got another idea: what are the values for blocking IO / Java IO?

Result:
Socket recv buffer size = 8192
Socket send buffer size = 64512

So comparing Java IO with MINA, the receive buffer size is 8x bigger and
the send buffer size exactly 63x bigger.
Any idea why MINA (or NIO???) uses such small buffers compared to Java
IO?
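
(In case someone wants to reproduce this: a minimal sketch of how such
values can be read from a plain blocking java.net.Socket. The class name
and the host/port are just placeholders for my test setup:)

----
import java.net.Socket;

public class BlockingIoBufferSizes {
    public static void main(String[] args) throws Exception {
        // plain blocking Java IO socket; host/port are placeholders
        Socket socket = new Socket("localhost", 12345);
        // SO_RCVBUF / SO_SNDBUF as reported by the OS
        System.out.println("Socket recv buffer size = " + socket.getReceiveBufferSize());
        System.out.println("Socket send buffer size = " + socket.getSendBufferSize());
        socket.close();
    }
}
----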

- Alex

-------- Original Message --------
Subject: Re: DIRMINA-790: 2.0.0M6 + 2.0.0RC1: Win7 performance issue
Date: Thu, 17 Jun 2010 14:44:37 +0200
From: Alexander Christian <[email protected]>
To: Mina Mailinglist <[email protected]>

Hi again,

@Emmanuel: Sorry ... I (again) only replied to you, instead of the list
:-(
@others: please read my reply below ...

I have now tried playing with different receive buffer sizes. And voilà ...
now it works, but I still don't know why. Here are the details...

I added the following lines to my server and client (for the client, I of
course used "connector" instead of "acceptor"):

----
System.out.println("read buffer size =
"+acceptor.getSessionConfig().getReadBufferSize());
System.out.println("send buffer size =
"+acceptor.getSessionConfig().getSendBufferSize());
System.out.println("recv buffer size =
"+acceptor.getSessionConfig().getReceiveBufferSize());
----

Output on Win7, for the client as well as for the server:

---
read buffer size = 2048
send buffer size = 1024
recv buffer size = 1024
---

The read buffer size seems to be the number of bytes the IoProcessor tries
to read in one read operation.
The receive buffer size seems to be the size of the socket's receive
buffer.

I then tried the same on WinXP. Client and server again had the same
values, so there's no difference between Win7 and WinXP here.

If I increase the recv buffer size on the server to 2048, I still have
performance problems. If I use 2050 bytes for the recv buffer, it's already
fast. And if I use 10240 bytes, it's extremely fast.
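
(For the record, this is roughly how I set the recv buffer size on the
server side; the class name is just a placeholder, and "acceptor" is
assumed to be an NioSocketAcceptor, so getSessionConfig() returns a
SocketSessionConfig:)

----
import org.apache.mina.transport.socket.SocketSessionConfig;
import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

public class RecvBufferConfig {
    public static void main(String[] args) {
        NioSocketAcceptor acceptor = new NioSocketAcceptor();
        SocketSessionConfig cfg = acceptor.getSessionConfig();
        // socket receive buffer; 10240 bytes was the "extremely fast" value in my tests
        cfg.setReceiveBufferSize(10240);
        System.out.println("recv buffer size = " + cfg.getReceiveBufferSize());
    }
}
----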

So, can someone explain why Win7 and WinXP behave differently with the
same buffer sizes? According to the already linked knowledge base article,
this issue should be visible on WinXP, but not on Win7.
In my case (and I have already tested this on other machines too), it's the
other way around: WinXP is fast, Win7 is slow.

My second and last question for this mail is:

Why is the IoProcessor trying to read 2048 bytes when the socket's buffer
is at 1024? My understanding is that MINA is then always trying to read
more from the socket than there is data available in the recv buffer. Does
this make any sense?
I also tried it the other way around: leave the recv buffer size at 1024,
but set the read buffer of the IoProcessor to 512... But with this setting,
it's also damn slow :-( Why?!
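
(Just to make clear which knob is which, this is roughly the "other way
around" setting, continuing with the cfg from the sketch above:)

----
// socket receive buffer stays at 1024 bytes
cfg.setReceiveBufferSize(1024);
// IoProcessor reads at most 512 bytes per read operation
cfg.setReadBufferSize(512);
----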

br,
Alex



On Thu, 17 Jun 2010 14:08:17 +0200, Alexander Christian <[email protected]>
wrote:
> On Thu, 17 Jun 2010 12:13:07 +0200, Emmanuel Lecharny
> <[email protected]>
> wrote:
>> On 6/17/10 11:42 AM, Jens Reimann wrote:
>>> Maybe this is not a MINA issue but a Windows issue:
>>>
>>> http://support.microsoft.com/kb/823764
>>>
>>> This seems exactly the case for my problem.
>>>    
>> 
>> Sounds like a good catch !
>> 
>> The workaround 2 (increase the send buffer size) should solve the 
>> problem here. That also means the server should send a smaller buffer
>> than the configured size.
> 
> Hmm, okay. The article also says that it's related to non-blocking IO and
> that blocking IO doesn't have this problem. That would explain why my
> first reproducer application using Java IO works quite well. 
> But there are two big question marks orbiting my head:
> 
> 1) If you scroll to the bottom of the article, one can read that WinXP is
> also affected by the problem. Windows 7 is not mentioned at all. 
> 2) When I run my MINA reproducer application, it's only Win7 which shows
> the transfer slowdown. 
> 
> So the symptom only matches the article 50% :-(
> I will now test with increased buffer sizes ...
> 
>> 
>> Who said that W$ sucks, btw ? :)
> 
> ;-) All I can say: I don't have that kind of problem with Linux or
> Mac OS X ...
> 
> br,
> Alex
