On Thu, Aug 18, 2005 at 11:26:21AM +0200, Ortwin Glück wrote:
> 
> 
> Oleg Kalnichevski wrote:
> >On Thu, Aug 18, 2005 at 10:39:02AM +0200, Ortwin Glück wrote:
> >
> >>
> >>Oleg Kalnichevski wrote:
> >>
> >>
> >>>Odi,
> >>>
> >>>(1) Please take a closer look at the exception stack trace. The exception
> >>>is thrown when the server socket gets interrupted while blocked in the
> >>>listen method. This exception is perfectly legitimate in this context
> >>
> >>Okay.
> >>
> >>
> >>>(2) It should not really require a Stanford PhD to figure out that if
> >>>your hardware is faster than the one I used to run the test, you might
> >>>want to tweak the parameters a little in order to make the numbers
> >>>more representative. Please increase the buffer size and retest.
> >>
> >>Ok, with 10 times as much data (10 MB), I get:
> >>
> >>Old IO average time (ms): 121
> >>Blocking NIO average time (ms): 119
> >>NIO with Select average time (ms): 133
> >>
> >>The jitter is now around 10-15%. So the three values are still all in 
> >>the same statistical bucket. That means there is no notable performance 
> >>difference below 10 MB of data.
> >>
> >
> >Odi,
> >I think a 10-15% performance penalty is considerable.
> 
> 10-15% is the *jitter*, not the performance penalty.

Odi,

As soon as I see NIO w/ select outperform old IO by 10-15% at least
ONCE, I may believe this is just statistical jitter. Until then, old IO
consistently outperforming NIO w/ select by 10-15% looks horribly like 
a performance penalty to me ;)

You are welcome to give the System.nanoTime() method a shot, which is
supposedly more precise than System.currentTimeMillis().
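Something along these lines should do (just a rough sketch, of course;
transferData() and the run count are placeholders, not the actual test
code):

public class NanoTimer {

    private static final int RUNS = 20;

    public static void main(String[] args) {
        long totalNanos = 0;
        for (int i = 0; i < RUNS; i++) {
            long start = System.nanoTime();
            transferData(); // placeholder for the IO/NIO transfer under test
            totalNanos += System.nanoTime() - start;
        }
        // average per run, converted from nanoseconds to milliseconds
        System.out.println("Average time (ms): "
                + (totalNanos / RUNS) / 1000000.0);
    }

    private static void transferData() {
        // stand-in for the actual data transfer being measured
    }
}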

I have put WAY more effort into writing the damn NIOHttpDataReceiver
than into the entire HTTP coyote connector, and it REALLY pains me to
have found this code useless (at least in HttpCommon). At the same time
I would rather keep HttpCommon lean and mean, and use NIO where it makes
sense, that is in HttpAsynch.
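
Just to illustrate what I mean by NIO making sense: the payoff comes
from a design roughly like the sketch below, where one selector
multiplexes many non-blocking channels. This is only an illustration,
not HttpCommon or HttpAsynch code:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SingleSelectorServer {

    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();

        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.socket().bind(new InetSocketAddress(8080));
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select(); // blocks until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    // register the new connection with the same selector:
                    // hundreds of channels, one selector
                    SocketChannel channel =
                        ((ServerSocketChannel) key.channel()).accept();
                    channel.configureBlocking(false);
                    channel.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // read the request from the ready channel here
                }
            }
        }
    }
}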

Oleg


> If I measure 133 ms
> for NIO then these 133 ms are an average of 20 individual values with 15
> ms of uncertainty each. That means the value 133 ms has an uncertainty
> of 15 ms as well (which is 11%): The "real" value is between 118 and
> 148 ms. So the above numbers have the following meaning:
> 
> Old IO average time (ms): 106 - 136
> Blocking NIO average time (ms): 104 - 134
> NIO with Select average time (ms): 118 - 148
> 
> or graphically:
> 
>       ******************************* [IO]
> 
>     ******************************* [bIO]
> 
>           [nbIO]  *******************************
> 
> |---------|---------|---------|---------|---------|
> 100      110       120       130       140       150
> 
> Thus it may be possible that, despite the actual figures, nbIO is 
> actually faster than IO.
> 
> Sorry for being pedantic. It's just that I am a physicist and I was 
> taught how to properly perform a measurement.
> 
> > Besides, on some
> >platforms, admittedly misconfigured or having a poor JRE implementation,
> >the cost of having a channel selector per channel is simply prohibitive.
> >
> >I am still of the opinion that we gain absolutely nothing by using NIO
> >for an API that is not specifically designed to make heavy use of
> >non-blocking IO with hundreds of channels managed by one channel selector.
> >
> >Oleg
> 
> 
> -- 
> [web]  http://www.odi.ch/
> [blog] http://www.odi.ch/weblog/
> [pgp]  key 0x81CF3416
>        finger print F2B1 B21F F056 D53E 5D79  A5AF 02BE 70F5 81CF 3416
> 
