Re: svn commit: r465417 - in /tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11: Http11NioProcessor.java InternalNioInputBuffer.java

2006-10-26 Thread Remy Maucherat

Filip Hanik - Dev Lists wrote:
I get occasional phantom slowdowns with APR as well; not sure where 
they come from. I might dig into this afterwards.


Let me know if you find something.

Rémy




Re: svn commit: r465417 - in /tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11: Http11NioProcessor.java InternalNioInputBuffer.java

2006-10-24 Thread Filip Hanik - Dev Lists
I've been tracking down the phantom behavior on the NIO connector and 
found some results:
1. Busy write: when the client is not reading fast enough, the connector ends 
up spinning the CPU. This is fixed.
2. GC: the NIO connector generates a lot more garbage than I would like it 
to; both JIO and APR have very low GC frequencies.


On larger files I get much better results with APR and JIO, probably 
explained by the fact that there really is no good way to do a blocking 
write using NIO.
I only get better results with NIO if the entire response can be sent 
down in one shot, meaning it fits in the socket buffer.
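
A common workaround for both the busy write and the missing blocking write is to emulate blocking with a temporary Selector, so a slow client parks the thread in select() instead of spinning on write() calls that return 0. This is only a minimal sketch of that pattern (assuming the channel is already in non-blocking mode), not the actual Tomcat code:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public final class BlockingWriteSketch {

    // Writes the whole buffer, assuming 'channel' is in non-blocking mode.
    // When the socket send buffer is full (write() returns 0), we wait on a
    // throwaway selector instead of busy-looping.
    public static void writeFully(SocketChannel channel, ByteBuffer buf, long timeoutMs)
            throws IOException {
        Selector tmp = null;
        try {
            while (buf.hasRemaining()) {
                if (channel.write(buf) == 0) {
                    if (tmp == null) {
                        tmp = Selector.open();
                        channel.register(tmp, SelectionKey.OP_WRITE);
                    }
                    if (tmp.select(timeoutMs) == 0) {
                        throw new IOException("write timed out after " + timeoutMs + " ms");
                    }
                    tmp.selectedKeys().clear();
                }
            }
        } finally {
            if (tmp != null) {
                tmp.close(); // closing the selector also cancels the registration
            }
        }
    }
}

Opening a Selector per write is expensive, which is one reason a shared pool of selectors, or simply writing what fits and returning to the poller, tends to be preferred.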


I get occasional phantom slowdowns with APR as well; not sure where 
they come from. I might dig into this afterwards.


So I'm going to work on the GC overhead in the NIO connector; that should 
make it a little better.


Thanks for testing along and keeping me alert.
Filip


Filip Hanik - Dev Lists wrote:

Remy Maucherat wrote:

Filip Hanik - Dev Lists wrote:
Gentlemen, not sure if you had a chance to look this over, but it is 
pretty interesting: after some very basic tests, I get the NIO connector 
to perform better than the blocking IO connector.

The peak data throughput numbers are:
NIO - 36,000KB/s
JIO - 35,000KB/s
APR - 24,000KB/s

basic connector config, with maxThreads=150,

./ab -n 50 -c 100 -k -C 
test=89012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890 
http://localhost:8080/tomcat.gif
Of course, this is not an all-encompassing test, but the NIO connector used 
to always lag behind the JIO connector in these simple tests. So 
let's ignore the numbers; they aren't important.


I am very skeptical about that trick.
On most OSes there are kernel optimizations. I believe both Linux 
and Windows have first-byte accept, meaning they won't send you the 
accept signal until the first byte has arrived, and FreeBSD has a 
complete HTTP accept filter in its kernel, meaning the process won't even get 
the socket until the entire HTTP request has been received.

This is awesome when you turn off keepalive; when keepalive is turned on, 
I believe this trick can be fairly useful. In httpd, I think you can 
achieve the same thing using mpm_event, where the HTTP request doesn't get 
dispatched until it has been received on the server.


As usual, in case someone is still interested, I get opposite results 
on my toy O$, 

Cool, I've not yet run it on Windows.
with the result being APR > JIO > NIO, although all three are fast 
enough (unfortunately, I did not try the before/after to see if the 
trick did something for me). I also do get a dose of paranormal 
activity using the NIO connector.
I do see the phantom behavior every once in a while on FC5 with JDK 
1.5.0_07 as well.
I suspect it has to do with two JDK bugs that I am bypassing 
right now; I'm going to run some more tests to see if I can isolate it.

In longer test runs it clears itself up pretty quickly.

try {
    wakeupCounter.set(0);
    keyCount = selector.select(selectorTimeout);
} catch (NullPointerException x) {
    // sun bug 5076772 on windows JDK 1.5
    if (wakeupCounter == null || selector == null) throw x;
    continue;
} catch (CancelledKeyException x) {
    // sun bug 5076772 on windows JDK 1.5
    if (wakeupCounter == null || selector == null) throw x;
    continue;
} catch (Throwable x) {
    log.error("", x);
    continue;
}
Currently it is also doing a busy write; I need to fix this, as it affects 
CPU usage with slow clients.



BTW, I don't know if you know about it, but the APR and JIO AJP connectors 
that are in o.a.coyote.ajp should both be faster than the classic o.a.jk. 
Of course, it's not that hard, since the connector is much simpler 
(and just does basic AJP).

No, I didn't know, but it's very useful stuff.
Thanks,
Filip


Rémy














Re: svn commit: r465417 - in /tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11: Http11NioProcessor.java InternalNioInputBuffer.java

2006-10-20 Thread Filip Hanik - Dev Lists

Remy Maucherat wrote:

Filip Hanik - Dev Lists wrote:
Gentlemen, not sure if you had a chance to look this over, but it is 
pretty interesting: after some very basic tests, I get the NIO connector 
to perform better than the blocking IO connector.

The peak data throughput numbers are:
NIO - 36,000KB/s
JIO - 35,000KB/s
APR - 24,000KB/s

basic connector config, with maxThreads=150,

./ab -n 50 -c 100 -k -C 
test=89012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890 
http://localhost:8080/tomcat.gif
Of course, this is not an all-encompassing test, but the NIO connector used 
to always lag behind the JIO connector in these simple tests. So 
let's ignore the numbers; they aren't important.


I am very skeptical about that trick.
On most OSes there are kernel optimizations. I believe both Linux 
and Windows have first-byte accept, meaning they won't send you the accept 
signal until the first byte has arrived, and FreeBSD has a complete 
HTTP accept filter in its kernel, meaning the process won't even get the socket 
until the entire HTTP request has been received.

This is awesome when you turn off keepalive; when keepalive is turned on, 
I believe this trick can be fairly useful. In httpd, I think you can 
achieve the same thing using mpm_event, where the HTTP request doesn't get 
dispatched until it has been received on the server.


As usual, in case someone is still interested, I get opposite results 
on my toy O$, 

Cool, I've not yet run it on Windows.
with the result being APR > JIO > NIO, although all three are fast 
enough (unfortunately, I did not try the before/after to see if the 
trick did something for me). I also do get a dose of paranormal 
activity using the NIO connector.
I do see the phantom behavior every once in a while on FC5 with JDK 
1.5.0_07 as well.
I suspect it has to do with two JDK bugs that I am bypassing right 
now; I'm going to run some more tests to see if I can isolate it.

In longer test runs it clears itself up pretty quickly.

try {
    wakeupCounter.set(0);
    keyCount = selector.select(selectorTimeout);
} catch (NullPointerException x) {
    // sun bug 5076772 on windows JDK 1.5
    if (wakeupCounter == null || selector == null) throw x;
    continue;
} catch (CancelledKeyException x) {
    // sun bug 5076772 on windows JDK 1.5
    if (wakeupCounter == null || selector == null) throw x;
    continue;
} catch (Throwable x) {
    log.error("", x);
    continue;
}
Currently it is also doing a busy write; I need to fix this, as it affects 
CPU usage with slow clients.



BTW, I don't know if you know about it, but the APR and JIO AJP connectors 
that are in o.a.coyote.ajp should both be faster than the classic o.a.jk. 
Of course, it's not that hard, since the connector is much simpler 
(and just does basic AJP).

No, I didn't know, but it's very useful stuff.
Thanks,
Filip


Rémy









Re: svn commit: r465417 - in /tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11: Http11NioProcessor.java InternalNioInputBuffer.java

2006-10-19 Thread Filip Hanik - Dev Lists
Gentlemen, not sure if you had a chance to look this over, but it is 
pretty interesting: after some very basic tests, I get the NIO connector 
to perform better than the blocking IO connector.

The peak data throughput numbers are:
NIO - 36,000KB/s
JIO - 35,000KB/s
APR - 24,000KB/s

basic connector config, with maxThreads=150,

./ab -n 50 -c 100 -k -C 
test=89012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890 
http://localhost:8080/tomcat.gif 

Of course, this is not an all-encompassing test, but the NIO connector used to 
always lag behind the JIO connector in these simple tests. So let's 
ignore the numbers; they aren't important.


Even though the APR connector does a blocking read, the same optimization 
can be implemented; just reverse the logic.
In the NIO connector, if a read returns 0, it puts the socket back into 
the poller. With APR, that same read would block until data was 
available, but it can still be done.
When APR wakes up from the poller, we know there is data. If we run out 
of data during the parsing of the request (request line + headers), don't 
issue the second read; just register the socket back with the poller. 
Chances are that if you ran out of data while parsing the request, you 
will be waiting for more data on the line.
And because the NIO code is almost copy/paste from the APR code, porting 
this optimization should be fairly straightforward.


As always, I could be wrong and it would have the reverse effect :) but 
with keepalive connections, the optimization idea is pretty good.
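
A minimal Java sketch of the pattern being described, with hypothetical names (not the actual connector classes), assuming the channel is already in non-blocking mode:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

final class RequestHeadReader {

    // Incremental parser that keeps its state between reads.
    interface HttpHeadParser {
        boolean parse(ByteBuffer data); // true once request line + headers are complete
    }

    // Returns true if the request head is fully parsed; returns false after
    // handing the socket back to the poller because we ran out of data.
    static boolean readOrRequeue(SocketChannel channel, ByteBuffer buf,
                                 Selector poller, HttpHeadParser parser) throws IOException {
        int n = channel.read(buf);              // non-blocking read, may return 0
        if (n == -1) {
            throw new IOException("client closed the connection");
        }
        buf.flip();
        boolean complete = parser.parse(buf);
        buf.compact();
        if (!complete) {
            // Out of data mid-request: don't issue a second (blocking) read.
            // Register for OP_READ and let the poller wake us when more arrives.
            // (The real connector queues an event so the poller thread performs
            // the registration itself.)
            channel.register(poller, SelectionKey.OP_READ, parser);
            poller.wakeup();
            return false;
        }
        return true;
    }
}

The same shape works for APR: a read that would otherwise block is replaced by returning the socket to the poller, so a worker thread is only occupied while there is actual data to parse.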


Filip

[EMAIL PROTECTED] wrote:

Author: fhanik
Date: Wed Oct 18 16:24:52 2006
New Revision: 465417

URL: http://svn.apache.org/viewvc?view=rev&rev=465417
Log:
Implement non blocking read on HTTP requests.

A common scalability problem when it comes to HTTP is the fact that there are 
slow clients that will tie up server resources while sending an HTTP request, 
especially when you have larger request headers.

On FreeBSD the kernel has a built-in HTTP accept filter that does not wake up the 
application's socket handle until the entire request has been received; however, 
on other platforms this is not available.

With the Tomcat connectors, there is an obvious problem when it comes to slow 
clients: if the client sends a partial request, Tomcat will block the thread 
until the client has finished sending the request. For example, if the client 
has 10 headers and sends the first 5, then the next 5 in a subsequent 
batch, the Tomcat thread is locked in a blocking read.
I've tried to fix that problem by making the NIO connector non-blocking. The 
only time the NIO connector will block now is when the servlet asks for data, 
usually the request body, as we don't have a way to suspend a thread, like 
continuations.
Once we have continuations (that can truly remember thread stack data), we can 
have a truly non-blocking server, but we are not there yet.

I believe this code could be easily ported to the APR connector with very little 
effort.
When you review this code, please note that I have not attempted to rewrite the 
header parsing logic; I might do that at a later stage, as this got a little 
messy, but I wanted the proof of concept done first and to reuse as much code as 
possible.

Please feel free to review, and even flame me if needed; at least that means 
this got some attention :)
  






svn commit: r465417 - in /tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11: Http11NioProcessor.java InternalNioInputBuffer.java

2006-10-18 Thread fhanik
Author: fhanik
Date: Wed Oct 18 16:24:52 2006
New Revision: 465417

URL: http://svn.apache.org/viewvc?view=rev&rev=465417
Log:
Implement non blocking read on HTTP requests.

A common scalability problem when it comes to HTTP is the fact that there are 
slow clients that will tie up server resources while sending an HTTP request, 
especially when you have larger request headers.

On FreeBSD the kernel has a built-in HTTP accept filter that does not wake up the 
application's socket handle until the entire request has been received; however, 
on other platforms this is not available.

With the Tomcat connectors, there is an obvious problem when it comes to slow 
clients: if the client sends a partial request, Tomcat will block the thread 
until the client has finished sending the request. For example, if the client 
has 10 headers and sends the first 5, then the next 5 in a subsequent 
batch, the Tomcat thread is locked in a blocking read.
I've tried to fix that problem by making the NIO connector non-blocking. The 
only time the NIO connector will block now is when the servlet asks for data, 
usually the request body, as we don't have a way to suspend a thread, like 
continuations.
Once we have continuations (that can truly remember thread stack data), we can 
have a truly non-blocking server, but we are not there yet.

I believe this code could be easily ported to the APR connector with very little 
effort.
When you review this code, please note that I have not attempted to rewrite the 
header parsing logic; I might do that at a later stage, as this got a little 
messy, but I wanted the proof of concept done first and to reuse as much code as 
possible.

Please feel free to review, and even flame me if needed; at least that means 
this got some attention :)
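
A condensed, hypothetical sketch of the keep-alive loop contract this commit introduces: the parse methods return false when the data runs out, and the socket goes back to the poller instead of the thread blocking. The method and flag names mirror the diff below, but the types here are simplified stand-ins (the endpoint-busy check is reduced to a boolean parameter), not the real Tomcat classes:

final class KeepAliveLoopSketch {

    interface InputBuffer {
        boolean parseRequestLine(boolean useAvailableDataOnly) throws java.io.IOException;
        boolean parseHeaders() throws java.io.IOException;
    }

    interface Poller { void add(SocketWrapper socket); }
    interface SocketWrapper { Poller getPoller(); }

    enum SocketState { OPEN, CLOSED }

    SocketState process(InputBuffer inputBuffer, SocketWrapper socket,
                        boolean endpointBusyAboveLimit) {
        boolean keptAlive = false;
        boolean openSocket = false;
        boolean recycle = true;
        boolean error = false;
        boolean keepAlive = true;

        while (!error && keepAlive) {
            try {
                // Only use already-available data (no blocking) on a kept-alive
                // connection when the endpoint is running low on threads.
                if (!inputBuffer.parseRequestLine(keptAlive && endpointBusyAboveLimit)) {
                    // No new request yet (long keepalive): hand the socket back
                    // to the poller; the processor can be recycled.
                    openSocket = true;
                    socket.getPoller().add(socket);
                    break;
                }
                keptAlive = true;
                if (!inputBuffer.parseHeaders()) {
                    // Partial headers from a slow client: keep the socket and the
                    // partially parsed state (recycle = false), wait for more data.
                    openSocket = true;
                    socket.getPoller().add(socket);
                    recycle = false;
                    break;
                }
                // ... service the request, then loop for the next one ...
                keepAlive = false; // condensed: stop after one request here
            } catch (java.io.IOException e) {
                error = true;
            }
        }
        if (recycle) {
            // recycle(); // reset per-request state only if we did not park mid-parse
        }
        return openSocket ? SocketState.OPEN : SocketState.CLOSED;
    }
}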


Modified:
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/Http11NioProcessor.java

tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioInputBuffer.java

Modified: 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/Http11NioProcessor.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/Http11NioProcessor.java?view=diff&rev=465417&r1=465416&r2=465417
==
--- tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/Http11NioProcessor.java 
(original)
+++ tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/Http11NioProcessor.java 
Wed Oct 18 16:24:52 2006
@@ -820,7 +820,7 @@
 
 boolean keptAlive = false;
 boolean openSocket = false;
-
+boolean recycle = true;
 while (!error && keepAlive && !comet) {
 
 // Parsing the request header
@@ -829,8 +829,7 @@
 
socket.getIOChannel().socket().setSoTimeout((int)soTimeout);
 inputBuffer.readTimeout = soTimeout;
 }
-if (!inputBuffer.parseRequestLine
-(keptAlive && (endpoint.getCurrentThreadsBusy() > 
limit))) {
+if (!inputBuffer.parseRequestLine(keptAlive && 
(endpoint.getCurrentThreadsBusy() > limit))) {
 // This means that no data is available right now
 // (long keepalive), so that the processor should be recycled
 // and the method should return true
@@ -839,13 +838,18 @@
 socket.getPoller().add(socket);
 break;
 }
-request.setStartTime(System.currentTimeMillis());
 keptAlive = true;
-if (!disableUploadTimeout) {
+if ( !inputBuffer.parseHeaders() ) {
+openSocket = true;
+socket.getPoller().add(socket);
+recycle = false;
+break;
+}
+request.setStartTime(System.currentTimeMillis());
+if (!disableUploadTimeout) { //only for body, not for request headers
 socket.getIOChannel().socket().setSoTimeout((int)timeout);
 inputBuffer.readTimeout = soTimeout;
 }
-inputBuffer.parseHeaders();
 } catch (IOException e) {
 error = true;
 break;
@@ -934,7 +938,7 @@
 return SocketState.LONG;
 }
 } else {
-recycle();
+if ( recycle ) recycle();
 return (openSocket) ? SocketState.OPEN : SocketState.CLOSED;
 }
 

Modified: 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioInputBuffer.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioInputBuffer.java?view=diff&rev=465417&r1=465416&r2=465417
==
--- 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioInputBuffer.java 
(original)
+++