Re: Tomcat performance patch (in development) to reduce concurrency...

2005-07-27 Thread Scott Marlow
On Tue, 2005-07-26 at 16:55 +0200, Remy Maucherat wrote:
 Remy Maucherat wrote:
  Scott Marlow wrote:
  
  Anyway, my point is that this could be a worthwhile enhancement for
  applications that run on Tomcat.  What I don't understand yet is whether
  the same functionality is already in Tomcat.
 
  I should point out that some applications shouldn't limit the max number
  of concurrent requests (long running requests won't benefit but maybe
  those applications shouldn't run on the web tier anyway :-)
  
  I agree with the intent, but this is not implemented properly. I think
  the idea is to restrict concurrency in the application layer, rather than
  at the low level (where, AFAIK, concurrency isn't that expensive, and is
  better addressed using a little non-blocking IO). The performance
  benefits for certain types of applications will be the same, but without
  introducing any unwanted limitations or incorrect behavior at the
  connector level.
  
  I think you should write a ConcurrencyValve instead, which would do
  something like:
  
  boolean shouldRelease = false;
  try {
      concurrencySemaphore.acquire();
      shouldRelease = true;
      getNext().invoke(request, response);
  } finally {
      if (shouldRelease)
          concurrencySemaphore.release();
  }
  
  As it is a valve, you can set it globally, on a host, or on an
  individual webapp, allowing you to control concurrency in a fine-grained
  way. In theory, you can also add it on individual servlets, but that
  requires some hacking. Since it's optional and independent, I think it
  is acceptable to use Java 5 for it.
  
  As you pointed out, some applications may run horribly with this (slow
  upload is the most glaring example).
 
 It took forever (given it's only 10 lines of code), but I added the 
 valve. The class is org.apache.catalina.valves.SemaphoreValve.
 
 So you can add it at the engine level to add a concurrency constraint 
 for the whole servlet engine, without constraining the connector (which 
 might not be low thread count friendly).
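The acquire/release pattern proposed earlier in the thread can be exercised as a small, self-contained demo using Java 5's java.util.concurrent.Semaphore. This is an illustrative sketch, not Tomcat's actual SemaphoreValve code; the class and method names are invented for the example. It sends ten fake "requests" through a three-permit semaphore and reports the peak concurrency actually observed:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the valve's throttling idea: no more than `permits`
// requests are ever inside the "container" at the same time.
public class SemaphoreThrottleDemo {

    // Runs `tasks` fake requests through a semaphore with `permits` slots
    // and returns the highest concurrency actually observed.
    static int run(int permits, int tasks) throws InterruptedException {
        final Semaphore gate = new Semaphore(permits);
        final AtomicInteger active = new AtomicInteger();
        final AtomicInteger maxActive = new AtomicInteger();
        Thread[] workers = new Thread[tasks];
        for (int i = 0; i < tasks; i++) {
            workers[i] = new Thread(new Runnable() {
                public void run() {
                    boolean shouldRelease = false;
                    try {
                        gate.acquire();          // same pattern as the valve
                        shouldRelease = true;
                        int now = active.incrementAndGet();
                        // record the peak number of concurrent "requests"
                        int seen;
                        while (now > (seen = maxActive.get())) {
                            maxActive.compareAndSet(seen, now);
                        }
                        Thread.sleep(20);        // simulate request work
                        active.decrementAndGet();
                    } catch (InterruptedException ignored) {
                    } finally {
                        if (shouldRelease) {
                            gate.release();
                        }
                    }
                }
            });
            workers[i].start();
        }
        for (Thread w : workers) {
            w.join();
        }
        return maxActive.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // Peak concurrency can never exceed the permit count.
        System.out.println("peak concurrency: " + run(3, 10));
    }
}
```

However many threads the connector hands in, the semaphore caps how many are simultaneously inside the guarded section, which is exactly the point of placing the valve at the engine, host, or webapp level.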
 
 Rémy
 
 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]
 
I tried SemaphoreValve today and it worked as expected. Nice job! :-)

I also tried a JDK1.4 flavor (SemaphoreValve14) which uses Doug Lea's
concurrent.jar and that worked as well (I omitted fairness support and
defaulted to fair.)

Depending on the Doug Lea concurrent jar will be a problem as that jar
is not used in Tomcat.  However, if someone wanted to build it
themselves with their own copy of concurrent.jar, that would work.

Should I post the Java 1.4 flavor of SemaphoreValve14 here?

Scott





Re: Tomcat performance patch (in development) to reduce concurrency...

2005-07-26 Thread Remy Maucherat

Remy Maucherat wrote:

Scott Marlow wrote:


Anyway, my point is that this could be a worthwhile enhancement for
applications that run on Tomcat.  What I don't understand yet is whether
the same functionality is already in Tomcat.

I should point out that some applications shouldn't limit the max number
of concurrent requests (long running requests won't benefit but maybe
those applications shouldn't run on the web tier anyway :-)


I agree with the intent, but this is not implemented properly. I think
the idea is to restrict concurrency in the application layer, rather than
at the low level (where, AFAIK, concurrency isn't that expensive, and is
better addressed using a little non-blocking IO). The performance
benefits for certain types of applications will be the same, but without
introducing any unwanted limitations or incorrect behavior at the
connector level.

I think you should write a ConcurrencyValve instead, which would do
something like:

boolean shouldRelease = false;
try {
    concurrencySemaphore.acquire();
    shouldRelease = true;
    getNext().invoke(request, response);
} finally {
    if (shouldRelease)
        concurrencySemaphore.release();
}

As it is a valve, you can set it globally, on a host, or on an
individual webapp, allowing you to control concurrency in a fine-grained
way. In theory, you can also add it on individual servlets, but that
requires some hacking. Since it's optional and independent, I think it
is acceptable to use Java 5 for it.

As you pointed out, some applications may run horribly with this (slow
upload is the most glaring example).


It took forever (given it's only 10 lines of code), but I added the 
valve. The class is org.apache.catalina.valves.SemaphoreValve.


So you can add it at the engine level to add a concurrency constraint 
for the whole servlet engine, without constraining the connector (which 
might not be low thread count friendly).


Rémy




Re: Tomcat performance patch (in development) to reduce concurrency...

2005-05-09 Thread Remy Maucherat
Scott Marlow wrote:
Anyway, my point is that this could be a worthwhile enhancement for
applications that run on Tomcat.  What I don't understand yet is whether
the same functionality is already in Tomcat.
I should point out that some applications shouldn't limit the max number
of concurrent requests (long running requests won't benefit but maybe
those applications shouldn't run on the web tier anyway :-)
I agree with the intent, but this is not implemented properly. I think
the idea is to restrict concurrency in the application layer, rather than
at the low level (where, AFAIK, concurrency isn't that expensive, and is
better addressed using a little non-blocking IO). The performance
benefits for certain types of applications will be the same, but without
introducing any unwanted limitations or incorrect behavior at the
connector level.
I think you should write a ConcurrencyValve instead, which would do
something like:
boolean shouldRelease = false;
try {
    concurrencySemaphore.acquire();
    shouldRelease = true;
    getNext().invoke(request, response);
} finally {
    if (shouldRelease)
        concurrencySemaphore.release();
}
As it is a valve, you can set it globally, on a host, or on an
individual webapp, allowing you to control concurrency in a fine-grained
way. In theory, you can also add it on individual servlets, but that
requires some hacking. Since it's optional and independent, I think it
is acceptable to use Java 5 for it.
As you pointed out, some applications may run horribly with this (slow
upload is the most glaring example).
Rémy


Re: Tomcat performance patch (in development) to reduce concurrency...

2005-05-05 Thread Scott Marlow
On Wed, 2005-05-04 at 16:02 +0200, Remy Maucherat wrote:
 Scott Marlow wrote:
  Hi, 
  
  I wonder if anyone has any feedback on a performance change that I am
  working on making. 
  
  One benefit of reducing concurrency in a server application is that a
  small number of requests can complete more quickly than if they had to
  compete against a large number of running threads for object locks (Java
  or externally in a database). 
  
  I would like to have a Tomcat configuration option to set the max number of
  concurrent threads that can service user requests.  You might configure
  Tomcat to handle 800 HTTP client connections but set the max concurrent
  requests to 20 (perhaps higher if you have more CPUs).  I like to refer
  to the max concurrent requests setting as the throttle size (if there is
  a better term, let me know).
  
  I modified the Tomcat Thread.run code to use Doug Lea's semaphore
  support but didn't expose a configuration option (haven't learned how to
  do that yet). My basic change is to allow users to specify the max
  number of concurrent servlet requests that can run. If an application
  has a high level of concurrency, end users may get more consistent
  response time with this change. If an application has a low level of
  concurrency, my change doesn't help as their application only has a few
  threads running concurrently anyway. 
  
  This also reduces resource use on other tiers. For example, if you are
  supporting 500 users with a Tomcat instance, you don't need a database
  connection pool size of 500, instead set the throttle size to 20 and
  create a database connection pool size of 20. 
  
  Current status of the change: 
  
  1. org.apache.tomcat.util.threads.ThreadPool.CONCURRENT_THREADS is
  hardcoded to a value of 18, should be a configurable option. 
  2. I hacked the build scripts to include Doug Lea's concurrent.jar but
  probably didn't make these changes correctly.  I could switch to using
  the Java 1.5 implementation of the Concurrent package but we would still
  need to do something for Java 1.4 compatibility.
  
  Any suggestions on completing this enhancement are appreciated.
  
  Please include my [EMAIL PROTECTED] email address in your response.
 
 I looked at this yesterday, and while it is a cool hack, it is not that 
 useful anymore (and we're also not going to use the concurrent utilities 
 in Tomcat, so it's not really an option before we require Java 5). The 
 main issue is that, because keepalive is done in blocking mode, 
 actual concurrency in the servlet container is unpredictable (the amount 
 of processing threads - maxThreads - will usually be a lot higher than 
 the actual expected concurrency - let's say 100 per CPU). If that issue 
 is solved (we're trying to see if APR is a good solution for it), then 
 the problem goes away.
 
 Your patch is basically a much nicer implementation of maxThreads 
 (assuming it doesn't reduce performance) which would be useful for the 
 regular HTTP connector, so it's cool, but not worth it. Overall, I think 
 the way maxThreads is done in the APR connector is the easiest (if the 
 amount of workers is too high, wait a bit without accepting anything).
 
 However, reading the text of the message, you don't seem to realize that 
 a lot of the threads which would actually be doing processing are just 
 blocking for keepalive (hence not doing anything useful; maybe you don't 
 see it in your test). Anyway, congratulations for understanding that 
 ThreadPool code (I stopped using it for new code, since I think it has 
 some limitations and is too complex).
 
 Rémy
 
 

Thank you for all of the replies!

The benefit of reducing concurrency is for the application code more
than the web container.  I last saw the benefit in action on Novell's
IIOP container when I was working on publishing spec.org benchmark
numbers for
(http://www.spec.org/jAppServer2001/results/res2003q4/jAppServer2001-20031118-00016.html).

Prior to setting the max number of concurrent requests allowed to run at
once, I had about 800 communication threads that were also running
application requests.  The application requests would typically do some
local processing and quite a bit of database i/o (database ran on a
different tier).  With 800 application threads running at once, there
was too much contention on shared Java objects (the Java unfair
scheduler made this worse) and database contention.  Some client
requests would take 2 seconds to complete while others would take 40
seconds.

Luckily the Novell CORBA ORB already had the ability to set the max
number of IIOP requests allowed to run concurrently.  Setting this to 18
didn't impact the communication threads' ability to send/receive but
instead restricted the number of application requests being processed at
once to 

RE: Tomcat performance patch (in development) to reduce concurrency...

2005-05-04 Thread Yoav Shapira
Hi,
Repeatable benchmarks showing a significant improvement for some use case
would be appreciated (certainly) and a prerequisite (probably) for addition
into this relatively core part of Tomcat.  I don't think this is much
different than setting the current maxThreads (and min/max Spare threads) as
opposed to acceptCount: one could set maxThreads to 20 and acceptCount to
500, for example.
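The maxThreads/acceptCount combination Yoav describes maps onto attributes of the HTTP connector in conf/server.xml. A minimal sketch (the port and values here are illustrative, not a recommendation):

```xml
<!-- Hypothetical server.xml fragment: at most 20 request-processing
     threads, with up to 500 further connections queued in the accept
     backlog before new connections start being refused. -->
<Connector port="8080"
           maxThreads="20"
           acceptCount="500" />
```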

Yoav Shapira
System Design and Management Fellow
MIT Sloan School of Management / School of Engineering
Cambridge, MA USA
[EMAIL PROTECTED] / [EMAIL PROTECTED]

 -Original Message-
 From: Scott Marlow [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, May 04, 2005 9:42 AM
 To: tomcat-dev@jakarta.apache.org
 Cc: [EMAIL PROTECTED]
 Subject: Tomcat performance patch (in development) to reduce concurrency...
 
 Hi,
 
 I wonder if anyone has any feedback on a performance change that I am
 working on making.
 
 One benefit of reducing concurrency in a server application is that a
 small number of requests can complete more quickly than if they had to
 compete against a large number of running threads for object locks (Java
 or externally in a database).
 
 I would like to have a Tomcat configuration option to set the max number of
 concurrent threads that can service user requests.  You might configure
 Tomcat to handle 800 HTTP client connections but set the max concurrent
 requests to 20 (perhaps higher if you have more CPUs).  I like to refer
 to the max concurrent requests setting as the throttle size (if there is
 a better term, let me know).
 
 I modified the Tomcat Thread.run code to use Doug Lea's semaphore
 support but didn't expose a configuration option (haven't learned how to
 do that yet). My basic change is to allow users to specify the max
 number of concurrent servlet requests that can run. If an application
 has a high level of concurrency, end users may get more consistent
 response time with this change. If an application has a low level of
 concurrency, my change doesn't help as their application only has a few
 threads running concurrently anyway.
 
 This also reduces resource use on other tiers. For example, if you are
 supporting 500 users with a Tomcat instance, you don't need a database
 connection pool size of 500, instead set the throttle size to 20 and
 create a database connection pool size of 20.
 
 Current status of the change:
 
 1. org.apache.tomcat.util.threads.ThreadPool.CONCURRENT_THREADS is
 hardcoded to a value of 18, should be a configurable option.
 2. I hacked the build scripts to include Doug Lea's concurrent.jar but
 probably didn't make these changes correctly.  I could switch to using
 the Java 1.5 implementation of the Concurrent package but we would still
 need to do something for Java 1.4 compatibility.
 
 Any suggestions on completing this enhancement are appreciated.
 
 Please include my [EMAIL PROTECTED] email address in your response.
 
 Thank you,
 Scott Marlow --- Tomcat newbie





Re: Tomcat performance patch (in development) to reduce concurrency...

2005-05-04 Thread Remy Maucherat
Scott Marlow wrote:
Hi, 

I wonder if anyone has any feedback on a performance change that I am
working on making. 

One benefit of reducing concurrency in a server application is that a
small number of requests can complete more quickly than if they had to
compete against a large number of running threads for object locks (Java
or externally in a database). 

I would like to have a Tomcat configuration option to set the max number of
concurrent threads that can service user requests.  You might configure
Tomcat to handle 800 HTTP client connections but set the max concurrent
requests to 20 (perhaps higher if you have more CPUs).  I like to refer
to the max concurrent requests setting as the throttle size (if there is
a better term, let me know).
I modified the Tomcat Thread.run code to use Doug Lea's semaphore
support but didn't expose a configuration option (haven't learned how to
do that yet). My basic change is to allow users to specify the max
number of concurrent servlet requests that can run. If an application
has a high level of concurrency, end users may get more consistent
response time with this change. If an application has a low level of
concurrency, my change doesn't help as their application only has a few
threads running concurrently anyway. 

This also reduces resource use on other tiers. For example, if you are
supporting 500 users with a Tomcat instance, you don't need a database
connection pool size of 500, instead set the throttle size to 20 and
create a database connection pool size of 20. 

Current status of the change: 

1. org.apache.tomcat.util.threads.ThreadPool.CONCURRENT_THREADS is
hardcoded to a value of 18, should be a configurable option. 
2. I hacked the build scripts to include Doug Lea's concurrent.jar but
probably didn't make these changes correctly.  I could switch to using
the Java 1.5 implementation of the Concurrent package but we would still
need to do something for Java 1.4 compatibility.

Any suggestions on completing this enhancement are appreciated.
Please include my [EMAIL PROTECTED] email address in your response.
I looked at this yesterday, and while it is a cool hack, it is not that 
useful anymore (and we're also not going to use the concurrent utilities 
in Tomcat, so it's not really an option before we require Java 5). The 
main issue is that, because keepalive is done in blocking mode, 
actual concurrency in the servlet container is unpredictable (the amount 
of processing threads - maxThreads - will usually be a lot higher than 
the actual expected concurrency - let's say 100 per CPU). If that issue 
is solved (we're trying to see if APR is a good solution for it), then 
the problem goes away.

Your patch is basically a much nicer implementation of maxThreads 
(assuming it doesn't reduce performance) which would be useful for the 
regular HTTP connector, so it's cool, but not worth it. Overall, I think 
the way maxThreads is done in the APR connector is the easiest (if the 
amount of workers is too high, wait a bit without accepting anything).

However, reading the text of the message, you don't seem to realize that 
a lot of the threads which would actually be doing processing are just 
blocking for keepalive (hence not doing anything useful; maybe you don't 
see it in your test). Anyway, congratulations for understanding that 
ThreadPool code (I stopped using it for new code, since I think it has 
some limitations and is too complex).

Rémy


Re: Tomcat performance patch (in development) to reduce concurrency...

2005-05-04 Thread Mladen Turk
Scott Marlow wrote:
Hi, 

I wonder if anyone has any feedback on a performance change that I am
working on making. 

Can you compare the performance of your code with the standard
implementation when the concurrency is lower than the maxThreads
value?
I see no point in making patches that deal with cases presuming
that the concurrency is always higher than the actual number of
worker threads available.
IMHO this is a bad design approach for HTTP applications,
and NIO performance is proof of that.
It might help in cases where you have very, very slow clients.
In any other case the thread context switching will kill
the performance, though.
Furthermore, I don't see how you can avoid keep-alive connection
problems without using a thread-per-connection model.
The point is that with 100 keep-alive connections you will still
have 100 busy threads.
Regards,
Mladen.


Re: Tomcat performance patch (in development) to reduce concurrency...

2005-05-04 Thread Costin Manolache
Mladen Turk wrote:
Scott Marlow wrote:
Hi,
I wonder if anyone has any feedback on a performance change that I am
working on making.

Can you compare the performance of your code with the standard
implementation when the concurrency is lower than the maxThreads
value?
I see no point in making patches that deal with cases presuming
that the concurrency is always higher than the actual number of
worker threads available.
IMHO this is a bad design approach for HTTP applications,
and NIO performance is proof of that.
It might help in cases where you have very, very slow clients.
In any other case the thread context switching will kill
the performance, though.
Furthermore, I don't see how you can avoid keep-alive connection
problems without using a thread-per-connection model.
The point is that with 100 keep-alive connections you will still
have 100 busy threads.
Why? 100 keep-alive connections don't mean 100 active requests;
in real servers there are many keep-alive connections that are just
waiting for the next request.
In all servers I know, concurrency was higher than the configured number 
of workers - at peak time, at least, which is where performance matters.

Costin


Re: Tomcat performance patch (in development) to reduce concurrency...

2005-05-04 Thread Costin Manolache
Remy Maucherat wrote:
I looked at this yesterday, and while it is a cool hack, it is not that 
useful anymore (and we're also not going to use the concurrent utilities 
in Tomcat, so it's not really an option before we require Java 5). The 
main issue is that due to the fact keepalive is done in blocking mode, 
actual concurrency in the servlet container is unpredictable (the amount 
of processing threads - maxThreads - will usually be a lot higher than 
the actual expected concurrency - let's say 100 per CPU). If that issue 
is solved (we're trying to see if APR is a good solution for it), then 
the problem goes away.
I'm still trying to understand the APR connector, but from what I see it 
is still mapping one socket (keep-alive connection) per thread. 
That's how it always worked - but it's not necessarily the best 
solution. The only thing that is required is to have a thread per active 
request - the sleepy keep-alives don't need a thread (that could be 
implemented using select in APR, or NIO in Java).


Your patch is basically a much nicer implementation of maxThreads 
(assuming it doesn't reduce performance) which would be useful for the 
regular HTTP connector, so it's cool, but not worth it. Overall, I think 
the way maxThreads is done in the APR connector is the easiest (if the 
amount of workers is too high, wait a bit without accepting anything).
That's a tricky issue :-) In some cases (like load balancing) not 
accepting is the right solution, but in other cases dropping connections 
is not what people want (in particular if most of the threads are just 
waiting on keep-alives).

( sorry if I missed some details in the new implementation :-)
Costin



Re: Tomcat performance patch (in development) to reduce concurrency...

2005-05-04 Thread Mladen Turk
Costin Manolache wrote:
Furthermore, I don't see how you can avoid keep-alive connection
problems without using a thread-per-connection model.
The point is that with 100 keep-alive connections you will still
have 100 busy threads.
Why? 100 keep-alive connections don't mean 100 active requests;
in real servers there are many keep-alive connections that are just
waiting for the next request.
Where are they waiting if in blocking mode?
IIRC each is inside a separate thread.
In all servers I know, concurrency was higher than the configured number 
of workers - at peak time, at least, which is where performance matters.

Sure, but this is for accepting new connections, right?
Mladen.


Re: Tomcat performance patch (in development) to reduce concurrency...

2005-05-04 Thread Mladen Turk
Costin Manolache wrote:
I'm still trying to understand the APR connector, but from what I see it 
is still mapping one socket ( 'keep alive' connection ) per thread. 
No it doesn't. If the connection is keep-alive, and there is no activity
for 100ms, the socket is put in the poller, and that thread is freed.
When the next data on that socket arrives, the socket is signaled and
passed to the thread pool.
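The poller described here - one thread multiplexing many idle sockets and handing work off only when data arrives - can be sketched with java.nio's Selector. This is an illustrative demo, not the APR or Tomcat code: it uses an in-process Pipe to stand in for a kept-alive socket, and all class and method names in it are invented for the example:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// Minimal sketch of a poller: one Selector watches an idle channel at no
// per-connection thread cost, and reports it only once data arrives.
public class PollerSketch {

    // Returns how many channels became ready after data was written:
    // 0 while idle, 1 once the "next keep-alive request" arrives.
    static int readyAfterWrite() throws Exception {
        Selector poller = Selector.open();
        Pipe pipe = Pipe.open();             // stands in for a kept-alive socket
        pipe.source().configureBlocking(false);
        pipe.source().register(poller, SelectionKey.OP_READ);

        int whileIdle = poller.selectNow();  // nothing to read yet -> 0 keys

        pipe.sink().write(ByteBuffer.wrap(new byte[] { 1 }));  // data arrives
        int whenReady = poller.select(1000); // the poller wakes up -> 1 key

        pipe.sink().close();
        pipe.source().close();
        poller.close();
        return whenReady - whileIdle;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("channels to hand to the thread pool: " + readyAfterWrite());
    }
}
```

The idle channel costs nothing while registered; only when select() reports it ready would a worker thread be borrowed from the pool, which is the behavior Mladen describes for the APR connector.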
Mladen.


Re: Tomcat performance patch (in development) to reduce concurrency...

2005-05-04 Thread Costin Manolache
Mladen Turk wrote:
Costin Manolache wrote:
I'm still trying to understand the APR connector, but from what I see 
it is still mapping one socket ( 'keep alive' connection ) per thread. 

No it doesn't. If the connection is keep-alive, and there is no activity
for 100ms, the socket is put in the poller, and that thread is freed.
When the next data on that socket arrives, the socket is signaled and
passed to the thread pool.
Mladen.
Sorry, I missed that. So we can have as many idle keep-alive connections as we 
want - only those active are taking threads? Which file implements this 
(the 100ms timeout and poller)? I assume this is only done in the APR 
connector, or is it implemented in Java as well (NIO)?

Costin


Re: Tomcat performance patch (in development) to reduce concurrency...

2005-05-04 Thread Mladen Turk
Costin Manolache wrote:
No it doesn't. If the connection is keep-alive, and there is no activity
for 100ms, the socket is put in the poller, and that thread is freed.
When the next data on that socket arrives, the socket is signaled and
passed to the thread pool.
Mladen.

Sorry, I missed that. So we can have as many 'keep alive' idle as we 
want - only those active are taking threads ?
Yes. You will need APR HEAD if using WIN32 and want more than 64 of them.
On most other platforms, the limit is controlled by ulimit.

Which file implements this 
( the 100ms timeout and poller ) ?
When the active thread finishes a response on a keep-alive connection,
it reads with a 100ms timeout. Thus if the next keep-alive
request comes within 100ms it is handled immediately.
If not, the socket is passed to the poller.
I assume this is only done in the APR 
connector, or is it implemented in java as well ( nio ) ? .

jakarta-tomcat-connectors/jni
We have APR and thin native JNI glue code, basically dealing with apr
types and pointers.
Regards,
Mladen.


Re: Tomcat performance patch (in development) to reduce concurrency...

2005-05-04 Thread Mladen Turk
Costin Manolache wrote:
Which file implements this 
( the 100ms timeout and poller ) ?
Poller is inside:
/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/AprEndpoint.java
100ms timeout and passing to poller is in:
/jakarta-tomcat-connectors/http11/src/java/org/apache/coyote/http11/Http11AprProcessor.java
Rest is inside:
/jakarta-tomcat-connectors/jni
Mladen.


Re: Tomcat performance patch (in development) to reduce concurrency...

2005-05-04 Thread Remy Maucherat
Costin Manolache wrote:
Mladen Turk wrote:
Costin Manolache wrote:
I'm still trying to understand the APR connector, but from what I see 
it is still mapping one socket ( 'keep alive' connection ) per thread. 
No it doesn't. If the connection is keep-alive, and there is no activity
for 100ms, the socket is put in the poller, and that thread is freed.
When the next data on that socket arrives, the socket is signaled and
passed to the thread pool.
Sorry, I missed that. So we can have as many idle keep-alive connections as we 
want - only those active are taking threads? Which file implements this 
(the 100ms timeout and poller)? I assume this is only done in the APR 
connector, or is it implemented in Java as well (NIO)?
What I like is that it does it *and* it still is extremely similar to 
the regular blocking HTTP connector.

From Http11AprProcessor.process:
if (!inputBuffer.parseRequestLine()) {
    // This means that no data is available right now
    // (long keepalive), so that the processor should be recycled
    // and the method should return true
    rp.setStage(org.apache.coyote.Constants.STAGE_ENDED);
    openSocket = true;
    // Add the socket to the poller
    endpoint.getPoller().add(socket, pool);
    break;
}

The 100ms before going to the poller is to optimize a little the 
pipelining case (assuming it does optimize something - I don't know).

Rémy