----------------------------------------------------------------
BEFORE YOU POST, search the faq at <http://java.apache.org/faq/>
WHEN YOU POST, include all relevant version numbers, log files,
and configuration files.  Don't make us guess your problem!!!
----------------------------------------------------------------

At 08:44 PM 3/14/00 -0800, you wrote:
> >
> > More effective to me would simply be a much lower timeout on the
> > Apache side - when a JServ "freezes" on a resource - Apache
> > continues to send requests to it without marking it failed even
> > though it's not getting any responses back.
>
>Uh, no. That doesn't stop any JServ threads from processing, or from
>consuming a precious connection from the pool. In fact, I believe you
>are increasing the chances that all JServ connection threads will be
>busy on any given request.

I have no idea what you're replying to.

If you have a JServ in a load-balanced pool that is "stuck" (on a JDBC
resource, for example) such that NO responses are coming back, Apache
will continue to send requests to that "stuck" JServ until it has so
many threads it just core dumps.  If Apache had a shorter timeout
waiting for a JServ response AND marked the JServ dead after hitting
that timeout, hundreds of requests could be redirected to a working
JServ instead of sitting in limbo forever waiting on the "frozen"
JServ.
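To make the idea concrete, here is a minimal sketch of the mark-dead
behavior I mean. None of these class or method names come from mod_jserv
itself; the backends and the 100 ms timeout are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class FailoverSketch {

    static class Backend {
        final String name;
        boolean dead = false;          // set true after a timeout
        final long responseMillis;     // simulated response time

        Backend(String name, long responseMillis) {
            this.name = name;
            this.responseMillis = responseMillis;
        }
    }

    static final long TIMEOUT_MILLIS = 100; // much shorter than 300s

    // Try backends in order, skipping dead ones; mark a backend dead
    // the first time it exceeds the timeout instead of retrying forever.
    static String route(List<Backend> pool) {
        for (Backend b : pool) {
            if (b.dead) continue;
            if (b.responseMillis > TIMEOUT_MILLIS) {
                b.dead = true;         // stop sending traffic here
                continue;
            }
            return b.name;             // healthy backend takes the request
        }
        return null;                   // every backend is down
    }

    public static void main(String[] args) {
        List<Backend> pool = new ArrayList<>();
        pool.add(new Backend("jserv1", Long.MAX_VALUE)); // "frozen" on JDBC
        pool.add(new Backend("jserv2", 20));             // healthy

        // The first request hits the frozen JServ, times out, marks it
        // dead, and falls through to jserv2; later requests skip jserv1.
        System.out.println(route(pool));
        System.out.println(pool.get(0).dead);
        System.out.println(route(pool));
    }
}
```

The point is that one timed-out request is enough to take the frozen
backend out of rotation, instead of queueing hundreds more behind it.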

>Once Apache times out (or the user hits the stop button), Apache will
>not be sending any data back to the client, so why should the JServ VM
>continue to consume a resource? Or worse, a queue of timed out requests?

Apache times out on the JServ after 300 seconds, which is plenty of time
for hundreds (thousands?) of requests to pile up on a locked JServ.

>if the resource is frozen waiting on a JDBC response, at least the
>thread is in an efficient wait state. (OK that's a minor one)

Irrelevant if all threads hit this wait state and you end up with 5,000
threads.
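The wait can at least be bounded on the JServ side. Where the JDBC
driver supports it, Statement.setQueryTimeout() does this at the query
level; more generally, the servlet thread can refuse to block forever.
A rough sketch, with the hung JDBC call simulated by a sleeping worker
thread (the 200 ms timeout is an arbitrary value for illustration):

```java
public class BoundedWaitSketch {
    public static void main(String[] args) throws InterruptedException {
        Thread jdbcCall = new Thread(() -> {
            try {
                Thread.sleep(Long.MAX_VALUE); // simulate a hung JDBC driver
            } catch (InterruptedException e) {
                // interrupted: abandon the call
            }
        });
        jdbcCall.setDaemon(true); // don't keep the VM alive for it
        jdbcCall.start();

        jdbcCall.join(200); // wait at most 200 ms, not forever
        if (jdbcCall.isAlive()) {
            System.out.println("timed out; thread returns to the pool");
        } else {
            System.out.println("completed");
        }
    }
}
```

With a bound like this, a frozen resource costs you one timeout per
request rather than one permanently stuck thread per request.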

>if you decide to dump content at the end of jserv processing rather than
>streaming it out, you have an architectural issue impacting scalability.
>This is precisely the problem the Cocoon group was facing with their
>first version. Essentially, they would read in a dataset and then send
>the result (there was a transformation in between, but this is the basic
>sequence). They have since moved to a more streamed approach with great
>results for scalability.

What was the scalability problem?  Using too much memory buffering up
the response, or increased response time?
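For reference, the buffered-vs-streamed difference being described can
be sketched as follows. This is an illustration only, not Cocoon's
actual API; the row-producing loop stands in for whatever
transformation sits between the dataset and the client:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class StreamingSketch {

    // Buffered: build the whole response in memory, then send it.
    static void buffered(OutputStream client, int rows) throws IOException {
        StringBuilder whole = new StringBuilder();
        for (int i = 0; i < rows; i++) {
            whole.append("row ").append(i).append("\n"); // whole dataset in RAM
        }
        client.write(whole.toString().getBytes());
    }

    // Streamed: transform and send one row at a time.
    static void streamed(OutputStream client, int rows) throws IOException {
        for (int i = 0; i < rows; i++) {
            client.write(("row " + i + "\n").getBytes()); // one row in RAM
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream a = new ByteArrayOutputStream();
        ByteArrayOutputStream b = new ByteArrayOutputStream();
        buffered(a, 3);
        streamed(b, 3);
        // Both produce identical bytes; only peak memory use differs,
        // and the streamed client sees its first byte much sooner.
        System.out.println(a.toString().equals(b.toString()));
    }
}
```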



--
--------------------------------------------------------------
Please read the FAQ! <http://java.apache.org/faq/>
To subscribe:        [EMAIL PROTECTED]
To unsubscribe:      [EMAIL PROTECTED]
Archives and Other:  <http://java.apache.org/main/mail.html>
Problems?:           [EMAIL PROTECTED]
