Joe Hansen wrote:
Thank you for the reply, Andre.
I now understand how setting KeepAlive to On would improve the
performance of a website (The Apache manual says that a 50% increase
in throughput could be expected). So I changed the KeepAlive to On and
restarted the server.
Now wait.
You should probably then lower your setting for KeepAliveTimeout (to 3
seconds, for example), otherwise you may make the problem much worse.
Read the relevant Apache doc page carefully:
http://httpd.apache.org/docs/2.2/mod/core.html#keepalive
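For instance (a sketch only, adjust the numbers to your situation), the
relevant directives in httpd.conf would look like:

```apache
# Reuse one TCP connection for several requests from the same browser
KeepAlive On
# At most this many requests per connection (0 = unlimited)
MaxKeepAliveRequests 100
# But give up quickly on an idle connection, so the child/thread
# returns to the pool instead of waiting for nothing
KeepAliveTimeout 3
```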
The point with KeepAlive is:
- the browser makes a connection and issues a first request
- the webserver dedicates a child (or thread) to this connection, and
passes it the first request
- the child/thread responds to the first request, and then waits for more
- the browser, in the response page, finds more links. Over the same TCP
connection, it sends the next request
- the same child/thread - which was waiting on that connection -
receives the new request, and responds to it. Then it waits again for
the next one.
- etc..
- until at some point, the browser does not issue any additional
requests on the connection. Then, *after the KeepAliveTimeout has
expired*, the child/thread gives up, closes the connection, and returns
to the pool, available for other requests from other browsers
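On the wire, the exchange above looks roughly like this (simplified;
the hostname, paths and header values are made up):

```
GET /index.html HTTP/1.1         <- first request on the new connection
Host: www.example.com

HTTP/1.1 200 OK                  <- response; connection stays open
Keep-Alive: timeout=3, max=100
Content-Length: 1234

GET /style.css HTTP/1.1          <- next request, same TCP connection
Host: www.example.com

(connection idle for KeepAliveTimeout seconds -> server closes it)
```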
So the point is: if the KeepAliveTimeout is long (15 seconds, say), a
child/thread may be kept waiting uselessly for up to that many seconds,
even though nothing more is coming on that connection.
I however wonder if this will fix the issue. The reason being, I
haven't changed the website code at all in the past few months, and
there hasn't been any increase in the website traffic either. Hence I
am unable to understand why we are suddenly seeing an increase in the
number of httpd processes. The only thing I changed is the
session-timeout value from 30 minutes to 240 minutes.
I guess that this is the Tomcat session timeout. That should have
nothing to do with the above. I don't think that for Tomcat, a
"session" is linked to a connection. It is more of a set of data saved
somewhere, linked to the Tomcat session-id (the JSESSIONID cookie for
instance). Tomcat retrieves it whenever a request comes in with the
same session-id. But it should not matter whether that request arrives
on the same TCP connection or not.
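To illustrate (the cookie value and paths here are invented): the
session follows the cookie, not the connection:

```
HTTP/1.1 200 OK
Set-Cookie: JSESSIONID=1A2B3C4D5E6F; Path=/myapp

(the browser may later open a *new* TCP connection, yet...)

GET /myapp/next.jsp HTTP/1.1
Cookie: JSESSIONID=1A2B3C4D5E6F    <- same session data is retrieved
```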
What may be linked together however, is that one request to httpd
results in one child/thread busy with it at the Apache httpd level. If
that request is being forwarded to Tomcat by mod_jk, then it also holds
onto one mod_jk/Tomcat connection. This connection then holds on to one
thread in Tomcat, until the Tomcat thread (+webapp) has supplied the
full response. All the while, this whole chain is unavailable for other
requests. Thus, if there are many such requests under way, many Apache
children/threads are busy, and Apache httpd will start additional ones
(up to its limit) to service new requests that come in.
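The limits along that chain are each set by their own directive (the
numbers and the worker name "worker1" below are only an illustration,
not recommendations):

```apache
# httpd.conf (prefork MPM): max simultaneous httpd children
MaxClients 256

# workers.properties (mod_jk): connections each httpd child keeps to
# Tomcat (with prefork, each child holds at most one)
worker.worker1.connection_pool_size=1

# server.xml (Tomcat AJP connector): max request-processing threads
<Connector port="8009" protocol="AJP/1.3" maxThreads="256" />
```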
So if, for some reason, your Tomcat requests now take longer to be
serviced, that would in turn increase the number of httpd
children/threads being started.
The bottleneck would be in Tomcat, but it would show up at the httpd level.