Under heavy load, the following errors appear in mod_jk.log:

     [jk_ajp_common.c (681)]: ERROR: can't receive the response message from tomcat, network problems or tomcat is down.
     [jk_ajp_common.c (1050)]: Error reading reply from tomcat. Tomcat is down or network problems.
     [jk_ajp_common.c (1187)]: ERROR: Receiving from tomcat failed, recoverable operation. err=0
     [jk_ajp_common.c (681)]: ERROR: can't receive the response message from tomcat, network problems or tomcat is down.
     [jk_ajp_common.c (1050)]: Error reading reply from tomcat. Tomcat is down or network problems.
     [jk_ajp_common.c (1187)]: ERROR: Receiving from tomcat failed, recoverable operation. err=1
     [jk_ajp_common.c (681)]: ERROR: can't receive the response message from tomcat, network problems or tomcat is down.
     [jk_ajp_common.c (1050)]: Error reading reply from tomcat. Tomcat is down or network problems.
     [jk_ajp_common.c (1187)]: ERROR: Receiving from tomcat failed, recoverable operation. err=2
     [jk_ajp_common.c (1198)]: Error connecting to tomcat. Tomcat is probably not started or is listenning on the wrong port. Failed errno = 104


Some requests are handled just fine while these problems are occurring;
others result in end users receiving an "Internal Server Error" page
from Apache.  The problem corrects itself, the only tell-tale sign being
the messages in the log file.


Based on comments in other threads and on other sites, we made three
changes to the Connector configuration (the resulting element is shown
below):

   - set connectionTimeout="-1" to disable connection timeouts, since on
     very fast machines Apache can apparently time out before Tomcat has
     a chance to respond;
   - raised maxProcessors from the default of 75 to 100, because counting
     the Tomcat processes with ps shows we are consistently running at
     the maximum (a rough sketch of that check follows this list);
   - raised acceptCount from 10 to 100, because our requests involve a
     lot of backend processing and take a fairly long time for Tomcat to
     turn around, so we have a high number of concurrent requests.
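
(The process count check is nothing fancy; on this kernel every JVM
thread shows up as its own entry in ps, so something along the lines of
the command below gives a rough per-box count.  The exact pattern is
illustrative.)

     # Linux 2.4 / LinuxThreads: each JVM thread appears as a separate
     # process, so this roughly counts Tomcat's threads (AJP processors
     # included); the bracket keeps grep from matching itself
     ps ax | grep -c '[j]ava'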



Configuration:
   Apache 2.0.44 connected to Tomcat 4.1.24 with mod_jk, load-balancing
   between 2 Tomcat instances

Environment:
   Red Hat Linux (kernel 2.4.18-24.7.xsmp)
   Dell PowerEdge 1650
   4 GB RAM

From server.xml:
         <Connector className="org.apache.ajp.tomcat4.Ajp13Connector"
                    port="11009"
                    minProcessors="5" maxProcessors="100" acceptCount="100"
                    debug="0" connectionTimeout="-1" enableLookups="false"/>

I've configured each Tomcat instance with a 1 GB heap (-Xmx), although
after running with GC logging enabled (-Xloggc) it appears that each
instance is actually using only 65-120 MB.
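
(In case it matters, the heap and GC-logging flags are passed to each
instance via CATALINA_OPTS before calling catalina.sh; something like
the line below, with the GC log path being illustrative:)

         # 1 GB heap plus GC logging for one instance
         CATALINA_OPTS="-Xmx1024m -Xloggc:/var/log/tomcat/gc-instance1.log"
         export CATALINA_OPTS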

It appears to me that the problem lies in the communication between
Apache and Tomcat, and since we only see it during periods of heavy
usage, I have been concentrating my tuning efforts on the Connector
parameters.
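
(For completeness, the mod_jk directives on the Apache side look roughly
like the sketch below; the file paths are illustrative.  Turning
JkLogLevel up to debug should give more detail around the failures if
that would help.)

         # httpd.conf (paths illustrative)
         JkWorkersFile /etc/httpd/conf/workers.properties
         JkLogFile     /var/log/httpd/mod_jk.log
         JkLogLevel    error    # raise to "debug" for more detail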

But my recent changes (maxProcessors from 75 to 100 and acceptCount from
10 to 100) don't seem to have alleviated the problem.  Is it simply that
they should be turned up further, or am I barking up the wrong tree?  

Any thoughts would be immensely appreciated.



