Greetings all.
 
I would appreciate some advice on an issue I am experiencing.  The problem
is as follows:
 
First, let's set the stage:
 
Server:
Windows 2000 Server
1GB RAM
Tomcat 5.0.19
Apache 2.0.46
mod_jk2
 
My configurations are:
 
40 VirtualHosts in Apache, with 40 matching Tomcat Hosts and 1 Context
per Tomcat Host.  Each Apache VirtualHost uses JkUriSet directives to map
JSPs and the relevant servlets through to an AJP/1.3 worker in Tomcat.
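
For context, the mapping in each VirtualHost looks roughly like the
following (the host name, paths, and URI patterns here are placeholders,
not my real config):

<VirtualHost *:80>
    ServerName site01.example.com
    DocumentRoot "C:/sites/site01"

    # Route JSPs and servlet paths to the AJP/1.3 worker via mod_jk2
    <Location "/*.jsp">
        JkUriSet worker ajp13:localhost:8009
    </Location>
    <Location "/servlet/*">
        JkUriSet worker ajp13:localhost:8009
    </Location>
</VirtualHost>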
 
The problem is that periodically the sites stop responding because the
AJP thread pool reaches its maximum limit.  I could raise the maxThreads
attribute and keep using one AJP connector across all sites - but as
additional sites (Hosts) get added to the server I would need to keep
increasing it.
 
The other option is to add a new AJP connector/worker for EACH site
(host).  This, however, could allow too many concurrent connections and
thus increase the memory requirements of the server (which is not a major
problem).  I am unsure which way to go (hence this post).
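
If I went the per-host route, I imagine server.xml would grow one
connector per site, something like the sketch below (the ports, thread
counts, and the idea of one connector per site are my assumptions, not a
tested config - each connector would also need a matching worker on the
Apache side):

<!-- Sketch: one AJP connector per site, each on its own port -->
<Connector port="8009" protocol="AJP/1.3" maxThreads="50" acceptCount="10" />
<Connector port="8010" protocol="AJP/1.3" maxThreads="50" acceptCount="10" />
<!-- ...and so on, one per host -->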
 
Essentially I am trying to figure out a good balance here.  Can anyone
tell me how JK2 works at the connection level?  I know that it uses a
pool - but how does this actually function?  If I have 40 virtual hosts
that use the same worker/connector, when are the connections established,
and how long do they stay open?
 
I ran netstat on the server and found over 167 established connections to
port 8009 (the AJP connector port).
 
Do they close?  Well, this does not seem to be happening, as the server
does "fall over" when the maximum connection limit is reached.
 
My AJP/1.3 connector looks as follows in my server.xml:
 
<Connector port="8009" protocol="AJP/1.3" maxPostSize="0" maxThreads="500"
minSpareThreads="20" acceptCount="30" />
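
One variant I have been wondering about (assuming the AJP connector
actually honours connectionTimeout the way the HTTP connector does - I
have not confirmed this) would be to time out idle connections:

<!-- Sketch only: connectionTimeout is in milliseconds; for AJP the
     default is reportedly -1, i.e. never time out. -->
<Connector port="8009" protocol="AJP/1.3" maxPostSize="0" maxThreads="500"
minSpareThreads="20" acceptCount="30" connectionTimeout="600000" />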
 
Do the maxThreads, minSpareThreads, and acceptCount attributes work for
the AJP/1.3 Connector?  They do not seem to be listed for it in the
Tomcat docs - or maybe I am just confused.
 
Any assistance here would be greatly appreciated!
 
Best regards,
 
Carl
