>>What happens when you do the same operation with Tomcat's native
>>HTTP connector?
>
>I have not tried native Tomcat.  Do you suggest that this will make a
>difference?

It will help determine whether it is a Tomcat problem or
a connector problem.

>One thing I failed to mention earlier was that when the LoadRunner
>scenario fails, Apache will continue to serve HTML, but not JSPs.
>Also, the LoadRunner scenario is a simulation of a user hitting nine
>JSPs and running continuously 100 times. So when I mentioned multiple
>users, what that equates to is 60-80 users hitting nine pages 100
>times. The scenario has some think time (delay) between pages to
>allow (I'm guessing here) sockets to become open.

Silly question, but did you use some kind of JDBC pool to handle
that many JDBC connections at the same time?

>>Did you set Apache to fork more clients to handle the load?
>>The more httpd tasks you have in Apache, the more connections
>>you get to Tomcat, and that may help here.
>
>I am guessing that I am overloading Tomcat. I left this at the
>default; what do you suggest?

If Tomcat alone, via its native HTTP/1.0 connector, works fine,
I suggest you increase Apache's default values:

MinSpareServers         15     
MaxSpareServers         20    
StartServers            25        
MaxClients              250 
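To check how many httpd children Apache has actually forked under load, a quick process count helps. A minimal sketch; the sample `ps` output below is fabricated for illustration, and on a live system you would pipe the real `ps -ef` instead:

```shell
# Fabricated "ps -ef"-style output; on a live system pipe the real ps instead.
sample_ps='root   101    1 /usr/sbin/httpd
nobody 102  101 /usr/sbin/httpd
nobody 103  101 /usr/sbin/httpd
nobody 104  101 /usr/sbin/httpd'

# The [h] bracket trick keeps grep from matching its own process line
# when this is run against live ps output.
echo "$sample_ps" | grep -c '[h]ttpd'   # prints 4: one parent + 3 children
```

If the count sits at MaxClients while requests queue, Apache itself is the bottleneck and raising MaxClients (and the spare-server limits) may help.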
                                                                        
>>Could you play with your config and tell us how many httpd tasks
>>and Ajp13 tasks you got? A basic netstat will tell you how many
>>connections are open between Apache and Tomcat.

>
>So I took your idea about limiting the child forks and ran the 
>scenario with 100 users.
>
>A netstat -a |grep 1521 |wc
>about 621 for 100 users
>
>netstat -a |grep 8009 |wc 
>between 490 and 677 for 100 users

Are you running Apache and Tomcat on the same machine?
If so, we must divide by 2, since netstat shows both ends of each
local connection ....
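To illustrate why the counts must be halved when both servers share a host: netstat lists every local connection twice, once from each endpoint's point of view. A minimal sketch with fabricated netstat output (the ephemeral port numbers are assumptions for illustration; 8009 is the Ajp13 port from above):

```shell
# Fabricated netstat output for Apache and Tomcat on the SAME host:
# each mod_jk connection to port 8009 appears twice, once per endpoint.
sample_netstat='tcp 0 0 localhost:34001 localhost:8009  ESTABLISHED
tcp 0 0 localhost:8009  localhost:34001 ESTABLISHED
tcp 0 0 localhost:34002 localhost:8009  ESTABLISHED
tcp 0 0 localhost:8009  localhost:34002 ESTABLISHED'

lines=$(echo "$sample_netstat" | grep -c 8009)
echo "$((lines / 2))"   # prints 2: the real number of Apache<->Tomcat connections
```

So the 490-677 lines you saw for port 8009 would correspond to roughly 245-338 actual connections.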

You could also do the test with 2 or more Tomcat servers behind Apache.
The great power of mod_jk is load balancing ;-)
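For reference, load balancing is configured in mod_jk's workers.properties. A minimal sketch assuming two Tomcat instances; the host names and worker names below are assumptions for illustration:

```
# workers.properties sketch: two Tomcat instances behind one Apache.
# Host names and worker names are placeholders.
worker.list=loadbalancer

worker.tomcat1.type=ajp13
worker.tomcat1.host=tomcat1.example.com
worker.tomcat1.port=8009
worker.tomcat1.lbfactor=1

worker.tomcat2.type=ajp13
worker.tomcat2.host=tomcat2.example.com
worker.tomcat2.port=8009
worker.tomcat2.lbfactor=1

worker.loadbalancer.type=lb
worker.loadbalancer.balanced_workers=tomcat1,tomcat2
```

In httpd.conf you would then point the JSP mounts at the `loadbalancer` worker instead of a single ajp13 worker.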
