Re: Clustering with mod_jk

2006-09-02 Thread Rainer Jung
You should try 1.2.18 and, depending on your time frame, update to 1.2.19
once it is released later this month.

We improved the load-balancing code, and with 1.2.19 also the observability
of what's happening.

Try the alternative balancing method B (Busyness) for the load balancer in 1.2.18.

The default method tries to send equal numbers of requests to the
different balancing targets. Once a target gets slow or stuck, it
accumulates hanging requests until it is detected as being in error.
So good timeout values, cping/cpong, and recovery_options are
effective ways to improve the stability of load balancing.
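A minimal workers.properties sketch along those lines (the worker names, host, and timeout values here are illustrative, not taken from this thread):

```
# workers.properties sketch for mod_jk 1.2.18
# (worker names, host, and timeout values are examples, not from the thread)
worker.list=lb

worker.node1.type=ajp13
worker.node1.host=localhost
worker.node1.port=8009
# cping/cpong probe right after connect
worker.node1.connect_timeout=10000
# cping/cpong probe before forwarding each request
worker.node1.prepost_timeout=10000
# give up waiting for a stuck response after 60s
worker.node1.reply_timeout=60000
worker.node1.recovery_options=3

worker.lb.type=lb
# B = Busyness balancing method
worker.lb.method=B
worker.lb.balance_workers=node1
```

The timeouts make a slow or stuck backend fail fast instead of silently accumulating hanging requests.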

Regards,

Rainer

Edoardo Causarano wrote:
 Using mpm_worker gave less impressive results: about half the throughput,
 a much worse load average (well above 5), and lots of swapping. Prefork
 seems to work better on Linux, which surprises me. Anyway, assuming I got
 maxProcessors wrong, I should have seen queues building up at 150*4 = 600;
 instead they start at under 50% of that value.
 
 What makes me think it's a mod_jk issue is that suddenly all request flow
 locks onto one node and stays busy until I restart Apache.
 
 e
 

-
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Clustering with mod_jk

2006-09-01 Thread Edoardo Causarano

Hello List,

scenario:

- 4 node tc 5.0.28 vertical cluster ( :-| same server... still  
testing, but it could have been 8) listening on ajp

<Connector address="x.x.x.x" port="8009"
           maxProcessors="150" minProcessors="50"
           protocol="AJP/1.3"
           protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler"
           redirectPort="8443" />

- 1 httpd 2.0.52 with mod_jk 1.2.15 and a prefork config on RH AS4,
kernel 2.6.9-5.EL;
sticky sessions are disabled to avoid the stress scripts hitting only
one node


<IfModule prefork.c>
StartServers          40
MinSpareServers       80
MaxSpareServers      280
ServerLimit         4096
MaxClients          4096
MaxRequestsPerChild 4096
</IfModule>

- 1 application, deployed as a webapp, that a couple of thousand users
should hammer


What happens is the app handles the stress test fine up to circa 240
users, then starts to die; jkmonitor shows a linear increase in busy and
max requests on only one node and pages hang; disabling that node
moves the hung request handling to the next node.


Where's the bottleneck? Is there a known bug in mod_jk? Should I increase
the number of threads on the Tomcat nodes?


Tnx,
e








Re: Clustering with mod_jk

2006-09-01 Thread Filip Hanik - Dev Lists
Since you are using prefork, you must set cachesize=1 in your
workers.properties file.
However, you have MaxClients 4096; in order to serve that in Tomcat,
your JK connector should have maxProcessors=4096.
An alternative, safe solution, although with much lower performance, is to
set MaxRequestsPerChild 1; that way you can get away with MaxClients 4096
and still have a much lower maxProcessors value on Tomcat.
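A sketch of the corresponding workers.properties (the worker name and host are placeholders, not from the thread):

```
# workers.properties sketch for mod_jk 1.2.15 behind a prefork httpd
# (worker name and host are placeholders)
worker.list=node1
worker.node1.type=ajp13
worker.node1.host=localhost
worker.node1.port=8009
# prefork children are single-threaded, so one connection per child
worker.node1.cachesize=1
```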


Filip










Re: Clustering with mod_jk

2006-09-01 Thread Edoardo Causarano
Using mpm_worker gave less impressive results: about half the throughput, a
much worse load average (well above 5), and lots of swapping. Prefork seems
to work better on Linux, which surprises me. Anyway, assuming I got
maxProcessors wrong, I should have seen queues building up at 150*4 = 600;
instead they start at under 50% of that value.


What makes me think it's a mod_jk issue is that suddenly all request flow
locks onto one node and stays busy until I restart Apache.


e




Re: Clustering with mod_jk

2006-09-01 Thread Filip Hanik - Dev Lists
It is a mod_jk issue in the sense that mod_jk uses permanent connections;
that is how it was designed. Setting MaxRequestsPerChild to 1 will kill the
child, and hence the mod_jk connection; that way you can have
maxProcessors < MaxClients. Otherwise, they must match.
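A back-of-envelope sketch of that constraint (the numbers are the ones from this thread; the check itself is illustrative, not mod_jk code):

```python
# Why permanent AJP connections force maxProcessors to match MaxClients:
# with prefork, every httpd child that has talked to a Tomcat node keeps
# its connection open, so worst-case demand on one node is MaxClients.

MAX_CLIENTS = 4096     # httpd prefork MaxClients (from this thread)
MAX_PROCESSORS = 150   # per-node Tomcat AJP maxProcessors (from this thread)

persistent_demand = MAX_CLIENTS
shortfall = persistent_demand - MAX_PROCESSORS

# 4096 persistent connections vs. 150 processors: the node saturates.
assert persistent_demand > MAX_PROCESSORS
print(shortfall)  # connections the node cannot accept: 3946
```

With MaxRequestsPerChild 1, each connection dies with its child, so only in-flight requests hold connections and maxProcessors can stay well below MaxClients.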


Filip

