mod_jk: How to configure separate failover for different JkMounts?

2009-11-23 Thread Tero Karttunen
BACKGROUND INFORMATION:
I have used mod_jk to configure Apache to work as a load balancer for
two Tomcat server instances. To these Tomcat instances, I have
deployed two Web Applications, ts_core_virtual_repository and pum.
These Web Applications are actually simple servlets that DO NOT use
J2EE sessions, so even though I want to retain support for sticky
sessions for future purposes, that is not necessary yet.

I have set up failover for my Web Applications by setting the
following in worker.properties for the loadbalancer workers:

worker.template.fail_on_status=500

This effectively means that any ServletException the Web
Applications throw causes failover to happen: the worker moves to the
ERR state and the request is transparently forwarded to the next
available worker. My stateless servlets expect and are prepared for
this!
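
(For reference, and unless I misread the docs: fail_on_status also
accepts a comma-separated list, and a minus prefix returns that status
to the client without putting the worker into ERR state. A hypothetical
variant:

worker.template.fail_on_status=500,503,-404

would fail over on 500 and 503 but pass 404 straight through.)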

THE CONFIGURATION PROBLEM:
Should the ts_core_virtual_repository application fail by throwing a
ServletException, the loadbalancer also interprets the pum application
as having failed and starts forwarding its requests to other workers. I
would like the loadbalancer to treat the applications individually for
500 Internal Server Error failover purposes. What would be the best
way to do this?

Although we are not short of machine resources, the solution should
not be unnecessarily wasteful and silly - for example, I would NOT
like to create a set of totally new, separate Tomcat server instances
for different applications. Who knows, there might be a third or
fourth web application in the future, so the solution should be
somewhat scalable and maintainable.

MY CURRENT CONFIGURATION:

httpd.conf:
LoadModule jk_module modules/mod_jk-1.2.28-httpd-2.2.3.so
JkWorkersFile conf/ts_tomcat-workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel info
JkLogStampFormat [%a %b %d %H:%M:%S %Y]
JkMount /ts_core_virtual_repository/* loadbalancer
JkMount /jkstatus/* jkstatus
JkMount /pum/* loadbalancer

ts_tomcat-workers.properties:
worker.list=loadbalancer,jkstatus
worker.template.type=ajp13
worker.template.host=localhost
worker.template.port=8110
worker.template.lbfactor=1
worker.template.connection_pool_timeout=600
worker.template.socket_keepalive=true
worker.template.socket_timeout=10
worker.template.ping_mode=A
worker.template.ping_timeout=4000
worker.template.fail_on_status=500
worker.worker1.reference=worker.template
worker.worker1.port=8110
worker.worker2.reference=worker.template
worker.worker2.port=8111
worker.jkstatus.type=status
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=worker1,worker2
worker.loadbalancer.sticky_session=true
worker.loadbalancer.sticky_session_force=false
worker.loadbalancer.recover_time=60
worker.loadbalancer.error_escalation_time=0




Re: mod_jk: How to configure separate failover for different JkMounts?

2009-11-23 Thread Rainer Jung
On 23.11.2009 09:53, Tero Karttunen wrote:
> BACKGROUND INFORMATION:
> I have used mod_jk to configure Apache to work as a load balancer for
> two Tomcat server instances. To these Tomcat instances, I have
> deployed two Web Applications, ts_core_virtual_repository and pum.
> These Web Applications are actually simple servlets that DO NOT use
> J2EE sessions, so even though I want to retain support for sticky
> sessions for future purposes, that is not necessary yet.
>
> I have set up failover for my Web Applications by setting the
> following in worker.properties for the loadbalancer workers:
>
> worker.template.fail_on_status=500
>
> This effectively means that any ServletException the Web
> Applications throw causes failover to happen: the worker moves to the
> ERR state and the request is transparently forwarded to the next
> available worker. My stateless servlets expect and are prepared for
> this!
>
> THE CONFIGURATION PROBLEM:
> Should the ts_core_virtual_repository application fail by throwing a
> ServletException, the loadbalancer also interprets the pum application
> as having failed and starts forwarding its requests to other workers. I
> would like the loadbalancer to treat the applications individually for
> 500 Internal Server Error failover purposes. What would be the best
> way to do this?
>
> Although we are not short of machine resources, the solution should
> not be unnecessarily wasteful and silly - for example, I would NOT
> like to create a set of totally new, separate Tomcat server instances
> for different applications. Who knows, there might be a third or
> fourth web application in the future, so the solution should be
> somewhat scalable and maintainable.

There's only one thing you can do, namely create a separate connector on
the Tomcat side. For this you will also need to use a separate port.

Then you can configure different LBs pointing to the same Tomcat. In
order to make stickiness work (if you need it), you can't rely any more
on the automatism worker name = jvmRoute, because you now have one
jvmRoute per Tomcat, but two workers with different names. For this
situation you can explicitly set a route for a worker that differs from
its name by using the route attribute.
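
A minimal sketch of how that could look. All the new ports, member
names and jvmRoute values below are made-up examples; each Tomcat's
server.xml gets one extra AJP connector.

server.xml (first Tomcat; the second one analogously with ports
8111/8211 and jvmRoute "tomcat2"):

<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">
<!-- existing connector, kept for ts_core_virtual_repository -->
<Connector port="8110" protocol="AJP/1.3" />
<!-- additional connector, used only by the pum balancer -->
<Connector port="8210" protocol="AJP/1.3" />

workers.properties (template and jkstatus workers as before, the
sticky_session/recover_time attributes as in your current LB):

worker.list=lb_core,lb_pum,jkstatus
# members for ts_core_virtual_repository (existing ports)
worker.core1.reference=worker.template
worker.core1.port=8110
worker.core1.route=tomcat1
worker.core2.reference=worker.template
worker.core2.port=8111
worker.core2.route=tomcat2
# members for pum (new ports, same Tomcats, same routes)
worker.pum1.reference=worker.template
worker.pum1.port=8210
worker.pum1.route=tomcat1
worker.pum2.reference=worker.template
worker.pum2.port=8211
worker.pum2.route=tomcat2
worker.lb_core.type=lb
worker.lb_core.balance_workers=core1,core2
worker.lb_pum.type=lb
worker.lb_pum.balance_workers=pum1,pum2

httpd.conf:

JkMount /ts_core_virtual_repository/* lb_core
JkMount /pum/* lb_pum

This way a 500 from pum only puts pum1/pum2 into ERR state, while
lb_core keeps using the same Tomcat instances undisturbed.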

> MY CURRENT CONFIGURATION:
>
> httpd.conf:
> LoadModule jk_module modules/mod_jk-1.2.28-httpd-2.2.3.so
> JkWorkersFile conf/ts_tomcat-workers.properties
> JkLogFile logs/mod_jk.log

You can even use rotatelogs for the JkLogFile ...
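
For example with piped logging (supported since 1.2.24 if I remember
correctly; the rotatelogs path is platform-dependent):

JkLogFile "|/usr/sbin/rotatelogs /var/log/httpd/mod_jk.log.%Y%m%d 86400"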

> JkLogLevel info
> JkLogStampFormat [%a %b %d %H:%M:%S %Y]
> JkMount /ts_core_virtual_repository/* loadbalancer
> JkMount /jkstatus/* jkstatus
> JkMount /pum/* loadbalancer
>
> ts_tomcat-workers.properties:
> worker.list=loadbalancer,jkstatus
> worker.template.type=ajp13
> worker.template.host=localhost
> worker.template.port=8110
> worker.template.lbfactor=1
> worker.template.connection_pool_timeout=600
> worker.template.socket_keepalive=true
> worker.template.socket_timeout=10

Don't use socket_timeout.
Use version 1.2.28 and socket_connect_timeout.
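
For example (note that socket_connect_timeout is in milliseconds,
while socket_timeout was in seconds):

# instead of worker.template.socket_timeout=10
worker.template.socket_connect_timeout=10000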

> worker.template.ping_mode=A
> worker.template.ping_timeout=4000

Relatively small. It could also be triggered by a long GC pause. But
since you are prepared for failover, it might not be a problem.
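
If long GC pauses do cause spurious failovers, the probe window can be
widened, e.g. back to the default (the value is in milliseconds):

worker.template.ping_timeout=10000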

> worker.template.fail_on_status=500
> worker.worker1.reference=worker.template
> worker.worker1.port=8110
> worker.worker2.reference=worker.template
> worker.worker2.port=8111
> worker.jkstatus.type=status
> worker.loadbalancer.type=lb
> worker.loadbalancer.balance_workers=worker1,worker2
> worker.loadbalancer.sticky_session=true
> worker.loadbalancer.sticky_session_force=false
> worker.loadbalancer.recover_time=60
> worker.loadbalancer.error_escalation_time=0

Looks good.

For administratively disabling a worker in a single mapping, there is
also a syntax when using a uriworkermap.properties file. We call it a
mount extension. You can find examples in the docs page for
uriworkermap.properties. But it only works if you as an admin want to
disable individual workers in individual mappings instead of in the
whole LB. It does not work for fail_on_status or other error detection.
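
For illustration, such a mount extension could look like this in
uriworkermap.properties (using the worker names from your config;
"disabled" names the LB members to disable for this mapping only):

/pum/*=loadbalancer;disabled=worker1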

Regards,

Rainer
