No easy solution, I'm afraid. At the moment, fail_on_status only exists for workers, not for workers in the context of mounts.

If you want to go the worker way, there are some simple tricks to keep it from getting too complicated:

- You can use the reference attribute to build worker templates. That way each worker only needs the config lines that are individual to it. Worker templates also work hierarchically, so you can have a very general setup, then some additional settings for all workers belonging to some app etc., and finally the settings for individual workers (see the template sketch after this list).

- If you use multiple workers per Tomcat (e.g. one worker per webapp), then to keep stickiness you can't rely on the automatic rule worker name = jvmRoute. Instead you add the route attribute to the workers, giving multiple workers (webapps) that target the same Tomcat the same route. This could again be done in a template via a reference (see the route sketch after this list).

- Make sure that you use the idle timeout mechanisms of the connection pools, because each webapp lb will have its own connection pool to the target Tomcat (and each connection needs a thread inside the target Tomcat). See the timeout sketch after this list, and have a look at

http://tomcat.apache.org/connectors-doc/generic_howto/timeouts.html

- If you use fail_on_status, make sure you are using version 1.2.25. fail_on_status is relatively new, and we had some fixes for it in the latest versions. At the moment there is no known bug in it (see the fail_on_status sketch after this list).

- Unfortunately there's a bug with "reference" and debug log level for JK in 1.2.25 (the bug really only shows up with debug log level, but then the web server immediately crashes during startup). The fix is in trunk, but not yet released.
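
A minimal sketch of a worker template via the reference attribute. The
names "tmpl", "tcathost1" and "tcathost2" are made up for illustration:

# common settings, defined once (the template itself is not in worker.list)
worker.tmpl.type=ajp13
worker.tmpl.port=8009

# real workers inherit everything from the template and only
# add the settings that are individual to them
worker.tomcat1.reference=worker.tmpl
worker.tomcat1.host=tcathost1

worker.tomcat2.reference=worker.tmpl
worker.tomcat2.host=tcathost2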
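
A sketch of per-webapp workers that share one route, so stickiness keeps
working. The worker names app1_tomcat1 etc. are made up; the route values
have to match the jvmRoute of the target Tomcat instances:

# two webapps, same physical Tomcat, same route
worker.app1_tomcat1.reference=worker.tomcat1
worker.app1_tomcat1.route=tomcat1

worker.app2_tomcat1.reference=worker.tomcat1
worker.app2_tomcat1.route=tomcat1

# each webapp gets its own lb over its own member workers
# (the members for tomcat2-tomcat4 are defined analogously)
worker.app1.type=lb
worker.app1.balance_workers=app1_tomcat1,app1_tomcat2,app1_tomcat3,app1_tomcat4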
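
A timeout sketch; the value 600 is only an example, and setting it on a
template (here "tmpl" from the template sketch) lets all workers inherit it:

# workers.properties: close backend connections idle for more than 10 minutes
worker.tmpl.connection_pool_timeout=600

# the matching setting on the Tomcat AJP connector in server.xml (milliseconds):
# <Connector port="8009" protocol="AJP/1.3" connectionTimeout="600000" ... />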
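
And a fail_on_status sketch: with one worker per webapp, only that webapp's
member goes into error state, not the whole server. The status codes are
just examples:

# put this member into error state when the app answers with 500 or 503
worker.app1_tomcat1.fail_on_status=500,503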

It might be nice to think more deeply about how to structure the objects around lbs/workers on the one side and mounts on the other. At the moment we can only manage mounts, lbs, or workers, but not an lb or worker in the context of a mount.

Regards,

Rainer

James Masson wrote:
Hi list,

I have a fully working mod_jk High-Availability Tomcat environment at
the moment, and I'm looking to start catching web-app failures, as well
as Tomcat server failures.

At the moment, the service looks like this:

Two Alteon hardware load balancers
feeding
Two mod_jk apache servers
feeding
Four Tomcat 5.5 servers

I have at least six applications running on each identical Tomcat
instance, with the incoming connections balanced equally between the
four Tomcat servers.

The config is set up like this:

worker.list=app1,app2,app3,app4,app5

worker.tomcat1.port=8009
...
worker.tomcat2.port=8009
...
worker.tomcat3.port=8009
...
worker.tomcat4.port=8009

worker.app1.type=lb
worker.app1.balance_workers=tomcat1,tomcat2,tomcat3,tomcat4

worker.app2.type=lb
worker.app2.balance_workers=tomcat1,tomcat2,tomcat3,tomcat4

etc.


The applications themselves return a 500 error if they encounter an
internal failure. I want to be able to detect this and redirect around
the failing application instance.

I'm aware I can do this for the Tomcat server worker, but using the
fail_on_status directive will take an entire server out of the cluster.
Using fail_on_status, there's a possibility that one misbehaving
web-app can destroy the whole environment!

Is there a way I can use mod_jk to redirect around a failed application
only, instead of taking out an entire server? Or am I misinterpreting
something?

I think this is possible if I create an AJP worker for each web-app on
each server - but that config will be ridiculously complex, and I'll
likely have problems with jvmRoute variables, and such.

Any ideas?

thanks

James Masson
