I am hoping someone has done something like this and has a solution.
We have a set of clusters, each cluster will process the same url's, but the sessions will only be replicated across to nodes on a cluster. In the configuration, I am hoping to show 2 clusters, each cluster has 2 ajp connections, and are put together using a cluster of lb's.
Everything seems to work at first, until failover to another cluster is required. The lb will attach to a server, and failover inside that cluster. But if both hosts in the cluster are down, the balancer never seems to failover to the second cluster. There were also other problems encountered.....
Here is my workers2.properties [logger] level=DEBUG
[config:] file=${serverRoot}/conf/workers2.properties debug=0 debugEnv=0
[uriMap:] info=Maps the requests. Options: debug debug=0
# Alternate file logger #[logger.file:0] #level=DEBUG #file=${serverRoot}/logs/jk2.log
[shm:]
info=Scoreboard. Required for reconfiguration and status with multiprocess servers
file=${serverRoot}/logs/jk2.shm
size=1000000
debug=0
disabled=0
[workerEnv:] info=Global server options timing=1 debug=0
[lb:lb] info=Default load balancer. debug=0
[lb:cluster_1] info=A cluster load balancer. debug=0
[lb:cluster_2] info=A second cluster load balancer. debug=0
[lb:cluster_balancer] info=A second cluster load balancer. debug=0 worker=lb:cluster_2 worker=lb:cluster_1
[channel.host1:8009] info=Ajp13 forwarding over socket debug=0 tomcatId=host1:8009 group=lb:cluster1
[channel.host2:8009] info=A second tomcat instance. debug=0 tomcatId=host2:8009 group=lb:cluster1
[channel.host3:8009] info=Ajp13 forwarding over socket debug=0 tomcatId=host3:8009 group=lb:cluster2
[channel.host4:8009] info=A second tomcat instance. debug=0 tomcatId=host4:8009 group=lb:cluster2
[channel.jni:jni] info=The jni channel, used if tomcat is started inprocess
[status:] info=Status worker, displays runtime informations
[uri:/jkstatus/*] info=Display status information and checks the config file for changes. group=status:
[uri:/test/*] info=Prefix mapping group=lb:cluster_balancer
--------------------------------------------------------------------- To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]