Re: pure tomcat failover (no load balancing)

2011-04-28 Thread Guillaume Favier
Hi Felix,

To keep you posted: your solution is working smoothly. The error came from
the redirect being set to the cluster name instead of the jvmRoute.

My point (and for now it is a purely theoretical question, as I don't have the
need) was: if I want to add a third or fourth server (for load reasons), I
will have the following:
* 1 fails over to 2
* 2 fails over to 3
* 3 fails over to 1
vs. using the lbfactor solution:
* instance 1 fails over to cluster 1, which still has the 2nd and 3rd instances
with an lbfactor of 1 each
* instance 2 fails over to cluster 2 ...
* ...
- in this solution, if the failover is triggered, the load balancing will be
used

So both solutions have pros and cons; I think I will need to see real-life
behavior to choose between them.
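A rough, untested sketch of that three-server redirect ring (worker names,
routes, and ports are illustrative, and whether chained redirects behave this
way in mod_jk would need testing; as Felix points out further down the thread,
redirect takes the jvmRoute of the target, not the worker name):

worker.list=cluster1

worker.cluster1.type=lb
worker.cluster1.balance_workers=c1t1,c1t2,c1t3

worker.c1t1.type=ajp13
worker.c1t1.host=localhost
worker.c1t1.port=9001
worker.c1t1.route=tomcat1
worker.c1t1.redirect=tomcat2

worker.c1t2.type=ajp13
worker.c1t2.host=localhost
worker.c1t2.port=9002
worker.c1t2.route=tomcat2
worker.c1t2.activation=disabled
worker.c1t2.redirect=tomcat3

worker.c1t3.type=ajp13
worker.c1t3.host=localhost
worker.c1t3.port=9003
worker.c1t3.route=tomcat3
worker.c1t3.activation=disabled
worker.c1t3.redirect=tomcat1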

Thanks for the help
gui



On Thu, Apr 28, 2011 at 5:44 PM, Felix Schumacher 
felix.schumac...@internetallee.de wrote:



 Christopher Schultz ch...@christopherschultz.net schrieb:

 
 Felix and Guillaume,
 
 I think you guys are working too hard for this. Isn't it as simple as
 using a redirect from each worker to the other, with no clustering or
 anything like that? You don't even need to set jvmRoute, etc. since
 there's no cluster.
 If you are willing to run all webapps on both servers, you are right.
 But the original requirement was to have one set of webapps on one server
 and one set on the other, with failover in case of downtime of either one.

 Felix
 
 -chris
 
 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org







Re: pure tomcat failover (no load balancing)

2011-04-27 Thread Guillaume Favier
Thanks Felix, that might do the trick. I'll test it and get back to you.
Nice hack, BTW.
gui

On Wed, Apr 27, 2011 at 8:58 AM, Felix Schumacher 
felix.schumac...@internetallee.de wrote:

 On Tue, 26 Apr 2011 21:24:16 +0100, Guillaume Favier wrote:

 Thanks for your answer Felix,

 Well, after rethinking my original answer, I think you will have to define
 two clusters:

  worker.list=cluster1,cluster2

 where each cluster worker has two distinct members

  worker.cluster1.type=lb
  worker.cluster1.balance_workers=c1t1,c1t2

  worker.cluster2.type=lb
  worker.cluster2.balance_workers=c2t1,c2t2

 and four workers for your two Tomcats

  # workers for cluster1
  worker.c1t1.route=tomcat1
  worker.c1t1.type=ajp13
  worker.c1t1.host=localhost
  worker.c1t1.port=9001
  worker.c1t1.redirect=c1t2

  worker.c1t2.route=tomcat2
  worker.c1t2.type=ajp13
  worker.c1t2.host=localhost
  worker.c1t2.port=9002
  worker.c1t2.activation=disabled

  # workers for cluster2
  worker.c2t1.route=tomcat1
  worker.c2t1.type=ajp13
  worker.c2t1.host=localhost
  worker.c2t1.port=9001
  worker.c2t1.activation=disabled

  worker.c2t2.route=tomcat2
  worker.c2t2.type=ajp13
  worker.c2t2.host=localhost
  worker.c2t2.port=9002
  worker.c2t2.redirect=c1t1

 You will have to set jvmRoute in your tomcats to tomcat1 and tomcat2.
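 For reference, jvmRoute is set as an attribute on the Engine element in each
 instance's server.xml (shown here for tomcat1; the other instance would get
 jvmRoute="tomcat2"):

  <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">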

 To mount your webapps, you can use

  JkMount /ABC* cluster1
  JkMount /DEF* cluster2

 Regards

  Felix




Re: pure tomcat failover (no load balancing)

2011-04-27 Thread Guillaume Favier
Hi Felix,


That's strange: it doesn't try to connect to the c1t2 worker. Here is the
log file:

[Wed Apr 27 11:45:33 2011] [4129:47406689800960] [info]
jk_open_socket::jk_connect.c (626): connect to 127.0.0.1:9001 failed
(errno=111)
[Wed Apr 27 11:45:33 2011] [4129:47406689800960] [info]
ajp_connect_to_endpoint::jk_ajp_common.c (959): Failed opening socket to (
127.0.0.1:9001) (errno=111)
[Wed Apr 27 11:45:33 2011] [4129:47406689800960] [error]
ajp_send_request::jk_ajp_common.c (1578): (c1t1) connecting to backend
failed. Tomcat is probably not started or is listening on the wrong port
(errno=111)
[Wed Apr 27 11:45:33 2011] [4129:47406689800960] [info]
ajp_service::jk_ajp_common.c (2543): (c1t1) sending request to tomcat failed
(recoverable), because of error during request sending (attempt=1)
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [info]
jk_open_socket::jk_connect.c (626): connect to 127.0.0.1:9001 failed
(errno=111)
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [info]
ajp_connect_to_endpoint::jk_ajp_common.c (959): Failed opening socket to (
127.0.0.1:9001) (errno=111)
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [error]
ajp_send_request::jk_ajp_common.c (1578): (c1t1) connecting to backend
failed. Tomcat is probably not started or is listening on the wrong port
(errno=111)
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [info]
ajp_service::jk_ajp_common.c (2543): (c1t1) sending request to tomcat failed
(recoverable), because of error during request sending (attempt=2)
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [error]
ajp_service::jk_ajp_common.c (2562): (c1t1) connecting to tomcat failed.
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [info]
service::jk_lb_worker.c (1388): service failed, worker c1t1 is in error
state
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [info]
service::jk_lb_worker.c (1440): Forcing recovery once for 1 workers
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [info]
jk_open_socket::jk_connect.c (626): connect to 127.0.0.1:9001 failed
(errno=111)
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [info]
ajp_connect_to_endpoint::jk_ajp_common.c (959): Failed opening socket to (
127.0.0.1:9001) (errno=111)
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [error]
ajp_send_request::jk_ajp_common.c (1578): (c1t1) connecting to backend
failed. Tomcat is probably not started or is listening on the wrong port
(errno=111)
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [info]
ajp_service::jk_ajp_common.c (2543): (c1t1) sending request to tomcat failed
(recoverable), because of error during request sending (attempt=1)
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [info]
jk_open_socket::jk_connect.c (626): connect to 127.0.0.1:9001 failed
(errno=111)
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [info]
ajp_connect_to_endpoint::jk_ajp_common.c (959): Failed opening socket to (
127.0.0.1:9001) (errno=111)
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [error]
ajp_send_request::jk_ajp_common.c (1578): (c1t1) connecting to backend
failed. Tomcat is probably not started or is listening on the wrong port
(errno=111)
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [info]
ajp_service::jk_ajp_common.c (2543): (c1t1) sending request to tomcat failed
(recoverable), because of error during request sending (attempt=2)
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [error]
ajp_service::jk_ajp_common.c (2562): (c1t1) connecting to tomcat failed.
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [info]
service::jk_lb_worker.c (1388): service failed, worker c1t1 is in error
state
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [info]
service::jk_lb_worker.c (1457): All tomcat instances failed, no more workers
left (attempt=0, retry=1)
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [info]
service::jk_lb_worker.c (1457): All tomcat instances failed, no more workers
left (attempt=1, retry=1)
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [info]
service::jk_lb_worker.c (1468): All tomcat instances are busy or in error
state
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [error]
service::jk_lb_worker.c (1473): All tomcat instances failed, no more workers
left
[Wed Apr 27 11:45:34 2011] [4129:47406689800960] [info] jk_handler::mod_jk.c
(2627): Service error=0 for worker=cluster1


I implemented a workaround using lbfactor:
worker.c1t1.lbfactor=100
worker.c1t1.redirect=cluster1

worker.c1t2.lbfactor=1
worker.c1t2.redirect=cluster1
#worker.c1t2.activation=disabled

It is very unlikely that I get 100 requests on one server.
This looks good, but it is a pretty complex configuration if we move up to
three servers, and the complexity will increase with the number of servers.
It seems that load balancing is easier than failover.

gui



On Wed, Apr 27, 2011 at 9:21 AM, Felix Schumacher 
felix.schumac...@internetallee.de wrote:

 On Wed, 27 Apr 2011 09:58:45 +0200, Felix Schumacher wrote:

 On Tue, 26 Apr 2011 21:24:16 +0100, Guillaume Favier wrote:

 Thanks for your answer

Re: pure tomcat failover (no load balancing)

2011-04-27 Thread Guillaume Favier
Felix,

Did you check my workaround?


On Wed, Apr 27, 2011 at 7:01 PM, Felix Schumacher 
felix.schumac...@internetallee.de wrote:

 Am Mittwoch, den 27.04.2011, 10:21 +0200 schrieb Felix Schumacher:
  On Wed, 27 Apr 2011 09:58:45 +0200, Felix Schumacher wrote:
   On Tue, 26 Apr 2011 21:24:16 +0100, Guillaume Favier wrote:
   Thanks for your answer Felix,
   Well, after rethinking my original answer, I think you will have to
   define two clusters:
  
 worker.list=cluster1,cluster2
  
   ...
 worker.c2t2.type=ajp13
 worker.c2t2.host=localhost
 worker.c2t2.port=9002
 worker.c2t2.redirect=c1t1
   Aargh, this should be
 worker.c2t2.redirect=c2t1
 Ok, last correction. redirect takes the name of the jvmRoute, not that
 of the worker. So those two configuration entries should be

 worker.c2t2.redirect=tomcat1
 worker.c1t1.redirect=tomcat2


Argh, you're right. But with my workaround you can avoid dealing with the
route; it is a bit more scalable.

I implemented a workaround using lbfactor:
worker.c1t1.lbfactor=100
worker.c1t1.redirect=cluster1

worker.c1t2.lbfactor=1
worker.c1t2.redirect=cluster1
#worker.c1t2.activation=disabled

It is very unlikely that I get 100 requests on one server.
This looks good, but it is a pretty complex configuration if we move up to
three servers, and the complexity will increase with the number of servers.
It seems that load balancing is easier than failover.

gui





Re: pure tomcat failover (no load balancing)

2011-04-27 Thread Guillaume Favier
On Wed, Apr 27, 2011 at 8:55 PM, Felix Schumacher 
felix.schumac...@internetallee.de wrote:

 Am Mittwoch, den 27.04.2011, 19:20 +0100 schrieb Guillaume Favier:
  Felix,
 
  Did you check my workaround?
 
 
  On Wed, Apr 27, 2011 at 7:01 PM, Felix Schumacher 
  felix.schumac...@internetallee.de wrote:
 
   Am Mittwoch, den 27.04.2011, 10:21 +0200 schrieb Felix Schumacher:
On Wed, 27 Apr 2011 09:58:45 +0200, Felix Schumacher wrote:
 On Tue, 26 Apr 2011 21:24:16 +0100, Guillaume Favier wrote:
 Thanks for your answer Felix,
 Well, after rethinking my original answer, I think you will have to
 define two clusters:

   worker.list=cluster1,cluster2

 ...
   worker.c2t2.type=ajp13
   worker.c2t2.host=localhost
   worker.c2t2.port=9002
   worker.c2t2.redirect=c1t1
 Aargh, this should be
   worker.c2t2.redirect=c2t1
   Ok, last correction. redirect takes the name of the jvmRoute, not that
   of the worker. So those two configuration entries should be
  
   worker.c2t2.redirect=tomcat1
   worker.c1t1.redirect=tomcat2
  
  
  argh you're right, but with my work around you can avoid dealing with the
  route, it is a bit more scalable.
 
  I implement a workaround by dealing with lbfactor :
  worker.c1t1.lbfactor=100
  worker.c1t1.redirect=cluster1
 
  worker.c1t2.lbfactor=1
  worker.c1t2.redirect=cluster1
  #worker.c1t2.activation=disabled
 
  It is very unlikely that i get 100 request on one server.
  This does looks good but a pretty complex configuration if we move up to
  three server. And complexity will increase with the number of server.
  Seems that load balancing is easier than failover.
 I don't think lbfactor is the right solution for your problem, but I
 haven't checked it. I think your setup will pass 100 requests to worker
 c1t1 and then 1 request to worker c1t2 (probably a big simplification).
 That will trigger the servlets on your failover instance, which is what
 you wanted to avoid.


I am not fully convinced by either my workaround or your solution, but for now
that is the best option. Still looking for a better one.
I will put an lbfactor of 10, which I expect will prevent any request from
reaching c1t2.
And if c1t1 fails, c1t2 will take all requests.
- So I have a de facto working failover. And it is scalable, because I can add
a c1t3 with an lbfactor of 1.
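A sketch of that scaled-out variant (values illustrative and untested; note
Felix's caveat above that low-lbfactor workers can still receive the
occasional request):

worker.cluster1.type=lb
worker.cluster1.balance_workers=c1t1,c1t2,c1t3

worker.c1t1.lbfactor=10
worker.c1t2.lbfactor=1
worker.c1t3.lbfactor=1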


 As stated in my correction above, redirect takes the name of the jvmRoute,
 and I doubt that your Tomcat instance is called cluster1, so that statement
 will be wrong.

I got it, and thanks for pointing that out; if I had RTFMed correctly earlier,
I might have spared quite a lot of time.


 You are right that load balancing is simpler than my example with two
 clusters, but that is because your original requirements were more complex
 than simple load balancing.


 If you have the memory resources for simple load balancing, I would go for
 it.


I can't afford it. As I noted in my original mail, one webapp is around
400M, and at any time I have 6-8 webapps (and increasing) started on each
server. I would go from 5-6 GB to potentially 10-12; I am pretty sure some
people would disagree with that (me, for one). With this solution almost all
the memory is used (little spare), and if something fails I have enough time
to react.

Regards
gui


 Regards
  Felix
 
  gui
 
 
 
 

Re: pure tomcat failover (no load balancing)

2011-04-26 Thread Guillaume Favier
Sorry for the double post, but I didn't see any remarks on this thread.
This is a tricky question (at least for me), and I am a bit stuck here.

thanks
gui





Re: pure tomcat failover (no load balancing)

2011-04-26 Thread Guillaume Favier
Thanks for your answer Felix,


On Tue, Apr 26, 2011 at 8:36 PM, Felix Schumacher 
felix.schumac...@internetallee.de wrote:

 On Mon, 25 Apr 2011 09:40:59 +0100, Guillaume Favier wrote:

 Hi,

 I have two Tomcat 5.5 servers, each of them handling a set (50+) of
 third-party webapps named /ABC* and /DEF*.
 Each of these webapps is quite memory-consuming once started (more than
 300M).
 I would like all connections to the ABC* webapps to be handled by Tomcat
 server 1, and connections to the DEF* webapps to be handled by Tomcat
 server 2.

 My objectives are:
 * server 1 to be the failover of server 2, and server 2 the failover of
 server 1.
 * any webapp should be instantiated on only one server; otherwise it might
 trigger a memory overload

 So I set up my httpd.conf as follows:

 JkWorkersFile conf/worker.properties
  JkOptions +ForwardKeySize +ForwardURICompat


 and my worker.properties as follows:

 worker.list = failover

 # 
 # template
 # 
 worker.template.type=ajp13
 worker.template.lbfactor=1
 worker.template.connection_pool_timeout=600
 worker.template.socket_timeout=1000
 worker.template.fail_on_status=500

 # 
 # tomcat1
 # 
 worker.tomcat1.reference=worker.template
 worker.tomcat1.port=9001
 worker.tomcat1.host=localhost
 worker.tomcat1.mount=/ABC* /ABC/*
  worker.tomcat1.redirect=failover

 # 
 # tomcat2
 # 
 worker.tomcat2.reference=worker.template
 worker.tomcat2.port=9002
 worker.tomcat2.host=localhost
 worker.tomcat1.mount=/DEF* /DEF/*

   ^ is this correct or a typo?


Sorry for the typo, you're right : it is in fact :
worker.tomcat2.mount=/DEF* /DEF/*


  worker.tomcat2.redirect=failover


 # 
 # failover
 # 
 worker.failover.type=lb
 worker.failover.balance_workers=tomcat1,tomcat2

 The jvmRoute is set in both server.xml files.

 Previously I had put the JkMount directive in httpd.conf, but I couldn't
 make the failover work, so I moved it into worker.properties.
 Tomcat doesn't seem to take the JkMount directive in worker.properties
 into account: a webapp is started indifferently on either server.

 Tomcat starts all webapps it can find, not only those you specified by a
 JkMount. Servlets will only start if you specify a startup order or trigger
 a request to a servlet.


Ok, maybe I should clarify:
1) Tomcat starts all webapps.
2) When a user connects to a specific webapp, all objects are instantiated,
and therefore the memory footprint drastically increases.
I want to work on the second point: a webapp should be instantiated on only
one server.



 So I don't think it is possible to prevent a webapp from starting in the
 failover tomcat. But it
 should be possible to limit its memory footprint.
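 One way to bound that footprint (a sketch; the heap values here are
 illustrative and would need tuning per installation) is to cap each
 instance's JVM heap via CATALINA_OPTS before starting it:

  CATALINA_OPTS="-Xms512m -Xmx2048m"
  export CATALINA_OPTS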


I have done some optimisation here and already removed all shared classes,
jars, etc.


 That said, I find it strange that you define a special failover worker
 instead of a direct redirect like

 worker.tomcat1.redirect=tomcat2
 worker.tomcat2.redirect=tomcat1


But that would mean (a solution I already tested): I would have to declare it
in the worker list, so when a server fails, httpd will keep trying to contact
it instead of contacting the failover worker and finding another worker.
- And even if it works, it would only work for 2 servers, not for 3.

thanks
gui


pure tomcat failover (no load balancing)

2011-04-25 Thread Guillaume Favier
Hi,

I have two Tomcat 5.5 servers, each of them handling a set (50+) of
third-party webapps named /ABC* and /DEF*.
Each of these webapps is quite memory-consuming once started (more than
300M).
I would like all connections to the ABC* webapps to be handled by Tomcat
server 1, and connections to the DEF* webapps to be handled by Tomcat
server 2.

My objectives are:
* server 1 to be the failover of server 2, and server 2 the failover of
server 1.
* any webapp should be instantiated on only one server; otherwise it might
trigger a memory overload

So I set up my httpd.conf as follows:

JkWorkersFile conf/worker.properties
  JkOptions +ForwardKeySize +ForwardURICompat


and my worker.properties as follows:

worker.list = failover

# 
# template
# 
worker.template.type=ajp13
worker.template.lbfactor=1
worker.template.connection_pool_timeout=600
worker.template.socket_timeout=1000
worker.template.fail_on_status=500

# 
# tomcat1
# 
worker.tomcat1.reference=worker.template
worker.tomcat1.port=9001
worker.tomcat1.host=localhost
worker.tomcat1.mount=/ABC* /ABC/*
 worker.tomcat1.redirect=failover

# 
# tomcat2
# 
worker.tomcat2.reference=worker.template
worker.tomcat2.port=9002
worker.tomcat2.host=localhost
worker.tomcat1.mount=/DEF* /DEF/*
worker.tomcat2.redirect=failover


# 
# failover
# 
worker.failover.type=lb
worker.failover.balance_workers=tomcat1,tomcat2

The jvmRoute is set in both server.xml files.

Previously I had put the JkMount directive in httpd.conf, but I couldn't
make the failover work, so I moved it into worker.properties.
Tomcat doesn't seem to take the JkMount directive in worker.properties
into account: a webapp is started indifferently on either server.

I must say I am quite stuck here. Does anyone have an idea?

regards
gui