Re: jk Status not showing errors
Unfortunately I'm not seeing that. What I did was start both Tomcats in my LB pair, start Apache, and then take the second Tomcat down to see whether mod_jk would detect the failure. Unfortunately it never seems to; it just shows the second as OK/IDLE and happily directs all requests to the first. This concerns me, because if the second were to fail, and then later the first, everything would die and I'd have no advance warning. I can't seem to make mod_jk ping and detect a dead Tomcat.

I am using the latest version of mod_jk; I upgraded it before I began playing with the load balancer settings. I'd appreciate any feedback on what I might be doing wrong. Thanks.

workers.properties:

worker.list=production,development,old,jkstatus
worker.production.type=lb
worker.production.balance_workers=production1,production2
worker.production.sticky_session=True
worker.production.method=S
worker.lbbasic.type=ajp13
worker.lbbasic.connect_timeout=1
worker.lbbasic.recovery_options=7
worker.lbbasic.socket_keepalive=1
worker.lbbasic.socket_timeout=60
worker.lbbasic.ping_mode=CI
worker.production1.reference=worker.lbbasic
worker.production1.port=8009
worker.production1.host=localhost
worker.production2.reference=worker.lbbasic
worker.production2.port=8012
worker.production2.host=localhost
worker.development.port=8010
worker.development.host=localhost
worker.development.type=ajp13
worker.old.port=8011
worker.old.host=localhost
worker.old.type=ajp13
worker.jkstatus.type=status
RE: jk Status not showing errors
What you could do is tail -f the mod_jk.log file, then take the Tomcat down and see whether the errors appear. You should see something like the following.

Good entries to track:

Attempting to map context URI '/search-engine*'
ajp_unmarshal_response::jk_ajp_common.c (621): status = 302
Maintaining worker loadbalancer1
Maintaining worker prod_se1
Maintaining worker prod_se2
Maintaining worker prod_sea
Maintaining worker prod_seb
service::jk_lb_worker.c (612): service worker=prod_sea jvm_route=prod_sea
service::jk_lb_worker.c (612): service worker=prod_seb jvm_route=prod_seb
service::jk_lb_worker.c (612): service worker=prod_sea jvm_route=prod_se1
service::jk_lb_worker.c (612): service worker=prod_seb jvm_route=prod_se2

Possible error entries:

Error connecting to tomcat. Tomcat is probably not started or is listening on the wrong port. worker=prod_se1 failed
Error connecting to tomcat. Tomcat is probably not started or is listening on the wrong port. worker=prod_se2 failed

You should be able to trace where your config is problematic.

Kind regards / Met vriendelijke groet,
Lawrence Lamprecht
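Lawrence's tail-and-check routine can be scripted. The sketch below just greps for the error patterns in a canned sample of log lines (the sample text is illustrative); against a live system you would pipe tail -f mod_jk.log into the same grep.

```shell
# Illustrative sample of JK log lines like the ones quoted above;
# on a live system you would read these from mod_jk.log instead.
log='Maintaining worker loadbalancer1
service::jk_lb_worker.c (612): service worker=prod_sea jvm_route=prod_sea
Error connecting to tomcat. Tomcat is probably not started or is listening on the wrong port.
worker=prod_se1 failed'

# Count the lines that indicate a connection problem.
errors=$(printf '%s\n' "$log" | grep -ciE 'error|failed')
echo "suspicious log lines: $errors"
```

On a live box the equivalent would be: tail -f /etc/httpd/logs/mod_jk.log | grep -iE 'error|failed' (log path taken from Lawrence's jk.conf below; adjust to your JkLogFile setting).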
Re: jk Status not showing errors
I'm not seeing anything like that. I just took both Tomcats down, and I instantly get the 503 from Apache when I try to load the application. However, tailing the mod_jk.log, I just see entries like this:

[Tue Jun 02 12:36:23 2009] jkstatus www.innatedb.ca 0.000360
[Tue Jun 02 12:36:26 2009] jkstatus www.innatedb.ca 0.000263
[Tue Jun 02 12:36:39 2009] production www.innatedb.ca 0.498998
[Tue Jun 02 12:36:40 2009] jkstatus www.innatedb.ca 0.000282

mod_jk seems happy sending the requests to Tomcat, and doesn't seem to notice there's no actual Tomcat responding. Only after a few minutes does the JK Status screen go to ERR/REC for both. I would think this is the kind of thing mod_jk should notice instantly, when there's no Tomcat where there should be one. Or am I missing something? Thanks.
Re: jk Status not showing errors
On 02.06.2009 20:53, Matthew Laird wrote:
> What I did was start both Tomcats in my LB pair, start Apache, then I
> take the second Tomcat down to see if it will detect the failure.
> Unfortunately it never seems to; it just shows the second as OK/IDLE,
> and happily directs all requests to the first. [...] I can't seem to
> make it ping and detect a dead Tomcat.

Assuming that you did refresh the jkstatus display: what is your test client? The fact that you see OK/IDLE but all requests go to the other node indicates that you are using requests with an associated session, so the balancer is not allowed to send them to the other node and thus does not detect the down node. Remove the JSESSIONID cookie before sending requests, or use a client which allows disabling cookies (like curl).

> workers.properties:
> worker.list=production,development,old,jkstatus
> worker.production.type=lb
> worker.production.balance_workers=production1,production2
> [...]
> worker.lbbasic.ping_mode=CI
> [...]
> worker.jkstatus.type=status

Looks OK.

Rainer
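The stickiness Rainer describes keys off the suffix of the session id: with jvmRoute set, Tomcat appends ".<route>" to the JSESSIONID, and the lb routes on that suffix. A minimal sketch (the session id below is made up):

```shell
# A session id issued by a Tomcat with jvmRoute=production1 looks like
# "<id>.production1"; mod_jk routes on the part after the last dot.
sid='0A1B2C3D4E5F.production1'
route=${sid##*.}
echo "requests carrying this cookie are pinned to: $route"
```

That is why a browser that already holds a JSESSIONID keeps hitting the same node, while a cookie-less client such as curl lets the balancer pick a node freely.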
Re: jk Status not showing errors
On 02.06.2009 21:40, Matthew Laird wrote:
> I'm not seeing anything like that. I just took both Tomcats down, I
> instantly get the 503 from Apache when I try to load the application.

Assuming that there is no mod_proxy in the game: when there is a 503, you will have [error] log lines in the JK log.

> However, tailing the mod_jk.log, I just see entries like this:
> [Tue Jun 02 12:36:23 2009] jkstatus www.innatedb.ca 0.000360
> [Tue Jun 02 12:36:26 2009] jkstatus www.innatedb.ca 0.000263
> [Tue Jun 02 12:36:39 2009] production www.innatedb.ca 0.498998
> [Tue Jun 02 12:36:40 2009] jkstatus www.innatedb.ca 0.000282

Note that these lines contain only a single request to the backend, the one with "production".

> mod_jk seems happy sending the requests to Tomcat, and doesn't seem to
> notice there's no actual Tomcat responding. Only after a few minutes
> does the JK Status screen go to ERR/REC for both. [...] Or am I
> missing something?

Show us your full access and jk log.

Regards,
Rainer

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org
Re: jk Status not showing errors
Rainer Jung wrote:
> The fact that you see OK/IDLE but all requests go to the other node
> indicates that you are using requests with an associated session, so
> the balancer is not allowed to send them to the other node and thus
> does not detect the down node.

Is there any way to make it ping and detect a dead Tomcat without a request coming in? I thought I was doing that with the worker.lbbasic.ping_mode=CI setting. Thanks.
Re: jk Status not showing errors
On 02.06.2009 21:46, Matthew Laird wrote:
> Is there any way to make it ping and detect a dead Tomcat without a
> request coming in? I thought I was doing that with the
> worker.lbbasic.ping_mode=CI setting.

Yes, but:

- you need to activate a watchdog thread by using JkWatchdogInterval
- you might need to tune connection_ping_interval
- finally you will end up not getting what you want, because all of that doesn't give you real-time monitoring

The idea of this type of ping is to detect connections that are no longer usable because they were idle for too long and have been cut, e.g. by a firewall. Broken backends usually are detected reliably, e.g. via prepost ping, but that only happens when you start to use them. There is no backend probing in the watchdog thread at the moment; that's one of the enhancements on the list for a future release. Usually detecting on demand is enough.

You can do your own probing: regularly probe your service using a dummy session cookie of the form JSESSIONID=X.NODE, where you replace NODE with the jvmRoute of the backend you want to probe.

Regards,
Rainer
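Rainer's do-it-yourself probe can be scripted with curl. Everything in the sketch below is an assumption to adapt to your setup: the URL, the context path, and the "probe" prefix in the fake session id; only the ".<jvmRoute>" suffix matters to the balancer.

```shell
#!/bin/sh
# Build the dummy sticky-session cookie Rainer describes; the balancer
# only looks at the part after the dot (the jvmRoute).
cookie_for() {
    printf 'JSESSIONID=probe.%s' "$1"
}

# Probe one backend through Apache by pinning the request to its route.
# Hypothetical URL; prints the HTTP status code ("000" on connect failure).
probe_node() {
    curl -s -o /dev/null -w '%{http_code}' \
         --cookie "$(cookie_for "$1")" "http://localhost/myapp/" 2>/dev/null
}

# Example: probe both members of the "production" balancer from cron
# or a Nagios plugin wrapper.
for route in production1 production2; do
    echo "$route -> $(probe_node "$route")"
done
```

A non-200 (or "000") answer for one route while the other still answers tells you exactly which backend is down, without waiting for a real user request to trip the error state.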
RE: jk Status not showing errors
Below is the config that I have, and this works. I have looked at your workers.properties file; there are a few entries that I am not sure of. So I would suggest simplifying your config until you get a functional system. Once you reach that stage, you can add complexity back until you are happy with a final config, all the while tailing the mod_jk.log file to monitor the changes and see the effect on the system. Hope this helps.

**Workers.properties file**

worker.list=loadbalancer1, loadbalancer2, prod_se1, prod_se2, prod_sea, prod_seb
worker.prod_se1.port=8009
worker.prod_se1.host=10.16.6.166
worker.prod_se1.type=ajp13
worker.prod_se1.lbfactor=1
worker.prod_se2.port=8009
worker.prod_se2.host=10.16.6.167
worker.prod_se2.type=ajp13
worker.prod_se2.lbfactor=1
worker.prod_sea.port=8210
worker.prod_sea.host=10.16.6.166
worker.prod_sea.type=ajp13
worker.prod_sea.lbfactor=1
worker.prod_seb.port=8210
worker.prod_seb.host=10.16.6.167
worker.prod_seb.type=ajp13
worker.prod_seb.lbfactor=1
worker.loadbalancer1.type=lb
worker.loadbalancer1.balance_workers=prod_se1,prod_se2
worker.loadbalancer2.type=lb
worker.loadbalancer2.balance_workers=prod_sea,prod_seb

**jk.conf file**

JkWorkersFile /etc/httpd/conf/workers.properties
JkLogFile /etc/httpd/logs/mod_jk.log
JkLogLevel debug
ErrorLog /etc/httpd/logs/jk_error_log
CustomLog /etc/httpd/logs/jk_access_log common
JkMount /search-engine* loadbalancer1
JkMount /2-search-engine* loadbalancer2

Kind regards / Met vriendelijke groet,
Lawrence Lamprecht
RE: jk Status not showing errors
I do not know if this is relevant or not, but I have just installed the latest version of mod_jk, and the jkstatus is very much better than it used to be. I had the same issue with load balancers not showing when they are offline or broken. With the latest version, jkstatus has the ability to auto-refresh itself, so it now shows when load balancers go down without a request being sent to them. It is pretty dynamic as well.

I ran several tests where I took one of the balancers down and left jkstatus refreshing every 10 seconds, and that told me that the worker was in error. It also shows the worker as OK - IDLE when the worker is not being used but is good; as soon as it receives a request, the status changes to OK. Hope this helps.

Kind regards / Met vriendelijke groet,
Lawrence Lamprecht
Application Content Manager
QUADREM Netherlands B.V.
Kabelweg 61, 1014 BA Amsterdam
Post Office Box 20672, 1001 NR Amsterdam
Office: +31 20 880 41 16
Mobile: +31 6 13 14 26 31
Fax: +31 20 880 41 02
Read our blog: Intelligent Supply Management - Your advantage
Re: jk Status not showing errors
Matthew Laird wrote: Good afternoon, I've been trying to get the jkstatus component of mod_jk running, and I'm not quite sure what I'm doing wrong in trying to have it report dead Tomcat instances. I have two Tomcat instances set up in a load balancer; as a test I've taken down one of them. However, the jkstatus screen still shows both of them as OK. I'm not sure what I'm missing from my workers.properties file to make it test the Tomcat and report a failed instance, so I can set Nagios to monitor this page and report problems. My workers.properties is: [...] Any advice on extra options to make jkstatus check and report when one of the Tomcat instances isn't responding would be appreciated.

While you are waiting for a real expert to comment: I believe I remember from a previous thread that there needs to be an actual request sent to the misbehaving Tomcat before jkstatus will notice that it is down. You may also want to consult the cping/cpong functionality of mod_jk; I believe that would also work.
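The cping/cpong functionality mentioned above can be enabled in workers.properties. This is a hedged sketch based on the mod_jk workers.properties reference (mod_jk 1.2.27 or later); the timeout and interval values are illustrative, not recommendations:

```properties
# Probe backends with AJP CPing/CPong heartbeat packets.
# Modes: C = on connect, P = before each request (prepost),
#        I = at regular intervals, A = all of the above.
worker.lbbasic.ping_mode=A
# Wait at most 10 seconds (in milliseconds) for the CPong answer.
worker.lbbasic.ping_timeout=10000
# With "I" in ping_mode: probe idle connections every 60 seconds.
worker.lbbasic.connection_ping_interval=60
```

Note that even interval probes run inside the web-server processes, so a dead backend is typically marked ERR during connection maintenance or on the next request, not instantly.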
RE: jk Status not showing errors
Bonjour -

Then we define a URL which should be mapped to this worker, i.e. the URL we use to reach the functionality of the status worker. You can use any method mod_jk supports for the web server of your choice. Possibilities are maps inside uriworkermap.properties, an additional mount attribute in workers.properties, or JkMount in Apache. Here's an example of a uriworkermap.properties line:

/private/admin/mystatus=mystatus

http://tomcat.apache.org/connectors-doc/reference/status.html

Bonne chance!
Martin

Date: Fri, 29 May 2009 13:50:17 -0700 From: lai...@sfu.ca To: users@tomcat.apache.org Subject: jk Status not showing errors

Good afternoon, I've been trying to get the jkstatus component of mod_jk running, and I'm not quite sure what I'm doing wrong in trying to have it report dead Tomcat instances. I have two Tomcat instances set up in a load balancer; as a test I've taken down one of them. However, the jkstatus screen still shows both of them as OK. I'm not sure what I'm missing from my workers.properties file to make it test the Tomcat and report a failed instance, so I can set Nagios to monitor this page and report problems.
My workers.properties is: [...] Any advice on extra options to make jkstatus check and report when one of the Tomcat instances isn't responding would be appreciated. Thanks.
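For the Apache JkMount route that Martin mentions, a minimal httpd.conf fragment might look like the following. This is a sketch under assumptions: the worker is named "jkstatus" as in the poster's workers.properties, the URL path is arbitrary, and the access-control directives use the pre-2.4 Order/Deny/Allow syntax current at the time of this thread:

```apache
# Map a URL to the status worker defined in workers.properties
JkMount /jkstatus jkstatus

# Restrict the status page to localhost (e.g. for a local Nagios check)
<Location /jkstatus>
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
```

Appending `?mime=xml`, `?mime=txt`, or `?mime=prop` to the mounted URL then selects a machine-readable output format instead of the HTML view.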