Hey Bryan,

That's exactly what I want, thanks! Looks like usage of track is pretty straightforward too.
Re: David's response, port 6060 only returns an auth prompt and depends on the application on port 80 working. If something weird happens to the application on port 80, that auth prompt will still occur (and possibly accept valid credentials). Using track would allow me to guarantee that users will always be using a good node on both ports.

-Ahmed

________________________________
From: Bryan Talbot [[email protected]]
Sent: Friday, April 26, 2013 5:34 PM
To: Ahmed Osman
Cc: [email protected]
Subject: Re: Keeping LB pools status in sync

It sounds like you're asking how to use a server's health state in one backend as the health state in another. If so, you can use the "track" option on the servers:

backend pool1
    server server1 1.1.1.1:6060 track pool2/server1
    server server2 1.1.1.2:6060 track pool2/server2

backend pool2
    server server1 1.1.1.1:80 check
    server server2 1.1.1.2:80 check

Is that what you want?

-Bryan

On Fri, Apr 26, 2013 at 5:09 PM, Ahmed Osman <[email protected]> wrote:

Hello Everyone,

I'm wondering if anyone is able to tell me if this is default behavior or if I need to configure this. In a nutshell I have this setup:

LB_Pool1
    Server1:6060
    Server2:6060

LB_Pool2
    Server1:80
    Server2:80

I can do a check pretty easily on LB_Pool2; however, I don't have a method for doing so on LB_Pool1. If something goes wrong with Server1, then the check in LB_Pool2 will detect it immediately and remove it from the pool until it's back up. Will Server1 be removed from LB_Pool1 at the same time? And if not, how would I set it up so that happens?

Ahmed Osman
DevOps Engineer
Infrastructure Support Services
TIBCO Spotfire
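[For archive readers: a fuller sketch of the setup discussed above, with the frontends filled in. The frontend names and bind lines are my assumptions, not from the thread; only the backend/server lines come from Bryan's example. The key point is that pool1's servers carry no "check" of their own, so their up/down state mirrors pool2's checks, and both ports are removed together.]

    # Hypothetical frontends -- names and bind addresses assumed
    frontend ft_app
        bind *:80
        default_backend pool2

    frontend ft_auth
        bind *:6060
        default_backend pool1

    backend pool2
        # Active health checks run only in this pool
        server server1 1.1.1.1:80 check
        server server2 1.1.1.2:80 check

    backend pool1
        # No "check" here: each server tracks the state of its
        # counterpart in pool2 (which must itself have checks enabled)
        server server1 1.1.1.1:6060 track pool2/server1
        server server2 1.1.1.2:6060 track pool2/server2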

