On Thu, Jan 30, 2014 at 07:39:29PM +0100, PiBa-NL wrote:
> This should (i expect) work with any number of backup servers, as
> long as you only need 1 active.

Yes, it appears this is exactly what I want. A quick test shows that
failback is still occurring, though. Not sure why. Once my primary fails,
the first backup gets the traffic as expected. But once the primary comes
back online, it services all requests again.

I'm using 1.4 and my configuration is nearly identical to the example
shown in the blog, sans the peers.
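
For reference, here is a rough sketch of the kind of backend I'm testing,
based on the blog post (server names and addresses are placeholders, and
I've left out the peers section since, as I understand it, peers requires
1.5):

   backend bk_app
       mode tcp
       stick-table type ip size 1 nopurge
       stick on dst
       server foo-01 10.0.0.1:80 check
       server foo-02 10.0.0.2:80 check backup
       server foo-03 10.0.0.3:80 check backup

The idea, if I've understood it, is that the stick-table holds a single
entry mapping the frontend's destination address to whichever server was
last chosen, so traffic should stay on that server until it actually
fails rather than failing back to foo-01.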

Ryan



> Ryan O'Hara schreef op 30-1-2014 19:34:
> >On Thu, Jan 30, 2014 at 07:14:30PM +0100, PiBa-NL wrote:
> >>I'm not 100% sure, but if I remember something I read correctly, it
> >>was like using a "stick on dst" stick-table.
> >>
> >>That way the stick-table will make sure all traffic goes to a single
> >>server, and only when it fails will another server be put in the
> >>stick-table, which will only ever have 1 entry.
> >Yes. That sounds accurate.
> >
> >>You might want to test what happens when the haproxy configuration is
> >>reloaded. But if you configure 'peers', the new haproxy process
> >>should still have the same 'active' backend server.
> >>
> >>p.s. That is, if I'm not mixing stuff up...
> >This blog has something very close to what I'd like to deploy:
> >
> >http://blog.exceliance.fr/2014/01/17/emulating-activepassing-application-clustering-with-haproxy/
> >
> >The only difference is that I'd like to have more than just one
> >backup. I'll try to find some time to experiment in the next few days.
> >
> >Thanks.
> >Ryan
> >
> >
> >>Ryan O'Hara schreef op 30-1-2014 17:42:
> >>>I'd like to define a proxy (tcp mode) that has multiple backend
> >>>servers yet only uses one at a time. In other words, traffic comes
> >>>into the frontend and is redirected to one backend server. Should that
> >>>server fail, another is chosen.
> >>>
> >>>I realize this might be an odd thing to do with haproxy, and if
> >>>you're thinking that simple VIP failover (i.e. keepalived) is better
> >>>suited for this, you are correct. Long story.
> >>>
> >>>I've gotten fairly close to achieving this behavior by declaring all
> >>>my backend servers 'backup' and not using 'allbackups'. The only
> >>>caveat is that these "backup" servers have a preference based on the
> >>>order in which they are defined. Say my servers are defined in the
> >>>backend like this:
> >>>
> >>>   server foo-01 ... backup
> >>>   server foo-02 ... backup
> >>>   server foo-03 ... backup
> >>>
> >>>If foo-01 is up, all traffic will go to it. When foo-01 is down, all
> >>>traffic will go to foo-02. When foo-01 comes back online, traffic goes
> >>>back to foo-01. Ideally the active server would change only when it
> >>>fails. Besides, this solution is rather ugly.
> >>>
> >>>Is there a better way?
> >>>
> >>>Ryan
> >>>
> >>
> 
> 
