On Sat, Mar 31, 2012 at 08:06:23PM -0700, David Birdsong wrote:
> On Sat, Mar 31, 2012 at 7:55 PM, Kevin Heatwole <ke...@heatwoles.us> wrote:
> > I am just investigating use of haproxy for the first time.
> >
> > I'd like the balancing algorithm to send HTTP requests to the first server
> > in the list until the number of requests hits a configurable limit.  When
> > the request limit for a server is hit, I then want new requests to go to
> > the next server until that server hits its configurable limit.  So, instead
> > of RR, I want to load down a server before overflowing to the next server.
> >
> > What I think I want to do is to always have the last server in the farm not 
> > have any requests.  If it does, I will activate another server to ensure I 
> > have enough capacity to handle the load spike.  But, when the last two 
> > servers go completely idle again, I can deactivate the last idle server.
> >
> > My servers are "in the cloud" and I pay for each one that is activated so I 
> > think this type of load balancing would help me activate only servers I 
> > need (saving me money).
> >
> > I would plan to automate this by having all servers included in the haproxy 
> > config but only the first server would initially be UP and all others DOWN. 
> >  When a server handles a request, it makes sure that its next server is 
> > activated.  When a server doesn't handle any requests for some time, it 
> > deactivates its next server (if any).
> 
> You could implement this by monitoring the available connection slots on
> the backend: once the free slots drop below N% of the total, spin up new
> instances. Apply the same logic in reverse to turn nodes off.
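
(For illustration, a minimal sketch of that slot-monitoring approach: it
polls the stats socket with "show stat" and compares the free slots, slim
minus scur, against a threshold. The socket path, backend name and
threshold below are assumptions, and the actual spin-up action is left as
a print stub.)

    #!/usr/bin/env python
    # Sketch: poll the HAProxy stats socket and report when the free
    # connection slots of a backend fall below a threshold.
    import csv
    import socket

    SOCKET_PATH = "/var/run/haproxy.sock"  # "stats socket" path from haproxy.cfg
    BACKEND = "app"                        # backend to watch
    THRESHOLD = 0.20                       # act when free slots < 20% of total

    def show_stat():
        # One command per connection; HAProxy closes the socket after replying.
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(SOCKET_PATH)
        s.sendall(b"show stat\n")
        data = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
        s.close()
        return data.decode()

    def free_slot_ratio(backend):
        # The CSV header starts with "# "; strip it so DictReader sees the field names.
        rows = csv.DictReader(show_stat().lstrip("# ").splitlines())
        total = free = 0
        for row in rows:
            # Per-server rows only; skip the FRONTEND/BACKEND aggregate lines.
            if row["pxname"] != backend or row["svname"] in ("FRONTEND", "BACKEND"):
                continue
            slim = int(row["slim"] or 0)   # session limit (the server's maxconn)
            scur = int(row["scur"] or 0)   # current sessions
            total += slim
            free += max(slim - scur, 0)
        return float(free) / total if total else 1.0

    if __name__ == "__main__":
        ratio = free_slot_ratio(BACKEND)
        if ratio < THRESHOLD:
            print("only %d%% of slots free: spin up another instance" % (ratio * 100))
        else:
            print("%d%% of slots free: capacity OK" % (ratio * 100))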

Or you can download the latest snapshot and use "balance first", which
was proposed by Steen Larsen one month ago :-)
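
For illustration, a minimal sketch of what that could look like: "balance
first" fills servers in the order they are declared, using each server's
maxconn as its capacity, so later servers only receive traffic once the
earlier ones are full (the backend name, server names, addresses and
maxconn values below are just placeholders):

    backend app
        balance first
        server srv1 10.0.0.1:80 check maxconn 100
        server srv2 10.0.0.2:80 check maxconn 100
        server srv3 10.0.0.3:80 check maxconn 100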

Willy

