2012/10/23 Thomas Heil <[email protected]>:
> Hi,
>
> On 23.10.2012 13:55, Finn Arne Gangstad wrote:
>>
>> Each request is a reasonably simple GET request that typically takes
>> 10-20ms to process. This works great until a server needs to GC, then
>> the query will hang for a few seconds.
> I am not quite sure, but I think you can play with "timeout server",
> "option redispatch" and "retries", so that when GC occurs the request
> would be redispatched to the next server in the backend.
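A minimal sketch of that idea (backend name, addresses and timeout values are illustrative, not from the thread):

```
backend app
    # abort requests to a server that stalls (e.g. in a GC pause) after 2s
    timeout server 2s
    # allow a retried connection to go to a different server
    option redispatch
    retries 3
    server app1 10.0.0.1:8080 check
    server app2 10.0.0.2:8080 check
```

One caveat: "retries" covers failed connection attempts, not requests that are already in flight when "timeout server" expires, so this helps most when the stalled server stops accepting new connections.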
>
Try using "balance leastconn": if a server slows down or halts because
of GC, its connection queue will quickly grow higher than the other
servers' queues, and new requests will hit the non-GCing ones. The only
disadvantage is that servers which respond faster will on average get
more requests, but that can be a good thing: if for any reason (backup,
system update, etc.) one of the servers starts answering slower, it
will automatically receive fewer requests.
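A minimal backend sketch of this approach (names and addresses are illustrative):

```
backend app
    # send each new request to the server with the fewest active connections;
    # a server stalled in GC accumulates connections and so stops
    # receiving new requests until it catches up
    balance leastconn
    server app1 10.0.0.1:8080 check
    server app2 10.0.0.2:8080 check
```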

-- 
Mariusz Gronczewski
