Hi,

On Tue, Feb 09, 2010 at 09:09:31PM -0700, Neal Richter wrote:
> Hi all,
> 
>   We've been happily using haproxy as a load balancer between a pool
> of apache boxes and a pool of tomcat boxes for 3 years in AWS.
> Amazing piece of software that gives us zero issues.
> 
>   Question.  Has anyone made/altered haproxy to the following:
> 
>   1) take an incoming request for a resource
>   2) farm that request out to N backend servers - yes the same request
> to N servers
>   3) Assemble the N responses and push them back up to the requester
 
No, because this is 100% application-specific and has nothing to do
with load balancing or high availability. It looks more like RAID0
with servers, with no redundancy at all. Only the application knows
how to assemble those responses in a way that means something to it.
People frequently do this with memcache and similar systems, except
that they won't necessarily send the same request everywhere: they
just know how their data are distributed across multiple servers
and send multiple parallel queries to those servers.
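To make that concrete, here is a minimal sketch of the approach described above, where the application itself knows which server owns which key and only queries the servers that hold the data. The server names and the simple hash-mod placement are hypothetical; a real deployment would typically use consistent hashing so that adding a server moves as few keys as possible:

```python
import hashlib

# Hypothetical server pool; addresses are placeholders.
SERVERS = ["kv1:11211", "kv2:11211", "kv3:11211"]

def server_for(key, servers=SERVERS):
    """Pick the server responsible for a key by hashing the key.
    (Simple hash-mod here for brevity; real systems favour
    consistent hashing to limit key movement on resize.)"""
    digest = hashlib.md5(key.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

def batch_lookup_plan(keys):
    """Group keys by the server that owns them, so the application
    can send one parallel query per server instead of broadcasting
    every request to the whole pool."""
    plan = {}
    for key in keys:
        plan.setdefault(server_for(key), []).append(key)
    return plan
```

The point is that no proxy needs to understand the data layout: the client computes the placement itself and only talks to the relevant servers.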

>   There are several scenarios where this seems interesting.  One is
> implementing a distributed reads to a backend pool of key-value
> stores, where the requester wants the N results (minus the ones that
> timeout) and will decide which one is most recent version.  (assume a
> consistent hashing alg is used to spread the writes across the pool of
> N kv stores).

Then if you want the application to get multiple responses and pick the
best one, make it send multiple requests in the first place, and you're
done. It will also be faster, because all responses are read in
parallel instead of waiting for each server to finish sending its
response before the next one can be concatenated.
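A quick sketch of that client-side fan-out, using Python's thread pool: send the same request to N servers in parallel, drop the ones that fail or time out, and keep the response with the highest version. `fetch()` is a hypothetical stand-in for a real HTTP call:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(server, path):
    # Placeholder for a real HTTP request; pretend each server
    # returns a (version, body) pair.
    return (hash((server, path)) % 100, "body-from-" + server)

def scatter_gather(servers, path, timeout=1.0):
    """Query every server in parallel; skip the ones that fail
    and return the response with the highest version number."""
    results = []
    with ThreadPoolExecutor(max_workers=len(servers)) as pool:
        futures = {pool.submit(fetch, s, path): s for s in servers}
        for fut in as_completed(futures, timeout=timeout):
            try:
                results.append(fut.result())
            except Exception:
                pass  # a failed or slow server is simply skipped
    return max(results) if results else None
```

Because every request runs concurrently, the total latency is roughly that of the slowest surviving server rather than the sum of all of them, which is the "read in parallel" advantage mentioned above.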

>   The second is a general variant of the first one. Distributed
> multi-request of any HTTP resource with same path where each of the N
> servers in the pool may have a different response, it's up to the
> requester to decide what to do with the N responses.

Same as above.

>   Assume the requester is smart enough to deal with concatenated HTTP
> responses that would come back from the multi-proxy process.
> 
>   I have a strong hunch that haproxy's event-driven architecture and
> solid connection handling will support this better than the naive
> approach of starting N threads that each make a different HTTP request.
> ;-)

It would be useless. If your application is able to deal with N responses,
it has to support the implied load. So make it send the N requests and you
don't need a fast proxy in the middle.

Willy
