Hello Claudio,

On Fri, Jul 06, 2012 at 10:17:15PM +0200, Claudio Poli wrote:
> Hello,
> we have a special requirement that we made work with nginx, but we are
> trying to accomplish the same with haproxy, since nginx buffers uploads to
> disk and this behavior is not desirable in this case.
> 
> We need to stream directly to the backend; however, the backend ip address and
> port must be dictated by a client header (or other means).

Well then you need a proxy, not a reverse-proxy nor a load balancer.

> I don't care about the security right now; in nginx I was able to construct
> the backend server address dynamically using header variables, but here I'm
> stuck.
> 
> I'm using haproxy stable.
> 
> However, with the 'peer' concept in mind, I tried to move in this
> direction:
> 
> backend nodejs_daemoner_http_specific_cluster
>   balance hdr(X-Destination)
>   hash-type consistent
>   no option redispatch
> 
>   server  node_daemoner_http_5005 127.0.0.1:5005 id 5005 check
>   server  node_daemoner_http_5006 127.0.0.1:5006 id 5006 check
>   server  node_daemoner_http_5007 127.0.0.1:5007 id 5007 check
> 
> the problem is that even though we are sending the custom header with a hashed
> value md5(dst ip:port), it seems to fall back to round robin again for
> subsequent requests with the same hash. I'm not sure at this point if I
> got it totally wrong.

There is no reason for the requests to round-robin. It's possible that
your header is not always sent or is not always properly updated.
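Just to illustrate the principle, here is a little Python sketch (a
simplified model of consistent hashing that I wrote for this mail, not
haproxy's actual implementation) showing why a stable header value cannot
round-robin:

```python
# A simplified model of consistent hashing (my own sketch, NOT haproxy's
# actual code) showing why "balance hdr(X-Destination)" with
# "hash-type consistent" is stable: the same header value always hashes
# to the same point on the ring, hence to the same server.
import hashlib

servers = ["127.0.0.1:5005", "127.0.0.1:5006", "127.0.0.1:5007"]

def points(server, replicas=64):
    # place each server at several pseudo-random points on a 32-bit ring
    return [int(hashlib.md5(("%s-%d" % (server, i)).encode()).hexdigest()[:8], 16)
            for i in range(replicas)]

ring = sorted((p, s) for s in servers for p in points(s))

def pick(header_value):
    h = int(hashlib.md5(header_value.encode()).hexdigest()[:8], 16)
    # walk clockwise to the first server point at or after the hash
    for point, server in ring:
        if point >= h:
            return server
    return ring[0][1]  # wrapped around the ring

# Same header value => same server, on every single request:
assert pick("some-hash-value") == pick("some-hash-value")
```

So if you observe round-robin, I would first log the header (for example
with "capture request header X-Destination len 64") to verify it really
arrives unchanged on every request.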

> Plus this doesn't satisfy the requirement of being able to connect to
> specific backends: the very first connection should go directly to the
> backend specified by the client, without haproxy having to learn the
> hash first.

Some people have been doing things like this in an ugly way. Basically
the principle is to transform the X-Destination header into a cookie and
enable persistence on it. For instance:

 frontend www
   reqirep ^X-Destination:[\ ]*(.*)  Cookie:\ x-dest=\1
   use_backend nodejs_daemoner_http_specific_cluster

 backend nodejs_daemoner_http_specific_cluster
   cookie x-dest
   server  node_daemoner_http_5005 127.0.0.1:5005 cookie 127.0.0.1:5005 check
   server  node_daemoner_http_5006 127.0.0.1:5006 cookie 127.0.0.1:5006 check
   server  node_daemoner_http_5007 127.0.0.1:5007 cookie 127.0.0.1:5007 check
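To illustrate what happens (my own example, with a made-up request), a
client request such as:

   GET /upload HTTP/1.1
   Host: example.com
   X-Destination: 127.0.0.1:5006

is rewritten by the reqirep rule into:

   GET /upload HTTP/1.1
   Host: example.com
   Cookie: x-dest=127.0.0.1:5006

and the cookie value is matched against each server's "cookie" keyword,
so it goes straight to node_daemoner_http_5006, including on the very
first request, without any hash to learn.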

> As said, the request can also be routed to ip addresses other than
> 127.0.0.1, and it would be great not to have to define them all on every
> instance, since haproxy is the frontend of our web stack on an ec2 image,
> which is then scaled horizontally under an elastic load balancer with
> hundreds of instances.
>
> It's a particular requirement, but I wonder if it's possible at all.

Then you really need a proxy, because what you're describing is a proxy.
There is a "secret" http_proxy option in haproxy, but it will not be
usable for what you want, as it is only able to parse IP:ports in the
URI (eg: "GET http://127.0.0.1:5005/ HTTP/1.1"), and I really don't see
how to combine a header with this. You see, haproxy's goal is to guarantee
that traffic cannot reach a host that is not referenced in the conf.
You're precisely trying to overcome this, so there is too little overlap
between your need and what haproxy is aimed at.
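For the record, a minimal setup using that option looks roughly like this
(a sketch from memory, please check the documentation for your version;
the destination is taken from the URI only, never from a header):

 listen http_proxy
   bind :3128
   mode http
   option httplog
   option http_proxy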

Regards,
Willy

