On Fri, Jun 04, 2010 at 11:47:22AM -0600, Jason J. W. Williams wrote:
> Hi Willy,
> 
> Currently, we're moving away from appliances that do everything we
> need (A10 Networks) due to migrating to a cloud environment. We've got
> 6 years of experience with SLB (Alteon->Nauticus->A10), so to some
> degree yes no solution can be everything.

If you've known the Alteon, then you have the best example of a product
that does extremely well (and fast) a small set of things and extremely
badly a large set of other things.

> But inbound SLB l3/l4 TCP &
> UDP + l7 TCP is what we've come to expect as the toolset for our load
> balancers.

In clouds you generally have another issue, related to source IP
addresses. You generally can't emit packets with an IP you don't own,
which means you can't reroute client requests between servers without
proxying or translating the source. And many UDP-based services do
not cope well with NAT. In fact, LVS supports one very nice feature,
the tunnel mode: it reroutes packets with a valid source and
destination envelope while the packet inside is left untouched.
That makes it an ideal solution for clouds.
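For reference, an LVS tunnel-mode setup is just a few ipvsadm
commands; this is only a sketch, with made-up addresses, for a UDP
service on port 53:

```shell
# Define a virtual UDP service on the VIP, round-robin scheduling
ipvsadm -A -u 192.0.2.10:53 -s rr
# Add real servers reached via IPIP tunneling (-i): the client's
# packet is encapsulated unmodified, so the real servers see the
# original source and destination addresses
ipvsadm -a -u 192.0.2.10:53 -r 10.0.0.11 -i
ipvsadm -a -u 192.0.2.10:53 -r 10.0.0.12 -i
```

The real servers must of course accept the IPIP tunnel and be
configured to answer for the VIP themselves.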

DNS supports enough oddities that it could be proxied, but it was
designed to run without any need for load balancing, since the client
is responsible for querying multiple servers. The only real reason
I see to load balance it is when you're an ISP who needs to always
present the same DNS IP address to millions of customers and needs
several servers for that task. But in that case it looks rather
counter-productive to move it to clouds, considering that the DNS
part of the work is small compared to the per-packet processing,
which will basically be doubled in a cloud environment.
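To illustrate the client-side redundancy I mean: a standard Unix
resolver configuration already lists several servers and fails over
by itself, no load balancer involved (addresses made up):

```
# /etc/resolv.conf
nameserver 10.0.0.53
nameserver 10.0.1.53
# fail over to the next server after 2s, retry the list twice
options timeout:2 attempts:2
```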

> To adapt our infrastructure design to the cloud environment, we're
> moving away from a pair of SLB appliances handling SLB between every
> tier. Instead, we're putting (in this case) HAProxy on each server on
> a localhost address thereby handling SLB locally and avoiding
> necessitating the more complex network design we have now. Given the
> number of HAProxy instances needed for that setup, I'd prefer not to
> have to run both HAProxy and LVS. More moving parts...more to
> break...more to health check.
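Just to be sure we're talking about the same thing, I assume each
host runs something like this minimal haproxy.cfg sketch, with the
application pointed at the local address (names and addresses are
illustrative):

```
listen local-redis
    bind 127.0.0.1:6379
    mode tcp
    balance roundrobin
    server redis1 10.0.0.21:6379 check
    server redis2 10.0.0.22:6379 check
```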

From your description, it appears complex precisely because you're
multiplying the number of load balancers. Running many LBs on many
hosts is often complex to manage and troubleshoot. It only saves
two servers at the beginning, and none once you start to scale
because normally you keep your two servers for load balancing and
the other ones have more spare resources to do their job.

> Frankly, HAProxy appears to be better written and more solid than LVS
> which is the reason we're going forward with it.

Having put my nose in LVS a few times, I can understand this point.
But the fact is that processing datagrams is very different from
processing streams, and if UDP support had to be brought to haproxy,
I'm pretty sure there would be so many changes that we'd end up
forking it into two completely different products.

I've already looked for simple UDP load balancers several times but
never found anything usable. Most of the time the need is just DNS,
which ends up being covered natively by the clients. That may be one
of the reasons why it's hard to find anything that does this.

Regards,
Willy

