Per Buer wrote:
On Fri, Jun 18, 2010 at 7:02 AM, Don Faulkner <[email protected]> wrote:
I like the setup. But for some reason I think it needs to be:
web server -> load balancer -> cache -> load balancer -> ssl endpoint
One thing to consider: almost any server that is still within warranty
can deliver at least 1 Gbps of traffic through Varnish, and on new
hardware reaching 10 Gbps shouldn't be that big a deal (is there someone
out there with 10 Gbps hardware who would like to help us test? :-). So
you should ask yourself: do you really need a load balancer in front
of Varnish? Having more Varnish servers than you need will decrease
your hit rate (unless you're hashing on the URL) and will increase
your response time. It will also add to the complexity of the setup.
Relying on a simple cluster of just two servers, where only the IP
address moves in case of failure, will in many scenarios give you
better performance and better uptime.
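The "just the IP moves" setup Per describes is commonly done with VRRP,
e.g. via keepalived. A minimal sketch, assuming hypothetical values for
the interface (eth0) and the shared virtual IP (192.0.2.10) — the
standby node would run the same block with state BACKUP and a lower
priority:

```
vrrp_instance varnish_vip {
    state MASTER          # this node owns the VIP while healthy
    interface eth0        # assumed NIC name
    virtual_router_id 51
    priority 100          # peer uses a lower value, e.g. 90
    virtual_ipaddress {
        192.0.2.10        # the address that moves on failure
    }
}
```

When the master stops sending VRRP advertisements, the backup claims the
address within a few seconds, so clients keep hitting the same IP.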
The reason we want the double load balancer setup is that, with two
active Varnishes, one can fail without causing a 90+% increase in hits
to the backend servers for as long as it takes to warm up the backup
Varnish.
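The hit-rate effect of losing one of two caches depends on how requests
are scheduled. A small illustrative sketch (not Varnish code, just an
assumed model with made-up URLs) of URL-hash scheduling: each URL maps
to exactly one cache, so when one cache dies, the URLs it held now land
on a cold cache and miss until it warms up:

```python
import hashlib

caches = ["varnish-a", "varnish-b"]

def pick_cache(url, active):
    # URL-hash scheduling: the same URL always maps to the same cache,
    # so each object is stored on only one node in the pool.
    h = int(hashlib.md5(url.encode()).hexdigest(), 16)
    return active[h % len(active)]

urls = [f"/page/{i}" for i in range(1000)]
before = {u: pick_cache(u, caches) for u in urls}
# varnish-b fails; every request now goes to varnish-a
after = {u: pick_cache(u, ["varnish-a"]) for u in urls}
moved = sum(1 for u in urls if before[u] != after[u])
print(moved)  # roughly half the URLs now land on a cache that never saw them
```

With round-robin instead, both caches hold (nearly) the full working
set, which is why two warm actives fail over much more gracefully than
an active/cold-standby pair.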
We don't have a lot of traffic, but during peak hours a slow website
will instantly scare away potential paying customers.
The setup we plan is simple enough protocol- and application-wise.
The complexity is in the number of SSL endpoints we need for all of our
brands, not in the network setup.
I can understand what it should do, which automatically means it's
quite simple. :)
Regards,
Martin
_______________________________________________
varnish-misc mailing list
[email protected]
http://lists.varnish-cache.org/mailman/listinfo/varnish-misc