> At first glance, this is doing something that you can more cheaply and
> efficiently do at a higher level, with software dedicated to that purpose.
> It's interesting, but I'm not sure it's more than just a restatement of the
> same solution with its own problems.

Varnish performs very well.  Extending it to have cluster functionality
within Varnish just makes sense to me.  The workaround solutions so far
seem to involve quite a bit of hardware and still carry a miss rate of
50% in the two-instance example.  Sure, it can hot-populate fast, but
that's two stacks of memory wasted on the same data.  I suppose a
custom solution could hash the inbound requests somehow and determine
which Varnish should have the data, but I'm unsure whether anyone is
doing that today.

> F5/NetScaler is quite expensive, but they have significant functionality, too.
>
> The hardware required to run LVS/haproxy (for example) can be very cheap -- 
> Small RAM, 1-2 CPU cores per ethernet interface.  When you're already
> talking about scaling out to lots of big-RAM/disk Varnish
> boxes, the cost of a second load balancer is tiny, and the benefit of 
> redundancy is huge.

F5 has always made good gear, but the price point limits adoption to
deep pockets.  I am not convinced that most people need a hardware
load-balancing solution.  Adoption stays limited, and the N+1 purchase
requirement - two units minimum, three more optimally - adds up to
$$$$$$.

> Squid has a peering feature; I think if you had ever tried it you would know
> why it's not a fabulous idea. :)  It scales terribly.  Also, Memcache
> pooling that I've seen scale involves logic in the app (a higher level).

Squid is a total disaster.  If it weren't, none of us would be here
using Varnish now, would we? :)  It's amazing Squid even works at this
point.

The memcached pooling is a simple formula, really - it's microsecond
fast - and yes, it's typically done on the client:

Most standard client hashing within memcache clients uses a simple
modulus calculation on the value against the number of configured
memcached servers. You can summarize the process in pseudocode as:

  @memcservers = ('a.memc', 'b.memc', 'c.memc');
  $value  = hash($key);                       # hash the key to an integer
  $chosen = $value % scalar(@memcservers);    # index into the server list

Replacing the above with values:

  @memcservers = ('a.memc', 'b.memc', 'c.memc');
  $value  = hash('myid');                     # assume this hashes to 7009
  $chosen = 7009 % 3;                         # = 1

In the above example, the client hashing algorithm will choose the
server at index 1 (7009 % 3 = 1) and store or retrieve the key and
value with that server.
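The hash() above is left abstract; as a minimal runnable version, here
is the same modulus step in Perl using the built-in unpack() checksum
as a stand-in hash (so the value won't be 7009, but the mechanism is
identical):

  #!/usr/bin/perl
  use strict;
  use warnings;

  my @memcservers = ('a.memc', 'b.memc', 'c.memc');

  # Stand-in for hash(): Perl's built-in unpack() checksum.  Real clients
  # use their own hash (CRC32, FNV, ...), so the number differs from 7009,
  # but the modulus mapping works the same way.
  my $value  = unpack('%32C*', 'myid');
  my $chosen = $value % scalar(@memcservers);

  printf "key 'myid' hashes to %d, maps to %s\n", $value, $memcservers[$chosen];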

> Varnish as a pool/cluster also doesn't provide redundancy to the client 
> interface.
>
> A distributed Varnish cache (or perhaps a memcache storage option in 
> Varnish?) is really interesting; it might be scalable, but not obviously.  It 
> also doesn't eliminate the need for a higher-level balancer.
>

Well, in this instance, Varnish could do the modulus math against the
number of Varnish servers in the configured pool.  That wouldn't take
any time at all, and logic already seems to exist in the VCL config to
work around a backend server that can't be reached.  The same logic
could be adapted to the "front side" to try connecting to other Varnish
instances and doing the failover dance as needed - something like the
sketch below.
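Nothing like this exists today as far as I know, but as a rough
illustration of what that modulus-plus-failover could look like (the
pool hostnames, ports, and the connect-based health check are all
placeholders), consider:

  #!/usr/bin/perl
  use strict;
  use warnings;
  use IO::Socket::INET;

  # Hypothetical front-side "failover dance": pick a Varnish node by
  # modulus, and if it is unreachable walk forward through the pool
  # until one answers.
  my @pool = ('varnish-a:6081', 'varnish-b:6081', 'varnish-c:6081');

  sub pick_node {
      my ($key) = @_;
      my $start = unpack('%32C*', $key) % scalar(@pool);
      for my $i (0 .. $#pool) {
          my $node = $pool[ ($start + $i) % scalar(@pool) ];
          my ($host, $port) = split /:/, $node;
          my $sock = IO::Socket::INET->new(
              PeerAddr => $host,
              PeerPort => $port,
              Timeout  => 1,
          );
          if ($sock) {              # node is up: use it
              close $sock;
              return $node;
          }
      }
      return undef;                 # whole pool is down
  }

  my $node = pick_node('/some/object');
  print defined $node ? "route to $node\n" : "no Varnish reachable\n";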

I put in a feature request this evening for this functionality.  We'll
see what the official development folks think.  If it can't be
included in the core, then perhaps a front-end Varnish proxy is worth
developing - think of it as akin to Moxi in front of memcached
instances: http://labs.northscale.com/moxi/

I think tying Varnish into memcached is fairly interesting, as it
appears the market is allocating many resources towards memcached.  At
some point I believe memcached will become at least an unofficial
standard for fast memory-based storage.  There are a number of
manufacturers making custom, higher-performance memcached solutions -
Gear6 and Schooner come to mind foremost.

That's my $1 worth :)
_______________________________________________
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc
