Re: Strange different behavior

2010-01-15 Thread Poul-Henning Kamp
In message 20100114215025.gb9...@kjeks.kristian.int, Kristian Lyngstol writes: Vary on User-Agent is generally bad, and you should Just Fix That [tm]. Apart from the compatibility issue, a secondary reason it is a bad idea is that User-Agent is practically unique for every single PC in the
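The classic fix here is to normalize User-Agent to a handful of values before the cache lookup, so a backend's `Vary: User-Agent` produces only a few variants instead of one per browser string. A sketch in 2.0-era VCL (the patterns and class names are illustrative, not from the thread):

```vcl
sub vcl_recv {
    # Collapse nearly-unique User-Agent strings into a few classes so
    # "Vary: User-Agent" responses don't explode into one object per browser.
    if (req.http.User-Agent ~ "MSIE") {
        set req.http.User-Agent = "msie";
    } else if (req.http.User-Agent ~ "Firefox") {
        set req.http.User-Agent = "firefox";
    } else {
        set req.http.User-Agent = "other";
    }
}
```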

Re: Purging multiple requests

2010-01-15 Thread Laurence Rowe
2010/1/12 John Norman j...@7fff.com: Scenario: -- We would prefer not to leverage checking a lot of paths. -- Many pages are cached for GETs. -- In vcl_recv, we want to remove cookies and check the cache: if (req.request == GET) {     unset req.http.cookie;     unset
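A complete version of that idea, as a hedged sketch in 2.0-era VCL rather than the poster's exact configuration:

```vcl
sub vcl_recv {
    # Strip cookies from plain GETs so they are cacheable; anything else
    # keeps its cookies and falls through to the default logic.
    if (req.request == "GET") {
        unset req.http.Cookie;
        lookup;
    }
}
```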

Re: Strange different behavior

2010-01-15 Thread John Norman
OK. But if your application backend really doesn't do anything different for different user agents, then one should probably remove the User-Agent header? On Fri, Jan 15, 2010 at 7:52 AM, Poul-Henning Kamp p...@phk.freebsd.dk wrote: In message

Re: Strange different behavior

2010-01-15 Thread Rob S
Poul-Henning Kamp wrote: You really need to find out what bit of user-agent your backend cares about. We are talking a multiplication factor of 100-1000 here Very slightly off-topic, but is it possible to vary based on a cookie? I'd rather leave one of our applications to process the

Re: Strange different behavior

2010-01-15 Thread Poul-Henning Kamp
In message b6b8b6b71001150646w7f3ba876y30401d85f1813...@mail.gmail.com, John Norman writes: OK. But if your application backend really doesn't do anything different for different user agents, then one should probably remove the user-agent? Yes, by all means do so. -- Poul-Henning Kamp
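When the backend ignores the header entirely, dropping it is a one-liner in vcl_recv:

```vcl
sub vcl_recv {
    # The backend ignores User-Agent, so remove it before any Vary
    # processing can multiply cached objects.
    unset req.http.User-Agent;
}
```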

Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-15 Thread John Norman
Folks, A couple more questions: (1) Are there any good strategies for splitting load across Varnish front-ends? Or is the common practice to have just one Varnish server? (2) How do people avoid a single point of failure for Varnish? Do people run Varnish on two servers, amassing similar local

Re: Strange different behavior

2010-01-15 Thread Laurence Rowe
2010/1/15 Rob S rtshils...@gmail.com: Poul-Henning Kamp wrote: You really need to find out what bit of user-agent your backend cares about.  We are talking a multiplication factor of 100-1000 here Very slightly off-topic, but is it possible to vary based on a cookie? I'd rather leave one of
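One way to vary on a cookie in 2.0-era VCL is to fold the interesting cookie value into the cache hash instead of relying on a Vary header. A sketch (the cookie name `usertype` is made up for illustration):

```vcl
sub vcl_hash {
    set req.hash += req.url;
    set req.hash += req.http.host;
    # Add only the interesting cookie value, not the whole Cookie header,
    # so each usertype gets its own cached variant.
    if (req.http.Cookie ~ "usertype=") {
        set req.hash += regsub(req.http.Cookie, ".*usertype=([^;]*).*", "\1");
    }
    hash;
}
```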

Re: Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-15 Thread Rob S
John Norman wrote: Folks, A couple more questions: (1) Are there any good strategies for splitting load across Varnish front-ends? Or is the common practice to have just one Varnish server? (2) How do people avoid a single point of failure for Varnish? Do people run Varnish on two servers,

sess_timeout not working in 2.0.6?

2010-01-15 Thread Simon Effenberg
I have a question about v2.0.6: after upgrading from 2.0.3, the sess_timeout only first appears in the varnishlog after a minimum of 1 character has been sent to Varnish, so a client connection seems to never time out (maybe sometimes, but not in the first 10 minutes). With 2.0.3 this wasn't a problem (the
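For debugging this, the management CLI can show and adjust the parameter on a running instance (assuming the default management port; the values are illustrative):

```shell
# Show the current session timeout and its documentation
varnishadm -T localhost:6082 param.show sess_timeout

# Lower it to 5 seconds to make the behavior easier to observe
varnishadm -T localhost:6082 param.set sess_timeout 5
```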

Re: Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-15 Thread Bendik Heltne
A couple more questions: (1) Are there any good strategies for splitting load across Varnish front-ends? Or is the common practice to have just one Varnish server? We have 3 servers. A bit overkill, but then we have redundancy even if one fails. I guess 2 is the minimum option if you have an

Re: Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-15 Thread Rodrigo Benzaquen
HAProxy is open source and works pretty well. Also you can do load balancing based on a hash of the URL if you want. On Fri, Jan 15, 2010 at 3:09 PM, Bendik Heltne bhel...@gmail.com wrote: A couple more questions: (1) Are there any good strategies for splitting load across Varnish front-ends? Or is

Re: Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-15 Thread David Birdsong
On Fri, Jan 15, 2010 at 10:11 AM, Rodrigo Benzaquen rodr...@mercadolibre.com wrote: HAProxy is open source and works pretty well. Also you can do load balancing based on a hash of the URL if you want. Aye, the development is pretty active too. I asked for a consistent hash option in haproxy and got one
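The URL-hash setup being described looks roughly like this in haproxy.cfg (backend names and addresses are invented; `hash-type consistent` requires a HAProxy version that supports it):

```
backend varnish_pool
    # Hash the request URI so each URL consistently lands on one cache,
    # keeping the combined hit rate high across the pool.
    balance uri
    hash-type consistent
    server varnish1 10.0.0.1:6081 check
    server varnish2 10.0.0.2:6081 check
```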

tool for dumping contents of cache

2010-01-15 Thread David Birdsong
I know this would be a huge performance problem, but I'd really like a tool that could examine the storage file(s) of a running Varnish instance and dump out URLs and hit counts. I'd load balance traffic away from Varnish while doing this, so it would be OK for this tool to pummel the

Re: tool for dumping contents of cache

2010-01-15 Thread Poul-Henning Kamp
In message dcccdf791001151324l5b15909br954a438738b2...@mail.gmail.com, David Birdsong writes: I know this would be a huge performance problem, but I'd really like a tool that could examine the storage file(s) of a running varnish instance that could dump out URLs and hit counts. Play around
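Short of a real storage-file dumper, the stock tools can approximate the hit-count view from the request stream; note this reflects live traffic, not the full cache contents:

```shell
# Continuously rank the most-requested URLs seen by varnishd
# (RxURL is the 2.x shared-memory log tag for client request URLs)
varnishtop -i RxURL
```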

Re: Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-15 Thread pub crawler
Have we considered adding pooling functionality to Varnish, much like what they have in memcached? Run multiple Varnishes with load distributed amongst the identified Varnish server pool. So an element in Varnish gets hashed, and the hash identifies the server in the pool it's on. If the
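The pooling idea can be sketched with a consistent hash ring, a minimal illustration in Python (server names and the replica count are arbitrary): adding or removing a cache node only remaps a small fraction of URLs, instead of reshuffling everything the way a plain modulo hash would.

```python
import bisect
import hashlib

class HashRing:
    """Map keys (e.g. URLs) onto a pool of cache servers via consistent hashing."""

    def __init__(self, servers, replicas=100):
        # Each server is placed at many points on the ring so load spreads evenly.
        self.ring = []  # sorted list of (point, server)
        for server in servers:
            for i in range(replicas):
                point = self._hash("%s:%d" % (server, i))
                self.ring.append((point, server))
        self.ring.sort()

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def get(self, key):
        # Walk clockwise to the first ring point at or after the key's hash.
        point = self._hash(key)
        idx = bisect.bisect(self.ring, (point,))
        if idx == len(self.ring):
            idx = 0  # wrap around the ring
        return self.ring[idx][1]

ring = HashRing(["varnish-a", "varnish-b", "varnish-c"])
server = ring.get("/some/url")  # always the same server for the same URL
```

A front-end (or the clients themselves) would route each request to `ring.get(req.url)`, so every URL lives in exactly one cache and the pool's combined memory is used efficiently.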

Re: Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-15 Thread Michael Fischer
On Fri, Jan 15, 2010 at 3:39 PM, pub crawler pubcrawler@gmail.com wrote: The recommendation of load balancers in front of Varnish to facilitate this feature seems costly when talking about F5 gear. The open source solutions require at least two servers dedicated to this load balancing

Re: Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-15 Thread Ken Brownfield
On Jan 15, 2010, at 3:39 PM, pub crawler wrote: Have we considered adding pooling functionality to Varnish, much like what they have in memcached? Run multiple Varnishes with load distributed amongst the identified Varnish server pool. So an element in Varnish gets hashed and the hash

Re: Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-15 Thread rodrigo
We have an F5 GTM on our main datacenter and some servers with Varnish there; we also have HAProxy with 3 Varnish servers on local sites and use F5 GTM with GeoIP to always serve the content from the local site. On each local datacenter we have 400 Mbit/s, so HAProxy works great for us. Also

Re: Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-15 Thread pub crawler
At first glance, this is doing something that you can more cheaply and efficiently do at a higher level, with software dedicated to that purpose. It's interesting, but I'm not sure it's more than just a restatement of the same solution with its own problems. Varnish performs very well.

Re: Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-15 Thread Michael Fischer
On Fri, Jan 15, 2010 at 6:14 PM, Michael Fischer mich...@dynamine.net wrote: I'm all for putting backend hashing into Varnish for the purpose of routing requests to backends based on a consistent hash of the request parameters -- and there's no reason why the backend can't be another Varnish

how to purge via http

2010-01-15 Thread David Birdsong
I curl and the hit count is 3 curl -I http://localhost:6081/lru.10.cache.buster HTTP/1.1 200 OK Server: nginx/0.7.64 Content-Type: application/octet-stream Last-Modified: Sat, 16 Jan 2010 03:03:10 GMT X-Varnish-IP: 127.0.0.1 X-Varnish-Port: 6081 Content-Length: 104857600 Date: Sat, 16 Jan 2010

Re: how to purge via http

2010-01-15 Thread David Birdsong
On Fri, Jan 15, 2010 at 7:46 PM, David Birdsong david.birds...@gmail.com wrote: I curl and the hit count is 3 curl -I http://localhost:6081/lru.10.cache.buster HTTP/1.1 200 OK Server: nginx/0.7.64 Content-Type: application/octet-stream Last-Modified: Sat, 16 Jan 2010 03:03:10 GMT
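The usual way to purge via HTTP in 2.0-era Varnish is a PURGE handler in VCL. A hedged sketch (the ACL and status texts are illustrative):

```vcl
acl purgers {
    "127.0.0.1";
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purgers) {
            error 405 "Not allowed";
        }
        # purge_url takes a regular expression matching the URLs to invalidate
        purge_url(req.url);
        error 200 "Purged";
    }
}
```

A purge would then be issued with something like `curl -X PURGE http://localhost:6081/lru.10.cache.buster`; note that because purge_url treats its argument as a regex, URLs containing regex metacharacters need escaping.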