Re: [varnish] Re: [varnish] Re: Handling of cache-control

2010-01-19 Thread Ricardo Newbery

On Jan 19, 2010, at 2:03 PM, Michael Fischer wrote:

> On Tue, Jan 19, 2010 at 1:48 PM, Ricardo Newbery wrote:
>
> Other than the private token, the other thing I used to do to tell
> Varnish and clients to cache differently is to attach a special header
> like X-CacheInVarnishOnly or some such (support in Varnish for
> Surrogate-Control would be a better solution).  But recently, I came
> across another strategy.  As far as I can tell, there is no good
> usecase for a non-zero s-maxage token outside your reverse-proxy.  So
> now I just use the s-maxage token to tell Varnish how to cache and
> then strip it from the response headers (or reset to s-maxage=0) to
> avoid contaminating any forward proxies downstream.
>
> This seems logical to me.  Are there any drawbacks to using  
> Surrogate-Control?
>
> --Michael


Not that I'm aware of.  Except that only Squid 3.x supports it right  
now  ;-)

Cheers,
Ricardo Newbery
http://digitalmarbles.com

___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Re: [varnish] Re: Handling of cache-control

2010-01-19 Thread Ricardo Newbery

On Jan 19, 2010, at 10:26 AM, Michael Fischer wrote:

> Cache-Control: private certainly meets the goal you stated, at least  
> insofar as making Varnish behave differently than the client -- it  
> states that the client can cache, but Varnish (as an intermediate  
> cache) cannot.


I'm being pedantic but... technically I believe private is just  
ignored by browsers, which amounts to the same thing  :-)


> I assume, however, that some engineers want a way to do the opposite  
> - to inform Varnish that it can cache, but inform the client that it  
> cannot.  Ordinarily I'd think this is not a very good idea, since  
> you almost always want to keep the cached copy as close to the user  
> as possible.  But I guess there are some circumstances where an  
> engineer would want to preload a cache with prerendered data that is  
> expensive to generate, and, also asynchronously force updates by  
> flushing stale objects with a PURGE or equivalent.  In that case the  
> cache TTL would be very high, but not necessarily meaningful.
>
> I'm not sure it makes sense to extend the Cache-Control: header  
> here, because there could be secondary intermediate caches  
> downstream that are not under the engineer's control; so we need a  
> way to inform only authorized intermediate caches that they should  
> cache the response with the specified TTL.
>
> One way I've seen to accomplish this goal is to inject a custom  
> header in the response, but we need to ensure it is either encrypted  
> (so that non-authorized caches can't see it -- but this could be  
> costly in terms of CPU) or removed by the last authorized  
> intermediate cache as the response is passed back downstream.


Storing responses only in your reverse-proxy and out of the browser  
cache is a common usecase for a CMS.  Otherwise, a content change may  
not propagate to your users unless you force them all to do  
conditional requests to your backend.

A custom header works.  So would the Surrogate-Control header if  
Varnish supported it -- this is exactly the usecase this header was  
intended for.  But these days, I've begun using s-maxage as a  
surrogate for Surrogate-Control and just stripping it from the final  
response -- not as flexible as Surrogate-Control but it does  
everything I need right now.

Regards,
Ricardo Newbery
http://digitalmarbles.com




Re: [varnish] Re: Handling of cache-control

2010-01-19 Thread Michael Fischer
On Tue, Jan 19, 2010 at 1:48 PM, Ricardo Newbery wrote:

Other than the private token, the other thing I used to do to tell
> Varnish and clients to cache differently is to attach a special header
> like X-CacheInVarnishOnly or some such (support in Varnish for
> Surrogate-Control would be a better solution).  But recently, I came
> across another strategy.  As far as I can tell, there is no good
> usecase for a non-zero s-maxage token outside your reverse-proxy.  So
> now I just use the s-maxage token to tell Varnish how to cache and
> then strip it from the response headers (or reset to s-maxage=0) to
> avoid contaminating any forward proxies downstream.


This seems logical to me.  Are there any drawbacks to using
Surrogate-Control?

--Michael


Re: Health check -- just connect, or full response?

2010-01-19 Thread Poul-Henning Kamp
In message , John Norman writes:
>Folks,
>
>For the health check (or, ahem, "backend probe," as the docs have it --
>ouch!), does "health" constitute ability to connect?
>
>Or does it check for a 200?

It checks for a 200.
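
For reference, a probe checking a URL might look like this in VCL (a
sketch only; the backend address and /health path are hypothetical, and
probe defaults vary between Varnish versions):

    backend default {
        .host = "127.0.0.1";
        .port = "8080";
        .probe = {
            # Health is determined by the response status: anything
            # other than a 200 within .timeout marks the poll failed.
            .url = "/health";
            .timeout = 2s;
            .interval = 5s;
            # The backend is healthy when at least .threshold of the
            # last .window polls succeeded.
            .window = 5;
            .threshold = 3;
        }
    }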

-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Time to replace the hit ratio with something more intuitive?

2010-01-19 Thread Michael Fischer
On Tue, Jan 19, 2010 at 1:40 PM, Rob S  wrote:

> Michael Fischer wrote:
>
>  On Tue, Jan 19, 2010 at 12:09 PM, Nils Goroll wrote:
>>
>>I am suggesting to amend (or replace ?) this figure by a ratio of
>>client
>>requests being handled by the cache by total number of requests.
>>In other words,
>>a measure for how many of the client requests do not result in a
>>backend request.
>>
>>
>> I vote for the replacement option.  In my view, the ratio should be (total
>> requests)/(requests satisfied from cache).
>>
> That'd give odd figures (e.g. 1.25), when you'd expect to see 0.8.  Can we
> flip it the other way up?
>

Oops!  Yes.

I'd also caution against replacing, as people may have monitoring against
> the old figures...
>

Well, under the current regime, the figures may lead to a false sense of
complacency since the hit ratio may be falsely high.  If changing it causes
additional alerts to be raised, they probably needed to know all along. :)

--Michael


Re: [varnish] Re: Handling of cache-control

2010-01-19 Thread Ricardo Newbery

On Jan 18, 2010, at 4:37 PM, Poul-Henning Kamp wrote:

> In message , "Michael S. Fischer" writes:
>> On Jan 18, 2010, at 5:20 AM, Tollef Fog Heen wrote:
>
>>> My suggestion is to also look at Cache-control: no-cache, possibly  
>>> also
>>> private and no-store and obey those.
>>
>> Why wasn't it doing it all along?
>
> Because we wanted to give the backend a chance to tell Varnish one
> thing with respect to caching, and the client another.
>
> I'm not saying we hit the right decision, and welcome any consistent,
> easily explainable policy you guys can agree on.



IMHO, the private token should be added to the list that Varnish
supports out-of-the-box, as there is probably a very good reason why
the backend wants to keep private responses out of any shared caches.
I'm ambivalent about the others.  The no-store and no-cache tokens can
be a problem for IE in certain usecases, so I try to discourage their
use.  Instead I just set max-age=0 with no ETag/Last-Modified, which
for most practical cases is pretty much equivalent.  In practice, I
usually add all three tokens (private, no-store, no-cache) to VCL
anyway, just to cover my bases.
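
That "cover my bases" VCL might be sketched roughly like this (2.0-era
syntax, where vcl_fetch sees the backend response as obj; later
versions renamed it beresp):

    sub vcl_fetch {
        # Don't let responses marked private/no-store/no-cache
        # enter the shared cache.
        if (obj.http.Cache-Control ~ "(private|no-store|no-cache)") {
            pass;
        }
    }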

Other than the private token, the other thing I used to do to tell  
Varnish and clients to cache differently is to attach a special header  
like X-CacheInVarnishOnly or some such (support in Varnish for  
Surrogate-Control would be a better solution).  But recently, I came  
across another strategy.  As far as I can tell, there is no good  
usecase for a non-zero s-maxage token outside your reverse-proxy.  So  
now I just use the s-maxage token to tell Varnish how to cache and  
then strip it from the response headers (or reset to s-maxage=0) to  
avoid contaminating any forward proxies downstream.
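
A minimal sketch of the stripping step (assuming Varnish has already
taken its TTL from s-maxage at fetch time, which it does by default;
2.0-era VCL syntax):

    sub vcl_deliver {
        # Varnish's TTL was already set from s-maxage; reset the token
        # so downstream forward proxies don't also cache on it.
        if (resp.http.Cache-Control ~ "s-maxage") {
            set resp.http.Cache-Control = regsub(
                resp.http.Cache-Control, "s-maxage=[0-9]+", "s-maxage=0");
        }
    }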

Cheers,
Ricardo Newbery
http://digitalmarbles.com







Re: Time to replace the hit ratio with something more intuitive?

2010-01-19 Thread Rob S
Michael Fischer wrote:
> On Tue, Jan 19, 2010 at 12:09 PM, Nils Goroll wrote:
>
> I am suggesting to amend (or replace ?) this figure by a ratio of
> client
> requests being handled by the cache by total number of requests.
> In other words,
> a measure for how many of the client requests do not result in a
> backend request.
>
>
> I vote for the replacement option.  In my view, the ratio should be 
> (total requests)/(requests satisfied from cache).
That'd give odd figures (e.g. 1.25), when you'd expect to see 0.8.  Can we 
flip it the other way up?

I'd also caution against replacing, as people may have monitoring 
against the old figures...

Rob


Re: Time to replace the hit ratio with something more intuitive?

2010-01-19 Thread Michael Fischer
On Tue, Jan 19, 2010 at 12:09 PM, Nils Goroll  wrote:

> The varnishstat cache hit rate basically gives a ratio for how many
> requests
> being directed to the cache component of varnish have been answered from
> it. It
> does not say anything about the number of requests being passed onto the
> backend
> for whatever reason. So it is possible to see cache hit rates of 0.9999
> (99.99%)
> but still 99% of the client requests hit your backend, if only 1% of the
> requests qualify for being served from the cache.


> I am suggesting to amend (or replace ?) this figure by a ratio of client
> requests being handled by the cache by total number of requests. In other
> words,
> a measure for how many of the client requests do not result in a backend
> request.


I vote for the replacement option.  In my view, the ratio should be (total
requests)/(requests satisfied from cache).

--Michael


Re: Time to replace the hit ratio with something more intuitive?

2010-01-19 Thread Darryl Dixon - Winterhouse Consulting
> Hi,
>
> in http://varnish.projects.linpro.no/ticket/613 I have suggested to add a
> measure to varnishstat which I thought could be called the "efficiency
> ratio".
>
> Here's how the two figures look on a production system:
>
> Hitrate ratio:  10  100 1000
> Hitrate avg:0.9721   0.9721   0.9731
> Efficiency ratio:  10  100 1000
> Efficiency avg:0.9505   0.9522   0.9533
>
>  55697963   200.97   256.93 Client connections accepted
> 402992210  1518.81  1858.98 Client requests received
> 390022582  1471.82  1799.15 Cache hits
>  1549 0.00 0.01 Cache hits for pass
>   9053637    22.00    41.76 Cache misses
>
>
> Now it's up to you, what do you think about this?

+1

regards,
Darryl Dixon
Winterhouse Consulting Ltd
http://www.winterhouseconsulting.com


Re: Handling of cache-control

2010-01-19 Thread Rob S
Michael Fischer wrote:
> On Mon, Jan 18, 2010 at 4:37 PM, Poul-Henning Kamp wrote:
>
> In message , "Michael S. Fischer" writes:
> >On Jan 18, 2010, at 5:20 AM, Tollef Fog Heen wrote:
>
> >> My suggestion is to also look at Cache-control: no-cache,
> possibly also
> >> private and no-store and obey those.
> >
> >Why wasn't it doing it all along?
>
> Because we wanted to give the backend a chance to tell Varnish one
> thing with respect to caching, and the client another.
>
> I'm not saying we hit the right decision, and welcome any consistent,
> easily explainable policy you guys can agree on.
>
>
> Well, the problem is that application engineers who understand what 
> that header does have a reasonable expectation that the caches will 
> obey them, and so I think Varnish should honor them as Squid does. 
>  Otherwise surprising results will occur when the caching platform is 
> changed.
>
> Cache-Control: private certainly meets the goal you stated, at least 
> insofar as making Varnish behave differently than the client -- it 
> states that the client can cache, but Varnish (as an intermediate 
> cache) cannot.  
>
> I assume, however, that some engineers want a way to do the opposite - 
> to inform Varnish that it can cache, but inform the client that it 
> cannot.  Ordinarily I'd think this is not a very good idea, since you 
> almost always want to keep the cached copy as close to the user as 
> possible.  But I guess there are some circumstances where an engineer 
> would want to preload a cache with prerendered data that is expensive 
> to generate, and, also asynchronously force updates by flushing stale 
> objects with a PURGE or equivalent.  In that case the cache TTL would 
> be very high, but not necessarily meaningful. 
>
> I'm not sure it makes sense to extend the Cache-Control: header here, 
> because there could be secondary intermediate caches downstream that 
> are not under the engineer's control; so we need a way to inform only 
> authorized intermediate caches that they should cache the response 
> with the specified TTL.  
>
> One way I've seen to accomplish this goal is to inject a custom header 
> in the response, but we need to ensure it is either encrypted (so that 
> non-authorized caches can't see it -- but this could be costly in 
> terms of CPU) or removed by the last authorized intermediate cache as 
> the response is passed back downstream.
>
> --Michael

Michael,

You've obviously got some strong views about varnish, as we've all seen 
from the mailing list over the past few days!

When we deployed varnish, we did so in front of applications that 
weren't prepared to have a cache in front of them.  Accordingly, we 
disabled all caching on HTML and RSS type content in Varnish, and 
instead just cached CSS / JS / images.  This was a good outcome because 
we could stop using round robin DNS (which is a bit questionable, imho, 
if it includes more than two or three hosts) to the web servers, and 
instead just point 2 A records at Varnish.  We elected to use 
X-External-Cache-Control AND X-Internal-TTL as headers that we'd set 
in Varnish-aware applications.  So, old apps that emit cache-control 
headers are completely uncached by Varnish, and new apps can benefit 
from a certain degree of caching by Varnish.

PHK's plans for 2010 will enable us to fully exploit our X-Internal-TTL 
headers because it'll be able to parse TTL values out of headers.  In 
the meantime, these are hard-set in Varnish to a value that's 
appropriate for our apps.

The X-External-Cache-Control is then presented as Cache-Control to 
public HTTP requests.
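
That last rewrite might be sketched as follows (hypothetical VCL, since
the actual rules weren't posted):

    sub vcl_deliver {
        # Publish the application's intended public policy, and hide
        # the internal headers from clients.
        if (resp.http.X-External-Cache-Control) {
            set resp.http.Cache-Control = resp.http.X-External-Cache-Control;
        }
        remove resp.http.X-External-Cache-Control;
        remove resp.http.X-Internal-TTL;
    }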

This describes how we've chosen to deploy varnish, without causing our 
application developers huge headaches.  In parallel, we've changed many 
of our sites to use local cookies+javascript to add personalisation to 
the most popular pages.  Overall, deploying Varnish has seen a big 
reduction in back end requests, PLUS the ability to load balance over a 
large pool whilst still implementing sticky-sessions where our apps 
still need them.  Varnish is, as the name suggests, a lovely layer in 
front of our platform which makes it perform better.

Now, to answer your points: 

1) Application developers being aware of caching headers:  I'd disagree 
here.  Our approach is to use code libraries to deliver functionality to 
the developers which the sysadmins can maintain.  There's always some 
overlap here, but we're comfortable with our position.  We're a PHP 
company, and so we've a class that's used statically, with methods such 
as Cacheability::noCache(), Cacheability::setExternalExpiryTime($secs), 
and Cacheability::setInternalExpiryTime($secs), as well as 
Cacheability::purgeCache($path).  Just as, I'm sure, your developers are 
using abstraction layers for database access, then they could use a 
simila

In management port: vcl.discard

2010-01-19 Thread John Norman
Folks,

I've been loading new VCL files with a timestamp on the name (e.g.,
cfg100119151756).

vcl.discard is great if you know the name.

But it could be very useful to have a command such as "vcl.purge" to
get rid of all configs except for the active one.

John


Time to replace the hit ratio with something more intuitive?

2010-01-19 Thread Nils Goroll
Hi,

in http://varnish.projects.linpro.no/ticket/613 I have suggested to add a 
measure to varnishstat which I thought could be called the "efficiency ratio".

Tollef has commented that we'd need the community's (YOUR) opinion on this:

The varnishstat cache hit rate basically gives a ratio for how many requests 
being directed to the cache component of varnish have been answered from it. It 
does not say anything about the number of requests being passed onto the 
backend 
for whatever reason. So it is possible to see cache hit rates of 0.9999 
(99.99%) 
but still 99% of the client requests hit your backend, if only 1% of the 
requests qualify for being served from the cache.

I am suggesting to amend (or replace?) this figure with the ratio of 
client requests handled by the cache to the total number of requests; in 
other words, a measure of how many of the client requests do not result 
in a backend request.

My experience is that this figure is far more important, because cache users 
will mostly be interested in saving backend requests. The cache hit rate is 
probably of secondary importance, and it can be confusing to see a high cache 
hit rate while (too) many requests are still hitting the backend.

Here's how the two figures look on a production system:

Hitrate ratio:  10  100 1000
Hitrate avg:0.9721   0.9721   0.9731
Efficiency ratio:  10  100 1000
Efficiency avg:0.9505   0.9522   0.9533

 55697963   200.97   256.93 Client connections accepted
402992210  1518.81  1858.98 Client requests received
390022582  1471.82  1799.15 Cache hits
     1549     0.00     0.01 Cache hits for pass
  9053637    22.00    41.76 Cache misses
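
As a sanity check against the raw counters above (these are cumulative
totals, while the averages printed by varnishstat are windowed, so they
won't match exactly):

    hit rate   = hits / (hits + misses)
               = 390022582 / 399076219  ~ 0.977
    efficiency = hits / client requests
               = 390022582 / 402992210  ~ 0.968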


Now it's up to you, what do you think about this?

Nils


Health check -- just connect, or full response?

2010-01-19 Thread John Norman
Folks,

For the health check (or, ahem, "backend probe," as the docs have it --
ouch!), does "health" constitute ability to connect?

Or does it check for a 200?

Or get an entire page and verify that it's the right number of bytes . . . ?

Or . . . ?

In short, what constitutes a successful probe?

I'm using .url, not .request.

John


Re: Handling of cache-control

2010-01-19 Thread Michael Fischer
On Mon, Jan 18, 2010 at 4:37 PM, Poul-Henning Kamp wrote:

> In message , "Michael S. Fischer" writes:
> >On Jan 18, 2010, at 5:20 AM, Tollef Fog Heen wrote:
>
> >> My suggestion is to also look at Cache-control: no-cache, possibly also
> >> private and no-store and obey those.
> >
> >Why wasn't it doing it all along?
>
> Because we wanted to give the backend a chance to tell Varnish one
> thing with respect to caching, and the client another.
>
> I'm not saying we hit the right decision, and welcome any consistent,
> easily explainable policy you guys can agree on.


Well, the problem is that application engineers who understand what that
header does have a reasonable expectation that the caches will obey them,
and so I think Varnish should honor them as Squid does.  Otherwise surprising
results will occur when the caching platform is changed.

Cache-Control: private certainly meets the goal you stated, at least insofar
as making Varnish behave differently than the client -- it states that the
client can cache, but Varnish (as an intermediate cache) cannot.

I assume, however, that some engineers want a way to do the opposite - to
inform Varnish that it can cache, but inform the client that it cannot.
 Ordinarily I'd think this is not a very good idea, since you almost always
want to keep the cached copy as close to the user as possible.  But I guess
there are some circumstances where an engineer would want to preload a cache
with prerendered data that is expensive to generate, and, also
asynchronously force updates by flushing stale objects with a PURGE or
equivalent.  In that case the cache TTL would be very high, but not
necessarily meaningful.

I'm not sure it makes sense to extend the Cache-Control: header here,
because there could be secondary intermediate caches downstream that are not
under the engineer's control; so we need a way to inform only authorized
intermediate caches that they should cache the response with the specified
TTL.

One way I've seen to accomplish this goal is to inject a custom header in
the response, but we need to ensure it is either encrypted (so that
non-authorized caches can't see it -- but this could be costly in terms of
CPU) or removed by the last authorized intermediate cache as the response is
passed back downstream.

--Michael


RE: Feature REQ: Match header value against acl

2010-01-19 Thread Henry Paulissen
Nice

When will this be in trunk?

Regards,




@Paul, sorry... forgot to include varnish-misc

-----Original Message-----
From: p...@critter.freebsd.dk [mailto:p...@critter.freebsd.dk] On behalf of
Poul-Henning Kamp
Sent: Tuesday, January 19, 2010 18:24
To: Henry Paulissen
CC: varnish-misc@projects.linpro.no
Subject: Re: Feature REQ: Match header value against acl

In message <002501ca9918$aa519aa0$fef4cf...@paulissen@qbell.nl>, "Henry
Paulissen" writes:

>What I tried to do is as follow:
>
>if ( !req.http.X-Forwarded-For ~ purge ) {

I have decided what the syntax for this will be, but I have still
not implemented it.

In general all type conversions, except to string, will be explicit
and provide a default, so the above would become:


if (!IP(req.http.X-Forwarded-For, 127.0.0.2) ~ purge) {
...

If the X-F-F header is not there, or does not contain an IP#,
127.0.0.2 will be used instead.



-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.



Re: Feature REQ: Match header value against acl

2010-01-19 Thread Poul-Henning Kamp
In message <002501ca9918$aa519aa0$fef4cf...@paulissen@qbell.nl>, "Henry
Paulissen" writes:

>What I tried to do is as follow:
>
>if ( !req.http.X-Forwarded-For ~ purge ) {

I have decided what the syntax for this will be, but I have still
not implemented it.

In general all type conversions, except to string, will be explicit
and provide a default, so the above would become:


if (!IP(req.http.X-Forwarded-For, 127.0.0.2) ~ purge) {
...

If the X-F-F header is not there, or does not contain an IP#,
127.0.0.2 will be used instead.



-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Varnish use for purely binary files

2010-01-19 Thread Michael S. Fischer
On Jan 19, 2010, at 12:46 AM, Poul-Henning Kamp wrote:

> In message , "Michael S. Fischer" writes:
> 
>> Does Varnish already try to utilize CPU caches efficiently by employing
>> some sort of LIFO thread reuse policy or by pinning thread pools to
>> specific CPUs?  If not, there might be some opportunity for optimization
>> there.
> 
> You should really read the varnish_perf.pdf slides I linked to yesterday...

They appear to only briefly mention the LIFO issue (in one bullet point toward 
the end), and do not discuss the CPU affinity issue.

--Michael


Feature REQ: Match header value against acl

2010-01-19 Thread Henry Paulissen
I noticed it is impossible to match a header value against an ACL.

 

What I tried to do is as follow:

if ( !req.http.X-Forwarded-For ~ purge ) {

remove req.http.Cache-Control;

}

 

This is to reduce the number of forced refreshes due to bots.

And normally you would use client.ip (which works with ACLs), but I have a
load balancer in front of varnish, so all client IP addresses are in the
X-Forwarded-For header.

 

A dirty quick fix for now is to use regex, but this gives a lot of extra
code (as I have to match against several IPs).
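
That regex workaround might look something like this (a sketch with
made-up addresses; note that X-Forwarded-For can hold a comma-separated
list, so the pattern has to anchor each entry):

    sub vcl_recv {
        # Hypothetical trusted addresses allowed to force refreshes.
        if (req.http.X-Forwarded-For !~
            "(^|[, ])(192\.168\.0\.10|192\.168\.0\.11)(,|$)") {
            remove req.http.Cache-Control;
        }
    }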

 

Current version: varnish-trunk SVN 

 

 

Regards,

Henry

 



Re: Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-19 Thread Rob Ayres
2010/1/15 Rob S 

> John Norman wrote:
> > Folks,
> >
> > A couple more questions:
> >
> > (1) Are they any good strategies for splitting load across Varnish
> > front-ends? Or is the common practice to have just one Varnish server?
> >
> > (2) How do people avoid single-point-of-failure for Varnish? Do people
> > run Varnish on two servers, amassing similar local caches, but put
> > something in front of the two Varnishes? Or round-robin-DNS?
> >
> We're running with two instances and round-robin DNS.  The varnish
> servers are massively underused, and splitting the traffic also means we
> get half the hit rate.  But it avoids the SPOF.
>
> Is anyone running LVS or similar in front of Varnish and can share their
> experience?
>

We run two varnish servers behind a netscaler load balancer to eliminate
SPOF. Works fine; as the previous poster mentions, you lower your hit rate,
but not as much as I expected.

As far as load is concerned, we could easily use just one server and it
would probably still be 99% idle.


Cache invalidation (not PURGE)

2010-01-19 Thread Antoni Villalonga
Hi!

I'm looking for a method to invalidate some URL;
http://example.com/invalidate.html, for example.

If I PURGE using the 'purge()' function and two different users GET 
"http://example.com/invalidate.html" at the same time, Varnish asks the 
backend server for the URL twice.

When a URL's cache entry expires and two different users GET it at the 
same time, Varnish asks the backend only once and serves an expired 
version to the "second" user (yes, we use "grace time").

In short, I also want to use "grace time" when URLs are "purged".
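
One workaround sometimes suggested for this (a sketch, 2.x-era VCL) is
to expire the object on PURGE rather than removing it outright, so that
grace can still serve the stale copy while a single backend fetch
refreshes it:

    sub vcl_hit {
        if (req.request == "PURGE") {
            # Mark the object expired instead of purging it; with a
            # grace period set, Varnish keeps serving the stale copy
            # while one backend request fetches the new version.
            set obj.ttl = 0s;
            error 200 "Expired.";
        }
    }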

Thanks!



RE: feature request cache refresh

2010-01-19 Thread Henry Paulissen
As far as I know, varnish does this by default?

To expire content you have to serve proper Expires and Last-Modified headers.
Some (dynamic) applications set improper headers, or none of those headers at
all.


===
@Martin Boer (originally in Dutch): Please contact me by email. I have
built up quite a bit of experience with Varnish and may be able to
assist you.
===


Regards,
Henry
-Original Message-
From: varnish-misc-boun...@projects.linpro.no
[mailto:varnish-misc-boun...@projects.linpro.no] On Behalf Of Rob S
Sent: Tuesday, January 19, 2010 9:23
To: Martin Boer
Cc: Varnish misc
Subject: Re: feature request cache refresh

Martin Boer wrote:
> I would like to see the following feature in varnish;
> during the grace period varnish will serve requests from the cache but 
> simultaneously does a backend request and stores the new object.
>   
This would also be of interest to us.  I'm not sure if it's best to have 
a parameter to vary the behaviour of 'grace', or to have an additional 
parameter for "max age of stale content to serve".
 
> If anyone has a workable workaround to achieve the same results I'm very 
> interested.
>   
Anyone?



Rob


Re: Varnish use for purely binary files

2010-01-19 Thread Poul-Henning Kamp
In message , "Michael S. Fischer" writes:

>Does Varnish already try to utilize CPU caches efficiently by employing
>some sort of LIFO thread reuse policy or by pinning thread pools to
>specific CPUs?  If not, there might be some opportunity for optimization
>there.

You should really read the varnish_perf.pdf slides I linked to yesterday...

-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: feature request cache refresh

2010-01-19 Thread Rob S
Martin Boer wrote:
> I would like to see the following feature in varnish;
> during the grace period varnish will serve requests from the cache but 
> simultaneously does a backend request and stores the new object.
>   
This would also be of interest to us.  I'm not sure if it's best to have 
a parameter to vary the behaviour of 'grace', or to have an additional 
parameter for "max age of stale content to serve".
 
> If anyone has a workable workaround to achieve the same results I'm very 
> interested.
>   
Anyone?



Rob


feature request cache refresh

2010-01-19 Thread Martin Boer
Hi all,

I would like to see the following feature in varnish:
during the grace period varnish will serve requests from the cache but 
simultaneously does a backend request and stores the new object.

As varnish is much faster than backend servers, this will give the end 
user the fastest experience possible, and at the same time dynamic web 
pages will be both dynamic-ish and cached.

If anyone has a workable workaround to achieve the same results I'm very 
interested.

The reason I would like this feature is because our webshop has almost 
hourly changing prices. This means we can't cache all related pages (or 
not for very long), and each time the backends receive a request they 
have to rebuild data from several databases, which is slow.


Regards,
Martin



 