Re: Two New HTTP Caching Extensions

2008-04-08 Thread Ricardo Newbery

On Apr 8, 2008, at 4:26 PM, Michael S. Fischer wrote:

> On Tue, Apr 8, 2008 at 4:25 PM, Michael S. Fischer <[EMAIL PROTECTED] 
> > wrote:
>> On Tue, Apr 8, 2008 at 4:18 PM, Ricardo Newbery <[EMAIL PROTECTED] 
>> > wrote:
>>> +1 on stale-while-revalidate.  I found this one to be real handy.
>>
>> Another +1
>
> I should add a qualifier to my vote: stale-while-revalidate is
> generally used to mask suboptimal backend performance, so I
> discourage it in favor of fixing the backend.
>
> --Michael


Of course the main premise of a reverse-proxy cache is to mask  
suboptimal backend performance.  :-)

Ric




Re: Two New HTTP Caching Extensions

2008-04-08 Thread Michael S. Fischer
On Tue, Apr 8, 2008 at 4:25 PM, Michael S. Fischer <[EMAIL PROTECTED]> wrote:
> On Tue, Apr 8, 2008 at 4:18 PM, Ricardo Newbery <[EMAIL PROTECTED]> wrote:
>  >  +1 on stale-while-revalidate.  I found this one to be real handy.
>
>  Another +1

I should add a qualifier to my vote: stale-while-revalidate is
generally used to mask suboptimal backend performance, so I
discourage it in favor of fixing the backend.

--Michael


Re: Two New HTTP Caching Extensions

2008-04-08 Thread Michael S. Fischer
On Tue, Apr 8, 2008 at 4:18 PM, Ricardo Newbery <[EMAIL PROTECTED]> wrote:
>  +1 on stale-while-revalidate.  I found this one to be real handy.

Another +1

--Michael


Re: Two New HTTP Caching Extensions

2008-04-08 Thread Ricardo Newbery

On Apr 7, 2008, at 3:18 PM, Jon Drukman wrote:

> Poul-Henning Kamp wrote:
>> In message <[EMAIL PROTECTED]>, Sam Quigley writes:
>>> ...just thought I'd point out another seemingly-nifty thing the  
>>> Squid
>>> folks are working on:
>>>
>>> http://www.mnot.net/cache_channels/
>>> and
>>> http://www.mnot.net/blog/2008/01/04/cache_channels
>>
>> Interesting to see what hoops they try to jump through these days...
>>
>
> I just got through working at Yahoo, and they have valid reasons to want
> all these behaviors.  The thing I didn't like about the cache channel
> implementation is that it involves Squid polling an RSS feed every few
> seconds to determine which bits of the cache to invalidate.
>
> I'm looking at launching a small site for a client, and the
> stale-while-revalidate/stale-on-error functionality is fairly critical.
> I want to go with Varnish, though: a front-end cache server in India,
> pulling content from the USA... lots of potential for slow or dead
> connections back to the origin, so it would be great if Varnish would
> serve stale content in that eventuality.
>
> -jsd-


+1 on stale-while-revalidate.  I found this one to be real handy.

Ric




Re: Two New HTTP Caching Extensions

2008-04-08 Thread Jon Drukman
Poul-Henning Kamp wrote:
> In message <[EMAIL PROTECTED]>, Sam Quigley writes:
>> ...just thought I'd point out another seemingly-nifty thing the Squid  
>> folks are working on:
>>
>> http://www.mnot.net/cache_channels/
>> and
>> http://www.mnot.net/blog/2008/01/04/cache_channels
> 
> Interesting to see what hoops they try to jump through these days...
> 

I just got through working at Yahoo, and they have valid reasons to want
all these behaviors.  The thing I didn't like about the cache channel
implementation is that it involves Squid polling an RSS feed every few
seconds to determine which bits of the cache to invalidate.

I'm looking at launching a small site for a client, and the
stale-while-revalidate/stale-on-error functionality is fairly critical.
I want to go with Varnish, though: a front-end cache server in India,
pulling content from the USA... lots of potential for slow or dead
connections back to the origin, so it would be great if Varnish would
serve stale content in that eventuality.
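
These extensions are ordinary response Cache-Control directives; per Mark
Nottingham's stale-while-revalidate draft, a response using them would look
roughly like this (the numbers are arbitrary, and the draft actually names
the error one stale-if-error rather than stale-on-error):

    Cache-Control: max-age=600, stale-while-revalidate=30, stale-if-error=86400

That is: serve from cache for ten minutes, serve the stale copy for up to 30
more seconds while revalidating in the background, and keep serving the stale
copy for up to a day if the origin is erroring or unreachable.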

-jsd-



Re: cache empties itself?

2008-04-08 Thread Ricardo Newbery

On Apr 8, 2008, at 8:26 AM, DHF wrote:

> Ricardo Newbery wrote:
>> Regarding the potential management overhead... this is not relevant  
>> to the question of whether this strategy would increase your site's  
>> performance.  Management overhead is a separate question, and not  
>> an easy one to answer in the general case.  The overhead might be a  
>> problem for some.  But I know in my own case, the overhead required  
>> to manage this sort of thing is actually pretty trivial.
> How do you manage the split TTLs?  Do you send a purge after a page
> has changed, or have you crafted another way to force a revalidation
> of cached objects?


Yes, a purge is sent after the page has changed.  For Plone, all of
this is easy to automate with the CacheFu add-on, although support
for adding a Surrogate-Control header (or whatever header you use to
communicate the local TTL) requires some minor customization (about
five lines of code).
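
For anyone curious what the Varnish side of the purge looks like, here is a
rough, untested sketch; the ACL is a placeholder and the exact idiom varies a
bit between Varnish versions:

    acl purgers {
        "127.0.0.1";    /* whoever is allowed to send PURGE requests */
    }

    sub vcl_recv {
        if (req.request == "PURGE") {
            if (!client.ip ~ purgers) {
                error 405 "Not allowed.";
            }
            lookup;
        }
    }

    sub vcl_hit {
        if (req.request == "PURGE") {
            set obj.ttl = 0s;    /* drop the cached copy */
            error 200 "Purged.";
        }
    }

The CMS then sends a PURGE request for each URL it knows has changed.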

Ric




Re: cache empties itself?

2008-04-08 Thread DHF
Ricardo Newbery wrote:
>
> On Apr 7, 2008, at 10:30 PM, DHF wrote:
>
>> Ricardo Newbery wrote:
>>> On Apr 7, 2008, at 5:22 PM, Michael S. Fischer wrote:
>>>
>>>
 Sure, but this is also the sort of content that can be cached back
 upstream using ordinary HTTP headers.

>>>
>>>
>>> No, it cannot.  Again, the use case is dynamically-generated 
>>> content  that is subject to change at unpredictable intervals but 
>>> which is  otherwise fairly "static" for some length of time, and 
>>> where serving  stale content after a change is unacceptable.  
>>> "Ordinary" HTTP headers  just don't solve that use case without 
>>> unnecessary loading of the  backend.
>>>
>> Isn't this what if-modified-since requests are for?  304 not modified 
>> is a pretty small request/response, though I can understand the 
>> tendency to want to push it out to the frontend caches.  I would 
>> think the management overhead of maintaining two separate expirations 
>> wouldn't be worth the extra hassle just to save yourself some ims 
>> requests to a backend.  Unless of course varnish doesn't support ims 
>> requests in a usable way, I haven't actually tested it myself.
>
>
> Unless things have changed recently, Varnish support for IMS is 
> mixed.  Varnish supports IMS for cache hits but not for cache misses 
> unless you tweak the vcl to pass them in vcl_miss.  Varnish will not 
> generate an IMS to revalidate its own cache.
Good to know.
>
> Also it is not necessarily true that generating a 304 response is 
> always a lightweight operation.  I'm not sure about the Drupal case, but at least 
> for Plone there can be a significant performance hit even when just 
> calculating the Last-Modified date.  The hit is usually lighter than 
> that required for generating the full response but for high-traffic 
> sites, it's still a significant consideration.
>
> But the most significant issue is that IMS doesn't help in the 
> slightest to lighten the load of *new* requests to your backend.  IMS 
> requests are only helpful if you already have the content in your own 
> browser cache -- or in an intermediate proxy cache server (for proxies 
> that support IMS to revalidate their own cache).
The intermediate proxy was the case I was thinking about, but you are
correct: if there is no intermediate proxy and the Varnish frontends don't
revalidate with IMS requests, then the whole plan is screwed.
> Regarding the potential management overhead... this is not relevant to 
> the question of whether this strategy would increase your site's 
> performance.  Management overhead is a separate question, and not an 
> easy one to answer in the general case.  The overhead might be a 
> problem for some.  But I know in my own case, the overhead required to 
> manage this sort of thing is actually pretty trivial.
How do you manage the split TTLs?  Do you send a purge after a page has
changed, or have you crafted another way to force a revalidation of
cached objects?

--Dave
>
> Ric
>
>
>
>



Re: caching directories

2008-04-08 Thread Martin Abt
>>Hi,
>>
>>I am new to Varnish and I am wondering if it is possible to exclude
>>everything in a directory (including subdirectories) from caching.
>>
>>It works with single files, like:
>>
>>if (req.url ~ "/test/README.txt") {
>>pass;
>>}
>>
>
>   if (req.url ~ "^/test/") {
>   pass;
>   }
>
>?

Thanks, it works.

So I probably should get into learning regular expressions.

Best wishes,
martin



Re: caching directories

2008-04-08 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, "Martin Abt" writes:
>Hi,
>
>I am new to Varnish and I am wondering if it is possible to exclude
>everything in a directory (including subdirectories) from caching.
>
>It works with single files, like:
>
>if (req.url ~ "/test/README.txt") {
>pass;
>}
>

if (req.url ~ "^/test/") {
pass;
}

?
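
Spelled out as a complete subroutine; the leading "^" anchors the match at
the start of the URL, which is what makes subdirectories come along for free:

    sub vcl_recv {
        /* bypass the cache for anything under /test/, including subdirectories */
        if (req.url ~ "^/test/") {
            pass;
        }
    }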

-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


caching directories

2008-04-08 Thread Martin Abt
Hi,

I am new to Varnish and I am wondering if it is possible to exclude
everything in a directory (including subdirectories) from caching.

It works with single files, like:

if (req.url ~ "/test/README.txt") {
pass;
}


How can I do this with directories?


Best wishes,

martin



[no subject]

2008-04-08 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, [EMAIL PROTECTED] writes:

>I'm trying to figure out some ways to extend the response headers
>with some info about the request. What I want for now is whether it was a
>hit or a miss and which backend was used.

Hit/miss status is already in the X-Varnish header: if it has two
numbers, it is a hit.
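
For example (the transaction ids here are made up), a miss carries a single id:

    X-Varnish: 1123456789

while a hit carries the current request's id followed by the id of the request
that put the object into the cache:

    X-Varnish: 1123456790 1123456789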

>I can't figure out how to know which backend was used. The only way
>I know of is if the backend would deliver a header with its hostname
>or something similar. Is there any way to do this in VCL?

You can set your own header in vcl_recv along with the backend.

Then in vcl_fetch, copy that header from req.foobar to obj.foobar
and you should be all set.
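
Roughly like this; the header name X-Backend-Name is just a placeholder, and
it assumes a backend declared as "default" (in this VCL the fetched object is
"obj" in vcl_fetch):

    sub vcl_recv {
        set req.backend = default;
        /* note which backend we picked, in a request header */
        set req.http.X-Backend-Name = "default";
    }

    sub vcl_fetch {
        /* copy the note onto the object so it is cached and delivered with it */
        set obj.http.X-Backend-Name = req.http.X-Backend-Name;
    }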


-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


[no subject]

2008-04-08 Thread duja
I'm trying to figure out some ways to extend the response headers with some info
about the request. What I want for now is whether it was a hit or a miss and which
backend was used.

I can't figure out how to know which backend was used. The only way I know of is
if the backend would deliver a header with its hostname or something similar. Is
there any way to do this in VCL?

I thought I could do something like this to see whether it was a miss or not, but it
didn't work. I'm not even sure if the Age header is always 0 on misses, or if it
could be 0 on hits too?

sub vcl_deliver {
    if (resp.http.Age > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
}


/ Erik



Varnishtop hangs (again)

2008-04-08 Thread duja
When I run varnishtop with "varnishtop -i BackendReuse", it hangs and I can't do
anything to exit the program. Ctrl-C gives me nothing :(

It hangs exactly when the first request hits the server. Here is the strace on 
varnishtop when the request hits:

rt_sigaction(SIGTSTP, {SIG_IGN}, {0xb7ed61a0, [], SA_RESTART}, 8) = 0
poll([{fd=0, events=POLLIN}], 1, 0) = 0
poll([{fd=0, events=POLLIN}], 1, 0) = 0
rt_sigaction(SIGTSTP, {0xb7ed61a0, [], SA_RESTART}, NULL, 8) = 0
gettimeofday({1207662136, 378806}, NULL) = 0
poll([{fd=0, events=POLLIN}], 1, 1000)  = 0
gettimeofday({1207662137, 379658}, NULL) = 0
time(NULL)  = 1207662137
rt_sigaction(SIGTSTP, {SIG_IGN}, {0xb7ed61a0, [], SA_RESTART}, 8) = 0
poll([{fd=0, events=POLLIN}], 1, 0) = 0
poll([{fd=0, events=POLLIN}], 1, 0) = 0
rt_sigaction(SIGTSTP, {0xb7ed61a0, [], SA_RESTART}, NULL, 8) = 0
gettimeofday({1207662137, 379837}, NULL) = 0
poll([{fd=0, events=POLLIN}], 1, 1000)  = 0
gettimeofday({1207662138, 380725}, NULL) = 0
time(NULL)  = 1207662138
rt_sigaction(SIGTSTP, {SIG_IGN}, {0xb7ed61a0, [], SA_RESTART}, 8) = 0
poll([{fd=0, events=POLLIN}], 1, 0) = 0
poll([{fd=0, events=POLLIN}], 1, 0) = 0
rt_sigaction(SIGTSTP, {0xb7ed61a0, [], SA_RESTART}, NULL, 8) = 0
gettimeofday({1207662138, 380904}, NULL) = 0
poll([{fd=0, events=POLLIN}], 1, 1000)  = 0
gettimeofday({1207662139, 384322}, NULL) = 0
futex(0x804af0c, FUTEX_WAIT, 2, NULL

Varnishstat:
client_conn            19         0.02 Client connections accepted
client_req            123         0.14 Client requests received
cache_hit               0         0.00 Cache hits
cache_hitpass           0         0.00 Cache hits for pass
cache_miss            100         0.12 Cache misses
backend_conn          123         0.14 Backend connections success
backend_fail            0         0.00 Backend connections failures
backend_reuse         115         0.13 Backend connections reuses
backend_recycle       123         0.14 Backend connections recycles
backend_unused          0         0.00 Backend connections unused

# uname -a
Linux varnish06 2.6.18-6-686 #1 SMP Sun Feb 10 22:11:31 UTC 2008 i686 GNU/Linux
Debian Etch

I have reported this before and I know there is a ticket for the problem. 

http://varnish.projects.linpro.no/ticket/217

Hope you'll get somewhere with the info.

// Erik



Re: recommendation for swap space?

2008-04-08 Thread Pablo García
Sascha, try modifying /proc/sys/vm/swappiness; it is set to 60 by default.
I reduce it to 20 or even 0 on my Oracle cluster, to prevent
important processes from being swapped out.
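
For example (add the setting to /etc/sysctl.conf, or your distribution's
equivalent, if you want it to survive a reboot):

    # check the current value
    cat /proc/sys/vm/swappiness
    # lower it for the running kernel
    echo 20 > /proc/sys/vm/swappiness
    # or, equivalently
    sysctl -w vm.swappiness=20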

Regards, Pablo

On Tue, Apr 8, 2008 at 5:51 AM, Sascha Ottolski <[EMAIL PROTECTED]> wrote:
> On Monday, 7 April 2008 at 18:00:14, Dag-Erling Smørgrav wrote:
>
> > Sascha Ottolski <[EMAIL PROTECTED]> writes:
>  > > now that my varnish processes start to reach the RAM size, I'm
>  > > wondering what a dimension of swap would be wise? I currently have
>  > > about 30 GB swap space for 32 GB RAM, but am wondering if it could
>  > > even make sense to have no swap at all? My cache file is 517 GB in
>  > > size.
>  >
>  > Varnish does not use swap.
>  >
>  > DES
>
>  hmm, then I'm wondering why my machines do swap quite a bit. It's a
>  almost naked linux, the only processes really doing some work are
>  varnishd and varnishlog.
>
>  I have 32 GB of RAM, 30 GB of swap, and 517 GB of cache file. according
>  to "top", varnishd has a resident size of 25 GB, and almost 1,5 GB of
>  swap is in use. kswapd often shows up in "top".
>
>
>  # free
>               total       used       free     shared    buffers     cached
>  Mem:      32969244   32874908      94336          0     108648   29129752
>  -/+ buffers/cache:    3636508   29332736
>  Swap:     29045480    1473200   27572280
>
>
>  it's not worrying me, performance is brilliant, I'm just curious :-)
>
>
>  Thanks, Sascha
>
>


Re: recommendation for swap space?

2008-04-08 Thread Sascha Ottolski
On Monday, 7 April 2008 at 18:00:14, Dag-Erling Smørgrav wrote:
> Sascha Ottolski <[EMAIL PROTECTED]> writes:
> > now that my varnish processes start to reach the RAM size, I'm
> > wondering what a dimension of swap would be wise? I currently have
> > about 30 GB swap space for 32 GB RAM, but am wondering if it could
> > even make sense to have no swap at all? My cache file is 517 GB in
> > size.
>
> Varnish does not use swap.
>
> DES

Hmm, then I'm wondering why my machines do swap quite a bit. It's an
almost naked Linux install; the only processes really doing any work are
varnishd and varnishlog.

I have 32 GB of RAM, 30 GB of swap, and a 517 GB cache file. According
to "top", varnishd has a resident size of 25 GB, and almost 1.5 GB of
swap is in use. kswapd often shows up in "top".


# free
             total       used       free     shared    buffers     cached
Mem:      32969244   32874908      94336          0     108648   29129752
-/+ buffers/cache:    3636508   29332736
Swap:     29045480    1473200   27572280


It's not worrying me; performance is brilliant, I'm just curious :-)


Thanks, Sascha


Re: Cookies in VCL

2008-04-08 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, [EMAIL PROTECTED] writes:
>A question about cookies in VCL.
>
>Is there a way of handling cookies in VCL?

Not yet, but it's on our list.
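
In the meantime, the usual workaround is to treat the Cookie header as an
opaque string and use regular expressions on it; a rough, untested sketch,
reusing the cookie names from your example:

    sub vcl_recv {
        /* test a cookie value */
        if (req.http.Cookie ~ "userid=1234") {
            pass;
        }

        /* pull a cookie value out into a header of its own */
        if (req.http.Cookie ~ "language=") {
            set req.http.X-Language =
                regsub(req.http.Cookie, ".*language=([^;]*).*", "\1");
        }
    }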

>
>Like: 
>if(req.http.Cookies[userid] == "1234")
>
>or
>
>set req.http.Cookies[language] = "sv"
>
>Thanks
>Erik
>

-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Cookies in VCL

2008-04-08 Thread duja
A question about cookies in VCL.

Is there a way of handling cookies in VCL?

Like: 
if(req.http.Cookies[userid] == "1234")

or

set req.http.Cookies[language] = "sv"

Thanks
Erik



Management console

2008-04-08 Thread duja
Nice, the CR LF did the trick; thank you ;)
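
For anyone else who hits this: the management interface wants each command
terminated by CR LF, so a quick one-shot test (assuming varnishd was started
with -T localhost:6082) is:

    printf 'ping\r\n' | nc localhost 6082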



Re: recommendation for swap space?

2008-04-08 Thread C. Handel
On Mon, Apr 7, 2008 at 6:18 PM, Michael S. Fischer <[EMAIL PROTECTED]> wrote:
>  >  > now that my varnish processes start to reach the RAM size, I'm wondering
>  >  > what a dimension of swap would be wise? I currently have about 30 GB
>  >  > swap space for 32 GB RAM, but am wondering if it could even make sense
>  >  > to have no swap at all? My cache file is 517 GB in size.
>  >
>  >  Varnish does not use swap.
>
>  That said, it wouldn't make sense to entirely deallocate your swap
>  space, since the kernel may decide to page or swap out processes other
>  than Varnish.

You also need swap if a huge process tries to fork. When a huge
process forks a child, the child is initially a copy of the parent.
The memory is copy-on-write (so it doesn't really use extra memory), and
in most cases the child will release it all and do something else, but
the virtual memory needs to be big enough during the fork.
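
Whether such a fork can actually fail without enough swap depends on the
kernel's overcommit policy, which you can inspect like this:

    # 0 = heuristic, 1 = always allow, 2 = strict accounting against swap + RAM
    cat /proc/sys/vm/overcommit_memory
    # the limit and current commitment that apply under strict accounting
    grep -i commit /proc/meminfo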

Greetings
   Christoph


Re: cache empties itself?

2008-04-08 Thread Ricardo Newbery

On Apr 7, 2008, at 10:30 PM, DHF wrote:

> Ricardo Newbery wrote:
>> On Apr 7, 2008, at 5:22 PM, Michael S. Fischer wrote:
>>
>>
>>> Sure, but this is also the sort of content that can be cached back
>>> upstream using ordinary HTTP headers.
>>>
>>
>>
>> No, it cannot.  Again, the use case is dynamically-generated  
>> content  that is subject to change at unpredictable intervals but  
>> which is  otherwise fairly "static" for some length of time, and  
>> where serving  stale content after a change is unacceptable.   
>> "Ordinary" HTTP headers  just don't solve that use case without  
>> unnecessary loading of the  backend.
>>
> Isn't this what if-modified-since requests are for?  304 not  
> modified is a pretty small request/response, though I can understand  
> the tendency to want to push it out to the frontend caches.  I would  
> think the management overhead of maintaining two separate 
> expirations wouldn't be worth the extra hassle just to save yourself  
> some ims requests to a backend.  Unless of course varnish doesn't  
> support ims requests in a usable way, I haven't actually tested it  
> myself.


Unless things have changed recently, Varnish support for IMS is  
mixed.  Varnish supports IMS for cache hits but not for cache misses  
unless you tweak the vcl to pass them in vcl_miss.  Varnish will not  
generate an IMS to revalidate its own cache.
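
The vcl_miss tweak mentioned above is roughly this (untested sketch):

    sub vcl_miss {
        /* on a miss, let conditional requests through unchanged so the
           backend can answer 304 itself */
        if (req.http.If-Modified-Since || req.http.If-None-Match) {
            pass;
        }
    }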

Also it is not necessarily true that generating a 304 response is  
always a lightweight operation.  I'm not sure about the Drupal case, but at least 
for Plone there can be a significant performance hit even when just  
calculating the Last-Modified date.  The hit is usually lighter than  
that required for generating the full response but for high-traffic  
sites, it's still a significant consideration.

But the most significant issue is that IMS doesn't help in the  
slightest to lighten the load of *new* requests to your backend.  IMS  
requests are only helpful if you already have the content in your own  
browser cache -- or in an intermediate proxy cache server (for proxies  
that support IMS to revalidate their own cache).

Regarding the potential management overhead... this is not relevant to  
the question of whether this strategy would increase your site's  
performance.  Management overhead is a separate question, and not an  
easy one to answer in the general case.  The overhead might be a  
problem for some.  But I know in my own case, the overhead required to  
manage this sort of thing is actually pretty trivial.

Ric



