For any cache, not just Varnish, the Accept-Encoding header used for the purge
(or a regular cache hit) must match the original request's header /exactly/.
If you send anything other than exactly "Accept-Encoding: gzip,deflate", your
purge will miss. So this is the expected behavior, AFAIK.
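To avoid exactly this kind of miss, many setups normalize Accept-Encoding in vcl_recv so cached variants (and purges) only ever see a couple of canonical values. A minimal sketch, along the lines of the example in the Varnish wiki:

```vcl
sub vcl_recv {
  if (req.http.Accept-Encoding) {
    if (req.http.Accept-Encoding ~ "gzip") {
      # Prefer gzip whenever the client supports it at all.
      set req.http.Accept-Encoding = "gzip";
    } elsif (req.http.Accept-Encoding ~ "deflate") {
      set req.http.Accept-Encoding = "deflate";
    } else {
      # Unknown encodings: fall back to the unencoded variant.
      unset req.http.Accept-Encoding;
    }
  }
}
```

With this in place, a purge sent with "Accept-Encoding: gzip" will match hits from any gzip-capable client.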
If your default_ttl is not 0, then this may be the expected behavior. I'm not
sure Varnish should really ever cache 500 responses, though.
But in VCL you could do something like:
sub vcl_fetch {
  if (obj.status == 500) {
    set obj.ttl = 0s;
    set obj.cacheable = false;
  }
}
this (swappiness already at 1).
Thanks,
--
Ken
On Jan 29, 2010, at 11:16 AM, Ken Brownfield wrote:
On Jan 29, 2010, at 3:54 AM, Tollef Fog Heen wrote:
It should be. You'll lose the last storage silo (since that's not
closed yet), but older objects should be available.
This might be the source of the confusion. How often are silos closed? My
testing was simply hit the cache for a single
Right, -spersistent. Child restarts are persistent, parent process stop/start
isn't.
Maybe there's a graceful, undocumented method of stopping the parent that I'm
not aware of?
--
kb
On Jan 27, 2010, at 1:26 AM, Tollef Fog Heen wrote:
]] Ken Brownfield
| I'd love to test persistent
I'd love to test persistent under production load, but right now it's not
persistent. :-( (Storage doesn't persist through a parent restart)
--
Ken
On Jan 25, 2010, at 1:26 AM, Tollef Fog Heen wrote:
]] pablort
| And how about 2.1 ? Any release date on the horizon ? :D
Persistent
On Jan 16, 2010, at 7:32 AM, Michael Fischer wrote:
On Sat, Jan 16, 2010 at 1:54 AM, Bendik Heltne bhel...@gmail.com wrote:
Our Varnish servers have ~ 120.000 - 150.000 objects cached in ~ 4GB
memory and the backends have a much easier life than before Varnish.
We are about to upgrade RAM
On Jan 18, 2010, at 3:16 PM, Michael S. Fischer wrote:
On Jan 18, 2010, at 3:08 PM, Ken Brownfield wrote:
In the real world, sites run their applications through web servers, and
this fact does (and should) guide the decision on the base web server to
use, not static file serving.
I
On Jan 15, 2010, at 3:39 PM, pub crawler wrote:
Have we considered adding pooling functionality to Varnish much like
what they have in memcached?
Run multiple Varnish(es) and load distributed amongst the identified
Varnish server pool So an element in Varnish gets hashed and the
hash
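Within a single Varnish instance, a related mechanism exists for backends: the hash director (available in 2.1) picks a backend from the request hash, so a given URL consistently lands on the same backend. A sketch, assuming two hypothetical backends b1 and b2:

```vcl
backend b1 { .host = "10.0.0.1"; .port = "80"; }
backend b2 { .host = "10.0.0.2"; .port = "80"; }

# The hash director selects a backend based on the request hash,
# much like memcached-style client-side key hashing.
director pool hash {
  { .backend = b1; .weight = 1; }
  { .backend = b2; .weight = 1; }
}

sub vcl_recv {
  set req.backend = pool;
}
```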
Something like:
sub vcl_recv {
  if (req.request == "GET") {
    set req.http.OLD-Cookie = req.http.Cookie;
    unset req.http.Cookie;
    set req.http.OLD-Authorization = req.http.Authorization;
    unset req.http.Authorization;
  }
}
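If the backend still needs the original headers on a miss, they can be restored before the backend fetch. A sketch continuing the same naming convention:

```vcl
sub vcl_miss {
  # Restore the saved headers for the backend request only;
  # the cache lookup itself already happened without them.
  if (req.http.OLD-Cookie) {
    set bereq.http.Cookie = req.http.OLD-Cookie;
    unset bereq.http.OLD-Cookie;
  }
  if (req.http.OLD-Authorization) {
    set bereq.http.Authorization = req.http.OLD-Authorization;
    unset bereq.http.OLD-Authorization;
  }
}
```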
Those all seem very useful to me, but I think the lowest-hanging performance
fruit right now is simultaneous connections and the threading model (including
the discussions about stacksize and memory usage, etc).
Modeling Varnish's behavior with certain ranges of simultaneous worker and
backend
varnishd -f /path/to/your/config.vcl -C
This will compile your VCL into C and emit it to stdout. It will show
prototypes for all of the VRT interface accessible from VCL, the structs
representing your backend(s) and director(s), and the config itself. The wiki
is a little misleading (and -C
Note that the linked article is from 2004. The kernels that RedHat uses are a
bag of hurt, not to mention ancient.
If you can upgrade to RHEL 5, that may be the easiest fix (I can only assume
that the mmap limitation has been removed). Perhaps RedHat has newer RHEL 4
kernels in a bleeding
Hopefully your upper management allows you to install contemporary
software and distributions. Otherwise memory leaks and x86_64 would
be the least of your concerns. Honestly, you're waiting for Varnish
to stabilize and you're running v1?
My data point: 5 months and over 100PB of
, Henry Paulissen wrote:
Our load balancer transforms all connections from keep-alive to close.
So keep-alive connections aren’t the issue here.
Also, if I limit the thread count I still see the same behavior.
-----Original message-----
From: Ken Brownfield [mailto:k...@slide.com
I've started playing with persistence a bit in trunk, and it seems
like the storage is persistent across restarts of the child, but /not/
the parent.
For a small working set, having any persistence at all is somewhat
optional. For large working sets, you really want persistence across
Ah, I stand corrected. But I was definitely having random crashes
when I enabled the vcl_fetch() section below:
sub vcl_recv {
  ...
  set req.http.Unmodified-Host = req.http.Host;
  set req.http.Unmodified-URL = req.url;
  ...
}
sub vcl_fetch {
  ...
  set
On Sep 16, 2009, at 10:03 AM, Kristian Lyngstol wrote:
On Wed, Sep 16, 2009 at 09:54:25AM -0700, Ken Brownfield wrote:
I'm a bit loath to reenable this to get a full stacktrace and gdb
output, but if there's really nothing wrong with this I might
consider
it.
Nothing wrong
My weapon of choice there would be oprofile, run something like this
under high load and/or when you have a lot of threads active:
opcontrol --init
# You'll want a debug kernel
# For example, the Ubuntu package is linux-image-debug-server
opcontrol --setup --vmlinux=/boot/vmlinux-2.6.24-server
Hey Karl. :-)
The implementation of purge in Varnish is really a queue of refcounted
ban objects. Every image hit is compared to the ban list to see if
the object in cache should be reloaded from a backend.
If you have purge_dups off, /every/ request to Varnish will regex
against every
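For reference, a ban-style purge handler in 2.x VCL might look something like this (the PURGE method convention and the ACL are illustrative, not part of the original discussion):

```vcl
acl purgers {
  "127.0.0.1";
}

sub vcl_recv {
  if (req.request == "PURGE") {
    if (!client.ip ~ purgers) {
      error 405 "Not allowed";
    }
    # purge() adds a ban; each cache hit is then checked against the
    # ban list, which is why long lists get expensive without purge_dups.
    purge("req.url ~ ^" req.url "$");
    error 200 "Purged";
  }
}
```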
See the FAQ:
http://varnish.projects.linpro.no/wiki/FAQ#IhaveasitewithmanyhostnameshowdoIkeepthemfrommultiplyingthecache
If your backends need to see the original hostname, you can unrewrite
it in vcl_miss().
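Putting both halves together, a sketch of the rewrite/unrewrite pattern (the hostname is a placeholder):

```vcl
sub vcl_recv {
  # Normalize the Host header so all aliases share one cache entry.
  set req.http.X-Original-Host = req.http.Host;
  set req.http.Host = "www.example.com";
}

sub vcl_miss {
  # Give the backend the hostname the client actually sent.
  set bereq.http.Host = req.http.X-Original-Host;
}
```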
--
Ken
On Jul 11, 2009, at 3:54 AM, Hip Hydra wrote:
Hi, I'm running a network of
On Jul 14, 2009, at 3:05 AM, Kristian Lyngstol wrote:
On Tue, Jul 14, 2009 at 11:46:58AM +0200, Lazy wrote:
the site is usually not so busy, but it sometimes has spikes of static
traffic (about 50 Mbps); that's why I upped the thread limit, 3000 was
too low
I seriously doubt 3k was too low.
Isn't VRT_SetHdr() what you're looking for? Mind its semantics, though.
--
Ken.
On Jul 6, 2009, at 7:26 AM, Laurence Rowe wrote:
Hi,
Though my C is rather rusty by now, I'd like to make the mod_auth_tkt
[1] signed cookie authentication / authorisation system work with
Varnish. The idea
it, FWIW.
--
Ken.
On Jun 30, 2009, at 5:11 PM, Tollef Fog Heen wrote:
]] Poul-Henning Kamp
| In message 5c056ae2-7207-42f8-9e4b-0f541dc4b...@slide.com, Ken Brownfield writes:
|
| Would a stack overflow take out the whole child, or just that thread?
|
| The kernel would try
[Apologies if this belongs on varnish-dev; this list seemed much more
active.]
This patch came about from observations in tickets #512 and #518.
The attached patch creates a backend flag to change the initial health
of backends upon varnishd startup:
backend foo {
.initial_health
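As a sketch of how such a flag might read in a full backend declaration (the host/port are placeholders, and the exact name/value syntax comes from the patch and may differ):

```vcl
backend foo {
  .host = "127.0.0.1";
  .port = "8080";
  # Hypothetical: start this backend as sick until a probe passes,
  # instead of assuming it healthy at varnishd startup.
  .initial_health = sick;
}
```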
When looking at /proc/map info for varnish threads, I'm seeing the
following allocations in numbers that essentially match the child count:
40111000 8192K rw---[ anon ]
And this at almost double the child count:
7f4d5790 1024K rw---[ anon ]
For example, for 64
:
#define CHUNK_2POW_DEFAULT 20
Thanks!
--
Ken.
On Jun 19, 2009, at 7:15 AM, Tollef Fog Heen wrote:
]] Ken Brownfield
| When looking at /proc/map info for varnish threads, I'm seeing the
| following allocations in numbers that essentially match the child
count:
|
| 40111000