if Varnish
could replace OWC on our site.
I would be _very_ pleased if any of you could give me comments and thoughts about
this. My main concern is whether
Varnish is or is not able to work in this kind of situation where we are using:
- multiple ip-based virtual hosts (about 50)
- multiple origin
Hi,
We'll be moving varnish-cache.org tomorrow, Wednesday 2010-03-10 at
about 11:00 CET. The site will be unavailable for about 5 (or 30)
minutes while we move it.
We'll be getting a newer trac version but other than that there should
be no user-noticeable changes.
At the same time
1) How many servers do you have running Varnish?
30
2) What sort of total load are you having? Mbit/s or hits per second are
preferred metrics. 2 Gbit/s
3) What sort of site is it? ecommerce
*) Online media
*) Corporate website (ibm.com or similar)
*) Retail
*) Educational
hello,
This confusing behavior was due to a combination of factors: a bug in
the django cache middleware for i18n pages (the same pages in French, German,
and English), and my relative inexperience with varnish. I think that now I
have this issue solved with the following configuration:
http
Sistemi Open Source
http://hobbygiochi.com
Hobby Giochi, l'e-commerce del divertimento
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc
but without success.
Cc'd varnish-misc, maybe there's someone facing the same exact problem,
--
Cosimo
, but lru_nuking never stopped.
On Wed, Feb 24, 2010 at 8:15 PM, Barry Abrahamson ba...@automattic.com wrote:
Howdy,
We are finally getting around to upgrading to the latest version of varnish
and are running into quite a weird problem. Everything works fine for a bit
(1+ day), then all of a sudden
Hi.
We'll be moving varnish-cache.org on Monday, March 1st, around 11:00
CET. The site will be unavailable for about 5 minutes while we move it.
We'll be getting a newer trac version but other than that there should be no
user-noticeable changes. At some point we will also move the mailing
around fruitlessly to try to understand the overhead of
software RAID to explain this, but once I discovered varnish could
take on multiple cache files, I saw no reason for the software RAID
and just abandoned it.
Interesting - I will try it out! Thanks for the info.
We had roughly 240GB storage
and
varnish keeps on fetching the pages from the cms.
I am going to add below all the information I think is useful; do not hesitate
to let me know if you need more info.
Here is my configuration:
sub vcl_recv {
# clean out requests sent via curl's -X mode and LWP
# http
RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
RewriteRule ^(.*)$ http://www.domain.com$1 [R=301,L]
Am I missing something?
John
for a lot of simple uses. Some CMS might ship Varnish as
part of it and there won't be a need for a CLI.
B) CLI on stdin (-d)
C) CLI on TELNET (-T)
D) CLI on call-back (-M)
I really like this one. If we could have an option to let Varnish start
without the cache running (like -d
directives are routine.
RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
RewriteRule ^(.*)$ http://www.domain.com$1 [R=301,L]
Am I missing something?
John
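For what it's worth, the same redirect can be done in Varnish itself instead of mod_rewrite; a rough sketch (Varnish 2.x VCL, the 750 status is just an arbitrary internal marker and the hostnames are illustrative):

```
sub vcl_recv {
    # Redirect the bare domain to www.
    if (req.http.host == "domain.com") {
        error 750 "Moved Permanently";
    }
}

sub vcl_error {
    if (obj.status == 750) {
        set obj.http.Location = "http://www.domain.com" req.url;
        set obj.status = 301;
        deliver;
    }
}
```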
Howdy,
We are finally getting around to upgrading to the latest version of varnish and
are running into quite a weird problem. Everything works fine for a bit
(1+ day), then all of a sudden Varnish starts nuking all of the objects from
the cache:
About 4 hours ago there were 1 million
Hello,
I'm new to varnish and perplexed.
As advised by http://varnish-cache.org/wiki/VarnishOnRedhat and
http://fedoraproject.org/wiki/EPEL/FAQ#howtouse, I've loaded varnish onto my
CentOS server using:
su -c 'rpm -Uvh
http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-3
Thanks to our new energy, we have been able to chase down and close
a fair number of weird bugs in Varnish, and right now, -trunk looks
pretty solid.
Subject to the quantum-murphy-o-meter freaking out, I think we are
ready to cut a 2.1 branch, maybe even in time for VUG2.
This will become
Hi list.
Redpill-Linpro (RL) has been the main sponsor of Varnish for the last
three years.
RL is a services company, generating most (probably over 95%) of its
revenue on selling services and products in its local Scandinavian
market.
Developing a product for a global market takes a different
In message 52220de01002090612t6617242eleb63812bfd2ce...@mail.gmail.com, Per Buer writes:
Therefore RL has decided to form a separate company around Varnish:
Varnish Software AS. The company will initially consist of Tollef Fog
Heen, Kristian Lyngstøl and myself, all moving from Redpill Linpro
Tel : + 33 1 48 24 33 60
Fax : + 33 1 48 24 33 54
www.novactive.com
-Original message-
From: Rob S [mailto:rtshils...@gmail.com]
Sent: Monday, 8 February 2010 11:50
To: Axel DEAU
Cc: Sacha MILADINOVIC
Subject: Re: Varnish load balancer (keep session)
At the very top of sub
Version: 2.0.6-1
Install: .deb
Os: Debian 5.0.3
Hi,
I've got two backends running apache2: front1.domain.com and
front2.domain.com, set up with the load balancing configuration from
http://varnish-cache.org/wiki/LoadBalancing.
The issue is, when I shut down apache2 on the first backend, Varnish
doesn't switch to the second.
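One likely cause of a director not failing over is missing health checks: without probes, the round-robin director keeps handing requests to a backend it doesn't know is dead. A hedged sketch with backend probes (Varnish 2.x syntax; hostnames, intervals, and thresholds are illustrative, verify against your version):

```
backend front1 {
    .host = "front1.domain.com";
    .port = "80";
    # Mark the backend sick after repeated failed probes.
    .probe = {
        .url = "/";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}

backend front2 {
    .host = "front2.domain.com";
    .port = "80";
    .probe = {
        .url = "/";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}

director fronts round-robin {
    { .backend = front1; }
    { .backend = front2; }
}

sub vcl_recv {
    set req.backend = fronts;
}
```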
Kristian Lyngstol wrote:
On Tue, Feb 02, 2010 at 04:44:48PM +0100, Bernardf FRIT wrote:
Hi,
I'm running:
- varnishd (varnish-2.0.4)
Why not 2.0.6?
When a server is running well, I'm a bit reluctant to upgrade. Now, I'm
ok to upgrade as an attempt to fix this.
and it appears
Hi all,
On March the 29th and 30th the second Varnish User Group meeting will be
held in the eBay / Marktplaats.nl offices
On Tue, Feb 02, 2010 at 04:44:48PM +0100, Bernardf FRIT wrote:
Hi,
I'm running:
- varnishd (varnish-2.0.4)
Why not 2.0.6?
and it appears that the grsec Kernel repeatedly and unexpectedly sends
signal 11 to the varnishd child.
grsec seems to just report that a segfault occurred. SIGSEGV
'. But be aware that it'll bog it down
dreadfully, so I wouldn't advise it in production.
]] pablort
| Nah. using 2.0.6
Hmm, indeed.
I just fixed it in trunk, will backport it to 2.0 branch.
--
Tollef Fog Heen
Redpill Linpro -- Changing the game!
t: +47 21 54 41 73
Hi all,
I've got a simple question which I've been puzzling on for the last 30
minutes or so - how do you change a string in Varnish to lowercase?
Basically, all the links on our site should be lowercase, but some
people have been randomly capitalising them when linking to them - I'd
just
In message 4b69ab1a.6010...@mangahigh.com, Richard Chiswell writes:
Hi all,
I've got a simple question which I've been puzzling on for the last 30
minutes or so - how do you change a string in Varnish to lowercase?
Basically, all the links on our site should be lowercase, but some
people have
Hi Poul,
Poul-Henning Kamp wrote:
In message 4b69ab1a.6010...@mangahigh.com, Richard Chiswell writes:
I've got a simple question which I've been puzzling on for the last 30
minutes or so - how do you change a string in Varnish to lowercase?
You'll have to use a bit of inline C code
richard.chisw...@mangahigh.com wrote:
Hi Poul,
Poul-Henning Kamp wrote:
In message 4b69ab1a.6010...@mangahigh.com, Richard Chiswell writes:
I've got a simple question which I've been puzzling on for the last 30
minutes or so - how do you change a string in Varnish to lowercase?
You'll have
a string in Varnish to lowercase?
You'll have to use a bit of inline C code to do that.
I'm no varnish or C expert, YMMV etc...
but I managed to get something like this working.
It's a VCL with embedded C that reads the Accept-Language header
and rewrites it (or writes into a different header
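The inline-C approach discussed above can look roughly like this; a sketch only (Varnish 2.x embedded C syntax, casting away const to rewrite the URL in place before the hash lookup — verify against your version before using it in production):

```
C{
    #include <ctype.h>
}C

sub vcl_recv {
    C{
        /* Lowercase the request URL in place so cache lookups
         * are case-insensitive. */
        char *p = (char *)VRT_r_req_url(sp);
        if (p != NULL) {
            for (; *p != '\0'; p++)
                *p = tolower(*p);
        }
    }C
}
```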
On Tue, Feb 2, 2010 at 11:53 PM, Tollef Fog Heen
tfh...@varnish-software.com wrote:
]] Bernardf FRIT
| Then the parent varnishd process starts immediately a new child process
| which lasts some time.
|
| Is there any way to fix this. Remove the GRSEC kernel ? Upgrade the
| kernel ? Varnish
The reason we use varnish most is because our website has complex,
time-consuming queries to backend systems. The answers to these queries
do vary several times per day but are still cacheable. Of course varnish
also helps to bring down the load on those backend systems but the main
use is that varnish
1) 15
2) Around 10k/s, resulting in 500-600 Mbit.
3) Social Network website
4) No
5) Large dataset performance improvements, High traffic performance improvements
rr
incoming
connections, and for outgoing connections to backends
Thanks,
Sam
On 29 January 2010 14:48, Per Andreas Buer per.b...@redpill-linpro.com wrote:
Hi list.
I'm working for Redpill Linpro, you might have heard of us - we're the main
sponsor of Varnish development. We're a bit curious about
Hi,
I'm running:
- varnishd (varnish-2.0.4)
- linux kernel 2.6.27.10-grsec--grs-ipv4-64
and it appears that the grsec Kernel repeatedly and unexpectedly sends
signal 11 to the varnishd child.
.../...
Feb 2 12:01:02 XX varnishd[17111]: segfault at 1000 ip
0043abf0 sp
Thought I'd ask the list before I went on a voyage with this one.
Sometime our backend app servers gets overloaded and go into
protection mode whereby they sends out 503 errors until they recover.
Varnish is in front of the app servers and when this happens the 503
ends up as a cached item
If your default_ttl is not 0, then this may be the expected behavior. I'm not
sure if Varnish should really ever cache >=500 responses?
But in VCL you could do something like:
sub vcl_fetch {
    if (obj.status >= 500) {
        set obj.ttl = 0s;
        set obj.cacheable = false;
    }
}
You can use the obj.cacheable property, which is a suggestion from Varnish
about whether the response should be cacheable. By default, error response
codes aren't cached, so test the following:
sub vcl_fetch {
    if (!obj.cacheable) {
        return (pass);
    }
}
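Combining both suggestions, a minimal hedged sketch that keeps backend errors out of the cache entirely (Varnish 2.x VCL; treat as illustrative, not canonical):

```
sub vcl_fetch {
    # Never cache backend errors: zero the TTL and pass
    # the response through uncached.
    if (obj.status >= 500 || !obj.cacheable) {
        set obj.ttl = 0s;
        return (pass);
    }
}
```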
1) How many servers do you have running Varnish?
8 servers (2 sites x 4 servers), load balanced behind F5 GTM. We aim to be able
to lose a site AND suffer a hardware failure and keep on truckin'. We could
probably run on one or two servers at a push, but our backend would most likely
explode
--
Tollef Fog Heen
Redpill Linpro -- Changing the game!
t: +47 21 54 41 73
]] Bernardf FRIT
| Then the parent varnishd process starts immediately a new child process
| which lasts some time.
|
| Is there any way to fix this. Remove the GRSEC kernel ? Upgrade the
| kernel ? Varnish ? or whatever ?
Work out why it thinks that varnishd is doing something wrong
bring the
time down somewhat.
already set to 1, but for 4000 threads it takes around 20 seconds till
fully working state
Hi Poul,
Any workaround possible in varnish to make varnish check objects at the backend
some time before they actually reach their expiry time? Or is it just not
possible to do so in varnish?
Thank you.
-Paras
On Fri, Jan 29, 2010 at 10:07 AM, Paras Fadte plf...@gmail.com wrote:
Thanks
Nah. using 2.0.6
# varnishtop -V
varnishtop (varnish-2.0.6)
Copyright (c) 2006-2009 Linpro AS / Verdens Gang AS
# varnishtop -1 -i TxStatus
23917.00 TxStatus
2611.00 TxStatus
1183.00 TxStatus
751.00 TxStatus
45.00 TxStatus
On Fri, Jan 29, 2010 at 8:16 AM, Tollef Fog Heen
tfh
Redpill Linpro -- Changing the game!
t: +47 21 54 41 73
]] Abhishek Singh
| I am using varnish for static files caching and multiple domains are getting
| served from varnish cluster and i want to purge static files for specific
| domain so how can i do that.
|
| I tried the following but it is not working.
|
| 1)
| purge req.http.host ~ xyz.com
| 101
Poul-Henning Kamp p...@phk.freebsd.dk wrote :
Let's get back to consistent hashing and its use...
Correct me if I am wrong, but doesn't this mean that adding a new
varnish instance implies a full rehash ?
Yes, that is pretty much guaranteed to be the cost with any
stateless hashing
1) 8 (All servers are serving from mem, big machines. 6 as frontend in
cluster and load balanced, 1 as between frontend varnish and backend
application server and 1 as backup)
2) ~250MB/s (through varnish). 2.5m visitors a day, 27m pageviews a day
(for the whole website, some content doesn't go
I'm not a guru so I could really use more information to figure out
exactly what is happening.
Are you implying that when you have varnish in front of your 2 websites
it is working correctly when accessing the sites through wget but not
when using Internet Explorer and/or Firefox?
If so, could
Hi All,
I am using varnish for static files caching, and multiple domains are getting
served from a varnish cluster. I want to purge static files for a specific
domain, so how can I do that?
I tried the following but it is not working.
1)
purge req.http.host ~ xyz.com
101 44
Unknown request.
Type
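A per-host purge can also be expressed in VCL rather than on the CLI; a sketch assuming Varnish 2.x, where purge() takes a ban-style expression string (the ACL and status codes here are illustrative):

```
acl purgers {
    "127.0.0.1";
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purgers) {
            error 405 "Not allowed";
        }
        # Invalidate every cached object for the requested host.
        purge("req.http.host == " req.http.host);
        error 200 "Purged";
    }
}
```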
:)
cheers
Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
In message 75cf5801001241820w3e4afd34v64ad2031b8b7...@mail.gmail.com, Paras Fadte writes:
Is prefetch by default enabled in varnish ?
Prefetch never got implemented, we found other ideas to solve the problem,
such as grace mode.
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
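Grace mode, the replacement mentioned above, is driven by two settings; a minimal sketch (Varnish 2.x VCL; the 30s values are illustrative):

```
sub vcl_recv {
    # Be willing to serve objects up to 30s past their TTL
    # while a fresh copy is being fetched from the backend.
    set req.grace = 30s;
}

sub vcl_fetch {
    # Keep expired objects around for 30s so grace can use them.
    set obj.grace = 30s;
}
```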
this or should I file a bug for that ?
Also, what do you guys use for performance analysis ?
[]'s
from path in module
<Module varnish.stats>
#    varnishstat_binary "/some/other/path/to/varnishstat"
</Module>
</Plugin>
Here's varnish/stats.py
import collectd, sys, time, subprocess, os.path
from pprint import pformat
varnishstat_binary = "/usr/bin/varnishstat"
# Map
I keep meaning to look into mod_auth_tkt
(http://www.openfusion.com.au/labs/mod_auth_tkt/) support for varnish.
It should be fairly easy to implement with inline C and doing so would
allow us to cache pages that require authorisation (by matching tokens
in the signed cookie to tokens in an obj
Rowe l...@lrowe.co.uk:
I keep meaning to look into mod_auth_tkt
(http://www.openfusion.com.au/labs/mod_auth_tkt/) support for varnish.
It should be fairly easy to implement with inline C and doing so would
allow us to cache pages that require authorisation (by matching tokens
in the signed
questions in the
past, but then more of a «is it easy to do authentication in Varnish»
question.
b) Done carefully, it sounds reasonable enough to have. Depending on
how much traffic you have, you want to cache and keep as much of this
information in varnish itself so you don't have to open
Hey,
I am having some problems with Varnish. Unfortunately (depends on how
you look at it), I had to replace our Squid cluster with Varnish in a
day.. And now, we are finding out we're having some issues with it,
sometimes Varnish just stops working.
We have 4 balancers, each running FreeBSD
]] Paras Fadte
| Is prefetch by default enabled in varnish ?
No, it is not implemented and any references to it should be ignored.
It is also removed in trunk.
--
Tollef Fog Heen
Redpill Linpro -- Changing the game!
t: +47 21 54 41 73
On 24-1-2010 20:31, Michael S. Fischer wrote:
The other most common reason why the varnish supervisor can start
killing off children is when they are blocked waiting on a page-in,
which is usually due to VM overcommit (i.e., storage file size
significantly exceeds RAM and you have a very
In message 4b5d70b5.5080...@netmatch.nl, Angelo Höngens writes:
How are your disks configured ?
2 cheap SATA disks in a gmirror (it's a simple Dell R300).
Hmm, that's going to hurt obviously...
You would probably have been better off, not mirroring and giving
Varnish
been better off, not mirroring and giving
Varnish a -sfile for each disk.
I'll take it into consideration, but first I'm going to run with the
current configuration for a while to make sure varnish keeps responding.
The disks are now 1-3% busy, and everything seems to run nice..
--
With kind
On 23-1-2010 20:57, Michael Fischer wrote:
On Sat, Jan 23, 2010 at 2:20 AM, Angelo Höngens a.hong...@netmatch.nl
mailto:a.hong...@netmatch.nl wrote:
(second try, I found out I was subscribed using a wrong email address)
Hey,
I am having some problems with Varnish
length of the listen(2) backlog via
netstat -aL. By default, varnishd's listen(2) backlog is 512; as long as you
don't see the length hit that value you should be ok.
--Michael
is that varnish sometimes just crashes or stops
responding.
My hit cache ratio is not that high, around 80%, and the backend servers
can be slow at times (quite complex .net web apps). But I've changed
some settings, and I am waiting for the next time varnish starts to stop
responding.. I'm
Can anybody please respond to this query ?
Thank you.
On Fri, Jan 22, 2010 at 11:02 AM, Paras Fadte plf...@gmail.com wrote:
Hi,
Is prefetch by default enabled in varnish ? I have following in VCL . A
value of -30 seconds would mean that the object would be checked 30 seconds
before its
of load, make sure to kldload the http accept filter.
Your varnish-stat looks pretty OK.
Have you configured health-polling of all those backends ?
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Dump from my blog posting at http://ingvar.blog.linpro.no/
Ingvar
The usage of varnish, revisited
This is more or less a repost, with updated numbers.
Some months have passed, and it is time to run my poking scripts again, looking
for sites that run Varnish. There is no deep magic here. I
Evening all,
I've been an avid Varnish user both personally and at work for a
couple of years now. At work we use it to cache content across our
global intranet, handling a few million requests per day. At present,
we have the following logical setup...
F5 GTM (GSLB device) F5 load balancer
, Jan 22, 2010 at 9:08 AM, Kristian Lyngstol
krist...@redpill-linpro.com wrote:
(A bit of necroposting, hopefully still relevant)
On Tue, Dec 01, 2009 at 03:26:35PM -0500, Jean-Christophe Petit wrote:
we use 2 layers of Varnish and it proved to be highly scalable and
efficient.
But on the second
Thank you Kristian.
I'll give it a try and hopefully we will have a new release soon ;)
Best Regards,
JC
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc
Hi,
Is prefetch by default enabled in varnish ? I have following in VCL . A
value of -30 seconds would mean that the object would be checked 30 seconds
before its expiry time is reached to see if it's modified on the backend?
sub vcl_fetch {
    set obj.grace = 30s;
    if (!obj.cacheable || obj.http.Cache-Control ~ "(no-cache|no-store|private)") {
        pass;
    }
    [...]
}
Maybe it can be useful to somebody else :)
cheers
In message b5ef6a23-b6bb-49a6-8eab-1043fc7bf...@dynamine.net, Michael S. Fischer writes:
Does Varnish already try to utilize CPU caches efficiently by employing
some sort of LIFO thread reuse policy or by pinning thread pools to
specific CPUs? If not, there might be some opportunity
2010/1/15 Rob S rtshils...@gmail.com
John Norman wrote:
Folks,
A couple more questions:
(1) Are they any good strategies for splitting load across Varnish
front-ends? Or is the common practice to have just one Varnish server?
(2) How do people avoid single-point-of-failure
On Jan 19, 2010, at 12:46 AM, Poul-Henning Kamp wrote:
In message b5ef6a23-b6bb-49a6-8eab-1043fc7bf...@dynamine.net, Michael S. Fischer writes:
Does Varnish already try to utilize CPU caches efficiently by employing
some sort of LIFO thread reuse policy or by pinning thread pools
if I am wrong, but doesn't this mean that adding a new varnish
instance implies a full rehash ?
This can be a problem for scalability. Memcached clients typically solve this
by using consistent hashing (a key stays on the same node, even in case of a
node failure or node addition/removal
not be surprised
if HAProxy or nginx can either do it or be taught how to do this.
--
Tollef Fog Heen
Redpill Linpro -- Changing the game!
t: +47 21 54 41 73
to consistent hashing and its use...
Correct me if I am wrong, but doesn't this mean that adding a new
varnish instance implies a full rehash ?
Yes, that is pretty much guaranteed to be the cost with any
stateless hashing.
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
p
On Jan 16, 2010, at 7:32 AM, Michael Fischer wrote:
On Sat, Jan 16, 2010 at 1:54 AM, Bendik Heltne bhel...@gmail.com wrote:
Our Varnish servers have ~ 120.000 - 150.000 objects cached in ~ 4GB
memory and the backends have a much easier life than before Varnish.
We are about to upgrade RAM
On Jan 18, 2010, at 12:58 PM, pub crawler wrote:
This is an inquiry for the Varnish community.
Wondering how many folks are using Varnish purely for binary storage
and caching (graphic files, archives, audio files, video files, etc.)?
Interested specifically in large Varnish installations
Varnish has a significant responsibility, not yet fully met, to tell
the VM system as much about what is going on as possible.
Poul-Henning
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never
the task at hand, but some work a lot better than
others.
Varnish has a significant responsibility, not yet fully met, to tell
the VM system as much about what is going on as possible.
Can you describe in more detail your comparative analysis and plans?
Thanks,
--Michael
in the origin server's
buffer caches, then interposing an additional caching layer may actually be
somewhat harmful because it will add some additional latency.
So far Varnish is performing very well for us as a web server of these
cached objects. The connection time for an item out of Varnish
phenomenon.) If the data is already cached in the origin
server's buffer caches, then interposing an additional caching layer may
actually be somewhat harmful because it will add some additional latency.
So far Varnish is performing very well for us as a web server of these
cached objects
On Jan 18, 2010, at 3:08 PM, Ken Brownfield wrote:
I have a hard time believing that any difference in the total response time
of a cached static object between Varnish and a general-purpose webserver
will be statistically significant, especially considering typical Internet
network
in a web server,
connection pooling, negotiating the transfer, etc. Most sites have so
many latency issues and such a lack of performance. Most folks seem
to just ignore it though and think all is well with low performance.
That's why Varnish and the folks here are so awesome. A band of data
In message 4c3149fb1001181416r7cd1c1c2n923a438d6a0df...@mail.gmail.com, pub crawler writes:
So far Varnish is performing very well for us as a web server of these
cached objects. The connection time for an item out of Varnish is
noticeably faster than with web servers we have used - even where
by side. And I stand by my original statement about
their performance relative to Varnish.
--Michael
The average workload of a cache hit, last I looked, was 7 system
calls, with typical service times, from request received from kernel
until response ready to be written to kernel, of 10-20 microseconds.
Well that explains some of the performance difference in Varnish (in
our experience) versus