Wanted to inject another discussion item into this thread and
see if the idea is confirmed in other folks' current architectures.
Sorry in advance for being verbose.
Often web servers (in my experience) are smaller servers, with less RAM and
fewer CPUs than the app servers and databases. A typical we
On Jan 18, 2010, at 4:35 PM, Poul-Henning Kamp wrote:
> In message <97f066dd-4044-46a7-b3e1-34ce928e8...@slide.com>, Ken Brownfield writes:
>
>> Ironically and IMHO, one of the barriers to Varnish scalability
>> is its thread model, though this problem strikes in the thousands
>> of connections.
On Jan 18, 2010, at 4:15 PM, Ken Brownfield wrote:
> Ironically and IMHO, one of the barriers to Varnish scalability is its thread
> model, though this problem strikes in the thousands of connections.
Agreed. In an early thread on varnish-misc in February 2008 I concluded that
reducing thread_
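The thread parameters under discussion are startup tunables. A minimal sketch of adjusting the worker-thread pools, using Varnish 2.x parameter names (the values here are placeholders for illustration, not tuning advice):

```shell
# Illustrative varnishd invocation; parameter names as in Varnish 2.x,
# values are placeholders, not recommendations.
varnishd -a :80 -b localhost:8080 \
    -p thread_pools=4 \
    -p thread_pool_min=50 \
    -p thread_pool_max=1000
```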
In message , "Michael S. Fischer" writes:
>On Jan 18, 2010, at 5:20 AM, Tollef Fog Heen wrote:
>> My suggestion is to also look at Cache-control: no-cache, possibly also
>> private and no-store and obey those.
>
>Why wasn't it doing it all along?
Because we wanted to give the backend a chance
In message <97f066dd-4044-46a7-b3e1-34ce928e8...@slide.com>, Ken Brownfield writes:
>Ironically and IMHO, one of the barriers to Varnish scalability
>is its thread model, though this problem strikes in the thousands
>of connections.
It's only a matter of work to pool slow clients in Varnish into
In message <87f6439f-76fe-416c-b750-5a53a9712...@dynamine.net>, "Michael S. Fischer" writes:
>I'm merely contending that the small amount of added
>latency for a cache hit, where neither server is operating at full
>capacity, is not enough to significantly affect the user experience.
Which t
On Jan 18, 2010, at 4:06 PM, Poul-Henning Kamp wrote:
> In message <02d0ec1a-d0b0-40ee-b278-b57714e54...@dynamine.net>, "Michael S. Fischer" writes:
>
>> But we are not discussing serving dynamic content in this thread
>> anyway. We are talking about binary files, aren't we? Yes? Blobs
>
On Jan 18, 2010, at 4:03 PM, Michael S. Fischer wrote:
>> Does [Apache] perform "well" for static files in the absence of any other
>> function? Yes. Would I choose it for anything other than an application
>> server? No. There are much better solutions out there, and the proof is in
>> the
In message <364f5e3e-0d1e-4c95-b101-b7a00c276...@slide.com>, Ken Brownfield writes:
>A cache hit under Varnish will be comparable in latency to a
>dedicated static server hit, regardless of the backend.
Only provided the "dedicated static server" is written to work in
a modern SMP/VM system, whi
In message <02d0ec1a-d0b0-40ee-b278-b57714e54...@dynamine.net>, "Michael S. Fischer" writes:
>But we are not discussing serving dynamic content in this thread
>anyway. We are talking about binary files, aren't we? Yes? Blobs
>on disk? Unless everyone is living on a different plane than me,
>t
On Jan 18, 2010, at 3:54 PM, Ken Brownfield wrote:
> Adding unnecessary software overhead will add latency to requests to the
> filesystem, and obviously should be avoided. However, a cache in front of a
> general web server will 1) cause an object miss to have additional latency
> (though sma
> Let me be clear, in case I have not been clear enough already:
>
> I am not talking about the edge cases of those low-concurrency, high-latency,
> scripted-language webservers that are becoming tied to web application
> frameworks like Rails and Django and that are the best fit for front-end
> c
In message <8c3f8d23-3e20-4e2c-ba7c-902d843ff...@dynamine.net>, "Michael S. Fischer" writes:
>On Jan 18, 2010, at 1:52 PM, Poul-Henning Kamp wrote:
>Can you describe in more detail your comparative analysis and plans?
First of all, the major players in this are the 'stevedores' like
"malloc",
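For readers following along: the stevedores (storage backends) are selected with varnishd's -s option. A hedged sketch using Varnish 2.x syntax; the paths and sizes are illustrative:

```shell
# malloc stevedore: cache objects live in heap memory (1 GB here)
varnishd -a :80 -b localhost:8080 -s malloc,1G

# file stevedore: an mmap()ed file; the kernel's VM system pages it in and out
varnishd -a :80 -b localhost:8080 -s file,/var/cache/varnish/storage.bin,10G
```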
On Jan 18, 2010, at 3:16 PM, Michael S. Fischer wrote:
> On Jan 18, 2010, at 3:08 PM, Ken Brownfield wrote:
>
>> In the real world, sites run their applications through web servers, and
>> this fact does (and should) guide the decision on the base web server to
>> use, not static file serving.
>
> The average workload of a cache hit, last I looked, was 7 system
> calls, with typical service times, from request received from kernel
> until response ready to be written to kernel, of 10-20 microseconds.
Well that explains some of the performance difference in Varnish (in
our experience) vers
On Jan 18, 2010, at 3:47 PM, Poul-Henning Kamp wrote:
> In message , "Michael S. Fischer" writes:
>
>> That's why you don't use those webservers as origin servers for
>> that purpose. But you don't use Varnish for it either. It's not
>> an origin server anyway.
>
> Actually, for protocol purposes, Varnish is an origin server.
On Jan 18, 2010, at 3:37 PM, pub crawler wrote:
>> Differences in latency of serving static content can vary widely based on
>> the web server in use, easily tens of milliseconds or more. There are
>> dozens of web servers out there, some written in interpreted languages, many
>> custom-written f
In message , "Michael S. Fischer" writes:
>That's why you don't use those webservers as origin servers for
>that purpose. But you don't use Varnish for it either. It's not
>an origin server anyway.
Actually, for protocol purposes, Varnish is an origin server.
If you read RFC2616 very carefully
In message <4c3149fb1001181416r7cd1c1c2n923a438d6a0df...@mail.gmail.com>, pub crawler writes:
>So far Varnish is performing very well for us as a web server of these
>cached objects. The connection time for an item out of Varnish is
>noticeably faster than with web servers we have used - even w
> Differences in latency of serving static content can vary widely based on
> the web server in use, easily tens of milliseconds or more. There are
> dozens of web servers out there, some written in interpreted languages, many
> custom-written for a specific application, many with add-ons and modu
On Jan 18, 2010, at 3:08 PM, Ken Brownfield wrote:
>> I have a hard time believing that any difference in the total response time
>> of a cached static object between Varnish and a general-purpose webserver
>> will be statistically significant, especially considering typical Internet
>> network
> I have a hard time believing that any difference in the total response time
> of a cached static object between Varnish and a general-purpose webserver
> will be statistically significant, especially considering typical Internet
> network latency. If there's any difference it should be well u
On Jan 18, 2010, at 2:16 PM, pub crawler wrote:
>> Most kernels cache recently-accessed files in RAM, and so common web servers
>> such as Apache can already serve up static objects very quickly if they
>> are located in the buffer cache. (Varnish's apparent speed is largely
>> based on the
> Most kernels cache recently-accessed files in RAM, and so common web servers
> such as Apache can already serve up static objects very quickly if they are
> located in the buffer cache. (Varnish's apparent speed is largely based on
> the same phenomenon.) If the data is already cached in
On Jan 18, 2010, at 1:52 PM, Poul-Henning Kamp wrote:
> In message , "Michael S. Fischer" writes:
>
>> What VM can overcome page-thrashing incurred by constantly referencing a
>> working set that is significantly larger than RAM?
>
> No VM can "overcome" the task at hand, but some work a lot better than others.
In message , "Michael S. Fischer" writes:
>What VM can overcome page-thrashing incurred by constantly referencing a
>working set that is significantly larger than RAM?
No VM can "overcome" the task at hand, but some work a lot better than
others.
Varnish has a significant responsibility, not ye
On Jan 18, 2010, at 12:58 PM, pub crawler wrote:
> This is an inquiry for the Varnish community.
>
> Wondering how many folks are using Varnish purely for binary storage
> and caching (graphic files, archives, audio files, video files, etc.)?
>
> Interested specifically in large Varnish installations
On Jan 18, 2010, at 1:05 PM, Poul-Henning Kamp wrote:
> In message <43a238d7-433d-4000-8aa5-6c574882d...@dynamine.net>, "Michael S. Fischer" writes:
>
>> I should have been more clear. If you overcommit and use disk you
>> will die. Even SSD is a problem as the write latencies are high.
In message <43a238d7-433d-4000-8aa5-6c574882d...@dynamine.net>, "Michael S. Fischer" writes:
>I should have been more clear. If you overcommit and use disk you
>will die. Even SSD is a problem as the write latencies are high.
That is still very much dependent on the quality of the VM subsystem
This is an inquiry for the Varnish community.
Wondering how many folks are using Varnish purely for binary storage
and caching (graphic files, archives, audio files, video files, etc.)?
Interested specifically in large Varnish installations with either
a high number of files or where files are large
In message <14b75f07-969a-43d1-8cc9-9605a2642...@slide.com>, Ken Brownfield writes:
>> If you receive more than 100 requests/sec per Varnish instance and you
>> use a disk cache, you will die.
>
>I was surprised by this, what appears to be grossly irresponsible
>guidance, given how large the in
On Jan 18, 2010, at 12:31 PM, Ken Brownfield wrote:
On Jan 16, 2010, at 7:32 AM, Michael Fischer wrote:
On Sat, Jan 16, 2010 at 1:54 AM, Bendik Heltne wrote:
Our Varnish servers have ~ 120.000 - 150.000 objects cached in ~ 4GB
memory and the backends have a much easier life than before
Hey there,
Anybody know what's the plan to release saint mode? :D
Thanks a lot,
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc
On Jan 16, 2010, at 7:32 AM, Michael Fischer wrote:
> On Sat, Jan 16, 2010 at 1:54 AM, Bendik Heltne wrote:
>
> Our Varnish servers have ~ 120.000 - 150.000 objects cached in ~ 4GB
> memory and the backends have a much easier life than before Varnish.
> We are about to upgrade RAM on the Varnish
On Jan 18, 2010, at 5:20 AM, Tollef Fog Heen wrote:
> we are considering changing the defaults on how the cache-control header
> is handled in Varnish. Currently, we only look at s-maxage and maxage
> to decide if and how long an object should be cached. (We also look at
> expires, but that's not
2010/1/18 Tollef Fog Heen :
>
> Hi all,
>
> we are considering changing the defaults on how the cache-control header
> is handled in Varnish. Currently, we only look at s-maxage and maxage
> to decide if and how long an object should be cached. (We also look at
> expires, but that's not relevant
Hi all,
we are considering changing the defaults on how the cache-control header
is handled in Varnish. Currently, we only look at s-maxage and maxage
to decide if and how long an object should be cached. (We also look at
expires, but that's not relevant here.)
My suggestion is to also look at Cache-control: no-cache, possibly also
private and no-store, and obey those.
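The suggested behavior can already be expressed in VCL. A minimal sketch, assuming Varnish 2.0.x syntax (where vcl_fetch sees the backend response as obj; later versions renamed it beresp):

```vcl
# Hedged sketch: refuse to cache responses the backend marks uncacheable.
sub vcl_fetch {
    if (obj.http.Cache-Control ~ "(no-cache|no-store|private)") {
        pass;
    }
}
```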
On Mon, 18 Jan 2010 10:25:22 +0100
Tollef Fog Heen wrote:
> ]] Simon Effenberg
>
> Hi,
>
> | Is there anything which was changed between 2.0.3 and 2.0.6 which can
> | cause this issue?
>
> It's an intentional change (see r4298). What is the problem you are
> seeing? As in, why is it a problem that it does not time out until you
> send data?
]] John Norman
Hi,
| Since the cache is cleared after a restart, how do people avoid
| slamming their backends as the cache is refilled?
You can set max_connections on the backend. That will give 503s to any
clients that can't get a backend connection, but this might be
preferable to having yo
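A sketch of such a backend declaration, assuming Varnish 2.x VCL (the host, port, and limit are illustrative):

```vcl
backend app {
    .host = "10.0.0.10";
    .port = "8080";
    # Cap concurrent backend connections; requests beyond the cap get a 503
    # instead of stampeding a cold backend while the cache refills.
    .max_connections = 50;
}
```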
]] Simon Effenberg
Hi,
| Is there anything which was changed between 2.0.3 and 2.0.6 which can
| cause this issue?
It's an intentional change (see r4298). What is the problem you are
seeing? As in, why is it a problem that it does not time out until you
send data?
--
Tollef Fog Heen
Redpil
In message <53c652a09719c54da24741d0157cb26904c5f...@tfprdexs1.tf1.groupetf1.fr>:
>> It's probably simplest to paraphrase the code:
>>
>> Calculate hash over full complement of backends.
>> Is the selected backend sick
>> Calculate hash over subset of healthy backends
>
>Let's get back to consistent hashing and its use...
]] Ken Brownfield
| 3) Hash/bucket URLs to cache pairs.
|
| Same as 2), but for every hash bucket you would send those hits to two
| machines (think RAID-10). This provides redundancy from the effects
| of 2a), and gives essentially infinite scalability for the price of
| doubling your miss rate
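The pairing scheme in 3) can be sketched in a few lines of Python; names are illustrative, and this is an idea sketch, not Varnish configuration:

```python
import hashlib


def cache_pair(url, caches):
    """Map a URL to a bucket of two caches (RAID-10 style): every object
    is owned by exactly two machines, trading a doubled miss rate for
    redundancy when one node in a pair fails. Assumes an even number of
    caches."""
    digest = int(hashlib.md5(url.encode()).hexdigest(), 16)
    bucket = digest % (len(caches) // 2)
    return caches[2 * bucket], caches[2 * bucket + 1]


primary, mirror = cache_pair("/img/logo.png", ["c0", "c1", "c2", "c3"])
```

A miss on the primary can then be retried on the mirror before falling through to the origin.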
-Original message-
> It's probably simplest to paraphrase the code:
>
> Calculate hash over full complement of backends.
> Is the selected backend sick
> Calculate hash over subset of healthy backends
Let's get back to consistent hashing and its use...
Correc
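The paraphrased director logic quoted above (hash over the full complement of backends; if the pick is sick, rehash over the healthy subset) can be sketched as follows. This is a Python illustration of the idea, not the Varnish source:

```python
import hashlib


def pick_backend(key, backends, healthy):
    """Hash over the full complement of backends; if the chosen backend
    is sick, rehash over the healthy subset only. Keys that map to a
    healthy backend never move, which is the consistent-hashing property
    under discussion. Assumes at least one healthy backend."""
    def pick(pool):
        digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return pool[digest % len(pool)]

    choice = pick(backends)
    if choice in healthy:
        return choice
    return pick([b for b in backends if b in healthy])
```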