2010/1/15 Rob S rtshils...@gmail.com:
John Norman wrote:
Folks,
A couple more questions:
(1) Are there any good strategies for splitting load across Varnish
front-ends? Or is the common practice to have just one Varnish server?
(2) How do people avoid single-point-of-failure for Varnish? Do people
run Varnish on two servers, …
Ken Brownfield wrote:
| 3) Hash/bucket URLs to cache pairs.
|
| Same as 2), but for every hash bucket you would send those hits to two
| machines (think RAID-10). This provides redundancy from the effects
| of 2a), and gives essentially infinite scalability for the price of
| doubling your miss rate.
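Ken's bucket-to-pairs scheme can be sketched in a few lines. This is a toy illustration, not anything from Varnish itself; the server names and the MD5 bucketing are assumptions:

```python
import hashlib

# Hypothetical cache pairs: each hash bucket is served by two machines
# (think RAID-10), so either one can die without losing the bucket.
CACHE_PAIRS = [
    ("varnish-a1", "varnish-a2"),
    ("varnish-b1", "varnish-b2"),
]

def pair_for_url(url: str) -> tuple:
    """Map a URL to the cache pair responsible for its bucket."""
    digest = hashlib.md5(url.encode()).digest()
    bucket = int.from_bytes(digest[:4], "big") % len(CACHE_PAIRS)
    return CACHE_PAIRS[bucket]
```

A request can then go to either machine of the returned pair, which is what doubles the miss rate: every object ends up fetched and stored twice, once per machine in the pair.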
On Jan 16, 2010, at 7:32 AM, Michael Fischer wrote:
On Sat, Jan 16, 2010 at 1:54 AM, Bendik Heltne bhel...@gmail.com wrote:
Our Varnish servers have ~ 120.000 - 150.000 objects cached in ~ 4GB
memory and the backends have a much easier life than before Varnish.
We are about to upgrade RAM …
In message a8edc1fb-e3e2-4be7-887a-92b0d1da9...@dynamine.net, Michael S. Fischer writes:
What VM can overcome page-thrashing incurred by constantly referencing a
working set that is significantly larger than RAM?
No VM can overcome the task at hand, but some work a lot better than
others.
On Jan 18, 2010, at 1:52 PM, Poul-Henning Kamp wrote:
In message a8edc1fb-e3e2-4be7-887a-92b0d1da9...@dynamine.net, Michael S. Fischer writes:
What VM can overcome page-thrashing incurred by constantly referencing a
working set that is significantly larger than RAM?
No VM can overcome the task at hand, but some work a lot better than
others.
So it is possible to start your Varnish with one VCL program, and have
a small script change to another one some minutes later.
What would this small script look like?
Sorry if it's a dumb question :)
In message 1ff67d7369ed1a45832180c7c1109bca13e23e7...@tmmail0.trademe.local, Ross Brown writes:
So it is possible to start your Varnish with one VCL program, and have
a small script change to another one some minutes later.
What would this small script look like?
sleep 600
I hadn't used varnishadm before. Looks useful.
Thanks!
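PHK's `sleep 600` is the middle of such a script. A fuller sketch, assuming varnishadm can reach the management port (the config name, file path, and port here are made up, not from the thread), might be:

```sh
# Load a new VCL under a name, wait a while, then make it the active one.
varnishadm -T localhost:6082 vcl.load newconf /etc/varnish/new.vcl
sleep 600
varnishadm -T localhost:6082 vcl.use newconf
```

`vcl.load` compiles and stores the new configuration without activating it, so the switch done by `vcl.use` is effectively instant.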
-----Original Message-----
From: Poul-Henning Kamp
Sent: Monday, 18 January 2010 9:38 a.m.
To: Ross Brown
Cc: varnish-misc@projects.linpro.no
Subject: Re: Strategies for …
Hey, folks, I just want to thank for this great thread -- I think it
would be well worth breaking it up into Q/A for the FAQ.
We're still a bit undecided as to how we're going to configure our
systems, but we feel like we have options now.
In message ff646d15-26b5-4843-877f-fb8d469d2...@slide.com, Ken Brownfield writes:
It is important to be absolutely clear about what your objective is here:
availability, cache-hit ratio, or raw performance. The best solution will
depend on what you are after.
For a lot of purposes, you will get …
In message 4c3149fb1001151733g73f7a5dfjc84342b9df7f0...@mail.gmail.com, pub crawler writes:
Varnish performs very well. Extending this to have a cluster
functionality within Varnish I think just makes sense.
You can do some clever stuff with the hash director to distribute the
content over a …
On Sat, Jan 16, 2010 at 1:54 AM, Bendik Heltne bhel...@gmail.com wrote:
I must say that I am a bit confused.
I don't understand the need of routing requests to different Varnish
servers based on a hash algorithm, so I am wondering what kind of sites
we are talking about.
We're talking about …
On Sat, Jan 16, 2010 at 1:59 AM, Poul-Henning Kamp p...@phk.freebsd.dk wrote:
director h1 hash {
{ .backend = webserver; .weight = 1; }
{ .backend = varnish2; .weight = 1; }
{ .backend = varnish3; .weight = 1; }
}
What happens when varnish2 or varnish3 fails?
In message 4c3149fb1001160738l2233481dn82c34c2ba1fcc...@mail.gmail.com, pub crawler writes:
Poul, is anyone running the hash director distribution method like
you provided (in production)?
No idea...
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
In message d002c4031001160929p1f688fc9mcc927dda2c684...@mail.gmail.com, Michael Fischer writes:
For instance sizes larger than 2, I think a consistent hash is needed.
Otherwise, the overall hit ratio will fall dramatically upon failure of an
instance as the requests are rerouted.
If you have …
In message dcccdf791001161258s3e960aa8t3cd379e42d760...@mail.gmail.com, David Birdsong writes:
Right, but those 2 remaining are at least still being asked for the
same url's they were prior to the 1 dying.
Correct, the hashing is canonical in the sense that if the
configured backend is up, all …
On Sat, Jan 16, 2010 at 1:37 PM, Poul-Henning Kamp p...@phk.freebsd.dk wrote:
Are you saying that the default hash is not a mod-n-type algorithm?
If not, what …
Well, it is mod-n, with the footnote that n has nothing to do with
the number of backends, because these have a configurable weight.
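Michael's point about consistent hashing can be illustrated with a toy sketch (the hash ring below is an assumption for illustration, not Varnish's director code): with plain mod-N the chosen server depends on N, so losing one instance remaps most URLs; with a consistent hash, only the dead instance's own share moves.

```python
import hashlib

def h(key: str) -> int:
    """64-bit hash of a string (MD5 prefix, for illustration only)."""
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

def mod_n(url, servers):
    # Plain mod-N: the result depends on len(servers), so removing
    # one server remaps most URLs to a different server.
    return servers[h(url) % len(servers)]

def ring(url, servers, vnodes=100):
    # Toy consistent-hash ring: each server owns many points; a URL goes
    # to the nearest point clockwise from its own hash, so removing a
    # server only moves the URLs that server owned.
    points = [(h(f"{s}#{i}"), s) for s in servers for i in range(vnodes)]
    return min(points, key=lambda p: (p[0] - h(url)) % 2**64)[1]

urls = [f"/page/{i}" for i in range(1000)]
full = ["v1", "v2", "v3"]
degraded = ["v1", "v3"]  # v2 died

moved_mod = sum(mod_n(u, full) != mod_n(u, degraded) for u in urls)
moved_ring = sum(ring(u, full) != ring(u, degraded) for u in urls)
# Typically moved_ring covers roughly the dead server's third of the
# URLs, while moved_mod remaps around two thirds of them.
```

This is the cache-hit-ratio argument in the thread: after a failure, every "moved" URL is a guaranteed miss on its new server.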
Thanks again Poul for all you do.
How does Varnish handle the hashing and locating of data when a
backend returns to the pool? Wouldn't the hashing be wrong for prior
loaded items, since a machine has returned and the pool widens?
Just trying to figure out the implications of this, because in our
environment we regularly find ourselves pulling servers offline.
In message 4c3149fb1001161400n38a1ef1al18985bc3ad1ad...@mail.gmail.com, pub crawler writes:
Just trying to figure out the implications of this because in our
environment we regularly find ourselves pulling servers offline.
Wondering if the return of a Varnish would operate like a cold-cache
miss …
Folks,
A couple more questions:
(1) Are there any good strategies for splitting load across Varnish
front-ends? Or is the common practice to have just one Varnish server?
(2) How do people avoid single-point-of-failure for Varnish? Do people
run Varnish on two servers, amassing similar local …
A couple more questions:
(1) Are there any good strategies for splitting load across Varnish
front-ends? Or is the common practice to have just one Varnish server?
We have 3 servers. A bit overkill, but then we have redundancy even if
one fails. I guess 2 is the minimum option if you have an …
HAProxy is open source and works pretty well. You can also do load
balancing based on a hash of the URL if you want.
On Fri, Jan 15, 2010 at 3:09 PM, Bendik Heltne bhel...@gmail.com wrote:
A couple more questions:
(1) Are there any good strategies for splitting load across Varnish
front-ends? Or is …
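A minimal sketch of the URL-hash balancing Rodrigo mentions, in HAProxy terms (addresses, ports, and backend names are hypothetical; `balance uri` hashes the request URI, and `hash-type consistent` gives the consistent-hash behaviour discussed later in the thread):

```
# Hypothetical HAProxy frontend hashing the URI across two Varnish caches.
frontend http-in
    bind *:80
    default_backend varnish_pool

backend varnish_pool
    balance uri              # hash the request URI to pick a server
    hash-type consistent     # only the dead node's URIs move on failure
    server varnish1 10.0.0.11:6081 check
    server varnish2 10.0.0.12:6081 check
```

With `check` enabled, HAProxy also handles the failover question: a dead Varnish is taken out of the hash automatically.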
On Fri, Jan 15, 2010 at 10:11 AM, Rodrigo Benzaquen
rodr...@mercadolibre.com wrote:
HAProxy is open source and works pretty well. You can also do load
balancing based on a hash of the URL if you want.
Aye, the development is pretty active also. I asked for a consistent
hash option in haproxy and got one.
Have we considered adding pooling functionality to Varnish, much like
what they have in memcached?
Run multiple Varnishes with load distributed amongst the identified
Varnish server pool. So an element in Varnish gets hashed, and the
hash identifies the server in the pool it's on. If the …
On Fri, Jan 15, 2010 at 3:39 PM, pub crawler pubcrawler@gmail.comwrote:
The recommendation of load balancers in front on Varnish to facilitate
this feature seems costly when talking about F5 gear. The open
source solutions require at least two severs dedicated to this load
balancing
On Jan 15, 2010, at 3:39 PM, pub crawler wrote:
Have we considered adding pooling functionality to Varnish much like
what they have in memcached?
Run multiple Varnishes with load distributed amongst the identified
Varnish server pool. So an element in Varnish gets hashed, and the
hash …
At first glance, this is doing something that you can more cheaply and
efficiently do at a higher level, with software dedicated to that purpose.
It's interesting, but I'm not sure it's more than just a restatement of the
same solution with its own problems.
Varnish performs very well.
On Fri, Jan 15, 2010 at 6:14 PM, Michael Fischer mich...@dynamine.net wrote:
I'm all for putting backend hashing into Varnish for the purpose of routing
requests to backends based on a consistent hash of the request parameters --
and there's no reason why the backend can't be another Varnish …