2010/1/15 Rob S
> John Norman wrote:
> > Folks,
> >
> > A couple more questions:
> >
> > (1) Are there any good strategies for splitting load across Varnish
> > front-ends? Or is the common practice to have just one Varnish server?
> >
> > (2) How do people avoid single-point-of-failure for Varnish?
In message <8c3f8d23-3e20-4e2c-ba7c-902d843ff...@dynamine.net>, "Michael S. Fischer" writes:
>On Jan 18, 2010, at 1:52 PM, Poul-Henning Kamp wrote:
>Can you describe in more detail your comparative analysis and plans?
First of all, the major players in this area are the 'stevedores' like
"malloc",
On Jan 18, 2010, at 1:52 PM, Poul-Henning Kamp wrote:
> In message , "Michael S. Fischer" writes:
>
>> What VM can overcome page-thrashing incurred by constantly referencing a
>> working set that is significantly larger than RAM?
>
> No VM can "overcome" the task at hand, but some work a lot better than others.
In message , "Michael S. Fischer" writes:
>What VM can overcome page-thrashing incurred by constantly referencing a
>working set that is significantly larger than RAM?
No VM can "overcome" the task at hand, but some work a lot better than
others.
Varnish has a significant responsibility, not ye
On Jan 18, 2010, at 1:05 PM, Poul-Henning Kamp wrote:
> In message <43a238d7-433d-4000-8aa5-6c574882d...@dynamine.net>, "Michael S. Fischer" writes:
>
>> I should have been more clear. If you overcommit and use disk you
>> will die. Even SSD is a problem as the write latencies are high.
In message <43a238d7-433d-4000-8aa5-6c574882d...@dynamine.net>, "Michael S. Fischer" writes:
>I should have been more clear. If you overcommit and use disk you
>will die. Even SSD is a problem as the write latencies are high.
That is still very much dependent on the quality of the VM subsystem
In message <14b75f07-969a-43d1-8cc9-9605a2642...@slide.com>, Ken Brownfield writes:
>> If you receive more than 100 requests/sec per Varnish instance and you
>> use a disk cache, you will die.
>
>I was surprised by this, what appears to be grossly irresponsible
>guidance, given how large the in
On Jan 18, 2010, at 12:31 PM, Ken Brownfield wrote:
On Jan 16, 2010, at 7:32 AM, Michael Fischer wrote:
On Sat, Jan 16, 2010 at 1:54 AM, Bendik Heltne wrote:
Our Varnish servers have ~ 120.000 - 150.000 objects cached in ~ 4GB
memory and the backends have a much easier life than before
On Jan 16, 2010, at 7:32 AM, Michael Fischer wrote:
> On Sat, Jan 16, 2010 at 1:54 AM, Bendik Heltne wrote:
>
> Our Varnish servers have ~ 120.000 - 150.000 objects cached in ~ 4GB
> memory and the backends have a much easier life than before Varnish.
> We are about to upgrade RAM on the Varnish
]] Ken Brownfield
| 3) Hash/bucket URLs to cache pairs.
|
| Same as 2), but for every hash bucket you would send those hits to two
| machines (think RAID-10). This provides redundancy from the effects
| of 2a), and gives essentially infinite scalability for the price of
| doubling your miss rate.
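One way to read Ken's pairing scheme is sketched below; the byte-sum hash, the two-bucket split, and the machine names are all illustrative assumptions, not anything from the thread:

```shell
# Toy version of the hash/bucket-pair scheme: a byte-sum stands in for the
# real URL hash, and each bucket maps to a mirrored pair of caches.
url="/products/42.html"            # hypothetical URL
h=$(printf '%s' "$url" | od -An -tu1 | awk '{ for (i = 1; i <= NF; i++) s += $i } END { print s }')
bucket=$(( h % 2 ))                # two buckets -> two mirrored pairs
case $bucket in
  0) pair="varnishA varnishB" ;;   # bucket 0 is served by this pair
  1) pair="varnishC varnishD" ;;   # bucket 1 by this one
esac
echo "$url -> bucket $bucket -> $pair"
```

Either member of the pair can serve a hit, which is what buys the redundancy at the price of the doubled miss rate.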
varnish-misc@projects.linpro.no
> Subject: Re: Strategies for splitting load across varnish instances? And
> avoiding single-point-of-failure?
>
> In message <1ff67d7369ed1a45832180c7c1109bca13e23e7...@tmmail0.trademe.local>,
> Ross Brown writes:
>>> So it is possible to
In message <1ff67d7369ed1a45832180c7c1109bca13e23e7...@tmmail0.trademe.local>,
Ross Brown writes:
>> So it is possible to start your Varnish with one VCL program, and have
>> a small script change to another one some minutes later.
In message <1ff67d7369ed1a45832180c7c1109bca13e23e7...@tmmail0.trademe.local>,
Ross Brown writes:
>> So it is possible to start your Varnish with one VCL program, and have
>> a small script change to another one some minutes later.
>
>What would this small script look like?
sleep 600
> So it is possible to start your Varnish with one VCL program, and have
> a small script change to another one some minutes later.
What would this small script look like?
Sorry if it's a dumb question :)
___
varnish-misc mailing list
varnish-misc@pr
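A sketch of what such a script might look like (the VCL name, the path, and the dry-run default are assumptions; the thread's own fragment shows only a "sleep 600"):

```shell
#!/bin/sh
# Hypothetical VCL-switching script: let varnishd serve from its boot-time
# VCL for a while, then compile and activate a second VCL via the CLI.
# VADM defaults to a dry-run echo; set VADM=varnishadm (with the right
# -T management address) to really apply it.
VADM="${VADM:-echo varnishadm}"
DELAY="${DELAY:-0}"            # set DELAY=600 to match the thread's example

sleep "$DELAY"
$VADM vcl.load later /etc/varnish/later.vcl   # load and compile the new VCL
$VADM vcl.use later                           # switch new requests over to it
```

vcl.load compiles the program without touching traffic, so the vcl.use switch is effectively atomic for new requests.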
In message <4c3149fb1001161400n38a1ef1al18985bc3ad1ad...@mail.gmail.com>, pub crawler writes:
>Just trying to figure out the implications of this because in our
>environment we regularly find ourselves pulling servers offline.
>Wondering if the return of a Varnish would operate like a cold-cache
Thanks again Poul for all you do.
How does Varnish handle the hashing and locating of data when a
backend returns to the pool? Wouldn't the hashing be wrong for previously
loaded items, since a machine has returned and the pool widens?
Just trying to figure out the implications of this because in our
On Sat, Jan 16, 2010 at 1:37 PM, Poul-Henning Kamp wrote:
>Are you saying that the default hash is not a mod-n-type algorithm?
>
> Well, it is mod-n, with the footnote that n has nothing to do with
> the number of backends, because these have a configurable weight.
>
> >If not, what happens when t
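PHK's footnote (n is the sum of the configured weights, not the backend count) can be illustrated with a toy selection loop; the pool, the weights, and the stand-in hash value are all assumptions, not Varnish's actual code:

```shell
# Toy weight-based pick: hash modulo the total weight W chooses a point in
# [0, W), and each backend claims a contiguous slice as wide as its weight.
picked=$(awk 'BEGIN {
  n = split("webserver:1 varnish2:1 varnish3:2", spec, " ")  # hypothetical pool, W = 4
  for (i = 1; i <= n; i++) { split(spec[i], f, ":"); name[i] = f[1]; w[i] = f[2]; W += f[2] }
  p = 7 % W                    # 7 stands in for the URL hash; here p = 3
  for (i = 1; i <= n; i++) { if (p < w[i]) { print name[i]; exit }; p -= w[i] }
}')
echo "hash 7 -> $picked"
```

Doubling varnish3's weight doubles its slice of [0, W), which is why n tracks the weights rather than the number of backends.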
In message , Michael Fischer writes:
>> In message , David Birdsong writes:
>>
>> >Right, but those 2 remaining are at least still being asked for the
>> >same url's they were prior to the 1 dying.
>>
>> Correct, the hashing is "canonical" in the sense that if the
>> configured backend is
On Sat, Jan 16, 2010 at 1:19 PM, Poul-Henning Kamp wrote:
> In message , David Birdsong writes:
>
> >Right, but those 2 remaining are at least still being asked for the
> >same url's they were prior to the 1 dying.
>
> Correct, the hashing is "canonical" in the sense that if the
> configured
In message , David Birdsong writes:
>Right, but those 2 remaining are at least still being asked for the
>same url's they were prior to the 1 dying.
Correct, the hashing is "canonical" in the sense that if the
configured backend is up, all traffic for "its" objects will be
sent to it.
--
Poul-
On Sat, Jan 16, 2010 at 10:44 AM, Poul-Henning Kamp wrote:
> In message , Michael Fischer writes:
>
>>For instance sizes larger than 2, I think a consistent hash is needed.
>> Otherwise, the overall hit ratio will fall dramatically upon failure of an
>>instance as the requests are rerouted.
On Sat, Jan 16, 2010 at 10:44 AM, Poul-Henning Kamp wrote:
> In message , Michael Fischer writes:
>
> >For instance sizes larger than 2, I think a consistent hash is needed.
> > Otherwise, the overall hit ratio will fall dramatically upon failure of an
> >instance as the requests are rerouted.
In message , Michael Fischer writes:
>For instance sizes larger than 2, I think a consistent hash is needed.
> Otherwise, the overall hit ratio will fall dramatically upon failure of an
>instance as the requests are rerouted.
If you have perfect 1/3 splitting between 3 varnishes, having one die
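Fischer's concern can be made concrete with a quick count (synthetic key hashes and naive mod-n placement assumed; Varnish's weighted scheme differs in detail). Shrinking from 3 nodes to 2 remaps every key whose residues mod 3 and mod 2 disagree, which is about two thirds of the key space, whereas a consistent hash would move only the dead node's one third:

```shell
# Of 1000 synthetic key hashes, count how many map to a different server
# when a 3-node pool (k mod 3) shrinks to a 2-node pool (k mod 2).
moved=$(seq 0 999 | awk '$1 % 3 != $1 % 2 { n++ } END { print n }')
echo "$moved of 1000 keys moved"   # ~2/3; a consistent hash would move ~333
```

So the hit-ratio cliff is not the lost node's cache but the reshuffling of the survivors' keys.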
On Sat, Jan 16, 2010 at 9:22 AM, Poul-Henning Kamp wrote:
> In message , Michael Fischer writes:
>
> >On Sat, Jan 16, 2010 at 1:59 AM, Poul-Henning Kamp wrote:
> >
> >>director h1 hash {
> >>{ .backend webserver; .weight 1; }
> >>{ .backend varnish2; .weight 1; }
In message , Michael Fischer writes:
>On Sat, Jan 16, 2010 at 1:59 AM, Poul-Henning Kamp wrote:
>
>>director h1 hash {
>>{ .backend webserver; .weight 1; }
>>{ .backend varnish2; .weight 1; }
>>{ .backend varnish3; .weight 1; }
>
>
>What happens when varnish2 or varnish3 dies?
In message <4c3149fb1001160738l2233481dn82c34c2ba1fcc...@mail.gmail.com>, pub crawler writes:
>Poul, is anyone running the hash director distribution method like
>you provided (in production)?
No idea...
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP
On Sat, Jan 16, 2010 at 1:59 AM, Poul-Henning Kamp wrote:
>director h1 hash {
>{ .backend webserver; .weight 1; }
>{ .backend varnish2; .weight 1; }
>{ .backend varnish3; .weight 1; }
What happens when varnish2 or varnish3 dies?
--Michael
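The usual answer to Michael's question is health probes: directors only hand requests to members that pass their checks. A hedged sketch of a fuller configuration (Varnish 2.x-era syntax from memory; the hosts, ports, and probe values are assumptions):

```vcl
# If varnish2's probe fails, the hash director skips it and its share of
# the URL space is redistributed across the remaining healthy members.
backend webserver { .host = "10.0.0.1"; .port = "80"; }
backend varnish2 {
  .host = "10.0.0.2"; .port = "80";
  .probe = { .url = "/"; .interval = 5s; .window = 8; .threshold = 3; }
}
backend varnish3 {
  .host = "10.0.0.3"; .port = "80";
  .probe = { .url = "/"; .interval = 5s; .window = 8; .threshold = 3; }
}

director h1 hash {
  { .backend = webserver; .weight = 1; }
  { .backend = varnish2;  .weight = 1; }
  { .backend = varnish3;  .weight = 1; }
}

sub vcl_recv {
  set req.backend = h1;
}
```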
Poul, is anyone running the hash director distribution method like
you provided (in production)?
What sort of throughput and what sort of use limitations has anyone
experienced with this?
This is quite exciting - and seems like a fairly good solution to
scale/cluster Varnish.
-Paul
> You can d
On Sat, Jan 16, 2010 at 1:54 AM, Bendik Heltne wrote:
> I must say that I am a bit confused.
> I don't understand the need of routing requests to different varnish
> servers based on hash algorithm. So I am wondering what kind of sites
> are we talking about?
>
We're talking about sites that have
In message <4c3149fb1001151733g73f7a5dfjc84342b9df7f0...@mail.gmail.com>, pub crawler writes:
>Varnish performs very well. Extending this to have cluster
>functionality within Varnish I think just makes sense.
You can do some clever stuff with the hash director to distribute the
content ove
> I'm all for putting backend hashing into Varnish for the purpose of routing
> requests to backends based on a consistent hash of the request parameters --
> and there's no reason why the backend can't be another Varnish instance.
> But the appropriate use of this is to more efficiently utilize c
In message , Ken Brownfield writes:
It is important to be absolutely clear about what your objective is here:
availability, cache-hit ratio or raw performance. The best solution will
depend on what you are after.
For a lot of purposes, you will get a lot of mileage out of a number of
parallel Varnishes
On Fri, Jan 15, 2010 at 5:33 PM, pub crawler wrote:
> Varnish performs very well. Extending this to have cluster
> functionality within Varnish I think just makes sense.
haproxy and F5 equipment both perform very well.
> The workaround
> solutions so far seem to involve quite a bit of ha
On Fri, Jan 15, 2010 at 6:14 PM, Michael Fischer wrote:
> I'm all for putting backend hashing into Varnish for the purpose of routing
> requests to backends based on a consistent hash of the request parameters --
> and there's no reason why the backend can't be another Varnish instance.
> But the a
On Fri, Jan 15, 2010 at 5:33 PM, pub crawler wrote:
>> At first glance, this is doing something that you can more cheaply and
>> efficiently do at a higher level, with software dedicated to that purpose.
>> It's interesting, but I'm not sure it's more than just a restatement of the
>> same solution with its own problems.
> At first glance, this is doing something that you can more cheaply and
> efficiently do at a higher level, with software dedicated to that purpose.
> It's interesting, but I'm not sure it's more than just a restatement of the
> same solution with its own problems.
Varnish performs very well.
On Jan 15, 2010, at 3:39 PM, pub crawler wrote:
> Have we considered adding pooling functionality to Varnish much like
> what they have in memcached?
>
> Run multiple Varnish(es) and load distributed amongst the identified
> Varnish server pool. So an element in Varnish gets hashed and the
>
On Fri, Jan 15, 2010 at 3:39 PM, pub crawler wrote:
> The recommendation of load balancers in front on Varnish to facilitate
> this feature seems costly when talking about F5 gear. The open
> source solutions require at least two severs dedicated to this load
> balancing function for sanity sake
Have we considered adding pooling functionality to Varnish much like
what they have in memcached?
Run multiple Varnish(es) and load distributed amongst the identified
Varnish server pool. So an element in Varnish gets hashed and the
hash identifies the server in the pool it's on. If the server
Lots of good suggestions; I would look to LVS and/or haproxy for going on the
cheap; otherwise a NetScaler or F5 would do the trick.
With multiple caches, there are three ways I see to handle it:
1) Duplicate cached data on all Varnish instances.
This is a simple, stateless configuration, but i
On Fri, Jan 15, 2010 at 1:19 PM, David Birdsong wrote:
> On Fri, Jan 15, 2010 at 10:11 AM, Rodrigo Benzaquen wrote:
> > HAProxy is open source and works pretty well. Also you can do load
> > balancing based on a hash of the URL if you want.
>
> Aye, the development is pretty active also. I asked for a
On Fri, Jan 15, 2010 at 10:11 AM, Rodrigo Benzaquen wrote:
> HAProxy is open source and works pretty well. Also you can do load balancing
> based on a hash of the URL if you want.
Aye, the development is pretty active also. I asked for a consistent
hash option in haproxy and got one in less than 2 weeks.
HAProxy is open source and works pretty well. Also you can do load balancing
based on a hash of the URL if you want.
On Fri, Jan 15, 2010 at 3:09 PM, Bendik Heltne wrote:
> > A couple more questions:
> >
> > (1) Are there any good strategies for splitting load across Varnish
> > front-ends? Or is the common practice to have just one Varnish server?
> A couple more questions:
>
> (1) Are there any good strategies for splitting load across Varnish
> front-ends? Or is the common practice to have just one Varnish server?
We have 3 servers. A bit overkill, but then we have redundancy even if
one fails. I guess 2 is the minimum option if you have an
John Norman wrote:
> Folks,
>
> A couple more questions:
>
> (1) Are there any good strategies for splitting load across Varnish
> front-ends? Or is the common practice to have just one Varnish server?
>
> (2) How do people avoid single-point-of-failure for Varnish? Do people
> run Varnish on two servers, amassing similar local caches?
Folks,
A couple more questions:
(1) Are there any good strategies for splitting load across Varnish
front-ends? Or is the common practice to have just one Varnish server?
(2) How do people avoid single-point-of-failure for Varnish? Do people
run Varnish on two servers, amassing similar local caches?