Re: Connections to backend not closing

2010-02-10 Thread Michael Fischer
On Wed, Feb 10, 2010 at 4:17 PM, Thimo E. a...@digithi.de wrote:

 After 1 day of running Varnish I have 140 sockets to the backend webserver
 in the FIN_WAIT2 state, which is quite a lot.


I'm curious: why do you believe this is a lot?  Do you have evidence that it
is causing your server to behave suboptimally?  The impact should be no more
than a bit of RAM.

--Michael


Re: Memory usage

2010-01-26 Thread Michael Fischer
On Tue, Jan 26, 2010 at 7:23 PM, Martin Goldman m...@mgoldman.com wrote:

I'm running Varnish on a box with 4GB RAM. There are hundreds of thousands
 of objects being served, and I'm certain that they don't all fit in that
 relatively meager amount of RAM. I understand that Varnish's model dictates
 that the kernel will be trusted to use virtual memory as necessary if the
 cached objects don't fit in RAM. I have a few questions about this:

 1. How can you tell whether your Varnish objects fit in RAM?


You can't guarantee that they will unless you set your cache size at or
below the amount of RAM you have installed.
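A minimal illustration (the flags are varnishd's; the sizes and addresses
here are made up for a 4GB box, not a recommendation):

    varnishd -a :80 -b localhost:8080 -s malloc,3G

Capping -s malloc below physical RAM leaves headroom for the OS and for
Varnish's own overhead; the resident set then roughly tracks the cache size.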


 2. If I have objects residing in virtual memory, to what extent will my
 performance be adversely affected? If I want my site to be fast, do I
 basically need to go out and buy as much RAM as it will take so that virtual
 memory isn't needed?


Technically, it's "go out and buy as much RAM as it will take to avoid being
swamped by paging."  But yes.


 3. I noticed tonight that my machine was using a few hundred megs of swap
 space, which I've never seen happen before. Varnish is the only non-system
 service running on this box. My understanding was that Varnish would get
 only as much RAM as was available and then send the overflow into the
 file-backed virtual memory. If that's the case, though, then why is swap
 space being used? Is this just a side effect of how the kernel allocates
 memory, or is something else going on here?


Is your backing store file-based, or malloc-based?  If the latter, that
would explain the swap space being consumed.  Or, as Darryl said, the
housekeeping overhead of a VERY large file-backed cache could make the
Varnish process very large.
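For reference, the two backing stores are chosen at startup; a sketch, with
illustrative paths and sizes:

    varnishd -s malloc,3G                             # heap-backed; overflow pages to swap
    varnishd -s file,/var/lib/varnish/cache.bin,50G   # mmap'd file; pages to its own file

A malloc store sized past RAM pages to swap, which matches the symptom
above; a file store pages to its backing file instead.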

--Michael


Re: Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-16 Thread Michael Fischer
On Sat, Jan 16, 2010 at 1:54 AM, Bendik Heltne bhel...@gmail.com wrote:

 I must say that I am a bit confused.
 I don't understand the need for routing requests to different Varnish
 servers based on a hash algorithm. So I am wondering: what kind of sites
 are we talking about?


We're talking about sites that have a hot working set much larger than the
amount of RAM you can fit in a single Varnish instance (i.e., 32-64GB).

 Our Varnish servers have ~120,000-150,000 objects cached in ~4GB of
 memory, and the backends have a much easier life than before Varnish.
 We are about to upgrade RAM on the Varnish boxes, and eventually we
 can switch to a disk cache if needed.


If you receive more than 100 requests/sec per Varnish instance and you use a
disk cache, you will die.

--Michael


Re: Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-16 Thread Michael Fischer
On Sat, Jan 16, 2010 at 1:59 AM, Poul-Henning Kamp p...@phk.freebsd.dk wrote:

director h1 hash {
    { .backend = webserver; .weight = 1; }
    { .backend = varnish2; .weight = 1; }
    { .backend = varnish3; .weight = 1; }
}

What happens when varnish2 or varnish3 dies?

--Michael


Re: Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-16 Thread Michael Fischer
On Sat, Jan 16, 2010 at 10:44 AM, Poul-Henning Kamp p...@phk.freebsd.dk wrote:

 In message d002c4031001160929p1f688fc9mcc927dda2c684...@mail.gmail.com,
 Michael Fischer writes:

 For instance sizes larger than 2, I think a consistent hash is needed.
 Otherwise, the overall hit ratio will fall dramatically upon failure of an
 instance as the requests are rerouted.

 If you have perfect 1/3 splitting between 3 varnishes, having one die
 will do bad things to your hitrate until the remaining two distribute
 the load between them.

 That's a matter of math, and has nothing to do with the hash algorithm.


Let me put it this way: with a plain mod-n hash, losing one of n instances
remaps roughly (n-1)/n of all cached objects to different servers, while a
consistent hash remaps only the failed instance's ~1/n share.  It will be
way worse if you don't use a consistent hash.

--Michael


Re: Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-16 Thread Michael Fischer
On Sat, Jan 16, 2010 at 1:19 PM, Poul-Henning Kamp p...@phk.freebsd.dk wrote:

 In message dcccdf791001161258s3e960aa8t3cd379e42d760...@mail.gmail.com,
 David Birdsong writes:

 Right, but those 2 remaining are at least still being asked for the
 same URLs they were prior to the 1 dying.

 Correct, the hashing is canonical in the sense that if the
 configured backend is up, all traffic for its objects will be
 sent to it.


Are you saying that the default hash is not a mod-n-type algorithm?

If not, what happens when the failed backend is restored to service?

--Michael


Re: Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-16 Thread Michael Fischer
On Sat, Jan 16, 2010 at 1:37 PM, Poul-Henning Kamp p...@phk.freebsd.dk wrote:

Are you saying that the default hash is not a mod-n-type algorithm?

 Well, it is mod-n, with the footnote that n has nothing to do with
 the number of backends, because these have a configurable weight.

 If not, what happens when the failed backend is restored to service?

 It's probably simplest to paraphrase the code:

    Calculate hash over the full complement of backends.
    Is the selected backend sick?
        Calculate hash over the subset of healthy backends.


Ah, ok.  That should behave reasonably in the event of a backend failure if
you're implementing Varnish tiers.  Thanks for the clarification.
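For what it's worth, the sick/healthy decision comes from health probes, so
the setup described above needs a probe on each backend along these lines
(Varnish 2.x syntax; the host, port, and probe values are illustrative):

    backend varnish2 {
        .host = "10.0.0.2";
        .port = "80";
        .probe = {
            # marked sick once fewer than 3 of the last 5 probes succeed
            .url = "/";
            .interval = 5s;
            .timeout = 1s;
            .window = 5;
            .threshold = 3;
        }
    }

Once the probe marks varnish2 sick, the director rehashes its objects over
the healthy subset; when it recovers, they hash back to it as before.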

--Michael


Re: Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-15 Thread Michael Fischer
On Fri, Jan 15, 2010 at 3:39 PM, pub crawler pubcrawler@gmail.com wrote:

 The recommendation of load balancers in front of Varnish to facilitate
 this feature seems costly when we're talking about F5 gear.  The
 open-source solutions require at least two servers dedicated to this
 load-balancing function for sanity's sake (which is costly).


You can use virtual machines for the front-end load balancers if you're
concerned about cost.  Alternatively, you can run the load balancers on the
same hosts as the caches, provided the Varnish caches listen on different
ports or different IP addresses than the load balancers.
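A sketch of the co-located variant (addresses and ports are illustrative):

    # The load balancer binds *:80; Varnish takes a different port on the
    # same host and still points at the real origin:
    varnishd -a 127.0.0.1:6081 -b origin.example.com:80

The load balancer then forwards to 127.0.0.1:6081, so the two never contend
for the same listening socket.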

--Michael


Re: Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-15 Thread Michael Fischer
On Fri, Jan 15, 2010 at 6:14 PM, Michael Fischer mich...@dynamine.net wrote:

I'm all for putting backend hashing into Varnish for the purpose of routing
 requests to backends based on a consistent hash of the request parameters --
 and there's no reason why the backend can't be another Varnish instance.
  But the appropriate use of this is to more efficiently utilize cache memory
 by associating an object with a designated Varnish server in a pool, not for
 HA.  This was one of my first requests that still (alas) has seen no
 traction.


Although now that haproxy does it, my request may now be moot.
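For the curious, the haproxy feature in question looks roughly like this (a
sketch: the names and addresses are made up, and hash-type consistent
assumes a haproxy version that supports it):

    backend varnish_pool
        balance uri            # key server choice on the request URI
        hash-type consistent   # only a dead server's share gets remapped
        server v1 10.0.0.1:6081 check
        server v2 10.0.0.2:6081 check

balance uri pins each URI to one Varnish in the pool, which is exactly the
cache-partitioning use described above.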

--Michael


Re: Slow connections

2009-12-22 Thread Michael Fischer
Are you seeing any increased TCP send queue lengths on the haproxy side, or
TCP receive queue lengths on the Varnish side?  (netstat -a)  That might
provide some clue as to what's going on.
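Something along these lines surfaces them quickly (Linux netstat output
assumed; column positions differ slightly on BSD):

    # print TCP sockets with a non-zero Recv-Q or Send-Q
    netstat -ant | awk 'NR > 2 && ($2 > 0 || $3 > 0)'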

--Michael

On Tue, Dec 22, 2009 at 2:58 PM, Henry Paulissen h.paulis...@qbell.nl wrote:

 Have a look at the conntrack settings in the kernel (sysctl) on both sides.
 It could be that your conntrack table is full (conntrack only exists if you
 use iptables with netfilter_conntrack).

 Regards,
 Henry

 -----Original Message-----
 From: varnish-misc-boun...@projects.linpro.no
 [mailto:varnish-misc-boun...@projects.linpro.no] On Behalf Of Joe Williams
 Sent: Tuesday, December 22, 2009 18:12
 To: varnish-misc@projects.linpro.no
 Subject: Slow connections


 I am seeing a good number (~1 in 100) of connections to varnish (from
 haproxy) taking 3 seconds.  My first thought was the connection backlog,
 but somaxconn and listen_depth are both set higher than the number of
 connections.  Does anyone have suggestions on how to track down what is
 causing this, or settings I can use to try to alleviate it?

 Thanks.

 -Joe

 --
 Name: Joseph A. Williams
 Email: j...@joetify.com
 Blog: http://www.joeandmotorboat.com/



Re: Slow connections

2009-12-22 Thread Michael Fischer
haproxy has never supported keep-alive HTTP connections, to my knowledge.

--Michael

On Tue, Dec 22, 2009 at 3:41 PM, Henry Paulissen h.paulis...@qbell.nl wrote:

 Next thing to check: did you tune the TCP FIN timeout (on both servers)?
 By default, Linux holds connections open until the FIN timeout expires
 (tcp_fin_timeout).  We decreased ours to 3 seconds.
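 As a concrete example of that change (Linux sysctl interface, using the
 value from the previous sentence; apply on both servers):

     sysctl -w net.ipv4.tcp_fin_timeout=3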

 Regarding haproxy: did you force an HTTP connection close in haproxy?
 If all connections are kept alive, your queue will fill up very quickly.

 Henry

 -----Original Message-----
 From: Joe Williams [mailto:j...@joetify.com]
 Sent: Wednesday, December 23, 2009 0:23
 To: Henry Paulissen
 CC: varnish-misc@projects.linpro.no
 Subject: Re: Slow connections


 Thanks Henry, nf_conntrack_max is set high on both machines. I've had
 the full table issue before :P




 --
 Name: Joseph A. Williams
 Email: j...@joetify.com
 Blog: http://www.joeandmotorboat.com/
