Re: Weird log entries
Morning. Same behaviour, same lines, same nulls over here, with the same amd64 arch. I just discarded those lines in the filter I use to split varnishncsa logs. Not really a big problem, but something else to take into account apart from the crashes :-)

David (Kitai) Cruz
E-Comm Factory

2009/4/7 Alecs Henry alecshe...@gmail.com:
> So, after a while digging through other stuff, I realized what the (null) entries are: they are cache misses. For (apparently) every miss there is an entry in varnishncsa, and it's logged as a bunch of nulls. I haven't figured out what the other entries are, though.
>
> I still see some entries (not as many as the null ones) with the date field set to [00/Jan/1900:00:00:00 +], with all the other information valid (URL, Referer, BackendName, etc.) except for the object size, which is set to (null), and the server response code (also (null)). There are also entries where the only field set is the server response (200 or 404 -- these have the date field set correctly), and some entries with the host (where the backend name should go) set to 127.0.0.1 and the other fields set correctly.
>
> One thing, though: in none of the entries (correct, broken or plain weird) is the object size set; all I see are -, except for one entry that had it set (strangely enough, favicon.ico!), and even then only once.
>
> If anybody else has seen this or knows anything that might help explain it, I'd greatly appreciate some help. I can also send you any information you might deem necessary to figure this out. This is the varnish package for Debian (2.0.3-2_amd64), all with HTTP/1.1 requests.
>
> Thanks!
> Alecs
>
> On Mon, Feb 9, 2009 at 5:29 PM, Alecs Henry alecshe...@gmail.com wrote:
>> Hi Nick, I forgot to say... It's varnish-trunk. But I saw it on both varnish 2.0.1 and 2.0.2.
>>
>> Alecs
>>
>> On Mon, Feb 9, 2009 at 6:20 PM, Nick Loman n...@loman.net wrote:
>>> Alecs Henry wrote (EDITED TO INCLUDE THE ORIGINAL MESSAGE):
>>>> Hi guys, I was wondering if any of you have seen the following log entries when using varnishncsa:
>>>>
>>>>   127.0.0.1 - - [00/Jan/1900:00:00:00 +] (null) (null) (null) (null) - - -
>>>>
>>>> Those are coupled with:
>>>>
>>>>   127.0.0.1 - - [09/Feb/2009:19:39:46 +] (null) (null) (null) 200 39678 - -
>>>>
>>>> I can see an object on the page that has that size (an image) through Firebug, but the object didn't load into the browser until I hit reload. And there are also many entries like the following:
>>>>
>>>>   127.0.0.1 - - [09/Feb/2009:19:39:52 +] GET http://www.example.com/img/mod.gif HTTP/1.1 200 951 http://www.example.com/style.css Mozilla/5.0 (X11; U; Linux x86_64; en; rv:1.9.0.5) Gecko/2008121623 Ubuntu/8.10 (intrepid) Firefox/3.0.5
>>>>
>>>> These entries are from communication with the backend server; I expected them to show the backend name instead of 127.0.0.1. Is there an explanation?
>>>>
>>>> Thanks!
>>>> Alecs
>>>
>>> I've seen log entries like this for HTTP 1.0 requests (using Varnish 1.2), as that version of Varnish did not have support for parsing HTTP 1.0 headers.
>>>
>>> Cheers, Nick.
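[For anyone wanting to do the same discarding, a minimal sketch of such a filter; the patterns match the sample lines quoted above, and the file name is purely illustrative:]

  # Drop the all-null miss entries and the 00/Jan/1900 entries from
  # varnishncsa output before splitting it further.
  varnishncsa | grep -v -e '(null) (null) (null)' -e '\[00/Jan/1900' >> access.log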
Re: varnish 2.0.4 and new config changes
On Tue, Apr 07, 2009 at 04:11:04AM -0700, Jauder Ho wrote:
> On Tue, Apr 7, 2009 at 3:19 AM, Kristian Lyngstol krist...@redpill-linpro.com wrote:
>> On Tue, Apr 07, 2009 at 02:55:13AM -0700, Jauder Ho wrote:
>>> I just downloaded the new 2.0.4 release and noticed that a couple of things had changed, causing my existing config to stop working. I therefore have a couple of questions that I'm hoping you can help answer.
>>>
>>> a) Is beresp.ttl basically replacing obj.ttl? If so, I should be able to do the following in vcl_fetch, right?
>>>
>>>   if (beresp.ttl < 300s) {
>>>     set beresp.ttl = 300s;
>>>   }
>>
>> Yes, for all intents and purposes, beresp is obj. The name change reflects some underlying changes that don't really affect how you write VCL - yet.
>
> FYI, beresp.ttl does not work on varnish 2.0.4. It does work on -trunk though.

It would seem I was mistaken. After a quick check, it doesn't look like the obj-to-beresp commits made it into 2.0.4, so that's only relevant to trunk.

>>> c) Lastly, I have a config of user -> nginx1 -> varnish -> nginx2 -> php-fpm and noticed that the IP being logged on nginx2 is the internal IP. nginx1 has X-Forwarded-For set. (...)
>>
>> If your first nginx sets X-Forwarded-For, then setting req.http.X-Forwarded-For = client.ip; will overwrite it with nginx1's IP. So don't set X-Forwarded-For in vcl_recv and it will pass straight through Varnish. 2.0.4 does not affect this compared to 2.0.3.
>
> Right now, only nginx1 sets X-Forwarded-For; varnish does not set X-Forwarded-For; nginx2 is a fastcgi frontend and does not set X-Forwarded-For. With this config (and using -trunk), nginx2 still logs the gateway IP, so something strange is going on here.

I'm not familiar with how nginx logs, but I'd start by verifying:

1. That the X-Forwarded-For header does indeed reach the relevant server.
2. That the entity that logs honors X-Forwarded-For.

--
Kristian Lyngstøl
Redpill Linpro AS
Tlf: +47 21544179
Mob: +47 99014497
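[Putting both answers together, a minimal VCL sketch; trunk-era syntax, since on 2.0.4 itself the variable is still obj.ttl, and this is illustrative rather than a drop-in config:]

  sub vcl_fetch {
      # Enforce a floor of 300s on the TTL of fetched objects
      # (beresp.ttl on trunk; use obj.ttl on 2.0.4).
      if (beresp.ttl < 300s) {
          set beresp.ttl = 300s;
      }
  }

  sub vcl_recv {
      # Deliberately do NOT touch X-Forwarded-For here, so the header
      # nginx1 set passes through Varnish to nginx2 unchanged. Doing
      #   set req.http.X-Forwarded-For = client.ip;
      # would overwrite it with nginx1's address.
  }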
503 Service Unavailable + Stop Responding
Hi,

I'm getting the following error when accessing a picture (jpg) delivered by Varnish-Cache:

  Error 503 Service Unavailable
  Service Unavailable
  Guru Meditation: XID: 502211958
  Varnish http://www.varnish-cache.org/

This happens sporadically. When I access the picture from the backend directly, it works. My varnishd is running on a Debian server:

  proxycache2:~# uname -a
  Linux proxycache2 2.6.18-6-amd64 #1 SMP Mon Jun 16 22:30:01 UTC 2008 x86_64 GNU/Linux
  proxycache2:~# varnishd -V
  varnishd (varnish-2.0.3)
  Copyright (c) 2006-2008 Linpro AS / Verdens Gang AS

  varnishd -a :80 -b xx.xx.xx.40:80 -s malloc,15360M

It also happens that the daemon just stops answering on port 80, and I have to restart it manually!

  proxycache2:~# varnishstat -1
  uptime                 691288          .  Child uptime
  client_conn          13757134      19.90  Client connections accepted
  client_req           62501336      90.41  Client requests received
  cache_hit            59008654      85.36  Cache hits
  cache_hitpass              35       0.00  Cache hits for pass
  cache_miss            3454784       5.00  Cache misses
  backend_conn          3491596       5.05  Backend connections success
  backend_unhealthy           0       0.00  Backend connections not attempted
  backend_busy                0       0.00  Backend connections too many
  backend_fail             1094       0.00  Backend connections failures
  backend_reuse         3190621       4.62  Backend connections reuses
  backend_recycle       3300087       4.77  Backend connections recycles
  backend_unused              0       0.00  Backend connections unused
  n_srcaddr                1082          .  N struct srcaddr
  n_srcaddr_act              69          .  N active struct srcaddr
  n_sess_mem               4061          .  N struct sess_mem
  n_sess                    325          .  N struct sess
  n_object               727166          .  N struct object
  n_objecthead           412871          .  N struct objecthead
  n_smf                       0          .  N struct smf
  n_smf_frag                  0          .  N small free smf
  n_smf_large                 0          .  N large free smf
  n_vbe_conn                 25          .  N struct vbe_conn
  n_bereq                   173          .  N struct bereq
  n_wrk                      46          .  N worker threads
  n_wrk_create             6590       0.01  N worker threads created
  n_wrk_failed                0       0.00  N worker threads not created
  n_wrk_max                   0       0.00  N worker threads limited
  n_wrk_queue                 0       0.00  N queued work requests
  n_wrk_overflow          40391       0.06  N overflowed work requests
  n_wrk_drop                  0       0.00  N dropped work requests
  n_backend                   1          .  N backends
  n_expired                 498          .  N expired objects
  n_lru_nuked           2717782          .  N LRU nuked objects
  n_lru_saved                 0          .  N LRU saved objects
  n_lru_moved          50761824          .  N LRU moved objects
  n_deathrow                  0          .  N objects on deathrow
  losthdr                     0       0.00  HTTP header overflows
  n_objsendfile               0       0.00  Objects sent with sendfile
  n_objwrite           43664646      63.16  Objects sent with write
  n_objoverflow               0       0.00  Objects overflowing workspace
  s_sess               13757122      19.90  Total Sessions
  s_req                62501369      90.41  Total Requests
  s_pipe                     10       0.00  Total pipe
  s_pass                  37897       0.05  Total pass
  s_fetch               3477236       5.03  Total fetch
  s_hdrbytes        19737092290   28551.19  Total header bytes
  s_bodybytes      681892000484  986407.98  Total body bytes
  sess_closed            544021       0.79  Session Closed
  sess_pipeline          164402       0.24  Session Pipeline
  sess_readahead          65588       0.09  Session Read Ahead
  sess_linger                 0       0.00  Session Linger
  sess_herd            61809876      89.41  Session herd
  shm_records        2591397419    3748.65  SHM records
  shm_writes          170178899     246.18  SHM writes
  shm_flushes                65       0.00  SHM flushes due to overflow
  shm_cont                 5567       0.01  SHM MTX contention
  shm_cycles                931       0.00  SHM cycles through buffer
  sm_nreq                     0       0.00  allocator requests
  sm_nobj                     0          .  outstanding allocations
  sm_balloc                   0          .  bytes allocated
  sm_bfree                    0          .  bytes free
  sma_nreq              9708485      14.04  SMA allocator requests
  sma_nobj              1442487          .  SMA outstanding allocations
  sma_nbytes        16106012162          .  SMA outstanding bytes
  sma_balloc        81856959440          .  SMA bytes allocated
  sma_bfree         65750947278
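[Two counters above stand out: backend_fail (1094 failed backend connections, a plausible source of sporadic 503s) and n_lru_nuked (2.7 million objects evicted to make room, so the 15GB cache is under constant pressure). A quick way to watch just those while reproducing the problem; this is a plain grep over the full dump, nothing version-specific:]

  # Re-check only the counters relevant to the 503s and the evictions
  varnishstat -1 | egrep 'backend_fail|n_lru_nuked|n_wrk_(queue|drop|overflow)'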
Re: 503 Service Unavailable + Stop Responding
Search the /var/log/messages log to see whether varnish is restarting. After every crash, varnish starts answering with a 503 to every request.

David (Kitai) Cruz

2009/4/7 Sascha Kain s.k...@eraffe-media.de:
> Hi, I'm getting the following error when accessing a picture (jpg) delivered by Varnish-Cache:
>
>   Error 503 Service Unavailable
>   Guru Meditation: XID: 502211958
>
> [...]
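[A concrete version of that check; the log path and exact message wording vary by distribution, so the patterns here are only a starting point:]

  # The management process logs child deaths and restarts to syslog;
  # look for those lines around the time the 503s started.
  grep varnishd /var/log/messages | egrep -i 'child|died|panic'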
Re: 503 Service Unavailable + Stop Responding
Hi,

it's running correctly; no such messages in the logfiles. I'm still having the second problem, that varnishd just stops responding on port 80 and all requests die. Maybe I'll switch to 2.0.4 and check it out under high load.

Kitai wrote:
> Search the /var/log/messages log to see whether varnish is restarting. After every crash, varnish starts answering with a 503 to every request.
>
> 2009/4/7 Sascha Kain s.k...@eraffe-media.de:
>> Hi, I'm getting the following error when accessing a picture (jpg) delivered by Varnish-Cache:
>> [...]
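[Until the hang is understood, a crude watchdog can at least bound the outage. A sketch one might run from cron; the init-script path, the URL, and the timeout are illustrative assumptions, not something from this thread:]

  # If Varnish stops answering on port 80 within 5 seconds, restart it.
  curl -s -o /dev/null --max-time 5 http://127.0.0.1:80/ || /etc/init.d/varnish restart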
varnish killing off the child process after a few minutes
Hi guys,

We're on varnish 2.0.3. It looks like varnish restarts the child process for us every so often, causing 503s :(. I was wondering if this is a known issue.

  # ps auxww | grep varnishd
  nobody  3385  0.0  0.3  8098024  7560  ?      Sl  19:49  0:00  /usr/sbin/varnishd -P /var/run/varnishd.pid -a :6081 -T localhost:6082 -p sess_timeout 10 -p obj_workspace 8192 -p sess_workspace 32768 -p listen_depth 8192 -p connect_timeout 1s -p thread_pool_min 100 -f /etc/varnish/default.vcl
  root    3938  0.0  0.0   111584   904  ?      Ss  Apr03  0:01  /usr/sbin/varnishd -P /var/run/varnishd.pid -a :6081 -T localhost:6082 -p sess_timeout 10 -p obj_workspace 8192 -p sess_workspace 32768 -p listen_depth 8192 -p connect_timeout 1s -p thread_pool_min 100 -f /etc/varnish/default.vcl
  root    3961  0.0  0.0     3884   672  pts/2  S+  19:55  0:00  grep --colour=auto varnishd

  # varnishstat -1
  uptime                    472          .  Child uptime
  client_conn                40       0.08  Client connections accepted
  client_req                100       0.21  Client requests received
  cache_hit                   1       0.00  Cache hits
  cache_hitpass               0       0.00  Cache hits for pass
  cache_miss                 23       0.05  Cache misses
  backend_conn               99       0.21  Backend connections success
  backend_unhealthy           0       0.00  Backend connections not attempted
  backend_busy                0       0.00  Backend connections too many
  backend_fail                0       0.00  Backend connections failures
  backend_reuse              82       0.17  Backend connections reuses
  backend_recycle            99       0.21  Backend connections recycles
  backend_unused              0       0.00  Backend connections unused
  n_srcaddr                   1          .  N struct srcaddr
  n_srcaddr_act               0          .  N active struct srcaddr
  n_sess_mem                 13          .  N struct sess_mem
  n_sess                      1          .  N struct sess
  n_object                   23          .  N struct object
  n_objecthead               29          .  N struct objecthead
  n_smf                      47          .  N struct smf
  n_smf_frag                  0          .  N small free smf
  n_smf_large                 1          .  N large free smf
  n_vbe_conn                  6          .  N struct vbe_conn
  n_bereq                     6          .  N struct bereq
  n_wrk                     200          .  N worker threads
  n_wrk_create              200       0.42  N worker threads created
  n_wrk_failed                0       0.00  N worker threads not created
  n_wrk_max               13326      28.23  N worker threads limited
  n_wrk_queue                 0       0.00  N queued work requests
  n_wrk_overflow              0       0.00  N overflowed work requests
  n_wrk_drop                  0       0.00  N dropped work requests
  n_backend                   2          .  N backends
  n_expired                   0          .  N expired objects
  n_lru_nuked                 0          .  N LRU nuked objects
  n_lru_saved                 0          .  N LRU saved objects
  n_lru_moved                 1          .  N LRU moved objects
  n_deathrow                  0          .  N objects on deathrow
  losthdr                     0       0.00  HTTP header overflows
  n_objsendfile               0       0.00  Objects sent with sendfile
  n_objwrite                 65       0.14  Objects sent with write
  n_objoverflow               0       0.00  Objects overflowing workspace
  s_sess                     40       0.08  Total Sessions
  s_req                     100       0.21  Total Requests
  s_pipe                      0       0.00  Total pipe
  s_pass                     76       0.16  Total pass
  s_fetch                    99       0.21  Total fetch
  s_hdrbytes              35654      75.54  Total header bytes
  s_bodybytes            206284     437.04  Total body bytes
  sess_closed                 0       0.00  Session Closed
  sess_pipeline               0       0.00  Session Pipeline
  sess_readahead              0       0.00  Session Read Ahead
  sess_linger                 0       0.00  Session Linger
  sess_herd                 100       0.21  Session herd
  shm_records              9594      20.33  SHM records
  shm_writes               1884       3.99  SHM writes
  shm_flushes                 0       0.00  SHM flushes due to overflow
  shm_cont                   10       0.02  SHM MTX contention
  shm_cycles                  0       0.00  SHM cycles through buffer
  sm_nreq                   186       0.39  allocator requests
  sm_nobj                    46          .  outstanding allocations
  sm_balloc              425984          .  bytes allocated
  sm_bfree           6431973376          .  bytes free
  sma_nreq                    0       0.00  SMA allocator requests
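[Note in the ps output that the manager (root, running since Apr03) is fine while the child (nobody) was started only minutes earlier at 19:49, which fits the restart theory. If syslog doesn't say why the child dies, one option is to run varnishd in debug mode so any panic output stays on the terminal. A sketch, with the flags trimmed down from the command line above:]

  # Debug mode: varnishd stays in the foreground with a CLI on stdin;
  # type "start" at the prompt to launch the child, and any child
  # panic/assert output lands on the terminal instead of vanishing.
  varnishd -d -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl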
Varnish 2.0.3 consuming excessive memory
Hi All,

I have an odd problem that I have only noticed since moving from 1.1.2 to 2.0.3: excessive memory consumption by the varnish child process. For example, I have a varnish instance with a 256MB cache allocated that is currently consuming 4.9GB of resident memory (6.5GB virtual). The instance has only been running for 4 days and has only 25MB of objects in the cache. This is clearly excessive and is causing us serious problems in terms of memory pressure on the machine.

Our VCL is largely unchanged from our 1.1.2 setup, except for the obvious VCL syntax changes and the introduction of request restarts under certain limited scenarios.

Can anyone shed some light?

regards,
Darryl Dixon
Winterhouse Consulting Ltd
http://www.winterhouseconsulting.com
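[For anyone wanting to reproduce the numbers quoted above, the resident vs. virtual size of the child is visible with plain ps; nothing Varnish-specific here:]

  # VSZ/RSS (in KB) for all varnishd processes; the child is the
  # one running as the unprivileged user.
  ps -o pid,user,vsz,rss,args -C varnishd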
Re: Varnish 2.0.3 consuming excessive memory
Not that I have an answer, but I'd be curious to see the differences in 'pmap -x <pid>' output for the different children.

--Michael

On Apr 7, 2009, at 6:27 PM, Darryl Dixon - Winterhouse Consulting wrote:
> Hi All,
> [...]
>
> One further footnote to this. I have a second varnish instance running on the same machine which talks to different backend servers (still primarily the same sort of content), with the VCL only fractionally different - it does not seem to suffer from the runaway memory consumption of the first instance. The only difference in the VCL between the two is that in the one with runaway memory this is present in vcl_recv():
>
>   +    if (req.http.Pragma ~ ".*no-cache.*" || req.http.Cache-Control ~ ".*no-cache.*") {
>   +        purge_url(regsub(req.url, "[?].*$", ".*$"));
>   +    }
>   +
>
> Is there possibly something in the regsub engine being triggered that is very expensive and would cause it to consume and hold on to large amounts of memory?
>
> regards,
> Darryl Dixon
> Winterhouse Consulting Ltd
> http://www.winterhouseconsulting.com
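[One plausible mechanism, offered as a hedge rather than a confirmed diagnosis of this instance: in 2.0-era Varnish, purge_url() appends a rule to the purge list that every later lookup is tested against, and entries linger until no cached object predates them, so issuing one per no-cache request can grow memory even though each regsub itself is cheap. A sketch that bounds the trigger; the ACL name and network are illustrative assumptions:]

  acl trusted_purgers {
      # Illustrative: only this network may trigger purges
      "192.168.0.0"/24;
  }

  sub vcl_recv {
      # Same purge as above, but gated so the purge list doesn't grow
      # with every no-cache request from arbitrary clients.
      if ((req.http.Pragma ~ "no-cache" || req.http.Cache-Control ~ "no-cache")
          && client.ip ~ trusted_purgers) {
          purge_url(regsub(req.url, "[?].*$", ".*$"));
      }
  }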
Re: Varnish 2.0.3 consuming excessive memory
Further to this, I just ran the pmap on the runaway process again (roughly 15 minutes since last time), and the large ~2GB allocation has jumped by about 60MB in that time, e.g.:

--
3940: /u01/app/varnish/sbin/varnishd -a web-cms-edit1.prd.internal:8080 web-cms-edit.prd.internal:8080 -T web-cms-edit1.prd.internal:18080 -t 3600 -f /u01/data/prd-edit/varnish/vcl.conf -s file,/u01/data/prd-edit/varnish/storage.cache,256M -n /u01/data/prd-edit/varnish/state -P /u01/data/prd-edit/varnish/varnishd.pid

Address         Kbytes     RSS  Anon  Locked  Mode   Mapping
0040               312       -     -       -  r-x--  varnishd
0064d000             8       -     -       -  rw---  varnishd
0064f000            68       -     -       -  rw---  [ anon ]
4000                 4       -     -       -  -----  [ anon ]
40001000         10240       -     -       -  rw---  [ anon ]
[...snip...]
2aaab090        262144       -     -       -  rw-s-  storage.cache
2aaac090            28       -     -       -  r-x--  vcl.1P9zoqAU.so
2aaac0907000      2048       -     -       -  -----  vcl.1P9zoqAU.so
2aaac0b07000         8       -     -       -  rw---  vcl.1P9zoqAU.so
2aaac0c0          2056       -     -       -  rw---  [ anon ]
2aaac0f0          2052       -     -       -  rw---  [ anon ]
2aaac120        233476       -     -       -  rw---  [ anon ]
2aaacf70       2403328       -     -       -  rw---  [ anon ]
7fff4b619000        84       -     -       -  rw---  [ stack ]
ff60              8192       -     -       -  -----  [ anon ]
------------  --------  ------  ----  ------
total kB       3187344       -     -       -
--

...And in the time it's taken me to compose this mail, it's grown by another 20MB:

--
2aaacf70       2418688       -     -       -  rw---  [ anon ]
--

...so it is a pretty consistent, continuous upward trend in memory usage.

regards,
Darryl Dixon
Winterhouse Consulting Ltd
http://www.winterhouseconsulting.com

>> Not that I have an answer, but I'd be curious to see the differences in 'pmap -x <pid>' output for the different children.
>>
>> --Michael
>
> That's a good thought - I'd almost forgotten about pmap. Anyhow, the 'normal' process is set up with a 2GB storage cache, and pmap shows pretty much what you would expect: the largest anonymous segment is 41MB, with the vast majority of allocated memory booked against storage.cache (2GB). The pmap for the 'runaway' process shows 256MB booked against storage.cache, and then one of the anonymous allocations is 2.3GB. For example:
>
> Normal process:
> --
> 3896: /u01/app/varnish/sbin/varnishd -a web-cms1.prd.internal:8080 web-cms.prd.internal:8080 -T web-cms1.prd.internal:18080 -t 108000 -f /u01/data/prd/varnish/vcl.conf -s file,/u01/data/prd/varnish/storage.cache,2048M -n /u01/data/prd/varnish/state -P /u01/data/prd/varnish/varnishd.pid
>
> Address         Kbytes     RSS  Anon  Locked  Mode   Mapping
> 0040               312       -     -       -  r-x--  varnishd
> 0064d000             8       -     -       -  rw---  varnishd
> 0064f000            68       -     -       -  rw---  [ anon ]
> 4000                 4       -     -       -  -----  [ anon ]
> 40001000         10240       -     -       -  rw---  [ anon ]
> [...snip...]
> 2aaab090       2097152       -     -       -  rw-s-  storage.cache
> 2aab3090            28       -     -       -  r-x--  vcl.1P9zoqAU.so
> 2aab30907000      2044       -     -       -  -----  vcl.1P9zoqAU.so
> 2aab30b06000         8       -     -       -  rw---  vcl.1P9zoqAU.so
> 2aab30c0          2056       -     -       -  rw---  [ anon ]
> 2aab30f0          2056       -     -       -  rw---  [ anon ]
> 2aab3120         41984       -     -       -  rw---  [ anon ]
> 7fffae8f2000        88       -     -       -  rw---  [ stack ]
> ff60              8192       -     -       -  -----  [ anon ]
> ------------  --------  ------  ----  ------
> total kB       2468512       -     -       -
> --
>
> Runaway process:
> --
> 3940: /u01/app/varnish/sbin/varnishd -a web-cms-edit1.prd.internal:8080 web-cms-edit.prd.internal:8080 -T web-cms-edit1.prd.internal:18080 -t 3600 -f /u01/data/prd-edit/varnish/vcl.conf -s file,/u01/data/prd-edit/varnish/storage.cache,256M -n /u01/data/prd-edit/varnish/state -P /u01/data/prd-edit/varnish/varnishd.pid
>
> Address         Kbytes     RSS  Anon  Locked  Mode   Mapping
> 0040               312       -     -       -  r-x--  varnishd
> 0064d000             8       -     -       -  rw---  varnishd
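[To turn those ad-hoc re-runs into a record of the growth rate, a small sampling loop; the PID 3940 and the 15-minute interval are taken from the message above, and are otherwise illustrative:]

  # Log the runaway child's total mapped size every 15 minutes.
  while true; do
      date
      pmap -x 3940 | tail -1
      sleep 900
  done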