Hi Pål,

Sure - here are the non-standard parameters:

% echo 'param.show' | varnishadm | grep -v '(default)'
200
accept_filter              off [bool]
gzip_level                 8
gzip_memlevel              6
max_restarts               2 [restarts]
max_retries                0 [retries]
thread_pool_max            350 [threads]
thread_pool_min            225 [threads]
thread_pools               12 [pools]
vsl_space                  250M [bytes]
vsm_space                  4M [bytes]

VMODs in use - std and directors are the ones bundled with Varnish; softpurge comes from varnish-modules-0.9.1_1:

import std;
import directors;
import softpurge;

I will have to scrutinize the VCL code paths, but I'm 99% certain that softpurge is not being called.
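
For reference, the only way softpurge would come into play is via an explicit call in the VCL, along these lines (an illustrative sketch only - the PURGE guard and synth responses are hypothetical, not taken from our actual config):

vcl 4.0;

import softpurge;

sub vcl_hit {
    # Hypothetical: expire the cached object but keep it available for
    # grace/keep instead of removing it outright.
    if (req.method == "PURGE") {
        softpurge.softpurge();
        return (synth(200, "Soft-purged"));
    }
}

sub vcl_miss {
    # Usually mirrored in vcl_miss so a PURGE for an object that is not
    # currently cached still gets a sensible response.
    if (req.method == "PURGE") {
        softpurge.softpurge();
        return (synth(200, "Soft-purged"));
    }
}

That is the pattern I'll be grepping for when I double-check.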

Cheers,
-Mark


On Wed, 18 Oct 2017 05:34:40 -0400, Pål Hermunn Johansen <[email protected]> wrote:

Hello Mark,

Can you include a list of VMODs you are using? Also, did you change
any of the parameters from the default? The last question can be
answered by running

varnishadm param.show

Best,
Pål


2017-10-18 4:17 GMT+02:00 Mark Staudinger <[email protected]>:
Hi Folks,

I've seen this panic twice recently, on two companion servers running
Varnish 4.1.8 on FreeBSD 11.0.

% varnishd -V
varnishd (varnish-4.1.8 revision d266ac5c6)
Copyright (c) 2006 Verdens Gang AS
Copyright (c) 2006-2015 Varnish Software AS

% uname -a
FreeBSD hostname 11.0-RELEASE-p2 FreeBSD 11.0-RELEASE-p2 #0: Mon Oct 24 06:55:27 UTC 2016 [email protected]:/usr/obj/usr/src/sys/GENERIC  amd64

Unfortunately I do not have the full backtrace, but here's what I do have.

Oct 16 12:24:47 hostname varnishd[50931]: Child (50932) Last panic at: Mon, 16 Oct 2017 12:24:47 GMT
"Assert error in obj_getmethods(), cache/cache_obj.c line 55:
  Condition((oc->stobj->stevedore) != NULL) not true.
thread = (cache-worker)
version = varnish-4.1.8 revision d266ac5c6
ident = FreeBSD,11.0-RELEASE-p2,amd64,-junix,-sfile,-smalloc,-sfile,-hcritbit,kqueue
now = 3794380.754560 (mono), 1508156686.857677 (real)
Backtrace:
  0x433a38: varnishd
  0x431821: varnishd
  0x431f62: varnishd
  0x425f9d: varnishd
  0x41eb0c: varnishd
  0x420d51: varnishd
  0x41e8db: varnishd
  0x41e36a: varnishd
  0x426155: varnishd
busyobj = 0xbf88dbbb60 {
  ws = 0xbf88dbbbf8 {
    id = \"bo\",
    {s,f,r,e} = {0xbf88dbdab0,+4712,0x0,+57480},
  },
  refcnt = 2,
  retries = 0,
  failed = 1,
  state = 1,
  flags = {do_esi, is_gzip},
  http_conn = 0xbf88dbde30 {
    fd = 153,
    doclose = RX_BODY,
    ws = 0xbf88dbbbf8,
    {rxbuf_b, rxbuf_e} = {0xbf88dbdee0, 0xbf88dbe134},
    {pipeline_b, pipeline_e} = {0xbf88dbe134, 0xbf88dbea65},
Oct 16 12:24:47 hostname kernel: xbf88dbea65},

Varnishd process uptime was near-identical on both servers, and the panics occurred at around the same time on both machines, which suggests the panic may have been triggered by a particular request and/or some resource-related issue. The time between the panics was approximately 19 days.

I would welcome any advice about known causes of this particular
assertion failure!

Best Regards,
Mark Staudinger
_______________________________________________
varnish-misc mailing list
[email protected]
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc