ema added a comment.
Those connection resets on the varnish backend layer happen when frontend
caches are full and varnish cannot make space for a newly fetched object body:
-- ExpKill LRU_Cand p=0x7f7bbc64f740 f=0x0 r=1
-- ExpKill LRU x=980813893
# [...] the two lines above repeat 50 times, matching our current
# nuke_limit setting
-- ExpKill LRU_Exhausted
-- FetchError Could not get storage
-- BackendClose 11217 vcl-ed50cc64-c0ad-4266-89a4-9e4539972e1a.be_cp3033_esams_wmnet
# varnish frontend closing the connection to the varnish backend
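For context: nuke_limit is the varnishd runtime parameter capping how many
objects a single fetch may evict (nuke) from the LRU to make room for the new
body; once the limit is hit, the fetch fails with "Could not get storage" as
above. A hedged sketch of how the parameter can be inspected and changed (the
value shown is illustrative, not a recommendation):

```
# show the current value on a cache host
varnishadm param.show nuke_limit

# or set it at daemon startup (illustrative value)
varnishd ... -p nuke_limit=1000
```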
Interestingly, the problem is not reproducible with larger objects, as
varnish autonomously decides they're too large and does not cache them (see
the "pass" entry in X-Cache):
$ curl -L --resolve releases.wikimedia.org:443:91.198.174.192 --http1.1 -v \
    -o /dev/null \
    "https://releases.wikimedia.org/parsoid/parsoid_0.10.0all_all.deb?x=$RANDOM" 2>&1 \
    | egrep "Content-Length|X-Cache:"
< Content-Length: 46716380
< X-Cache: cp1077 pass, cp3033 miss, cp3033 pass
We are currently limiting the maximum object size on cache_upload frontends
to 256K. The assumption was that cache_text would not really benefit from the
cutoff, given that its dataset is made of smaller objects (compared to
upload). Let's add the limit to cache_text too.
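A minimal VCL sketch of such a cutoff, assuming vmod_std is available; the
actual subroutine our puppetized VCL uses may differ:

```
import std;

sub vcl_backend_response {
    # do not cache bodies larger than 256K (262144 bytes) on the
    # frontend; mark them uncacheable so they are passed through
    # instead of competing for LRU space
    if (std.integer(beresp.http.Content-Length, 0) > 262144) {
        set beresp.uncacheable = true;
        return (deliver);
    }
}
```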
TASK DETAIL
https://phabricator.wikimedia.org/T216006
_______________________________________________
Wikidata-bugs mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/wikidata-bugs