How do you guys deal with large request rates when offloading SSL? For
example, Hitch/Nginx do 5k requests/second, which then get sent to Varnish
and that's another 5k requests/second. This causes a tremendous spike in
internal connections which ultimately increases resource consumption
two-fold.
then you can reset the SSH tunnel.
--
Guillaume Quintard
On Wed, Nov 15, 2017 at 3:48 PM, Andrei <lag...@gmail.com> wrote:
> What do you mean exactly when you say "drain the connections"? :D
>
> On Wed, Nov 15, 2017 at 8:46 AM, Guillaume Quintard <
> guilla...@va
you the number of open connections
> to any backend.
>
> --
> Guillaume Quintard
>
> On Wed, Nov 15, 2017 at 3:42 PM, Andrei <lag...@gmail.com> wrote:
>
>> Thanks for the pointers! The tunnel setup is pretty flexible so I'll go
>> ahead and mark the backend sick
on here, is it?
>
> --
> Guillaume Quintard
>
> On Wed, Nov 15, 2017 at 3:29 PM, Andrei <lag...@gmail.com> wrote:
>
>> Hi Guillaume,
>>
>> Thanks for getting back to me
>>
>> On Wed, Nov 15, 2017 at 8:11 AM, Guillaume Quintard <
>> guilla.
time
> you ask it the content)
>
How would you suggest "restarting" the request to try and force a grace
cache object to be returned if present in that case?
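[Editor's note: one common way to get stale-if-down behavior without an explicit restart is to lean on grace together with a health probe, so Varnish serves stale objects while the tunnel is down. A minimal sketch; the probe settings, grace windows, and backend address are assumptions, not from the thread:]

```vcl
import std;

backend tunnel {
    .host = "127.0.0.1";
    .port = "8080";
    .probe = { .url = "/"; .interval = 5s; .timeout = 2s; .window = 5; .threshold = 3; }
}

sub vcl_recv {
    if (std.healthy(req.backend_hint)) {
        # Backend reachable: serve objects at most 10s past their TTL
        set req.grace = 10s;
    } else {
        # Tunnel down: fall back to stale objects up to 6 hours old
        set req.grace = 6h;
    }
}

sub vcl_backend_response {
    # Keep objects around long enough for the fallback to find them
    set beresp.grace = 6h;
}
```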
>
> --
> Guillaume Quintard
>
> On Wed, Nov 15, 2017 at 6:02 AM, Andrei <lag...@gmail.com> wrote:
>
bump
On Sun, Nov 5, 2017 at 2:12 AM, Andrei <lag...@gmail.com> wrote:
> Hello everyone,
>
> One of the backends we have configured, runs through an SSH tunnel which
> occasionally gets restarted. When the tunnel is restarted, Varnish is
> returning a 503 since it ca
the ssh tunnel to the bakend is restarted
On Nov 5, 2017 10:12, "Andrei" <lag...@gmail.com> wrote:
Hello everyone,
One of the backends we have configured, runs through an SSH tunnel which
occasionally gets restarted. When the tunnel is restarted, Varnish is
returning a 503 sinc
Hello everyone,
One of the backends we have configured, runs through an SSH tunnel which
occasionally gets restarted. When the tunnel is restarted, Varnish is
returning a 503 since it can't reach the backend for pages which would
normally be cached (we force cache on the front page of the related
Thanks for sharing!
On Wed, Oct 18, 2017 at 3:35 PM, Hugues Alary wrote:
> Since this could be useful to some other people, here's a bit more details
> on how it's implemented on my end.
>
> TL;DR: it's automatic. I embedded the script in my docker image and it
> gets run
Chain order needs to be followed per RFC. While not all browsers may care,
quite a few payment gateways do.
On Wed, Oct 18, 2017 at 11:15 AM, Nicolas Delmas
wrote:
> Hello,
>
> I'm surprised that we need to keep an order to merge all files. In my
> case I contact like
(or received,
> for that matter).
>
> --
> Guillaume Quintard
>
> On Sep 25, 2017 07:29, "Andrei" <lag...@gmail.com> wrote:
>
> Hi Guillaume,
>
> Thanks for the update! :)
> Am I reading the log wrong, or is there a difference in Content-Length (
> 63072), and th
ish-software.com> wrote:
> You client dropped the connection, not much you can do or worry about.
>
> --
> Guillaume Quintard
>
> On Sat, Sep 2, 2017 at 10:15 PM, Andrei <lag...@gmail.com> wrote:
>
>> Hello everyone,
>>
>> I'm running 4.1.8 as a fron
Please provide the varnishlog output for a request seen leading to the
described issue. There are multiple sections in which cookies are unset,
where you could be triggering this behavior.
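[Editor's note: the kind of cookie-unsetting section being referred to typically looks like the sketch below; the URL pattern is illustrative, not from the thread:]

```vcl
sub vcl_recv {
    # A rule like this is a common culprit: stripping the Cookie header
    # here makes the request anonymous and cacheable, which can end up
    # serving one user's cached page to another.
    if (req.url !~ "^/(admin|account)") {
        unset req.http.Cookie;
    }
}
```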
On Wed, Sep 20, 2017 at 4:47 AM, Christopher Edwards <
christop...@hippomotorgroup.co.uk> wrote:
> When a
Thanks for the confirmation, and outstanding work phk!
On Mon, Sep 18, 2017 at 10:11 AM, Poul-Henning Kamp <p...@phk.freebsd.dk>
wrote:
>
> In message <CAP+vvEvbZdW3hitOJw+Krh7=Whg-19QxFejio-xpkuotG0wOCA@mail.
> gmail.com>, Andrei writes:
>
> >Am I missing
Hello everyone,
Am I missing something or did Geoff's UNIX socket patch not make it in
5.2.0?
On Fri, Sep 15, 2017 at 4:19 PM, Poul-Henning Kamp
wrote:
> We have just released Varnish 5.2.0:
>
> http://varnish-cache.org/releases/rel5.2.0.html#rel5-2-0
>
> A big
Hello everyone,
I'm running 4.1.8 as a frontend to a local Apache 2.4 and just recently ran
into a burst of the following "idle/write errors" without any corresponding
errors logged in Apache. Any suggestions on what to keep an eye on or
further review would be greatly appreciated.
* <<
Glad to see the progress :) They're safe to ignore.
On Fri, Aug 18, 2017 at 5:39 AM, Admin Beckspaced
wrote:
> Hello again ;)
>
> hitch is up and online on production server
>
> seeing some SSL handshake errors in the logs:
>
> Aug 18 12:32:47 cx40 hitch[19755]:
+1 for SSL with Hitch/HAProxy. The setup described with the Apache
runaround will more than likely tank as soon as large traffic spikes appear
On Tue, Aug 15, 2017 at 3:04 PM, Jan Hugo Prins | BetterBe <
jpr...@betterbe.com> wrote:
> I would not do it like that.
> Better is to use something like
Please provide more details regarding your setup, and the full error. If
you're certain there's nothing listening on the port, and you're still
getting the error, I'd check for selinux, portreserve, and straggling
semaphores.
On Thu, Aug 3, 2017 at 2:17 PM, Rodney Bizzell
Just a thought, if you're going to force an otherwise uncacheable request
to be cached, you should probably: set beresp.uncacheable = false;
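[Editor's note: a sketch of that advice; the URL match and TTL are hypothetical:]

```vcl
sub vcl_backend_response {
    if (bereq.url ~ "^/reports/") {
        # The backend marked this response uncacheable (e.g. via
        # Set-Cookie or a 0s TTL); clear the flag so the object is
        # cached instead of becoming hit-for-pass.
        set beresp.uncacheable = false;
        set beresp.ttl = 5m;
    }
}
```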
On Thu, Jul 20, 2017 at 9:03 PM, Girouard, Yanick
wrote:
> Hi Reza,
>
>
>
> Yes we are. Here's the default we apply. Those two
Out of curiosity, what does ethtool show for the related nics on both
servers? I also have Varnish on a 10G server, and can reach around
7.7Gbit/s serving anywhere between 6-28k requests/second, however it did
take some sysctl tuning and the westwood TCP congestion control algo
On Wed, Jul 5,
Good catch. Thanks for the details!
On Fri, Jun 2, 2017 at 3:49 AM, i...@dubistmeinheld.de <
i...@dubistmeinheld.de> wrote:
> On 01.06.2017 18:52, Jason Price wrote:
> > dmesg might help. log files should indicate if you're in an 'open
> > files limit' issue...
>
> You pointed me in the right
is no clue in the logs. There is no evidence that Apache
> restarts on the backend pool during occurence of the issue.
>
> On Sat, Apr 1, 2017 at 9:44 PM, Andrei <lag...@gmail.com> wrote:
>
>> If it's during peak hours are you sure there aren't any rate limits being
>>
have started tcpdump on a test environment of another implementation and
> will let you as soon as the issue gets triggerred again.
>
> On Fri, Mar 31, 2017 at 4:17 PM, Andrei <lag...@gmail.com> wrote:
>
>> Can you provide a tcpdump/ngrep of the requests between
>> Client/Va
Can you provide a tcpdump/ngrep of the requests between
Client/Varnish/Apache along with the varnishlog entry to see if that
uncovers anything?
On Fri, Mar 31, 2017 at 7:25 AM, Hazar Güney wrote:
> Any idea?
>
> On Thu, Mar 30, 2017 at 3:41 PM, Hazar Güney
Hi Devin,
The easiest method would be to use external analytics services for your
site(s), such as Google Analytics. However, if you do not wish to use
external services then I suggest using something like splitlogs, and having
both Apache and varnishncsa cache hits piped to it, which in turn
Oh yeah, Guillaume also has a great post on it @
https://info.varnish-software.com/blog/sticky-session-with-cookies :D
On Tue, Mar 28, 2017 at 6:28 AM, Andrei <lag...@gmail.com> wrote:
> Hi Mark,
>
> I suggest going over the following blog post for the changes you're
> look
Hi Mark,
I suggest going over the following blog post for the changes you're looking
for. Good luck moving forward :D
https://info.varnish-software.com/blog/proper-sticky-session-load-balancing-varnish
On Tue, Mar 28, 2017 at 4:52 AM, Mark Hanford
wrote:
> Hi folks.
On Mon, Mar 20, 2017 at 8:45 PM, Jason Price <japr...@gmail.com> wrote:
> Andrei:
>
> Why do you care that the cache is synchronized between each remote DC?
>
Because that's the whole point of having a CDN, with High Availability.
There's no reason not to keep cache consistent
(dubbing this over from the mod_auth thread for relevance due to my mistake
earlier)
Out of curiosity, has anyone done a CDN of Varnish servers? I have 4
Varnish servers in different datacenters around the world, and use anycast
IPs to direct traffic based on the region. I managed to do cache
>>
>> RemoteIPHeader x-cdn-ip
>>
>> RemoteIPTrustedProxy 127.0.0.1 172.31.29.204
>>
>> I probably don't need the IF but since this was in place for some reason,
>> I just leave it.
>>
>> It seems to be working just fine. What do you think?
>
warded-For,
>> "^([^,]+),?.*$", "\1");
>>
>> I was able to get the first IP but not the second only which is the one I
>> need. Any one can point me in the right direction with the regsub?
>>
>> Thank you!
>>
>> On Fri, Mar 17, 2017 at
Authenticated requests should typically bypass cache, unless you want to
hash the related session id(s), however that can get "interesting". I
suggest using an Apache module such as rpaf or remoteip in order for Apache
to set the client IP from the X-Forwarded-For header set by Varnish. This
way,
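[Editor's note: on the Varnish side, the pairing can be as simple as exporting the client address in a header that mod_remoteip is configured to trust; the header name is an assumption:]

```vcl
sub vcl_recv {
    # Hand Apache the real client address; mod_remoteip (with a
    # matching RemoteIPHeader directive) then rewrites the logged
    # remote IP from this header.
    set req.http.X-Client-IP = client.ip;
}
```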
So by copying the cloudflare/google analytics cookies for example to a
custom header before stripping them for a possible cache hit, we can later
add it back to the client response cookies? Is it even worth bothering for?
I strip all cloudflare/google analytics cookies and haven't had any
or suggestions
are greatly appreciated!
Andrei
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
This definitely isn't an SELinux issue on my end. I've also seen Varnish
work fine with SELinux (after policy updates as Dridi mentioned).
On Mon, Feb 20, 2017 at 4:43 PM, Dridi Boukelmoune wrote:
> On Mon, Feb 20, 2017 at 11:25 PM, Daniel Parthey wrote:
> > It
On Mon, Feb 20, 2017 at 7:13 AM, Dridi Boukelmoune <dr...@varni.sh> wrote:
> On Mon, Feb 20, 2017 at 1:09 PM, Andrei <lag...@gmail.com> wrote:
> > Hi Dridi,
> >
> > Thanks for the input. Looking over the panic I initially linked, the
> exact
> > version
the "Clock step
detected" mentioned in the panic log, and have seen some reports of clock
stepping causing issues, but there were no ntp/hwclock changes recorded at
the time.
On Mon, Feb 20, 2017 at 4:09 AM, Dridi Boukelmoune <dr...@varni.sh> wrote:
> On Mon, Feb 13, 2017 at 6
You can modify it as you normally would in vcl_recv, by setting
req.http.X-Forwarded-For. Note the header may contain two IP addresses
depending on your stack, and only one should typically be passed to the
backend for proper logging.
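[Editor's note: a minimal sketch of that normalization, assuming only the original client address should be forwarded:]

```vcl
sub vcl_recv {
    # Replace whatever upstream proxies appended with just the
    # address Varnish actually accepted the connection from.
    set req.http.X-Forwarded-For = client.ip;
}
```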
On Fri, Feb 17, 2017 at 12:32 AM, Oliver Joa
Hello all,
I woke up to numerous site timeouts, and when I went to check the backend
list, this is what was returned:
root@aviator [~]# varnishadm backend.list
Unknown request in manager process (child not running).
Type 'help' for more info.
Command failed with error code 101
root@aviator [~]#
This can also be approached by sending the request to an intermediary
"fanout"/request duplication endpoint. That way, the backends will always
send the purge request to a single location, which then duplicates the
requests to the related varnish nodes/clusters.
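[Editor's note: each Varnish node behind such a fanout still needs its own purge handling; a minimal sketch, with the ACL addresses as assumptions:]

```vcl
acl purgers {
    "127.0.0.1";
    "10.0.0.0"/24;   # the fanout endpoint's network (assumed)
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (client.ip !~ purgers) {
            return (synth(405, "PURGE not allowed"));
        }
        return (purge);
    }
}
```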
On Tue, Jan 24, 2017 at 1:31 PM,
> I also wrote an implementation of ip2location as a vmod:
> https://github.com/controversy187/libvmod-ip2location
>
> I'm not well versed in C, and this is my first vmod, but maybe it could be
> helpful. And I'm also open to suggestions for improvement :)
>
> Brett
>
> On M
This change would definitely help! From what I'm seeing both ip2loc and
maxmind are just as accurate when it comes to country tags, which is what
we're aiming for.
On Mon, Nov 28, 2016 at 9:41 AM, Thomas Lecomte <
thomas.leco...@virtual-expo.com> wrote:
> On Mon, Nov 28, 2016 at 8:34 A
is
why I'm trying to avoid the added syscalls.
On Mon, Nov 28, 2016 at 9:16 AM, Thomas Lecomte <
thomas.leco...@virtual-expo.com> wrote:
> On Sat, Nov 26, 2016 at 3:22 PM, Andrei <lag...@gmail.com> wrote:
> > Hello all,
> >
> > I was wondering if there were an
://github.com/thlc/libvmod-ip2location
geoip2 - https://github.com/fgsch/libvmod-geoip2
maxminddb - https://github.com/simonvik/libvmod_maxminddb
-- Andrei
These are concerns which you will want to take up with your
sysadmin/developer. We can only speculate on your Varnish issues, not your
entire stack.
On Tue, Nov 22, 2016 at 12:47 AM, Ayberk Kimsesiz wrote:
> This problem started in October. According to the reports
rwards using awk or whatever, but that's
> adding an extra layer and serializing a process that doesn't need to be.
>
> But I'm not an admin, so I may be off.
>
> On Nov 13, 2016 16:27, "Andrei" <lag...@gmail.com> wrote:
>
>> Hello,
>>
>> By not
and
> the logs would be created directly.
>
> please correct me if I'm wrong?
>
> thanks for your time & help
> Becki
>
>
> Am 12.11.2016 um 17:05 schrieb Andrei:
>
>> Hello again,
>>
>> My apologies for not explaining my thoughts bette
be greatly appreciated!
Have a great weekend everyone!
Andrei
Hello,
Your backend server is returning a 503 with Retry-After: 5 to Varnish. I
suggest reviewing your backend logs for the reason the requests aren't
going through as expected.
On Fri, Oct 21, 2016 at 4:54 AM, Ayberk Kimsesiz
wrote:
> Hi,
>
> We are having serious
Out of curiosity, how expensive is it to use std.strstr()? Would it even
have any sort of noticeable performance impact or just a slightly elevated
cpu time for somewhat elevated traffic (~25k req/s)?
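[Editor's note: std.strstr() from vmod_std does a plain substring search with no regex compilation, so the per-request cost is a single linear scan; any difference versus ~ only tends to show up with very long inputs or complex patterns. A sketch of the usage under discussion (an empty result evaluates as false in a boolean context):]

```vcl
import std;

sub vcl_recv {
    # Plain substring test, cheaper than an equivalent regex match
    if (std.strstr(req.url, "/api/")) {
        return (pass);
    }
}
```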
On Mon, Oct 17, 2016 at 6:22 PM, Frederik Ramm wrote:
> Hi,
>
> On
return (hash);
>> > # This is for phpmyadmin
>> > if (req.http.Host == "ki1.org") {
>> > return (pass);
>> > }
>> >
>> > if (req.http.Host == "mysql.ki1.org") {
>> > return (pass);
>> > }
>> >
>> > }
>
I SET THE VARY TO ACCEPT-ENCODING, THIS OVERRIDES W3TC
> # TENDANCY TO SET VARY USER-AGENT. YOU MAY OR MAY NOT WANT
> # TO DO THIS
> # ##
> set beresp.http.Vary = "Accept-Encoding";
>
> # IF NOT WP-ADMIN THEN UNSET COOKIES AND SET THE AMOUNT OF
> # T
Aug 4, 2016 at 8:34 AM, Andrei <lag...@gmail.com> wrote:
> Hello,
>
> Aside from the provided VCL being for WordPress, while you're running
> XenForo, the xf_ cookies are being dropped by your config. A quick fix is:
>
> sub vcl_recv {
> if( req.http.Coo
Hello,
Aside from the provided VCL being for WordPress, while you're running
XenForo, the xf_ cookies are being dropped by your config. A quick fix is:
sub vcl_recv {
    if (req.http.Cookie ~ "xf_(session|user)") {
        return (pass);
    }
}
sub vcl_backend_response {
    if (bereq.http.Cookie ~
9:
> $popups[ $i ]['connect_hash'] = md5(date('m-d-Y'). $popups[ $i ]['id']. NONCE_KEY);
> wp-content/plugins/popup-by-supsystic/modules/popup/js/frontend.popup.js:451:
> , data: {mod: 'statistics', action: 'add', id: popup.
Ok now just type the following from the wp docroot to find your culprit:
egrep -Rn 'sm_type|is_unique|connect_hash|reqType'
wp-content/{plugins,themes}
On Tue, Aug 2, 2016 at 10:58 AM, Ayberk Kimsesiz <ayberk.kimse...@gmail.com>
wrote:
> Hi Andrei,
>
> Here are the results:
&g
Those admin-ajax.php POST requests won't get cached, and are likely related
to WordPress heartbeats, or plugins. The quickest way to see what those
requests actually are, which will help you identify the plugin/theme option
is using ngrep: ngrep 'admin-ajax' -d any dst port 8080 -W byline -q
On
Apache config for better performance. Thanks for the netdata tip, I noticed
it in the menu shortly after asking :)
On Tue, Aug 2, 2016 at 2:28 AM, Ayberk Kimsesiz <ayberk.kimse...@gmail.com>
wrote:
> Hi Andrei,
>
> If i disable the Varnish, everything returns to normal. I use MPM as a
Are you using the Event MPM by chance? If so, I suggest switching over to
Worker, since Event occasionally hits a bug/issue which causes threads to use
inordinate amounts of resources while processing trivial requests.
A bit off-topic, but what's that monitoring software being used in the screenshot?
On
Also, you'll want to enable ProxyPreserveHost in Apache.
http://httpd.apache.org/docs/2.2/mod/mod_proxy.html#proxypreservehost
http://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypreservehost
On Jul 17, 2016 17:51, "Jeff Potter" wrote:
>
> Is apache getting
RespHeader Accept-Ranges: bytes
- Debug "RES_MODE 0"
- RespHeader Connection: keep-alive
- Timestamp Resp: 1467223441.362802 0.000100 0.36
- ReqAcct 172 0 172 409 0 409
- End
On Wed, Jun 29, 2016 at 11:14 AM, Guillaume Quintard <
guilla...@va
Hello,
I'm currently working on forcing cached results using vsthrottle vs
dropping requests, but for some reason (I probably did it wrong :) I can't
get var.get/var.set_duration to work. The vcl_recv snippet is as follows,
any input is greatly appreciated:
sub vcl_recv {
if
ng similar? Any help is greatly appreciated!
Best Regards,
Andrei