I am running with Nginx 1.16. I have a really simple configuration for
WordPress, seen below.
I have one test case:
curl -H "Host: x.com" "http://127.0.0.1/wp-admin/"
Which succeeds - I can see in the php-fpm log that it does "GET
/wp-admin/index.php"
I have a second test case:
curl -H "Host:
t; accepts X-Forwarded-For from 172.0.0.0/8.
On Mon, Aug 28, 2017 at 8:25 PM, CJ Ess <zxcvbn4...@gmail.com> wrote:
I've been struggling all day with this, I'm missing something, hoping
someone can point out what I'm doing wrong w/ the realip module:
nginx.conf:
...
log_format xyz '$remote_addr - $remote_user [$time_iso8601] '
'"$request" $status $body_bytes_sent '
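For comparison, a minimal realip setup usually needs only a couple of directives; the trusted proxy range below is a placeholder:

```nginx
# Trust X-Forwarded-For only from the load balancer range (placeholder),
# and walk back through trusted hops to the original client address.
set_real_ip_from  10.0.0.0/8;
real_ip_header    X-Forwarded-For;
real_ip_recursive on;
```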
wrote:
> Hello, CJ Ess
> In both cases, the access log is disabled and the error log is enabled with
> level ERROR. However, there are only a very few errors in both cases, so I
> think the logging does not matter here. Anyway, I will have another test
> with the error log disabled later.
>
How about logging? If you're using the default settings then nginx is logging
directly to disk, and disk writes will block the worker. Do you see the
same degradation with logging disabled or via syslog?
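For example, switching the access log to syslog, or at least buffering it, keeps the worker from blocking on every disk write; the address here is a placeholder:

```nginx
# Option 1: ship logs to a local syslog daemon instead of writing to disk
access_log syslog:server=127.0.0.1:514,facility=local7 combined;

# Option 2: keep file logging but batch the writes
access_log /var/log/nginx/access.log combined buffer=64k flush=5s;
```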
On Mon, May 22, 2017 at 10:59 PM, fengx wrote:
> There should
I'd be interested in knowing more also - I know that the Linux 2.6 kernel
is still really popular and didn't have the SO_REUSEPORT socket option
(though it was in the include files and wouldn't cause an error if you
referenced it). Might that be what you're running into?
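For reference, on the nginx side reuseport is just a flag on the listen directive; it needs SO_REUSEPORT support in the running kernel (Linux 3.9+) and nginx 1.9.1+:

```nginx
server {
    # One accept queue per worker process instead of a single shared queue
    listen 80 reuseport;
    server_name example.com;  # placeholder
}
```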
On Wed, May 17, 2017 at
My employer uses Nginx in front of PHP-FPM to generate their web content.
They have PHP's error reporting shut off in production so when something
does go wrong in their PHP scripts they end up with a "White Screen Of
Death". From a protocol level the white screen of death is a 200 response
with
linux. Both NICs support a speed of 1000Mb/s; the server got round about
> 600 Mb/s up and 13Mb/s down.
>
> CJ Ess Wrote:
> ---
> > Which OS? What NIC? You also have to consider the traffic source, is
> > it
> > know
specified or scheme default)? I looked through the variable
descriptions but didn't see any that looked appropriate.
On Fri, Nov 18, 2016 at 3:15 PM, Maxim Dounin <mdou...@mdounin.ru> wrote:
> Hello!
>
> On Fri, Nov 18, 2016 at 02:55:13PM -0500, CJ Ess wrote:
>
> > I know its
OVH and Hetzner CIDR lists from RIPE are huge because of all the tiny
subnets - however they compress down really well if you merge all the
adjacent networks; you end up with a few dozen entries each. Whatever set
of CIDRs you are putting in a set, always merge them unless you need to
know which
custom bit of Java code that rate limits
> tcp streams..
>
> just bought into nginx so looking at stream proxying through it instead
> A
>
> On 29 October 2016 at 02:48, CJ Ess <zxcvbn4...@gmail.com> wrote:
> > Cool. Probably off topic, but why rate limit FIX? My
I don't think managing large lists of IPs is nginx's strength - as far as I
can tell all of its ACLs are arrays that have to be iterated through on
each request.
When I do have to manage IP lists in Nginx I try to compress the lists into
the most compact CIDR representation so there is less to
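One alternative worth noting: the geo module builds a radix tree from its CIDR list, so lookups don't degrade linearly the way long allow/deny chains do. A sketch with placeholder networks:

```nginx
# http context: radix-tree lookup of the client address
geo $denied {
    default          0;
    192.0.2.0/24     1;
    198.51.100.0/24  1;
}
server {
    # server-level "if" with a plain return is one of its safe uses
    if ($denied) {
        return 403;
    }
}
```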
<a...@samad.com.au> wrote:
> Hi
>
> yeah I have had a very quick look, just wondering if any one on the
> list had set one up.
>
> Alex
>
> On 28 October 2016 at 16:15, CJ Ess <zxcvbn4...@gmail.com> wrote:
> > Maybe this is what you want:
> > https://nginx
Maybe this is what you want:
https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html
See the parts about proxy_download_rate and proxy_upload_rate
On Thu, Oct 27, 2016 at 11:22 PM, Alex Samad <a...@samad.com.au> wrote:
> Yep
>
> On 28 October 2016 at 11:57, CJ Ess <zx
FIX as in the financial information exchange protocol?
On Thu, Oct 27, 2016 at 7:19 PM, Alex Samad wrote:
> Hi
>
> any one setup nginx infront of a fix engine to do rate limiting ?
>
> Alex
>
> ___
> nginx mailing list
>
The clients will send an "Accept-Encoding" header which includes "br" as
one of the accepted types; that will trigger the module if it's configured.
It has a set of directives similar to the gzip module, so you'll need to
set those.
I think I see brotli support mostly from Chrome on Android.
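Assuming the ngx_brotli module, the directives do mirror gzip's; a minimal sketch:

```nginx
brotli            on;
brotli_comp_level 6;
brotli_types      text/plain text/css application/json application/javascript;
brotli_static     on;   # serve pre-compressed .br files if present on disk
```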
On
You probably have some module leaking memory - send output of 'nginx -V' so
people can see what version you have and what modules are there.
On Wed, Aug 24, 2016 at 1:54 PM, Amanat wrote:
> I was using Apache for the last 3 years. Never faced a single problem. Few
>
md5 shouldn't give different results regardless of implementation - my
guess is that your different platforms are using different character
encodings (iso8859 vs utf8 for instance) and that is the source of your
differences. To verify your md5 implementation there are test vectors here
You can get the nginx source code from here: http://nginx.org/
Or here: https://github.com/nginx/nginx
On Wed, Jul 20, 2016 at 3:51 PM, Thiago Farina wrote:
>
>
> On Mon, Jun 27, 2016 at 8:37 AM, Pankaj Chaudhary
> wrote:
>
>>
>>
>> Is there such
Ok, that explains it then. Does the cache survive reloads? Or does it need
to requery?
On Wed, Jun 29, 2016 at 1:23 AM, Kurt Cancemi <k...@x64architecture.com>
wrote:
> Hello,
>
> Nginx uses a per worker OCSP cache.
>
> On Tuesday, June 28, 2016, CJ Ess <zxcvbn4...@
I think I've got ocsp stapling setup correctly with Nginx (1.9.0). I am
seeing valid OCSP responses however if I keep querying the same server I
also frequently see "No response". The OCSP responses are valid for seven
days. Is each worker doing its own OCSP query independently of the others?
Or
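For reference, the stapling setup being described is roughly this (placeholder paths); note that each worker maintains its own cached OCSP response, which explains intermittent "No response" right after startup:

```nginx
server {
    listen 443 ssl;
    ssl_certificate         /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key     /etc/nginx/ssl/server.key;
    ssl_stapling            on;
    ssl_stapling_verify     on;
    ssl_trusted_certificate /etc/nginx/ssl/chain.pem;  # issuer chain for verification
    resolver                127.0.0.1;                 # needed to reach the OCSP responder
}
```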
You were correct, there was a typo in my rpm spec that kept the diff from
applying but didn't kill the build. The curl request is working now! Now I
need to see if those other POST requests are working.
On Mon, Jun 27, 2016 at 8:38 PM, CJ Ess <zxcvbn4...@gmail.com> wrote:
> I'm tryi
. Bartenev <vb...@nginx.com>
wrote:
> On Monday 27 June 2016 17:33:12 CJ Ess wrote:
> > I finally had a chance to test this, I applied ce94f07d5082 to the 1.9.15
> > code -- it applied cleanly and compiled cleanly. However, my test post
> > request over http2 with curl f
pe: application/json" -d "{}" "
https://test-server_name/"
And my curl is: curl 7.49.1 (x86_64-pc-linux-gnu) libcurl/7.49.1
OpenSSL/1.0.2h nghttp2/1.11.1
On Sun, Jun 26, 2016 at 8:55 AM, Valentin V. Bartenev <vb...@nginx.com>
wrote:
> On Saturday 25 June 2016 21:0
Thank you very much for the pointer to the change, I'm going give that a
shot ASAP.
On Sun, Jun 26, 2016 at 8:55 AM, Valentin V. Bartenev <vb...@nginx.com>
wrote:
> On Saturday 25 June 2016 21:00:37 CJ Ess wrote:
> > I could use some help with this one - I took a big leap with en
I could use some help with this one - I took a big leap with enabling
http/2 support and I got knocked back really quick. There seems to be an
issue with POSTs and it seems to be more pronounced with ios devices (as
much as you can trust user agents) but there were some non-ios devices that
seemed
Check that you have both the certificate and any intermediate certificates
in your pem file - you can skip the top-most CA certificates as those are
generally included in your browser's CA store - but the intermediates are
not.
I believe Nginx wants certs ordered from bottom-most (your cert) to
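In other words, the pem handed to ssl_certificate is a concatenation with the leaf first; a sketch with placeholder paths:

```nginx
# fullchain.pem = server certificate, then intermediate(s), in that order;
# the root CA certificate can be left out since browsers ship it.
ssl_certificate     /etc/nginx/ssl/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/server.key;
```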
I once knew a guy who convinced someone they had hacked their site by
making a DNS entry to 127.0.0.1. So when the guy tried to access the
"other" site his passwords worked, all his files were there, it was even
running the same software! He made changes on his site and they instantly
appeared on
at any one of them could bump up the backlog, but if any two
server stanzas have options to do it then it causes an error. Maybe the
best way to do it is to have some sort of dummy entry that sets the options
- if it's always the last server stanza that sets the listen options then
maybe include all
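A sketch of that arrangement, with the socket options confined to a single dummy default server:

```nginx
# Only one server per address:port may set listen socket options;
# duplicates produce a "duplicate listen options" error at startup.
server {
    listen 80 default_server backlog=4096;
    return 444;  # dummy catch-all that exists to own the socket options
}
server {
    listen 80;   # no options here
    server_name example.com;  # placeholder
}
```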
Very cool! lua-resty-waf is actually at the top of my list of WAFs to try
as soon as I finish deploying openresty everywhere.
On Mon, Apr 25, 2016 at 11:09 AM, Robert Paprocki <
rpapro...@fearnothingproductions.net> wrote:
> There are also several WAFs built upon Openresty (nginx + luajit at
>>
There is a version of modsecurity for Nginx -
https://github.com/SpiderLabs/ModSecurity - however it tends to cause
random mysterious problems including segfaults, so maybe not what you're
looking for.
There are also several WAFs built upon Openresty (nginx + luajit at
openresty.com) however I
Ok, I understand what is happening now, thank you!
On Wed, Apr 20, 2016 at 11:52 AM, Maxim Dounin <mdou...@mdounin.ru> wrote:
> Hello!
>
> On Wed, Apr 20, 2016 at 09:24:52AM -0400, CJ Ess wrote:
>
> > I've tried putting this directive into the nginx config file in bo
l restart (not a reload)? I would imagine the master
> process needs to flush everything out.
>
> > On Apr 20, 2016, at 06:24, CJ Ess <zxcvbn4...@gmail.com> wrote:
I've tried putting this directive into the nginx config file in both the
main and html sections:
error_log syslog:server=127.0.0.1,facility=local5 error;
The file tests fine and reloads without issue, however if I do fuser -u on
the error file (which is the same one used by syslog) I see that
or if
there was a legit reason for doing this. Either way its not an nginx (or
haproxy) issue.
On Fri, Apr 15, 2016 at 4:49 PM, Валентин Бартенев <vb...@nginx.com> wrote:
> On Thursday 14 April 2016 22:45:36 CJ Ess wrote:
> > In my environment I have Nginx terminating connections, th
In my environment I have Nginx terminating connections, then sending them
to an HAProxy upstream. We've noticed that whenever HAProxy emts a 403
error (Forbidden, in response to our ACL rules), NGINX reports a 503 result
(service unavailable) and I believe is logging an "upstream prematurely
I was trying to think of a hack day project, and one idea was to implement
a blob server similar to Facebook's haystack. Facebook did their server
with the evhttpd library, I was thinking of making it an nginx module. In
order to make it work I'd need to have nginx send a range of bytes from a
You're right, I should make a simple test case like you did in the previous
message. I'll put that together.
On Thu, Mar 31, 2016 at 4:29 PM, Francis Daly <fran...@daoine.org> wrote:
> On Thu, Mar 31, 2016 at 01:21:02PM -0400, CJ Ess wrote:
>
> Hi there,
>
> > I would like to h
I would like to have an Nginx setup where I have specific logic depending
on which interface (ip) the request arrived on.
I was able to make this work by having a server stanza for each ip on the
server, but wasn't able to do a combination of a specific ip and a wildcard
ip (as a catchall) - is
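What I was attempting looked roughly like this (placeholder addresses):

```nginx
# Interface-specific logic for one address
server {
    listen 192.0.2.10:80;
    # ...per-interface logic here...
}
# Intended wildcard catch-all for every other address
server {
    listen 80 default_server;
}
```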
e of a *shared memory zone*, most probably
> allocated by the master process at configuration loading time and then
> accessible/accessed by workers when needed.
>
> You will be able to make a conclusion by yourself. :o)
> ---
> *B. R.*
>
> On Sat, Mar 19, 2016 at
The value I specify for the size of my key zone in the proxy_cache_path statement
- is that a per-worker memory allocation or a shared memory zone? (i.e. if
its 64mb and I have 32 processors, does the zone consume 64mb of main
memory or 2gb?)
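For reference, the directive in question is sketched below; per the nginx docs keys_zone names a shared memory zone, so it is allocated once and shared by all workers:

```nginx
# 64m of shared memory for cache keys, visible to every worker;
# with 32 workers this is still 64mb total, not 2gb.
proxy_cache_path /var/cache/nginx levels=2:2 keys_zone=EXAMPLE:64m
                 inactive=7d max_size=10g;
```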
Last-Modified: Thu, 10 Mar 2016 05:01:30 GMT
> ETag: "d0f3-52daab51fbe80"
> Expires: Sun, 19 Mar 2017 20:42:48 GMT
> Cache-Control: max-age=31536000
> Cache-Control: public, max-age=31536000
> X-Cache-Status: HIT
> Accept-Ranges: bytes
>
>
> CJ Ess wrote:
>
I think I've run into the problem before - move the proxy_pass statement
from the top of the location stanza to the bottom, and I think that will
solve your issue.
On Sat, Mar 19, 2016 at 4:10 PM, shiz wrote:
> Been playing with this for 2 days.
>
> proxy_pass is
I did some performance tests and it seemed as if the status stub
caused a bit of a performance hit but nothing really concerning. However
the status stub doesn't really give a lot of useful information IMO
because it's just supposed to be a placeholder for an nginx+ status page -
I'm going
If your backend is sensitive to keepalive traffic (mine are), then my
advice is to enable keepalives as far into your stack as you can.
i.e. I have nginx fronting haproxy and varnish, I enable keepalives to both
haproxy and varnish and have them add a "connection: close" header to their
backend
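The nginx side of that looks roughly like this; upstream names and addresses are placeholders:

```nginx
upstream haproxy_pool {
    server 127.0.0.1:8080;
    keepalive 32;            # idle keepalive connections kept per worker
}
server {
    location / {
        proxy_pass http://haproxy_pool;
        proxy_http_version 1.1;          # upstream keepalive needs HTTP/1.1
        proxy_set_header Connection "";  # don't forward "Connection: close"
    }
}
```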
PM, CJ Ess <zxcvbn4...@gmail.com> wrote:
> Hello! I'm testing out a new configuration and there are two issues with
> the proxy caching feature I'm getting stuck on.
>
> 1) Everything is a cache miss, and I'm not sure why:
>
> My cache config (anonymized):
>
> .
can do a
>
> location / {
> return 404;
> }
> On Feb 29, 2016 16:15, "Payam Chychi" <pchy...@gmail.com> wrote:
>
>> Look at your proxy cache path... (proxy_cache_path /var/www/test_cache)
>> Are you sure the path exists and had proper perms/ownership?
.
On Mon, Feb 29, 2016 at 2:15 PM, Payam Chychi <pchy...@gmail.com> wrote:
> Look at your proxy cache path... (proxy_cache_path /var/www/test_cache)
> Are you sure the path exists and had proper perms/ownership?
>
> Payam
>
>
> On Feb 29, 2016, 11:03 AM -0800, CJ Ess <
Hello! I'm testing out a new configuration and there are two issues with
the proxy caching feature I'm getting stuck on.
1) Everything is a cache miss, and I'm not sure why:
My cache config (anonymized):
...
proxy_cache_path /var/www/test_cache levels=2:2 keys_zone=TEST:32m
inactive=365d
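For a cache config like that, the usual first debugging step is to make the key explicit and expose the cache status; a sketch along those lines (backend address is a placeholder):

```nginx
proxy_cache_path /var/www/test_cache levels=2:2 keys_zone=TEST:32m
                 inactive=365d max_size=1g;
server {
    location / {
        proxy_cache       TEST;
        proxy_cache_key   $scheme$host$request_uri;   # make the key explicit
        proxy_cache_valid 200 10m;                    # cache 200s even without cache headers
        add_header        X-Cache-Status $upstream_cache_status;
        proxy_pass        http://127.0.0.1:8080;
    }
}
```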
Thank you! I've got it all set up now, thanks for the pointer to
$EscapeControlCharactersOnReceive
On Thu, Feb 25, 2016 at 6:25 PM, Ekaterina Kukushkina <e...@nginx.com> wrote:
> Hello CJ,
>
>
> > On 26 Feb 2016, at 00:50, CJ Ess <zxcvbn4...@gmail.com> wrote:
I would really like to output my nginx access log to syslog in a tab
delimited format.
I'm using the latest nginx and rsyslogd 7.2.5
I haven't found an example of doing this, I'm wondering if/how to add tabs
to the format in the log_format directive
And also if there is anything I need to do to
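What I ended up sketching is below; the separators are literal tab characters typed between the variables inside the quotes, since nginx does not expand "\t" in log_format (the escape= parameter shown needs nginx 1.13.10+, so on older builds drop it and rely on rsyslog's $EscapeControlCharactersOnReceive):

```nginx
# Separators below are literal tabs, not backslash escapes
log_format tsv escape=none '$remote_addr	$status	$body_bytes_sent	$request';
access_log syslog:server=127.0.0.1,facility=local5 tsv;
```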
Does anyone know if the author still maintains nginx_upstream_check_module?
I see only a handful of commits in the past year and they all look like
contributed changes.
On Thu, Jan 28, 2016 at 9:28 PM, Dewangga Bachrul Alam <
dewangg...@xtremenitro.org> wrote:
> -BEGIN PGP SIGNED
I think what they are asking is to support the transport layer so that they
don't have to support both protocols on whatever endpoint they are
developing.
Maybe I'm wrong and someone has grand plans about multiplexing requests to
an upstream with http/2, but I haven't seen anyone ask for that
Looks like Cloudflare patched SPDY support back into NGINX, and they will
release the patch to everyone next year:
https://blog.cloudflare.com/introducing-http2/#comment-2391853103
On Thu, Dec 3, 2015 at 1:14 PM, CJ Ess <zxcvbn4...@gmail.com> wrote:
> NGINX devs,
>
> I kno
Let me get back to you on that - we're going to send some traffic through
Cloudflare and see how the traffic breaks out given the choice of all three
protocols.
On Thu, Dec 3, 2015 at 1:29 PM, Maxim Konovalov <ma...@nginx.com> wrote:
> Hello,
>
> On 12/3/15 9:14 PM, CJ Ess wrote:
Just curious - if I am using the deferred listen option on Linux my
understanding is that nginx will not be woken up until data arrives for the
connection. If someone is trying to DDOS me by opening as many connections
as possible (has happened before) how does that situation play out with
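For reference, deferred is just a flag on the listen directive (TCP_DEFER_ACCEPT on Linux):

```nginx
server {
    # The worker is only woken once the client sends data,
    # not when the bare TCP handshake completes.
    listen 80 deferred;
}
```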
be active simultaneously at any point.
On Thu, Nov 5, 2015 at 8:19 AM, Maxim Dounin <mdou...@mdounin.ru> wrote:
> Hello!
>
> On Thu, Nov 05, 2015 at 12:55:36AM -0500, CJ Ess wrote:
>
> > So I'm looking for some advice on determining an appropriate number for
> the
>
I was under the impression that SPDY support had been dropped from NGINX
altogether - however
http://nginx.org/en/docs/http/ngx_http_core_module.html#listen seems to
suggest it might still be possible to select it. Which is correct?
If it's not possible to select SPDY it would have been nice to
Hello!
I'm experimenting with fastcgi caching - I've added $upstream_cache_status
to the access log, and I can see that periodically there will be a small
cluster of EXPIRED requests for an object.
Does EXPIRED imply that the object was fetched from origin each time?
..or that the requests were
I have an nginx 1.9.0 deploy and I noticed a working config where the name
given to the server_name directive doesn't match the names in the Host
headers or the certificate DNs. It looks like a mistake, but it works, and
I don't know why! Is it possible that if there is only one server stanza
that
Try incorporating haproxy (http://www.haproxy.org/) or Apache Traffic
Server (http://trafficserver.apache.org/) into your setup. I use NGINX to
terminate SSL/SPDY then haproxy to direct the request to the appropriate
backend server pool - Haproxy is very good at being a reverse proxy but has
no
Hello,
I am looking for advice. I am using nginx to terminate SSL and forward the
request to php via fastcgi. Of all of requests I am forwarding to fastcgi
there is one particular URL that I want to cache, hopefully bypassing
communication with the fastcgi and php processes altogether.
- Would I
if to set a
variable which I could use to match on the URL and trigger
fastcgi_cache_bypass for everything not matching. Is "if" so toxic that I
shouldn't consider doing it this way?
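One if-free way to express this, sketched with a hypothetical URL, cache zone, and socket path, is a map that computes the bypass flag:

```nginx
# http context: cache only /special, bypass and skip storing everything else
fastcgi_cache_path /var/cache/nginx/fcgi keys_zone=PHP:16m inactive=1h;
map $request_uri $skip_cache {
    default   1;
    /special  0;
}
server {
    location ~ \.php$ {
        include               fastcgi_params;
        fastcgi_pass          unix:/run/php-fpm.sock;
        fastcgi_cache         PHP;
        fastcgi_cache_key     $scheme$host$request_uri;
        fastcgi_cache_bypass  $skip_cache;   # don't serve from cache
        fastcgi_no_cache      $skip_cache;   # don't store in cache
    }
}
```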
On Tue, Jun 23, 2015 at 6:07 PM, Francis Daly fran...@daoine.org wrote:
On Tue, Jun 23, 2015 at 04:19:48PM -0400, CJ Ess
What is the best approach for having nginx in a web farm type setup where I
want to forward http connections to an proxy upstream if they match one of
a very long/highly dynamic list of host names? All of the host names we are
interested in will resolve to our address space, so could it be as
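If the list can live in a file, one approach is a map keyed on $host (a hash lookup rather than per-request iteration, and reloadable with `nginx -s reload`); all names and addresses below are hypothetical:

```nginx
# /etc/nginx/forward_hosts.map contains lines like:
#   www.example.com  proxy_pool;
map $host $target {
    default  local_pool;
    include  /etc/nginx/forward_hosts.map;
}
upstream local_pool { server 127.0.0.1:8080; }
upstream proxy_pool { server 192.0.2.20:8080; }
server {
    listen 80;
    location / {
        proxy_pass http://$target;
    }
}
```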
The only way you can stop people from mirroring your site is to pull the
plug. Anything you set up can be bypassed like a normal user would. If you
put CAPTCHAs on every page, someone motivated can get really smart people
in poor countries to type in the letters, click the blue box, complete the
Behind my web server is an application that doesn't include content-length
headers because it doesn't know what it is. I'm pretty sure this is an
application issue but I promised I'd come here and ask the question - is
there a way to have nginx buffer an entire server response and insert a