Re: Iptables at boot, was fail2ban for apache2

2019-12-02 Thread Gene Heskett
On Monday 02 December 2019 07:46:22 Alessandro Vesely wrote:

> On Mon 02/Dec/2019 10:35:26 +0100 Andrei POPESCU wrote:
> > You might want to install iptables-persistent, otherwise you'll have
> > to roll out your own solution.
>
> I'm not using iptables-persistent, but just looked at it out of
> curiosity.
>
> Its LSB:
>
> ### BEGIN INIT INFO
> # Provides:  netfilter-persistent
> # Required-Start: mountkernfs $remote_fs
> # Required-Stop: $remote_fs
> # Default-Start: S
> # Default-Stop:  0 1 6
> # Short-Description: Load boot-time netfilter configuration
> # Description:   Loads boot-time netfilter configuration
> ### END INIT INFO
>
> S also starts in single-user mode, i.e. without network?
>
> $remote_fs requires ip links to be already set up?
>
> Stop, for good measure, does nothing.  The comment in the script is
> nicely crisp:
>
> stop)
> # Why? because if stop is used, the firewall gets flushed for a variable
> # amount of time during package upgrades, leaving the machine vulnerable
> # It's also not always desirable to flush during purge
> echo "Automatic flushing disabled, use \"flush\" instead of \"stop\""
> ;;
>
> > In the particular case of iptables instead of writing a script you
> > should probably just reuse your existing rules file and load that
> > with an 'iptables-restore' from the .service unit.
>
> That's somewhat questionable in some cases.  I'd recommend writing a
> script of iptables commands rather than interactively issuing iptables
> commands until you are satisfied with the current setup.  That's
> natural, since iptables doesn't give visual feedback, so reasoning
> is your best friend.  IOW, a commented script is more readable than an
> interactive setup.
>
> Then, since you have a script, why not run it directly, rather than
> saving/restoring its results?

Since I had spent a week battling the bots, doing a new save after 
every addition, I find that iptables-restore both starts iptables and 
restores the rules. Good enough till I get a new machine built, by the 
weekend I hope.

> > We are quite far from the original topic so I would suggest you
> > start a new thread in case you need assistance with this.
>
> I try, but don't reset References:/In-Reply-To: header fields.

And kmail doesn't make that easy.

>
> Best
> Ale
Thanks Alessandro.

Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-12-02 Thread John Hasler
Gene Heskett wrote:
> It, iptables,  did not get restarted on the fresh boot, so obviously the 
> systemd manager hasn't been informed to start iptables, reloading 
> from /etc/iptables/saved-rules.

You would not be having these problems were you using Shorewall...
-- 
John Hasler 
jhas...@newsguy.com
Elmwood, WI USA



Re: Iptables at boot, was fail2ban for apache2

2019-12-02 Thread Greg Wooledge
On Mon, Dec 02, 2019 at 01:46:22PM +0100, Alessandro Vesely wrote:
> ### BEGIN INIT INFO
> # Provides:  netfilter-persistent
> # Required-Start: mountkernfs $remote_fs
> # Required-Stop: $remote_fs
> # Default-Start: S
> # Default-Stop:  0 1 6
> # Short-Description: Load boot-time netfilter configuration
> # Description:   Loads boot-time netfilter configuration
> ### END INIT INFO
> 
> S also starts in single-user mode, i.e. without network?

I believe single-user mode starts the network.  It may not start all
of the network services, of course.



Re: Iptables at boot, was fail2ban for apache2

2019-12-02 Thread Reco
On Mon, Dec 02, 2019 at 01:46:22PM +0100, Alessandro Vesely wrote:
> On Mon 02/Dec/2019 10:35:26 +0100 Andrei POPESCU wrote:
> > 
> > You might want to install iptables-persistent, otherwise you'll have to 
> > roll out your own solution.
> 
> I'm not using iptables-persistent, but just looked at it out of curiosity.
> 
> Its LSB:
> 
> ### BEGIN INIT INFO
> # Provides:  netfilter-persistent
> # Required-Start: mountkernfs $remote_fs
> # Required-Stop: $remote_fs
> # Default-Start: S
> # Default-Stop:  0 1 6
> # Short-Description: Load boot-time netfilter configuration
> # Description:   Loads boot-time netfilter configuration
> ### END INIT INFO
> 
> S also starts in single-user mode, i.e. without network?

And the Default-Stop value prevents it from running in single-user.
Besides, unless one does something really stupid (like using hostnames
in netfilter rules) - what's wrong with netfilter rules loaded at
runlevel 1?
You can load a rule that processes packets on a non-existent interface,
for instance.


> $remote_fs requires ip links to be already set up?

mountkernfs is more problematic here. Presumably it's for NFS-root
configuration.


> > In the particular case of iptables instead of writing a script you 
> > should probably just reuse your existing rules file and load that with 
> > an 'iptables-restore' from the .service unit.
> 
> 
> That's somewhat questionable in some cases.  I'd recommend writing a script
> of iptables commands rather than interactively issuing iptables commands until
> you are satisfied with the current setup.  That's natural, since iptables
> doesn't give visual feedback, so reasoning is your best friend.  IOW, a
> commented script is more readable than an interactive setup.

"-m comment" anyone?


Personally I see little value in this package. There are cases that
require modifying netfilter rules ad hoc, and saving those at system
reboot can lead to undesirable side effects. My solution to those is the
(ab)use of /etc/network/interfaces:

auto lo
iface lo inet loopback
up /sbin/iptables-restore < /etc/network/iptables.rules
up /sbin/ip6tables-restore < /etc/network/ip6tables.rules

Because I have no problem running "iptables-save >
/etc/network/iptables.rules" when the need arises.

Reco



Iptables at boot, was fail2ban for apache2

2019-12-02 Thread Alessandro Vesely
On Mon 02/Dec/2019 10:35:26 +0100 Andrei POPESCU wrote:
> 
> You might want to install iptables-persistent, otherwise you'll have to 
> roll out your own solution.


I'm not using iptables-persistent, but just looked at it out of curiosity.

Its LSB:

### BEGIN INIT INFO
# Provides:  netfilter-persistent
# Required-Start: mountkernfs $remote_fs
# Required-Stop: $remote_fs
# Default-Start: S
# Default-Stop:  0 1 6
# Short-Description: Load boot-time netfilter configuration
# Description:   Loads boot-time netfilter configuration
### END INIT INFO

S also starts in single-user mode, i.e. without network?

$remote_fs requires ip links to be already set up?

Stop, for good measure, does nothing.  The comment in the script is nicely 
crisp:

stop)
# Why? because if stop is used, the firewall gets flushed for a variable
# amount of time during package upgrades, leaving the machine vulnerable
# It's also not always desirable to flush during purge
echo "Automatic flushing disabled, use \"flush\" instead of \"stop\""
;;


> In the particular case of iptables instead of writing a script you 
> should probably just reuse your existing rules file and load that with 
> an 'iptables-restore' from the .service unit.


That's somewhat questionable in some cases.  I'd recommend writing a script
of iptables commands rather than interactively issuing iptables commands until
you are satisfied with the current setup.  That's natural, since iptables
doesn't give visual feedback, so reasoning is your best friend.  IOW, a
commented script is more readable than an interactive setup.

Then, since you have a script, why not run it directly, rather than
saving/restoring its results?
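
A minimal sketch of such a commented script (the two subnets are only
examples, taken from the DROP list Gene posts elsewhere in this thread):

  #!/bin/sh
  # firewall.sh -- one DROP per unwanted network, with a note saying why
  iptables -A INPUT -s 46.229.168.0/24  -j DROP  # crawler subnet seen in the access log
  iptables -A INPUT -s 111.225.148.0/24 -j DROP  # another repeat offender

Run at boot it gives the same result as a restore, but it stays readable
and editable as plain text.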


> We are quite far from the original topic so I would suggest you start a 
> new thread in case you need assistance with this.


I try, but don't reset References:/In-Reply-To: header fields.


Best
Ale






Re: was: fail2ban for apache2, now iptables help

2019-12-02 Thread Gene Heskett
On Monday 02 December 2019 04:35:26 Andrei POPESCU wrote:

> On Du, 01 dec 19, 22:28:43, Gene Heskett wrote:
> > It, iptables,  did not get restarted on the fresh boot, so obviously
> > the systemd manager hasn't been informed to start iptables,
> > reloading from /etc/iptables/saved-rules.
>
> To my knowledge Debian doesn't include anything like this by default.
>
> > So 1. how do I query systemd to determine if it should have started
> > iptables, and if not, 2. what is the command to set it so it does
> > start iptables at bootup?
>
> You might want to install iptables-persistent, otherwise you'll have
> to roll out your own solution.
>
> With systemd the generic solution would look like:
>
> 1. Write a script that does what you want
> 2. Write a corresponding .service unit describing how / when it's run
> 3. Tell systemd to use your .service unit.
>
> In the particular case of iptables instead of writing a script you
> should probably just reuse your existing rules file and load that with
> an 'iptables-restore' from the .service unit.
>
> We are quite far from the original topic so I would suggest you start
> a new thread in case you need assistance with this.
>
I did find the syntax for iptables-restore and have that working, as I'd 
been doing a new iptables-save every time I added a new rule. So I've got 
most of them muzzled again.

But you're right, the thread has drifted as I looked for a solution for 
the DDOS I was suffering from.

> Kind regards,
> Andrei


Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-12-02 Thread Andrei POPESCU
On Du, 01 dec 19, 22:28:43, Gene Heskett wrote:
> 
> It, iptables,  did not get restarted on the fresh boot, so obviously the 
> systemd manager hasn't been informed to start iptables, reloading 
> from /etc/iptables/saved-rules.  

To my knowledge Debian doesn't include anything like this by default.

> So 1. how do I query systemd to determine if it should have started 
> iptables, and if not, 2. what is the command to set it so it does start 
> iptables at bootup?

You might want to install iptables-persistent, otherwise you'll have to 
roll out your own solution.

With systemd the generic solution would look like:

1. Write a script that does what you want
2. Write a corresponding .service unit describing how / when it's run
3. Tell systemd to use your .service unit.

In the particular case of iptables instead of writing a script you 
should probably just reuse your existing rules file and load that with 
an 'iptables-restore' from the .service unit.
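
A minimal sketch of such a unit, assuming the rules were saved to
/etc/iptables/saved-rules (the path mentioned elsewhere in this thread)
and the file is installed as /etc/systemd/system/iptables-restore.service:

  [Unit]
  Description=Restore saved iptables rules
  Wants=network-pre.target
  Before=network-pre.target

  [Service]
  Type=oneshot
  RemainAfterExit=yes
  # the shell is only there for the input redirection
  ExecStart=/bin/sh -c '/sbin/iptables-restore < /etc/iptables/saved-rules'

  [Install]
  WantedBy=multi-user.target

then "systemctl daemon-reload" followed by "systemctl enable
iptables-restore.service" takes care of step 3.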

We are quite far from the original topic so I would suggest you start a 
new thread in case you need assistance with this.

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




Re: fail2ban for apache2

2019-12-01 Thread Gene Heskett
On Tuesday 12 November 2019 21:35:49 Gene Heskett wrote:

> On Tuesday 12 November 2019 19:53:15 John Hasler wrote:
> > I wrote:
> > > Install Shorewall.
> >
> > Gene writes:
> > > Did, spent half an hour reading its man page, but I don't see a
> > > command that will extract and save an existing iptables setup, and
> > > later reapply that saved data.
> >
> > I meant use it instead of using Iptables directly: the package takes
> > care of restoring filter rules on boot and is more user-friendly
> > than Iptables. Shorewall-save will save the existing rules.
> >
> > But why aren't you already using Iptables-save and Iptables-restore?
>
> I am now, so that problem is solved.

Except it's apparently not set up to restore at bootup.

Long story even longer.  The motherboard burned up at a USB breakout 
connection early Friday evening, and I have moved 2 of the drives from 
that machine to an old, slow and memory-starved Dell. So memory starved 
that the middle-of-the-night job will probably fail from OOM, so I'll 
kill the cron job before I hit the rack.

It, iptables,  did not get restarted on the fresh boot, so obviously the 
systemd manager hasn't been informed to start iptables, reloading 
from /etc/iptables/saved-rules.  

So 1. how do I query systemd to determine if it should have started 
iptables, and if not, 2. what is the command to set it so it does start 
iptables at bootup?

All new stuff for a new build has been ordered, but it will be arriving 
later this week.

Thanks everyone.

> Cheers, Gene Heskett


Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-11-12 Thread Gene Heskett
On Tuesday 12 November 2019 20:03:12 ghe wrote:

> On 11/12/19 5:46 PM, Gene Heskett wrote:
> > Oh goody, and I get to name & pick the file and its location. Now,
> > where's a good place to put the restore in the reboot path?
>
> How about /etc? Or /etc/init.d? That's where mine is...

I've already put mine in rc.local, right under a bunch of stuff designed 
to override udev, and give heyu a port it can use.

Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-11-12 Thread Gene Heskett
On Tuesday 12 November 2019 19:53:15 John Hasler wrote:

> I wrote:
> > Install Shorewall.
>
> Gene writes:
> > Did, spent half an hour reading its man page, but I don't see a
> > command that will extract and save an existing iptables setup, and
> > later reapply that saved data.
>
> I meant use it instead of using Iptables directly: the package takes
> care of restoring filter rules on boot and is more user-friendly than
> Iptables. Shorewall-save will save the existing rules.
>
> But why aren't you already using Iptables-save and Iptables-restore?

I am now, so that problem is solved.

Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-11-12 Thread ghe
On 11/12/19 5:46 PM, Gene Heskett wrote:

> Oh goody, and I get to name & pick the file and its location. Now, where's 
> a good place to put the restore in the reboot path? 

How about /etc? Or /etc/init.d? That's where mine is...

-- 
Glenn English



Re: fail2ban for apache2

2019-11-12 Thread John Hasler
I wrote:
> Install Shorewall.

Gene writes:
> Did, spent half an hour reading its man page, but I don't see a
> command that will extract and save an existing iptables setup, and
> later reapply that saved data.

I meant use it instead of using Iptables directly: the package takes
care of restoring filter rules on boot and is more user-friendly than
Iptables. Shorewall-save will save the existing rules.
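
(In command form that is, if I recall the CLI correctly, roughly:

  shorewall save      # snapshot the currently running ruleset
  shorewall restore   # load that snapshot back

see shorewall(8) for the details.)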

But why aren't you already using Iptables-save and Iptables-restore?
-- 
John Hasler 
jhas...@newsguy.com
Elmwood, WI USA



Re: fail2ban for apache2

2019-11-12 Thread Gene Heskett
On Tuesday 12 November 2019 16:04:07 to...@tuxteam.de wrote:

> On Tue, Nov 12, 2019 at 12:40:45PM -0500, Gene Heskett wrote:
>
> [...]
>
> > So I have to find all that in the history and re-invent
> > a 33 line filter DROP. I'll be back when I've stuck a hot tater in
> > semrush's exit port.
>
> See iptables-save (will dump the currently active iptables to a file)
> and iptables-restore (will read that file to set up iptables).
>
Oh goody, and I get to name & pick the file and its location. Now, where's 
a good place to put the restore in the reboot path? Make rc.local 
executable and put it there?

I am amazed that, as long as iptables has been around, it has no 
reserved storage for these rules in /etc, and that I had to create a 
directory for it.

All that has been done.  And shorewall purged.

Thanks Tomas.

> Cheers
> -- tomás


Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-11-12 Thread Gene Heskett
On Tuesday 12 November 2019 14:28:38 John Hasler wrote:

> Gene writes:
> > So I had been adding iptables rules but had to reboot this morning
> > to get a baseline cups start, only to find my iptables rules were
> > all gone and the bots are DDOSing me again.
>
> Install Shorewall.

Did, spent half an hour reading its man page, but I don't see a command 
that will extract and save an existing iptables setup, and later 
reapply that saved data. Am I blind?


Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-11-12 Thread Gene Heskett
On Tuesday 12 November 2019 13:30:24 ghe wrote:

> Gene wrote
>
> > So I had been adding iptables rules but had to reboot this
> > morning to get a baseline cups start, only to find my iptables rules
> > were all gone and the bots are DDOSing me again. Grrr
>
> 0) Can you block them with an ACL in your router/firewall? And wr mem
> so the ACL will be there when it boots. (pardon the Cisco-ese)
>
> 1) There's a way (that I haven't needed to use yet) to put all your
> iptables rules in a file to be used at every reboot. And I suspect
> systemd knows how, or can be asked, to run that file on boot.
>
> You may have to ask iptables to write that file every time you add
> IPs.

My thinking runs along those lines too, but the man page is Swahili in 
explaining how to do that.

Thanks ghe

Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-11-12 Thread tomas
On Tue, Nov 12, 2019 at 12:40:45PM -0500, Gene Heskett wrote:

[...]

> So I have to find all that in the history and re-invent
> a 33 line filter DROP. I'll be back when I've stuck a hot tater in 
> semrush's exit port.

See iptables-save (will dump the currently active iptables to a file)
and iptables-restore (will read that file to set up iptables).
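
In its simplest form, with the file name and location being entirely your
choice (Gene ends up using /etc/iptables/saved-rules), that is just:

  iptables-save    > /etc/iptables/saved-rules   # dump the active rules
  iptables-restore < /etc/iptables/saved-rules   # load them back later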

Cheers
-- tomás




Re: fail2ban for apache2

2019-11-12 Thread John Hasler
Gene writes:
> So I had been adding iptables rules but had to reboot this morning to
> get a baseline cups start, only to find my iptables rules were all
> gone and the bots are DDOSing me again.

Install Shorewall.
-- 
John Hasler 
jhas...@newsguy.com
Elmwood, WI USA



Re: fail2ban for apache2

2019-11-12 Thread ghe
Gene wrote

> So I had been adding iptables rules but had to reboot this 
> morning to get a baseline cups start, only to find my iptables rules 
> were all gone and the bots are DDOSing me again. Grrr

0) Can you block them with an ACL in your router/firewall? And wr mem so
the ACL will be there when it boots. (pardon the Cisco-ese)

1) There's a way (that I haven't needed to use yet) to put all your
iptables rules in a file to be used at every reboot. And I suspect
systemd knows how, or can be asked, to run that file on boot.

You may have to ask iptables to write that file every time you add IPs.

-- 
Glenn English



Re: fail2ban for apache2

2019-11-12 Thread Gene Heskett
On Tuesday 12 November 2019 11:01:08 Lee wrote:

> On 11/11/19, Gene Heskett  wrote:
> > On Monday 11 November 2019 08:33:13 Greg Wooledge wrote:
>
>   ... snip ...
>
> >> I *know* I told you to look at your log files, and to turn on
> >> user-agent logging if necessary.
> >>
> >> I don't remember seeing you ever *post* your log files here, not
> >> even a single line from a single instance of this bot.  Maybe I
> >> missed it.
> >
> > Only one log file seems to have useful data, the "other..." file,
> > and I have posted several single lines here, but here's a  few more:
>
>... snip ...
>
> > [11/Nov/2019:12:11:39 -0500] "GET
> > /gene/nitros9/level1/coco1_6309/bootfiles/bootfile_covga_cocosdc
> > HTTP/1.1" 200 16133 "-" "Mozilla/5.0 (compatible; Daum/4.1;
> > +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
> >
> > I did ask earlier if daum was a bot but no one answered.  They are
> > becoming a mite pesky.
>
> Google translate can be your friend:
> https://translate.google.com/translate?hl==ko=en=https%3A%2F%2
>Fcs.daum.net%2Ffaq%2F15%2F4118.html
>
> Note they even tell you how to turn off collection:
> I want to automatically exclude documents from my site from web
> document search results.
> [robots.txt Exclusion using file]
> Please write the following in Notepad, and save it as robots.txt file
> to the root directory.
>
> User-agent: DAUM
> Disallow: /
>
> Using * instead of DAUM can prevent web collection robots from
> collecting documents on all search services, not just Daum.
>
> So let's take a look at what you've got:
> $ curl http://geneslinuxbox.net:6309/robots.txt
> # $Id: robots.txt 410967 2009-08-06 19:44:54Z oden $
> # $HeadURL:
> svn+ssh://svn.mandriva.com/svn/packages/cooker/apache-conf/current/SOU
>RCES/robots.txt $
> # exclude help system from robots
>
> User-agent: googlebot-Image
> Disallow: /
>
> User-agent: googlebot
> Disallow: /
>
> User-agent: *
> Disallow: /manual/
>
> User-agent: *
> Disallow: /manual-2.2/
>
> User-agent: *
> Disallow: /addon-modules/
>
> User-0agent: *
> Disallow: /doc/
>
> User-agent: *
> Disallow: /images/
>
> # the next line is a spam bot trap, for grepping the logs. you should
> _really_ change this to something else...
> #Disallow: /all_our_e-mail_addresses
> # same idea here...
>
> User-agent: *
> Disallow: /admin/
>
> # but allow htdig to index our doc-tree
> # User-agent: htdig
> # Disallow:
>
> User-agent: *
> Disallow: stress test
>
> User-agent: stress-agent
> Disallow: /
>
> User-agent *
> Disallow: /
>
> $
>
> You're missing a ':' - it should be
> User-agent: *
> Disallow: /
>
> and I don't think "User-0agent: *" is going to do what you want..
>
> Regards,
> Lee
it didn't. So I had been adding iptables rules but had to reboot this 
morning to get a baseline cups start, only to find my iptables rules 
were all gone and the bots are DDOSing me again. Grrr

So I have to find all that in the history and re-invent
a 33 line filter DROP. I'll be back when I've stuck a hot tater in 
semrush's exit port.

Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-11-12 Thread Lee
On 11/11/19, Gene Heskett  wrote:
> On Monday 11 November 2019 08:33:13 Greg Wooledge wrote:
  ... snip ...
>> I *know* I told you to look at your log files, and to turn on
>> user-agent logging if necessary.
>>
>> I don't remember seeing you ever *post* your log files here, not even
>> a single line from a single instance of this bot.  Maybe I missed it.
>
> Only one log file seems to have useful data, the "other..." file, and I
> have posted several single lines here, but here's a  few more:
   ... snip ...
> [11/Nov/2019:12:11:39 -0500] "GET
> /gene/nitros9/level1/coco1_6309/bootfiles/bootfile_covga_cocosdc
> HTTP/1.1" 200 16133 "-" "Mozilla/5.0 (compatible; Daum/4.1;
> +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
>
> I did ask earlier if daum was a bot but no one answered.  They are
> becoming a mite pesky.

Google translate can be your friend:
https://translate.google.com/translate?hl==ko=en=https%3A%2F%2Fcs.daum.net%2Ffaq%2F15%2F4118.html

Note they even tell you how to turn off collection:
I want to automatically exclude documents from my site from web
document search results.
[robots.txt Exclusion using file]
Please write the following in Notepad, and save it as robots.txt file
to the root directory.

User-agent: DAUM
Disallow: /

Using * instead of DAUM can prevent web collection robots from
collecting documents on all search services, not just Daum.

So let's take a look at what you've got:
$ curl http://geneslinuxbox.net:6309/robots.txt
# $Id: robots.txt 410967 2009-08-06 19:44:54Z oden $
# $HeadURL: 
svn+ssh://svn.mandriva.com/svn/packages/cooker/apache-conf/current/SOURCES/robots.txt
$
# exclude help system from robots

User-agent: googlebot-Image
Disallow: /

User-agent: googlebot
Disallow: /

User-agent: *
Disallow: /manual/

User-agent: *
Disallow: /manual-2.2/

User-agent: *
Disallow: /addon-modules/

User-0agent: *
Disallow: /doc/

User-agent: *
Disallow: /images/

# the next line is a spam bot trap, for grepping the logs. you should
_really_ change this to something else...
#Disallow: /all_our_e-mail_addresses
# same idea here...

User-agent: *
Disallow: /admin/

# but allow htdig to index our doc-tree
# User-agent: htdig
# Disallow:

User-agent: *
Disallow: stress test

User-agent: stress-agent
Disallow: /

User-agent *
Disallow: /

$

You're missing a ':' - it should be
User-agent: *
Disallow: /

and I don't think "User-0agent: *" is going to do what you want..

Regards,
Lee



Re: fail2ban for apache2

2019-11-11 Thread Cindy Sue Causey
On 11/11/19, Greg Wooledge  wrote:
> On Mon, Nov 11, 2019 at 12:18:17PM -0500, Gene Heskett wrote:
>>
>> HTTP/1.1" 200 554724 "-" "Mozilla/5.0 (compatible; Daum/4.1;
>> +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
>> coyote.coyote.den:80 203.133.169.54 - -
>> [11/Nov/2019:12:11:29 -0500] "GET
>> /gene/nitros9/level1/dalpha/modules/defsfile
>> HTTP/1.1" 200 248 "-" "Mozilla/5.0 (compatible; Daum/4.1;
>> +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
>> coyote.coyote.den:80 203.133.169.54 - -
>> [11/Nov/2019:12:11:34 -0500] "GET
>> /gene/nitros9/level1/atari/modules/n1_scdwv.dd
>> HTTP/1.1" 200 280 "-" "Mozilla/5.0 (compatible; Daum/4.1;
>> +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
>> coyote.coyote.den:80 203.133.169.54 - -
>> [11/Nov/2019:12:11:39 -0500] "GET
>> /gene/nitros9/level1/coco1_6309/bootfiles/bootfile_covga_cocosdc
>> HTTP/1.1" 200 16133 "-" "Mozilla/5.0 (compatible; Daum/4.1;
>> +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
>>
>> I did ask earlier if daum was a bot but no one answered.  They are
>> becoming a mite pesky.
>
> Well, maybe nobody knows.
>
> I went to daum.net in a web browser, and it looks like it's in an Asian
> language.  It also looks like it's selling a bunch of stuff (at least,
> it's laid out the way a retailer's web page is typically laid out).
>
> I also went to the URL in your log
> .  Again, it's in a
> language that I can't read, but it's talking about robots.txt and shows
> an example of how to block them.
>
> So, yes, it's a bot.
>
> Did you not try either of these steps yourself?


I tried what I do when I get stuff like this: A search engine using
either "s-p-a-m" or abuse along with the site in question.

This "cs-daum" one pulls up talking a lot about being some kind of
mail server, too. That take was garnered yet again via the search
results without actually visiting any websites. That didn't make much
sense with respect to the complaint, other than it's something that a
well-rounded website might be offering.

If I'm real sure something's foul, I'll go straight for searching with
e.g. "Spamhaus" as an accompanying keyword. As an afterthought, I did
just that, too, and received the following:

"This is a confirmed bad bot but isn't blocked yet by the blocker:"

Credit without visiting the website appears to go to Github account
"mariusv" that is tracking issues for "nginx-badbot-blocker". That may
change if one actually visits the website. I'm not able to just this
second..

I'm glad I did the more generic search first. Mail didn't get much of
a mention when Spamhaus was used instead. Something called "hanmail"
that may or may not be related got a few "loud" head nods in my first
search but was much more buried in the second one.

Am donning a conspiracy hat now because of all the chatter about
machinery on regular occasion.

After a few seconds of contemplation, it comes to mind to wonder out
loud: Are they hitting it hard...

* Just because they can?

* Or because it appears to them that there may be steal-worthy
information they could turn around and patent or otherwise profit from
somehow?

The "espionage" angle is becoming ridiculous out there. Just saw
something in my inbox yesterday about the military going after a
product source or contractor that sold them "Made in the USA" products
that were instead made elsewhere.

Regular users discovered the fraud when foreign language characters
instead of en-US appeared on the screen that was monitoring military
folks wearing on-body cameras...

Not joking/exaggerating when I say I'm really starting to wonder about
ANY products we buy right now. It's at the top of my own list of
concerns because of all that sudden, simultaneous crash-and-burn of my
software and multiple pieces of hardware a few weeks ago.

Things were working just fine until I bought a couple various new,
small add-ons, e.g. a dual bay hard drive docking station and a couple
of 64GB [thumb drives].

Even those inexpensive, nay, CHEAP wifi dongles.. I mentioned I bought
3 of those myself a few months back...

Who knows

And how the doodles is the average user supposed to sanity/safety
check every single piece of computer-based, possibly chip containing
hardware from now on. The implication is that one computer compromised
internally that way is most likely networked with a whole bunch of
others in the meantime, too.

AND.. I don't think how we obtain these items affects any perceived
risk in the future. Something from a big box store can be just as
easily compromised as single items we may buy from "online
marketplaces"

Cindy :)
-- 
Cindy-Sue Causey
Talking Rock, Pickens County, Georgia, USA

* runs with birdseed *



Re: fail2ban for apache2

2019-11-11 Thread Frank McCormick

Sorry Gene. Hit reply instead of reply list.

On 11/11/19 12:18 PM, Gene Heskett wrote:

On Monday 11 November 2019 08:33:13 Greg Wooledge wrote:


I have a list of ipv4's I want fail2ban to block.


Not sure that fail2ban is the best tool for the job. Where you
already have a list of IPs that you want to block why not just
directly create the iptables rules?


just did that, got most of them but semrush apparently has fallback
addys to use.  But I'm no longer being DDOSed, which was the point.
Thanks.


In case it wasn't already clear, what fail2ban does is parse a log
file looking for repeated instances of an invalid login (or whatever).
  You have to tell it what to look for, and what to do about it.





coyote.coyote.den:80 40.94.105.9 - -
[11/Nov/2019:12:08:53 -0500] "GET /gene/ HTTP/1.1" 200
5141 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36
(KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36"
coyote.coyote.den:80 40.94.105.9 - -
[11/Nov/2019:12:08:53 -0500] "GET /gene/pix/EasterSundayCropped2004-1.jpg
HTTP/1.1" 200 194478 "http://geneslinuxbox.net:6309/gene/; "Mozilla/5.0
(Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko)
Chrome/57.0.2987.133 Safari/537.36"
coyote.coyote.den:80 40.94.105.9 - -
[11/Nov/2019:12:08:56 -0500] "GET /favicon.ico HTTP/1.1" 200
1705 "http://geneslinuxbox.net:6309/gene/; "Mozilla/5.0 (Windows NT
10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko)
Chrome/57.0.2987.133 Safari/537.36"
coyote.coyote.den:80 203.133.169.54 - -
[11/Nov/2019:12:10:52 -0500] "GET /robots.txt HTTP/1.1" 200
1092 "-" "Mozilla/5.0 (compatible; Daum/4.1;
+http://cs.daum.net/faq/15/4118.html?faqId=28966)"
coyote.coyote.den:80 203.133.169.54 - -
[11/Nov/2019:12:10:53 -0500] "GET /gene/nitros9/level1/d64/modules/sysgo_h0
HTTP/1.1" 200 706 "-" "Mozilla/5.0 (compatible; Daum/4.1;
+http://cs.daum.net/faq/15/4118.html?faqId=28966)"
coyote.coyote.den:80 203.133.169.54 - -
[11/Nov/2019:12:10:58 -0500] "GET 
/gene/nitros9/level1/coco2b/NOS9_6809_L1_coco2b_cocosdc.dsk
HTTP/1.1" 200 4718822 "-" "Mozilla/5.0 (compatible; Daum/4.1;
+http://cs.daum.net/faq/15/4118.html?faqId=28966)"
coyote.coyote.den:80 203.133.169.54 - -
[11/Nov/2019:12:11:21 -0500] "GET 
/gene/nitros9/level1/coco2_6309/NOS9_6309_L1_coco2_6309_dw_directmodempak.dsk
HTTP/1.1" 200 554724 "-" "Mozilla/5.0 (compatible; Daum/4.1;
+http://cs.daum.net/faq/15/4118.html?faqId=28966)"
coyote.coyote.den:80 203.133.169.54 - -
[11/Nov/2019:12:11:29 -0500] "GET /gene/nitros9/level1/dalpha/modules/defsfile
HTTP/1.1" 200 248 "-" "Mozilla/5.0 (compatible; Daum/4.1;
+http://cs.daum.net/faq/15/4118.html?faqId=28966)"
coyote.coyote.den:80 203.133.169.54 - -
[11/Nov/2019:12:11:34 -0500] "GET /gene/nitros9/level1/atari/modules/n1_scdwv.dd
HTTP/1.1" 200 280 "-" "Mozilla/5.0 (compatible; Daum/4.1;
+http://cs.daum.net/faq/15/4118.html?faqId=28966)"
coyote.coyote.den:80 203.133.169.54 - -
[11/Nov/2019:12:11:39 -0500] "GET 
/gene/nitros9/level1/coco1_6309/bootfiles/bootfile_covga_cocosdc
HTTP/1.1" 200 16133 "-" "Mozilla/5.0 (compatible; Daum/4.1;
+http://cs.daum.net/faq/15/4118.html?faqId=28966)"

I did ask earlier if daum was a bot but no one answered.  They are
becoming a mite pesky.



Here's your answer:

https://www.distilnetworks.com/bot-directory/bot/daum-4-1/





Thanks.

Cheers, Gene Heskett



--
Frank McCormick



Re: fail2ban for apache2

2019-11-11 Thread Gene Heskett
On Monday 11 November 2019 12:38:09 Greg Wooledge wrote:

> On Mon, Nov 11, 2019 at 12:18:17PM -0500, Gene Heskett wrote:
> > Only one log file seems to have useful data, the "other..." file,
> > and I have posted several single lines here, but here's a  few more:
> >
> > coyote.coyote.den:80 40.94.105.9 - -
> > [11/Nov/2019:12:08:53 -0500] "GET /gene/ HTTP/1.1" 200
> > 5141 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)
> > AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133
> > Safari/537.36"
> > coyote.coyote.den:80 40.94.105.9 - -
> > [11/Nov/2019:12:08:53 -0500] "GET
> > /gene/pix/EasterSundayCropped2004-1.jpg HTTP/1.1" 200 194478
> > "http://geneslinuxbox.net:6309/gene/; "Mozilla/5.0 (Windows NT 10.0;
> > Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko)
> > Chrome/57.0.2987.133 Safari/537.36"
> > coyote.coyote.den:80 40.94.105.9 - -
> > [11/Nov/2019:12:08:56 -0500] "GET /favicon.ico HTTP/1.1" 200
> > 1705 "http://geneslinuxbox.net:6309/gene/; "Mozilla/5.0 (Windows NT
> > 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko)
> > Chrome/57.0.2987.133 Safari/537.36"
> > coyote.coyote.den:80 203.133.169.54 - -
> > [11/Nov/2019:12:10:52 -0500] "GET /robots.txt HTTP/1.1" 200
> > 1092 "-" "Mozilla/5.0 (compatible; Daum/4.1;
> > +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
> > coyote.coyote.den:80 203.133.169.54 - -
> > [11/Nov/2019:12:10:53 -0500] "GET
> > /gene/nitros9/level1/d64/modules/sysgo_h0 HTTP/1.1" 200 706 "-"
> > "Mozilla/5.0 (compatible; Daum/4.1;
> > +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
> > coyote.coyote.den:80 203.133.169.54 - -
> > [11/Nov/2019:12:10:58 -0500] "GET
> > /gene/nitros9/level1/coco2b/NOS9_6809_L1_coco2b_cocosdc.dsk
> > HTTP/1.1" 200 4718822 "-" "Mozilla/5.0 (compatible; Daum/4.1;
> > +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
> > coyote.coyote.den:80 203.133.169.54 - -
> > [11/Nov/2019:12:11:21 -0500] "GET
> > /gene/nitros9/level1/coco2_6309/NOS9_6309_L1_coco2_6309_dw_directmod
> >empak.dsk HTTP/1.1" 200 554724 "-" "Mozilla/5.0 (compatible;
> > Daum/4.1; +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
> > coyote.coyote.den:80 203.133.169.54 - -
> > [11/Nov/2019:12:11:29 -0500] "GET
> > /gene/nitros9/level1/dalpha/modules/defsfile HTTP/1.1" 200 248 "-"
> > "Mozilla/5.0 (compatible; Daum/4.1;
> > +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
> > coyote.coyote.den:80 203.133.169.54 - -
> > [11/Nov/2019:12:11:34 -0500] "GET
> > /gene/nitros9/level1/atari/modules/n1_scdwv.dd HTTP/1.1" 200 280 "-"
> > "Mozilla/5.0 (compatible; Daum/4.1;
> > +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
> > coyote.coyote.den:80 203.133.169.54 - -
> > [11/Nov/2019:12:11:39 -0500] "GET
> > /gene/nitros9/level1/coco1_6309/bootfiles/bootfile_covga_cocosdc
> > HTTP/1.1" 200 16133 "-" "Mozilla/5.0 (compatible; Daum/4.1;
> > +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
> >
> > I did ask earlier if daum was a bot but no one answered.  They are
> > becoming a mite pesky.
>
> Well, maybe nobody knows.
>
> I went to daum.net in a web browser, and it looks like it's in an
> Asian language.  It also looks like it's selling a bunch of stuff (at
> least, it's laid out the way a retailer's web page is typically laid
> out).
>
> I also went to the URL in your log
> .  Again, it's in a
> language that I can't read, but it's talking about robots.txt and
> shows an example of how to block them.
>
> So, yes, it's a bot.
>
> Did you not try either of these steps yourself?

I've at least 2 dozen robots.txt entries, with every known recipe scattered 
about, including the one they read and then ignored.  That leaves 
iptables...  It's working, and after several weeks I have some upload 
bandwidth left.

Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-11-11 Thread Greg Wooledge
On Mon, Nov 11, 2019 at 12:18:17PM -0500, Gene Heskett wrote:
> Only one log file seems to have useful data, the "other..." file, and I 
> have posted several single lines here, but here's a  few more:
> 
> coyote.coyote.den:80 40.94.105.9 - - 
> [11/Nov/2019:12:08:53 -0500] "GET /gene/ HTTP/1.1" 200 
> 5141 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 
> (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36"
> coyote.coyote.den:80 40.94.105.9 - - 
> [11/Nov/2019:12:08:53 -0500] "GET /gene/pix/EasterSundayCropped2004-1.jpg 
> HTTP/1.1" 200 194478 "http://geneslinuxbox.net:6309/gene/; "Mozilla/5.0 
> (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) 
> Chrome/57.0.2987.133 Safari/537.36"
> coyote.coyote.den:80 40.94.105.9 - - 
> [11/Nov/2019:12:08:56 -0500] "GET /favicon.ico HTTP/1.1" 200 
> 1705 "http://geneslinuxbox.net:6309/gene/; "Mozilla/5.0 (Windows NT 
> 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) 
> Chrome/57.0.2987.133 Safari/537.36"
> coyote.coyote.den:80 203.133.169.54 - - 
> [11/Nov/2019:12:10:52 -0500] "GET /robots.txt HTTP/1.1" 200 
> 1092 "-" "Mozilla/5.0 (compatible; Daum/4.1; 
> +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
> coyote.coyote.den:80 203.133.169.54 - - 
> [11/Nov/2019:12:10:53 -0500] "GET /gene/nitros9/level1/d64/modules/sysgo_h0 
> HTTP/1.1" 200 706 "-" "Mozilla/5.0 (compatible; Daum/4.1; 
> +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
> coyote.coyote.den:80 203.133.169.54 - - 
> [11/Nov/2019:12:10:58 -0500] "GET 
> /gene/nitros9/level1/coco2b/NOS9_6809_L1_coco2b_cocosdc.dsk 
> HTTP/1.1" 200 4718822 "-" "Mozilla/5.0 (compatible; Daum/4.1; 
> +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
> coyote.coyote.den:80 203.133.169.54 - - 
> [11/Nov/2019:12:11:21 -0500] "GET 
> /gene/nitros9/level1/coco2_6309/NOS9_6309_L1_coco2_6309_dw_directmodempak.dsk 
> HTTP/1.1" 200 554724 "-" "Mozilla/5.0 (compatible; Daum/4.1; 
> +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
> coyote.coyote.den:80 203.133.169.54 - - 
> [11/Nov/2019:12:11:29 -0500] "GET 
> /gene/nitros9/level1/dalpha/modules/defsfile 
> HTTP/1.1" 200 248 "-" "Mozilla/5.0 (compatible; Daum/4.1; 
> +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
> coyote.coyote.den:80 203.133.169.54 - - 
> [11/Nov/2019:12:11:34 -0500] "GET 
> /gene/nitros9/level1/atari/modules/n1_scdwv.dd 
> HTTP/1.1" 200 280 "-" "Mozilla/5.0 (compatible; Daum/4.1; 
> +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
> coyote.coyote.den:80 203.133.169.54 - - 
> [11/Nov/2019:12:11:39 -0500] "GET 
> /gene/nitros9/level1/coco1_6309/bootfiles/bootfile_covga_cocosdc 
> HTTP/1.1" 200 16133 "-" "Mozilla/5.0 (compatible; Daum/4.1; 
> +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
> 
> I did ask earlier if daum was a bot but no one answered.  They are 
> becoming a mite pesky.

Well, maybe nobody knows.

I went to daum.net in a web browser, and it looks like it's in an Asian
language.  It also looks like it's selling a bunch of stuff (at least,
it's laid out the way a retailer's web page is typically laid out).

I also went to the URL in your log
.  Again, it's in a
language that I can't read, but it's talking about robots.txt and shows
an example of how to block them.

So, yes, it's a bot.

Did you not try either of these steps yourself?



Re: fail2ban for apache2

2019-11-11 Thread Gene Heskett
On Monday 11 November 2019 08:33:13 Greg Wooledge wrote:

> > > > I have a list of ipv4's I want fail2ban to block.
> > >
> > > Not sure that fail2ban is the best tool for the job. Where you
> > > already have a list of IPs that you want to block why not just
> > > directly create the iptables rules?
> >
> > just did that, got most of them but semrush apparently has fallback
> > addys to use.  But I'm no longer being DDOSed, which was the point. 
> > Thanks.
>
> In case it wasn't already clear, what fail2ban does is parse a log
> file looking for repeated instances of an invalid login (or whatever).
>  You have to tell it what to look for, and what to do about it.
>
> The typical use is with an ssh server, looking for rapid, repeated
> login failures.  If enough failed logins occur from a single IP, then
> it adds a firewall rule to block that IP address.
>
> Hence "fail 2 ban", i.e. "fail -> ban".
>
> If you already know the IP addresses/ranges that you want to block,
> you don't need fail2ban.
>
> But once again, I really think you'd be better served by blocking this
> particular bot based on user-agent string, assuming it has an easily
> identifiable user-agent in your log files.  That way, when it changes
> its IP address, it'll still be blocked.
>
> I *know* I told you to look at your log files, and to turn on
> user-agent logging if necessary.
>
> I don't remember seeing you ever *post* your log files here, not even
> a single line from a single instance of this bot.  Maybe I missed it.

Only one log file seems to have useful data, the "other..." file, and I 
have posted several single lines here, but here's a  few more:

coyote.coyote.den:80 40.94.105.9 - - 
[11/Nov/2019:12:08:53 -0500] "GET /gene/ HTTP/1.1" 200 
5141 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 
(KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36"
coyote.coyote.den:80 40.94.105.9 - - 
[11/Nov/2019:12:08:53 -0500] "GET /gene/pix/EasterSundayCropped2004-1.jpg 
HTTP/1.1" 200 194478 "http://geneslinuxbox.net:6309/gene/; "Mozilla/5.0 
(Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) 
Chrome/57.0.2987.133 Safari/537.36"
coyote.coyote.den:80 40.94.105.9 - - 
[11/Nov/2019:12:08:56 -0500] "GET /favicon.ico HTTP/1.1" 200 
1705 "http://geneslinuxbox.net:6309/gene/; "Mozilla/5.0 (Windows NT 
10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) 
Chrome/57.0.2987.133 Safari/537.36"
coyote.coyote.den:80 203.133.169.54 - - 
[11/Nov/2019:12:10:52 -0500] "GET /robots.txt HTTP/1.1" 200 
1092 "-" "Mozilla/5.0 (compatible; Daum/4.1; 
+http://cs.daum.net/faq/15/4118.html?faqId=28966)"
coyote.coyote.den:80 203.133.169.54 - - 
[11/Nov/2019:12:10:53 -0500] "GET /gene/nitros9/level1/d64/modules/sysgo_h0 
HTTP/1.1" 200 706 "-" "Mozilla/5.0 (compatible; Daum/4.1; 
+http://cs.daum.net/faq/15/4118.html?faqId=28966)"
coyote.coyote.den:80 203.133.169.54 - - 
[11/Nov/2019:12:10:58 -0500] "GET 
/gene/nitros9/level1/coco2b/NOS9_6809_L1_coco2b_cocosdc.dsk 
HTTP/1.1" 200 4718822 "-" "Mozilla/5.0 (compatible; Daum/4.1; 
+http://cs.daum.net/faq/15/4118.html?faqId=28966)"
coyote.coyote.den:80 203.133.169.54 - - 
[11/Nov/2019:12:11:21 -0500] "GET 
/gene/nitros9/level1/coco2_6309/NOS9_6309_L1_coco2_6309_dw_directmodempak.dsk 
HTTP/1.1" 200 554724 "-" "Mozilla/5.0 (compatible; Daum/4.1; 
+http://cs.daum.net/faq/15/4118.html?faqId=28966)"
coyote.coyote.den:80 203.133.169.54 - - 
[11/Nov/2019:12:11:29 -0500] "GET /gene/nitros9/level1/dalpha/modules/defsfile 
HTTP/1.1" 200 248 "-" "Mozilla/5.0 (compatible; Daum/4.1; 
+http://cs.daum.net/faq/15/4118.html?faqId=28966)"
coyote.coyote.den:80 203.133.169.54 - - 
[11/Nov/2019:12:11:34 -0500] "GET 
/gene/nitros9/level1/atari/modules/n1_scdwv.dd 
HTTP/1.1" 200 280 "-" "Mozilla/5.0 (compatible; Daum/4.1; 
+http://cs.daum.net/faq/15/4118.html?faqId=28966)"
coyote.coyote.den:80 203.133.169.54 - - 
[11/Nov/2019:12:11:39 -0500] "GET 
/gene/nitros9/level1/coco1_6309/bootfiles/bootfile_covga_cocosdc 
HTTP/1.1" 200 16133 "-" "Mozilla/5.0 (compatible; Daum/4.1; 
+http://cs.daum.net/faq/15/4118.html?faqId=28966)"

I did ask earlier if daum was a bot but no one answered.  They are 
becoming a mite pesky.

Thanks.

Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-11-11 Thread Greg Wooledge
On Mon, Nov 11, 2019 at 02:52:36PM +0100, to...@tuxteam.de wrote:
> On Mon, Nov 11, 2019 at 08:33:13AM -0500, Greg Wooledge wrote:
> > > > > I have a list of ipv4's I want fail2ban to block.
> 
> [...]
> 
> > I don't remember seeing you ever *post* your log files here, not even
> > a single line from a single instance of this bot.  Maybe I missed it.
> 
> We had one sample in this thread.

Aye, I hadn't read all the way down that other branch of the thread yet.
The sample in question was from a bingbot, not a SemrushBot.



Re: fail2ban for apache2

2019-11-11 Thread tomas
On Mon, Nov 11, 2019 at 08:33:13AM -0500, Greg Wooledge wrote:
> > > > I have a list of ipv4's I want fail2ban to block.

[...]

> I don't remember seeing you ever *post* your log files here, not even
> a single line from a single instance of this bot.  Maybe I missed it.

We had one sample in this thread.

Cheers
-- t




Re: fail2ban for apache2

2019-11-11 Thread Greg Wooledge
> > > I have a list of ipv4's I want fail2ban to block.
> >
> > Not sure that fail2ban is the best tool for the job. Where you already
> > have a list of IPs that you want to block why not just directly create
> > the iptables rules?
> 
> just did that, got most of them but semrush apparently has fallback addys 
> to use.  But I'm no longer being DDOSed, which was the point.  Thanks.

In case it wasn't already clear, what fail2ban does is parse a log file
looking for repeated instances of an invalid login (or whatever).  You
have to tell it what to look for, and what to do about it.

The typical use is with an ssh server, looking for rapid, repeated
login failures.  If enough failed logins occur from a single IP, then
it adds a firewall rule to block that IP address.

Hence "fail 2 ban", i.e. "fail -> ban".

If you already know the IP addresses/ranges that you want to block, you
don't need fail2ban.

But once again, I really think you'd be better served by blocking this
particular bot based on user-agent string, assuming it has an easily
identifiable user-agent in your log files.  That way, when it changes
its IP address, it'll still be blocked.
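
A rough sketch of that in Apache 2.4, adapted from the access-control
examples in the Apache documentation (the bot names are just the ones
mentioned in this thread):

  # in the vhost or a <Directory> block
  SetEnvIfNoCase User-Agent "SemrushBot|Daum" bad_bot
  <RequireAll>
      Require all granted
      Require not env bad_bot
  </RequireAll>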

I *know* I told you to look at your log files, and to turn on user-agent
logging if necessary.

I don't remember seeing you ever *post* your log files here, not even
a single line from a single instance of this bot.  Maybe I missed it.



Re: fail2ban for apache2

2019-11-11 Thread tomas
On Sun, Nov 10, 2019 at 06:07:37PM -0500, Gene Heskett wrote:
> On Sunday 10 November 2019 16:07:22 to...@tuxteam.de wrote:
> 
> > On Sun, Nov 10, 2019 at 10:55:03AM -0500, Gene Heskett wrote:
> > > On Sunday 10 November 2019 08:02:46 Michael wrote:
> > >
> > > Which contains such gems as this:
> > > coyote.coyote.den:80 40.77.167.79 - -
> > > [10/Nov/2019:10:44:45 -0500] "GET /gene/fence/18.html HTTP/1.1" 200
> > > 1121 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X)
> > > AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Mobile/11A465
> > > Safari/9537.53 (compatible; bingbot/2.0;
> > > +http://www.bing.com/bingbot.htm)"
> > >
> > > But I've no clue which of the above blather is the "User agent"
> > > [...]
> >
> > It's the sixth field:
> >
> I don't see an obvious field delimiter in this. Tomas. Is it definable?

It's the "".

> >   "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X)
> >   AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Mobile/11A465
> >   Safari/9537.53 (compatible; bingbot/2.0;
> >   +http://www.bing.com/bingbot.htm)"
> >
> > Yes, a bit long. But focusing on the bingbot part seems reasonable.

This is the "Common Log Format" (cf. [1] and links therein:
of special interest all that software out there designed to
parse and grok that stuff) or some mutation thereof. And yes,
it can be configured at your heart's content by (drumroll...)
munging your Apache config [2] -- a topic on which you stubbornly
keep a suspicious silence ;-)
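
For reference, the lines quoted in this thread look like Debian's stock
"vhost_combined" format, defined in /etc/apache2/apache2.conf roughly as:

  LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined

where the last escape, %{User-Agent}i, is the quoted field worth keying a
block on.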

Cheers

[1] https://en.wikipedia.org/wiki/Common_Log_Format
[2] https://httpd.apache.org/docs/2.4/logs.html

-- tomás




Re: fail2ban for apache2

2019-11-11 Thread Michael

On Monday, November 11, 2019 12:07:37 AM CET, Gene Heskett wrote:

On Sunday 10 November 2019 16:07:22 to...@tuxteam.de wrote:


On Sun, Nov 10, 2019 at 10:55:03AM -0500, Gene Heskett wrote: ...

I don't see an obvious field delimiter in this. Tomas. Is it definable?


like tomas told you earlier: there is good documentation for apache2 at 
https://httpd.apache.org/docs/


Just select the appropriate version (probably 2.4), and then click on 'Log 
Files'. Et voilà: you have your answer! Double-check it against the entries 
in your apache2 configuration in /etc/apache2/sites-enabled/ and you're all 
set.


Please take any advice to read the documentation seriously; it might be 
enlightening, and can answer your questions even before they occur.


greetings...



Re: fail2ban for apache2

2019-11-10 Thread Tixy
On Sun, 2019-11-10 at 19:37 +, Brian wrote:
> On Sun 10 Nov 2019 at 10:26:17 -0800, Kushal Kumaran wrote:
> [...]
> > One thing you could try is to examine the iptables rule counters
> > daily/weekly.  If the counters do not increase during some
> > interval,
> > then the rule is no longer useful to you, so you could delete
> > it.  This
> > should be fairly straightforward to automate, but I don't know if
> > someone has already built this tooling.
> 
> I hardly use iptables, so this is the first I have heard about rule
> counters. I'll work something out to accommodate it.

And you can zero all the counters with "/sbin/iptables -Z" (or zero
individual rule counters if you want). 

-- 
Tixy



Re: fail2ban for apache2

2019-11-10 Thread Gene Heskett
On Sunday 10 November 2019 18:07:37 Gene Heskett wrote:

> On Sunday 10 November 2019 16:07:22 to...@tuxteam.de wrote:
> > On Sun, Nov 10, 2019 at 10:55:03AM -0500, Gene Heskett wrote:
> > > On Sunday 10 November 2019 08:02:46 Michael wrote:
> > >
> > > Which contains such gems as this:
> > > coyote.coyote.den:80 40.77.167.79 - -
> > > [10/Nov/2019:10:44:45 -0500] "GET /gene/fence/18.html HTTP/1.1"
> > > 200 1121 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS
> > > X) AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0
> > > Mobile/11A465 Safari/9537.53 (compatible; bingbot/2.0;
> > > +http://www.bing.com/bingbot.htm)"
> > >
> > > But I've no clue which of the above blather is the "User agent"
> > > [...]
> >
> > It's the sixth field:
>
> I don't see an obvious field delimiter in this. Tomas. Is it
> definable?
>
>
> >
> > Cheers
> > -- t
>
> Thanks Tomas.
>
>
> Cheers, Gene Heskett

Is cs.Daum.net a bot?  They are getting pesky.


Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-11-10 Thread Gene Heskett
On Sunday 10 November 2019 16:07:22 to...@tuxteam.de wrote:

> On Sun, Nov 10, 2019 at 10:55:03AM -0500, Gene Heskett wrote:
> > On Sunday 10 November 2019 08:02:46 Michael wrote:
> >
> > Which contains such gems as this:
> > coyote.coyote.den:80 40.77.167.79 - -
> > [10/Nov/2019:10:44:45 -0500] "GET /gene/fence/18.html HTTP/1.1" 200
> > 1121 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X)
> > AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Mobile/11A465
> > Safari/9537.53 (compatible; bingbot/2.0;
> > +http://www.bing.com/bingbot.htm)"
> >
> > But I've no clue which of the above blather is the "User agent"
> > [...]
>
> It's the sixth field:
>
I don't see an obvious field delimiter in this, Tomas. Is it definable?

>   "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X)
>   AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Mobile/11A465
>   Safari/9537.53 (compatible; bingbot/2.0;
>   +http://www.bing.com/bingbot.htm)"
>
> Yes, a bit long. But focusing on the bingbot part seems reasonable.
>
> Cheers
> -- t
Thanks Tomas.


Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-11-10 Thread Gene Heskett
On Sunday 10 November 2019 14:37:58 Brian wrote:

> On Sun 10 Nov 2019 at 10:26:17 -0800, Kushal Kumaran wrote:
> > Brian  writes:
> > > On Sun 10 Nov 2019 at 11:01:07 +0100, Michael wrote:
> > >> On Saturday, November 9, 2019 7:01:00 PM CET, Gene Heskett wrote:
> > >> > I was able, with the help of another responder to carve up some
> > >> > iptables rules to stop the DDOS that semrush, yandex, bingbot,
> > >> > and 2 or 3 others were bound to do to me.
> > >>
> > >> using iptables directly is fine, because you get your results
> > >> fast, but it lacks some advantages over fail2ban, which i think
> > >> outweigh the simplicity of iptables:
> > >> - whith iptables you have to scan your log regularly for
> > >> misbehaving or unwanted clients, whereas fail2ban does this
> > >> automatically, constantly (!), and based on rules. from time to
> > >> time these rules have to be adapted, since bots are evolving, but
> > >> i think it's still less trouble than looking at log files every
> > >> day or so.
> > >> - fail2ban allows you to block only specific ports, in your case
> > >> maybe 80 and/or 443 for the web server.
> > >> - you have to remember which ip address you blocked, why and for
> > >> how long you want to block them. fail2ban does that for you.
> > >> - ... (too lazy right now to write more)
> > >
> > > This accords with my understanding of failtoban with exim. I use
> > > it to keep the logs clean and it is very effective. Offenders are
> > > banned for a year, although I do wonder sometimes whether this
> > > length of time is a little over the top. I also wonder whether, as
> > > the banned list builds up, there is a noticable affect on the
> > > machine's resources.
> >
> > Probably.  But you have to balance that against the resources
> > required if you let the connection through to exim (or whatever
> > service you're protecting).  iptables (even with a few hundred
> > rules) is likely to be more efficient than exim.
>
> Thank you for that, Kushal. I see your point. It is indeed efficiency,
> not security, I am after.
>
> > One thing you could try is to examine the iptables rule counters
> > daily/weekly.  If the counters do not increase during some interval,
> > then the rule is no longer useful to you, so you could delete it. 
> > This should be fairly straightforward to automate, but I don't know
> > if someone has already built this tooling.
>
> I hardly use iptables, so this is the first I have heard about rule
counters. I'll work something out to accommodate it.

This is something I've been looking at/for: something that can take this 
report and remove any rule that has not had a hit in, say, a month.

#> iptables -L -nv --line-numbers
Chain INPUT (policy ACCEPT 2785K packets, 28G bytes)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 DROP       all  --  *      *       73.229.203.175       0.0.0.0/0
2        0     0 DROP       all  --  *      *       77.88.5.200          0.0.0.0/0
3        0     0 DROP       all  --  *      *       66.249.64.226        0.0.0.0/0
4        0     0 DROP       all  --  *      *       40.77.167.82         0.0.0.0/0
5        0     0 DROP       all  --  *      *       111.225.149.199      0.0.0.0/0
6        0     0 DROP       all  --  *      *       40.77.167.142        0.0.0.0/0
7        4   240 DROP       all  --  *      *       220.243.136.25       0.0.0.0/0
8      424 25440 DROP       all  --  *      *       46.229.168.146       0.0.0.0/0
9        3   180 DROP       all  --  *      *       141.8.143.160        0.0.0.0/0
10       4   240 DROP       all  --  *      *       111.225.148.159      0.0.0.0/0
11      72  4320 DROP       all  --  *      *       46.229.168.134       0.0.0.0/0
12      32  1920 DROP       all  --  *      *       46.229.168.137       0.0.0.0/0
13       4   240 DROP       all  --  *      *       111.225.148.49       0.0.0.0/0
14       0     0 DROP       all  --  *      *       220.243.136.54       0.0.0.0/0
15       0     0 DROP       all  --  *      *       110.249.202.57       0.0.0.0/0
16     740 44400 DROP       all  --  *      *       111.225.149.0/24     0.0.0.0/0
17     711 42660 DROP       all  --  *      *       110.249.201.0/24     0.0.0.0/0
18     658 39480 DROP       all  --  *      *       110.249.202.0/24     0.0.0.0/0
19     608 36480 DROP       all  --  *      *       111.225.148.0/24     0.0.0.0/0
20     552 33120 DROP       all  --  *      *       46.229.168.0/24      0.0.0.0/0
21      78  3744 DROP       all  --  *      *       157.55.39.0/24       0.0.0.0/0
22     178 10680 DROP       all  --  *      *       220.243.135.0/24     0.0.0.0/0
23     530 32044 DROP       all  --  *      *       109.95.253.0/24      0.0.0.0/0

This collection covers less than a week, but it looks as if the first 6 could be 
dropped. I'd do this monthly, though, in case the likes of bingbot, yandex, and 
semrush are rotating to a new ip every week.
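
A rough sketch of that sort of pruning (untested; it assumes the INPUT chain
holds nothing but these per-address DROPs, and remember the counters start
over every time the rules are reloaded):

  #!/bin/sh
  # Delete INPUT rules whose packet counter is still zero.
  # Walk the rule numbers highest-first so deleting one rule
  # doesn't renumber the ones we haven't checked yet.
  iptables -L INPUT -nv --line-numbers \
    | awk '$1 ~ /^[0-9]+$/ && $2 == 0 { print $1 }' \
    | sort -rn \
    | while read num; do
        iptables -D INPUT "$num"
      done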

Re: fail2ban for apache2

2019-11-10 Thread tomas
On Sun, Nov 10, 2019 at 10:55:03AM -0500, Gene Heskett wrote:
> On Sunday 10 November 2019 08:02:46 Michael wrote:

> Which contains such gems as this:
> coyote.coyote.den:80 40.77.167.79 - - 
> [10/Nov/2019:10:44:45 -0500] "GET /gene/fence/18.html HTTP/1.1" 200 
> 1121 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) 
> AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Mobile/11A465 
> Safari/9537.53 (compatible; bingbot/2.0; 
> +http://www.bing.com/bingbot.htm)"
> 
> But I've no clue which of the above blather is the "User agent" [...]

It's the sixth field:

  "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) 
  AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Mobile/11A465 
  Safari/9537.53 (compatible; bingbot/2.0; 
  +http://www.bing.com/bingbot.htm)"

Yes, a bit long. But focusing on the bingbot part seems reasonable.
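
If you want to pull that field out mechanically, splitting on the double
quote works: in the vhost_combined format the user agent is the sixth
quote-delimited field. A rough sketch (adjust the path as needed):

  awk -F'"' '{ print $6 }' /var/log/apache2/other_vhosts_access.log \
    | sort | uniq -c | sort -rn | head -20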

Cheers
-- t




Re: fail2ban for apache2

2019-11-10 Thread Brian
On Sun 10 Nov 2019 at 10:26:17 -0800, Kushal Kumaran wrote:

> Brian  writes:
> 
> > On Sun 10 Nov 2019 at 11:01:07 +0100, Michael wrote:
> >
> >> On Saturday, November 9, 2019 7:01:00 PM CET, Gene Heskett wrote:
> >> 
> >> > I was able, with the help of another responder to carve up some iptables
> >> > rules to stop the DDOS that semrush, yandex, bingbot, and 2 or 3 others
> >> > were bound to do to me.
> >> 
> >> using iptables directly is fine, because you get your results fast, but it
> >> lacks some advantages over fail2ban, which i think outweigh the simplicity
> >> of iptables:
> >> - with iptables you have to scan your log regularly for misbehaving or
> >> unwanted clients, whereas fail2ban does this automatically, constantly (!),
> >> and based on rules. from time to time these rules have to be adapted, since
> >> bots are evolving, but i think it's still less trouble than looking at log
> >> files every day or so.
> >> - fail2ban allows you to block only specific ports, in your case maybe 80
> >> and/or 443 for the web server.
> >> - you have to remember which ip address you blocked, why and for how long
> >> you want to block them. fail2ban does that for you.
> >> - ... (too lazy right now to write more)
> >
> > This accords with my understanding of fail2ban with exim. I use it to
> > keep the logs clean and it is very effective. Offenders are banned for
> > a year, although I do wonder sometimes whether this length of time is
> > a little over the top. I also wonder whether, as the banned list builds
> > up, there is a noticeable effect on the machine's resources.
> 
> Probably.  But you have to balance that against the resources required
> if you let the connection through to exim (or whatever service you're
> protecting).  iptables (even with a few hundred rules) is likely to be
> more efficient than exim.

Thank you for that, Kushal. I see your point. It is indeed efficiency,
not security, I am after.

> One thing you could try is to examine the iptables rule counters
> daily/weekly.  If the counters do not increase during some interval,
> then the rule is no longer useful to you, so you could delete it.  This
> should be fairly straightforward to automate, but I don't know if
> someone has already built this tooling.

I hardly use iptables, so this is the first I have heard about rule
counters. I'll work something out to accommodate it.

-- 
Brian.



Re: fail2ban for apache2

2019-11-10 Thread ghe
On 11/10/19 8:55 AM, Gene Heskett wrote:

> That's an approximate idea of my understanding of how it works, but to 
> gradually transition from manually reading the logs and applying iptables 
> rules to block the miscreants, the first step would seem to be 
> training fail2ban to read the same log file I am. 

Have you looked at Logwatch?

It'll tell you, every morning, the things iptables (and maybe fail2ban)
bounced, the IP, the protocol, the number of hits, and the port. From
that info, and whois on the IP, I can block, in iptables or the router,
entire naughty nets hitting my server (most nets I block are massive
jerks or outside this country).
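
A minimal way to try it on Debian (a sketch; see the logwatch man page for
the real option list):

  apt install logwatch
  # one-off run against yesterday's logs, printed to the terminal
  logwatch --range yesterday --detail high --output stdout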

-- 
Glenn English



Re: fail2ban for apache2

2019-11-10 Thread Kushal Kumaran
Brian  writes:

> On Sun 10 Nov 2019 at 11:01:07 +0100, Michael wrote:
>
>> On Saturday, November 9, 2019 7:01:00 PM CET, Gene Heskett wrote:
>> 
>> > I was able, with the help of another responder to carve up some iptables
>> > rules to stop the DDOS that semrush, yandex, bingbot, and 2 or 3 others
>> > were bound to do to me.
>> 
>> using iptables directly is fine, because you get your results fast, but it
>> lacks some advantages over fail2ban, which i think outweigh the simplicity
>> of iptables:
> >> - with iptables you have to scan your log regularly for misbehaving or
>> unwanted clients, whereas fail2ban does this automatically, constantly (!),
>> and based on rules. from time to time these rules have to be adapted, since
>> bots are evolving, but i think it's still less trouble than looking at log
>> files every day or so.
>> - fail2ban allows you to block only specific ports, in your case maybe 80
>> and/or 443 for the web server.
>> - you have to remember which ip address you blocked, why and for how long
>> you want to block them. fail2ban does that for you.
>> - ... (too lazy right now to write more)
>
> This accords with my understanding of fail2ban with exim. I use it to
> keep the logs clean and it is very effective. Offenders are banned for
> a year, although I do wonder sometimes whether this length of time is
> a little over the top. I also wonder whether, as the banned list builds
> up, there is a noticeable effect on the machine's resources.

Probably.  But you have to balance that against the resources required
if you let the connection through to exim (or whatever service you're
protecting).  iptables (even with a few hundred rules) is likely to be
more efficient than exim.

One thing you could try is to examine the iptables rule counters
daily/weekly.  If the counters do not increase during some interval,
then the rule is no longer useful to you, so you could delete it.  This
should be fairly straightforward to automate, but I don't know if
someone has already built this tooling.

-- 
regards,
kushal



Re: fail2ban for apache2

2019-11-10 Thread Gene Heskett
On Sunday 10 November 2019 08:02:46 Michael wrote:

> On Sunday, November 10, 2019 1:39:24 PM CET, to...@tuxteam.de wrote:
> > On Sun, Nov 10, 2019 at 07:04:12AM -0500, Gene Heskett wrote:
> >> On Sunday 10 November 2019 06:19:51 to...@tuxteam.de wrote:
> >>> On Sun, Nov 10, 2019 at 06:08:52AM -0500, Gene Heskett wrote:
> >
> > But... you can just configure your Apache to deny that user agent
> > itself. One less moving part (fail2ban) with all its configuration
> > joy.
>
> and, i think it's worth mentioning, the apache2 config denies the
> request __before__ it sends any data, whereas fail2ban has to wait
> until __after__ apache2 has finished handling the request.
>
> but: if fail2ban immediately (i.e. after the first request) invokes
> iptables and blocks the ip, then the data flow should be interrupted,
> and not too much data should be uploaded. correct me if i'm wrong.
>
>
That's an approximate idea of my understanding of how it works, but to 
gradually transition from manually reading the logs and applying iptables 
rules to block the miscreants, the first step would seem to be 
training fail2ban to read the same log file I am. And I have read the 
installed files without getting the clarity needed to do that.  So that 
would be step #1.  The log file I am reading is: other_vhosts_access.log.

Which contains such gems as this:
coyote.coyote.den:80 40.77.167.79 - - 
[10/Nov/2019:10:44:45 -0500] "GET /gene/fence/18.html HTTP/1.1" 200 
1121 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) 
AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Mobile/11A465 
Safari/9537.53 (compatible; bingbot/2.0; 
+http://www.bing.com/bingbot.htm)"

But I've no clue which of the above blather is the "User agent", but 
bingbot sure looks like a likely suspect.

> greetings...


Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-11-10 Thread Michael

On Sunday, November 10, 2019 1:39:24 PM CET, to...@tuxteam.de wrote:

On Sun, Nov 10, 2019 at 07:04:12AM -0500, Gene Heskett wrote:

On Sunday 10 November 2019 06:19:51 to...@tuxteam.de wrote:

On Sun, Nov 10, 2019 at 06:08:52AM -0500, Gene Heskett wrote:



But... you can just configure your Apache to deny that user agent
itself. One less moving part (fail2ban) with all its configuration
joy.


and, i think it's worth mentioning, the apache2 config denies the request 
__before__ it sends any data, whereas fail2ban has to wait until __after__ 
apache2 has finished handling the request.


but: if fail2ban immediately (i.e. after the first request) invokes 
iptables and blocks the ip, then the data flow should be interrupted, and 
not too much data should be uploaded. correct me if i'm wrong.



greetings...



Re: fail2ban for apache2

2019-11-10 Thread Brian
On Sun 10 Nov 2019 at 11:01:07 +0100, Michael wrote:

> On Saturday, November 9, 2019 7:01:00 PM CET, Gene Heskett wrote:
> 
> > I was able, with the help of another responder to carve up some iptables
> > rules to stop the DDOS that semrush, yandex, bingbot, and 2 or 3 others
> > were bound to do to me.
> 
> using iptables directly is fine, because you get your results fast, but it
> lacks some advantages over fail2ban, which i think outweigh the simplicity
> of iptables:
> - with iptables you have to scan your log regularly for misbehaving or
> unwanted clients, whereas fail2ban does this automatically, constantly (!),
> and based on rules. from time to time these rules have to be adapted, since
> bots are evolving, but i think it's still less trouble than looking at log
> files every day or so.
> - fail2ban allows you to block only specific ports, in your case maybe 80
> and/or 443 for the web server.
> - you have to remember which ip address you blocked, why and for how long
> you want to block them. fail2ban does that for you.
> - ... (too lazy right now to write more)

This accords with my understanding of fail2ban with exim. I use it to
keep the logs clean and it is very effective. Offenders are banned for
a year, although I do wonder sometimes whether this length of time is
a little over the top. I also wonder whether, as the banned list builds
up, there is a noticeable effect on the machine's resources.

-- 
Brian.



Re: fail2ban for apache2

2019-11-10 Thread tomas
On Sun, Nov 10, 2019 at 07:04:12AM -0500, Gene Heskett wrote:
> On Sunday 10 November 2019 06:19:51 to...@tuxteam.de wrote:
> 
> > On Sun, Nov 10, 2019 at 06:08:52AM -0500, Gene Heskett wrote:

[...]

> >  - assess client behaviour

[...]

> Humm.  That would take a user-agent trigger [...]

Bingo. You can let fail2ban pick up the UA off the log, block that
source IP.

But... you can just configure your Apache to deny that user agent
itself. One less moving part (fail2ban) with all its configuration
joy.
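
For instance, something along these lines in the vhost (a sketch using
mod_setenvif and Apache 2.4 style authorization; the bot list is only an
example):

  <Location "/">
      SetEnvIfNoCase User-Agent "SemrushBot|bingbot|YandexBot|Bytespider" badbot
      <RequireAll>
          Require all granted
          Require not env badbot
      </RequireAll>
  </Location>

Anything whose User-Agent matches gets a 403 before any content is served.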

Fail2ban would come in whenever the traffic generated by the (now
rejected) attempts clog your Apache (or your connection). But I
don't think it'll come that far.

C'mon, Gene. Try to grok your web server's config (Apache's is
ugly, but hey, you chose it). You'll have to bite that bullet
sooner or later. Their docs are actually very good.

Even if you decide to fail2ban later, it makes sense to master
your web server config to munge your logs in a way that fail2ban
has something to chew on.

Start here: https://httpd.apache.org/docs/

Cheers
-- tomás




Re: fail2ban for apache2

2019-11-10 Thread Gene Heskett
On Sunday 10 November 2019 06:19:51 to...@tuxteam.de wrote:

> On Sun, Nov 10, 2019 at 06:08:52AM -0500, Gene Heskett wrote:
>
> [...]
>
> > But, I'm getting the impression that it has to fail before fail2ban
> > kicks in [...]
>
> No. It has to "succeed" once before fail2ban can do its job. It is:
>
>  - assess client behaviour
>  - http server writes a log entry (or a set thereof) which fail2ban
> can feed on - magic (i.e. fail2ban rules)
>  - fail2ban blocks offending address.
>
> It's the same process you're doing manually now. If you can codify
> the decisions you take in the form of fail2ban rules, then fail2ban
> is for you.
>
Humm.  That would take a user-agent trigger, else I'd be killing joe 
blpstks attempt at downloading the .debs for linuxcnc for his spanking 
new rpi4; I have to be careful there I think. This code is running my 
big lathe flawlessly, but it's had support for 2 more of the mesa 
interface cards added, and that has not been tested yet. Broadcom is not 
exactly a blabbermouthed company when something gets changed.  The 
realtime kernel is the whole stack's 2.9 gigabyte image and needs a 
serious stripping down to just what needs to be copied to /boot in the sd card 
it boots from.  That would produce a 20 meg tarball.

> Cheers
> -- t


Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-11-10 Thread tomas
On Sun, Nov 10, 2019 at 06:08:52AM -0500, Gene Heskett wrote:

[...]

> But, I'm getting the impression that it has to fail before fail2ban kicks 
> in [...]

No. It has to "succeed" once before fail2ban can do its job. It is:

 - assess client behaviour
 - http server writes a log entry (or a set thereof) which fail2ban can feed on
 - magic (i.e. fail2ban rules)
 - fail2ban blocks offending address.

It's the same process you're doing manually now. If you can codify
the decisions you take in the form of fail2ban rules, then fail2ban
is for you.
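
As an illustration only (a sketch, untested against your log): a filter
that keys on those user agents in other_vhosts_access.log could look
something like

  # /etc/fail2ban/filter.d/apache-badbots-custom.conf (hypothetical name)
  [Definition]
  # <HOST> is fail2ban's placeholder for the address to ban; in the
  # vhost_combined format the client address is the second field and
  # the user agent is the last quoted string on the line.
  failregex = ^\S+ <HOST> .*"[^"]*(?:SemrushBot|bingbot|YandexBot|Bytespider)[^"]*"$
  ignoreregex =

(If memory serves, Debian's fail2ban also ships an apache-badbots filter
that can serve as a starting point.)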

Cheers
-- t




Re: fail2ban for apache2

2019-11-10 Thread Gene Heskett
On Sunday 10 November 2019 05:01:07 Michael wrote:

> On Saturday, November 9, 2019 7:01:00 PM CET, Gene Heskett wrote:
> > Whats this "jail"? The beginners tut seems to assume we've all had
> > cs101 thru cs401 and Just Know all the secret handshakes bs already.
>
> no idea what you're talking about... i almost never read any tutorial,
> just man pages. that's what i think they're here for (althuogh i have
> to admit the quality varies a lot!).
>
> so, a jail is just a name for a set of blocking rules, filters and
> actions. - the rule (a file in /etc/fail2ban/jail.d/, e.g.
> genes-apache.conf) describes what should be blocked, why, and for how
> long.
> - the filter (located in /etc/fail2ban/filter.d/) describes (with a
> python regular expression) which log file entry triggers the rule to
> act upon. in your case it could be something somebody described here
> in another post with the semrush bot. or just anything you desire.
> - actions are defined in /etc/fail2ban/action.d/, and, well, they
> define what should happen if a rule is to be executed. one might say,
> the triggering ip address goes into jail.
>
> sorry, if you already know that, but i felt like you didn't quite.
>
> > Sorry,
> > I've been hiding behind dd-wrt for about 2 decades and never had to
> > worry about it before.
>
> nothing to be ashamed about. in fact, quite the opposite! i use an
> openwrt router, too. so...
>
> > Besides that the jail.d subdir of the install is empty.
>
> hm, after installing fail2ban i had a 'defaults-debian.conf' in
> jail.d, which enables the jail for sshd.
>
> > No jail.example
> > file to give one an inkling of what its supposed to be like.
>
> RTFM!
>
> man jail.conf
>
> and /etc/fail2ban/jail.conf is a perfectly valid example of many
> jails.
>
> > Theres zero tutorial value in that.
>
> i'm old school, so sorry for me repeating: RTFM!
>
> > I was able, with the help of another
> > responder to carve up some iptables rules to stop the DDOS that
> > semrush, yandex, bingbot, and 2 or 3 others were bound to do to me.
>
> using iptables directly is fine, because you get your results fast,
> but it lacks some advantages over fail2ban, which i think outweigh the
> simplicity of iptables:

But, I'm getting the impression that it has to fail before fail2ban kicks 
in.  That's not the situation here: they are downloading the whole site, 
one file at a time, some of which are install iso's I haven't totalled 
up but which I'd estimate could exceed 20 gigabytes, and, never satisfied 
they have got a good copy, they'll restart at the top of the Nitrous9 
build and cycle thru it all over again.

> - with iptables you have to scan your log regularly for misbehaving
> or unwanted clients, whereas fail2ban does this automatically,
> constantly (!), and based on rules. from time to time these rules have
> to be adapted, since bots are evolving, but i think it's still less
> trouble than looking at log files every day or so.
> - fail2ban allows you to block only specific ports, in your case maybe
> 80 and/or 443 for the web server.

As an homage to the hitachi HD6309, my pages are running on port 6309, a 
cmos replacement for the moto 6809, caught in a legal black hole because 
hitachi has perms to make a workalike in cmos.  Except it isn't, its 
opcode map has been filled in with lots of stuff the 6809 can't do, 
including 32 bit loads and stores, mul's and div's 32 bits wide. Add in 
a change in how it pipelines, and you have an 8 bit cpu that's around 20% 
faster than the 6809 just when running the 6809 opcode map. Judicious 
rewriting of the old os9 operating system has fixed some bugs and about 
doubled the speed of a trs-80 color computer with one of these cpu's 
transplanted into it. But we've had to find all that stuff ourselves as 
hitachi's perms prevent them from ever confirming that they've made an 
improved version. I even had a hand in some of that rewrite, doing a 
version of its random block file manager and raising the maximum drive 
size from 131 megabytes to 4 gigabytes. My own trs-80 color computer has 
a pair of 1G drives to play in and is running in the basement as I type 
this, but those drives are so old now that if I turn it off for 6 
months, I may never get them started again.  Those drives have been 
spinning for about 30 years now, and neither has a bad sector yet.  
Drive failures happen when you turn them off and on.  Leave them 
running, and they don't fail.  I have an early 1T drive here, works 
great, nearly 100,000 spinning hours on it.

I know, TL;DR, but that's who I am.
> - you have to remember which ip address you blocked, why and for how
> long you want to block them. fail2ban does that for you.
> - ... (too lazy right now to write more)
>
>
> greetings...


Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law 

Re: fail2ban for apache2

2019-11-10 Thread Michael

On Saturday, November 9, 2019 7:01:00 PM CET, Gene Heskett wrote:
Whats this "jail"? The beginners tut seems to assume we've all had cs101 
thru cs401 and Just Know all the secret handshakes bs already.


no idea what you're talking about... i almost never read any tutorial, just 
man pages. that's what i think they're here for (although i have to admit 
the quality varies a lot!).


so, a jail is just a name for a set of blocking rules, filters and actions.
- the rule (a file in /etc/fail2ban/jail.d/, e.g. genes-apache.conf) 
describes what should be blocked, why, and for how long.
- the filter (located in /etc/fail2ban/filter.d/) describes (with a python 
regular expression) which log file entry triggers the rule to act upon. in 
your case it could be something somebody described here in another post 
with the semrush bot. or just anything you desire.
- actions are defined in /etc/fail2ban/action.d/, and, well, they define 
what should happen if a rule is to be executed. one might say, the 
triggering ip address goes into jail.
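
putting the three pieces together, a minimal jail file might look like this
(a sketch only; the file and filter names are just the example above, and it
assumes a matching filter.d/genes-apache.conf exists -- adjust the ports to
whatever the site really listens on):

  # /etc/fail2ban/jail.d/genes-apache.conf
  [genes-apache]
  enabled  = true
  port     = http,https
  filter   = genes-apache
  logpath  = /var/log/apache2/other_vhosts_access.log
  maxretry = 1
  findtime = 600
  bantime  = 86400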


sorry, if you already know that, but i felt like you didn't quite.


Sorry, 
I've been hiding behind dd-wrt for about 2 decades and never had to 
worry about it before.


nothing to be ashamed about. in fact, quite the opposite! i use an openwrt 
router, too. so...




Besides that the jail.d subdir of the install is empty.


hm, after installing fail2ban i had a 'defaults-debian.conf' in jail.d, 
which enables the jail for sshd.



No jail.example 
file to give one an inkling of what its supposed to be like.


RTFM!

man jail.conf

and /etc/fail2ban/jail.conf is a perfectly valid example of many jails.



Theres zero tutorial value in that.


i'm old school, so sorry for me repeating: RTFM!


I was able, with the help of another 
responder to carve up some iptables rules to stop the DDOS that semrush, 
yandex, bingbot, and 2 or 3 others were bound to do to me.


using iptables directly is fine, because you get your results fast, but it 
lacks some advantages over fail2ban, which i think outweigh the simplicity 
of iptables:
- with iptables you have to scan your log regularly for misbehaving or 
unwanted clients, whereas fail2ban does this automatically, constantly (!), 
and based on rules. from time to time these rules have to be adapted, since 
bots are evolving, but i think it's still less trouble than looking at log 
files every day or so.
- fail2ban allows you to block only specific ports, in your case maybe 80 
and/or 443 for the web server.
- you have to remember which ip address you blocked, why and for how long 
you want to block them. fail2ban does that for you.

- ... (too lazy right now to write more)


greetings...



Re: fail2ban for apache2

2019-11-09 Thread Gene Heskett
On Saturday 09 November 2019 15:07:51 mick crane wrote:

> On 2019-11-09 18:01, Gene Heskett wrote:
> > On Saturday 09 November 2019 08:59:14 Michael wrote:
> >> > Rather than use fail2ban for this, I would create an ipset
> >> > that fail2ban can populate then use that ipset in iptables.
> >>
> >> i agree, but:
> >> > One advantage of this is that you can add/delete ip from the
> >> > ipset without having to restart fail2ban/iptables.
> >>
> >> RTFM
> >>
> >> fail2ban allows you to 'unban' an ip address as well:
> >> > man fail2ban-client
> >>
> >> set  unbanip 
> >> manually Unban  in 
> >
> > Whats this "jail"? The beginners tut seems to assume we've all had
> > cs101
> > thru cs401 and Just Know all the secret handshakes bs already. 
> > Sorry, I've been hiding behind dd-wrt for about 2 decades and never
> > had to worry about it before.
> >
> > Besides that the jail.d subdir of the install is empty. No
> > jail.example file to give one an inkling of what its supposed to be
> > like.  Theres zero tutorial value in that. I was able, with the help
> > of another responder to carve up some iptables rules to stop the
> > DDOS that semrush,
> > yandex, bingbot, and 2 or 3 others were bound to do to me.
> >
> > Understand I have no objections to those folks indexing my site so
> > their
> > search engines can find stuff, but to just repeatedly download the
> > whole
> > thing, copying it forever, reaching into nooks and crannies I don't
> > even
> > link to, using all my upload bandwidth for weeks at a time, will
> > bring me to battle stations. And we both will suffer because of
> > their poor behavior.
> >
> >> greetings...
> >
> > Cheers, Gene Heskett
>
> I like Gene, he is trying to make something work.

Something I have been extraordinarily good at in the electronics field 
since quitting school early in my freshman year to go fix tv's for a 
living in '48. 100% self educated, I have taught more school than I have 
attended as a student since. I know the physics behind the electronics 
and can be a decent mechanic; my interests are best described as 
eclectic.

Finishing my working time out as the CE at a tv station here in WV, 18 
years occasionally behind an office door, but 98% of the time fixing 
what news could tear up, or keeping an old GE transmitter making a 
better pix than it did new. For lots longer at a time too.

> When all this stuff started there seemed to be some sort of logic to
> it and I can't say I understood much of it but the thing seems to be
> now that there seems to be layers and layers of obscurity which makes
> it trickier to figure out what is going on.
> mick

To help clarify that, fail2ban has been stopped and the battle is now 
being waged with iptables only. And I have about got the bots locked 
out.

I just shut down someone pulling a linuxcnc stretch based install .iso 
because I know for a fact that my copy is now old, they should be 
getting that from wiki.linuxcnc.org to get a link to the latest.

So I just nuked that and 2 or 3 other instances of outdated stuff.  No 
sense spreading old code.

Does that clarify things any?

Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-11-09 Thread Brian
On Sat 09 Nov 2019 at 20:07:51 +, mick crane wrote:

> I like Gene, he is trying to make something work.

The "something" is what is at issue.

> When all this stuff started there seemed to be some sort of logic to it and
> I can't say I understood much of it but the thing seems to be now that there
> seems to be layers and layers of obscurity which makes it trickier to figure
> out what is going on.

Deconstruction: I do not know what is going on. But neither does Gene.

-- 
Brian.



Re: fail2ban for apache2

2019-11-09 Thread mick crane

On 2019-11-09 18:01, Gene Heskett wrote:

On Saturday 09 November 2019 08:59:14 Michael wrote:


> Rather than use fail2ban for this, I would create an ipset that
> fail2ban can populate then use that ipset in iptables.

i agree, but:
> One advantage of this is that you can add/delete ip from the ipset
> without having to restart fail2ban/iptables.

RTFM

fail2ban allows you to 'unban' an ip address as well:
> man fail2ban-client

set  unbanip 
manually Unban  in 

Whats this "jail"? The beginners tut seems to assume we've all had 
cs101

thru cs401 and Just Know all the secret handshakes bs already.  Sorry,
I've been hiding behind dd-wrt for about 2 decades and never had to
worry about it before.

Besides that the jail.d subdir of the install is empty. No jail.example
file to give one an inkling of what its supposed to be like.  Theres
zero tutorial value in that. I was able, with the help of another
responder to carve up some iptables rules to stop the DDOS that 
semrush,

yandex, bingbot, and 2 or 3 others were bound to do to me.

Understand I have no objections to those folks indexing my site so 
their
search engines can find stuff, but to just repeatedly download the 
whole
thing, copying it forever, reaching into nooks and crannies I don't 
even

link to, using all my upload bandwidth for weeks at a time, will bring
me to battle stations. And we both will suffer because of their poor
behavior.


greetings...



Cheers, Gene Heskett



I like Gene, he is trying to make something work.
When all this stuff started there seemed to be some sort of logic to it, 
and I can't say I understood much of it, but now there seem to be 
layers and layers of obscurity which make it 
trickier to figure out what is going on.

mick
--
Key ID 4BFEBB31



Re: fail2ban for apache2

2019-11-09 Thread Andy Smith
Hello,

On Sat, Nov 09, 2019 at 01:34:11PM -0500, Gene Heskett wrote:
> On Saturday 09 November 2019 10:10:53 Andy Smith wrote:
> > You've repeatedly been advised to block these bots in Apache by
> > their UserAgent. Have you tried that yet? It would be a lot simpler
> > than fail2ban or trying to keep up with their IP addresses.
> >
> Maybe, but semrush has a variation in the user agent spelling that makes 
> a block of xx.xx.xx.xx/24 more effective.

Really?

$ cat /var/log/apache2/access.log{,.1} | awk -F '[()]' 'tolower($0) ~ /semrush/ 
{ print $2 }' | sort | uniq -c | sort -rn
 95 compatible; SemrushBot/6~bl; +http://www.semrush.com/bot.html
 80 compatible; SemrushBot/3~bl; +http://www.semrush.com/bot.html
 29 compatible; SemrushBot-BA; +http://www.semrush.com/bot.html

I'll suggest once more just blocking UserAgents that match
"SemrushBot" but I realise I am just howling into the void.

Regards,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: fail2ban for apache2

2019-11-09 Thread Gene Heskett
On Saturday 09 November 2019 10:37:09 john doe wrote:

> On 11/9/2019 2:43 PM, Gene Heskett wrote:
> > On Saturday 09 November 2019 03:36:49 john doe wrote:
> >> On 11/9/2019 8:30 AM, Gene Heskett wrote:
> >>> I have a list of ipv4's I want fail2ban to block. But amongst the
> >>> numerous subdirs for fail2ban, I cannot find one that looks
> >>> suitable to put this list of addresses in so they are blocked
> >>> forever.  Can someone more familiar with how fail2ban works give
> >>> me a hand?  These are the ipv4 addresses of bingbot, semrush,
> >>> yandex etc etc that are DDOSing me by repeatedly downloading my
> >>> whole site and using up 100% of my upload bandwidth.
> >>>
> >>> Thanks all.
> >>>
> >>> Cheers, Gene Heskett
> >>
> >> Rather than use fail2ban for this, I would create an ipset that
> >> fail2ban can populate then use that ipset in iptables.
> >>
> >> One advantage of this is that you can add/delete ip from the ipset
> >> without having to restart fail2ban/iptables.
> >
> > I've done that with the help of a previous responder and now have
> > 99% of the pigs that ignore my robots.txt blocked. semrush is
> > extremely determined and has switched to a 4th address I've not seen
> > before, but is no longer DDOSing my site.
>
> Then, I don't understand your question, if you have fail2ban
> populating an ipset and that ipset is used in iptables.
> You can simply add those set of IPs to the ipset manually.

Fail2ban might be running and I likely should stop it, but ATM I'm manually 
adding rules to iptables.  And I am about down to seeing 
only the fetchmail scans that actually find something to download. Tracking 
actual net traffic with gkrellm.

> Note that using IPs directly is a red herring; you need to use other
> means (UserAgent ...) to identify those bots.

I'll repeat that semrush has at least 6 variations of their User-agent names, 
maybe more.  Easier to use the ip's with a broad /24 
brush.  They can name it anything they want, but the ip isn't phony. Hit them 
with a /24 and you've got everything I've seen so far 
except bytespider. They cover 2 /24 blocks.
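
(A /24 goes in with the same one-liner as a single address, just with the
mask appended, e.g. iptables -I INPUT -s 46.229.168.0/24 -j DROP -- that
range is one of the ones in the table below.)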

> By the sound of it, you clearly need to learn the httpd server you are
> using, then if it is not enough, add fail2ban and iptables into the
> mix.

Agreed, but the man pages for both apache2 and fail2ban are a poor tut. 
iptables is better.
 Adding these on the fly:
root@coyote:action.d$ iptables -L -nv --line-numbers
Chain INPUT (policy ACCEPT 103 packets, 12830 bytes)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 DROP       all  --  *      *       73.229.203.175       0.0.0.0/0
2        0     0 DROP       all  --  *      *       77.88.5.200          0.0.0.0/0
3        0     0 DROP       all  --  *      *       66.249.64.226        0.0.0.0/0
4        0     0 DROP       all  --  *      *       40.77.167.82         0.0.0.0/0
5        0     0 DROP       all  --  *      *       111.225.149.199      0.0.0.0/0
6        0     0 DROP       all  --  *      *       40.77.167.142        0.0.0.0/0
7        4   240 DROP       all  --  *      *       220.243.136.25       0.0.0.0/0
8      416 24960 DROP       all  --  *      *       46.229.168.146       0.0.0.0/0
9        3   180 DROP       all  --  *      *       141.8.143.160        0.0.0.0/0
10       0     0 DROP       all  --  *      *       111.225.148.159      0.0.0.0/0
11      48  2880 DROP       all  --  *      *       46.229.168.134       0.0.0.0/0
12       0     0 DROP       all  --  *      *       46.229.168.137       0.0.0.0/0
13       0     0 DROP       all  --  *      *       111.225.148.49       0.0.0.0/0
14       0     0 DROP       all  --  *      *       220.243.136.54       0.0.0.0/0
15       0     0 DROP       all  --  *      *       110.249.202.57       0.0.0.0/0
16      68  4080 DROP       all  --  *      *       111.225.149.0/24     0.0.0.0/0
17      50  3000 DROP       all  --  *      *       110.249.201.0/24     0.0.0.0/0
18      35  2100 DROP       all  --  *      *       110.249.202.0/24     0.0.0.0/0
19       8   480 DROP       all  --  *      *       111.225.148.0/24     0.0.0.0/0
20       8   480 DROP       all  --  *      *       46.229.168.0/24      0.0.0.0/0

obviously a bit dirty, but it's stopping the DDOS, which is what I came here to 
do.
> --
> John Doe


Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page <http://geneslinuxbox.net:6309/gene>



Re: fail2ban for apache2

2019-11-09 Thread Gene Heskett
On Saturday 09 November 2019 10:10:53 Andy Smith wrote:

> Hello,
>
> On Sat, Nov 09, 2019 at 08:43:25AM -0500, Gene Heskett wrote:
> > I've done that with the help of a previous responder and now have
> > 99% of the pigs that ignore my robots.txt blocked. semrush is
> > extremely determined and has switched to a 4th address I've not seen
> > before, but is no longer DDOSing my site.
>
> You've repeatedly been advised to block these bots in Apache by
> their UserAgent. Have you tried that yet? It would be a lot simpler
> than fail2ban or trying to keep up with their IP addresses.
>
Maybe, but semrush has a variation in the user agent spelling that makes 
a block of xx.xx.xx.xx/24 more effective.  I am now adding rules to 
block the whole /24 for some of the more obnoxious. bytespider in fact 
needs a /16, they've apparently 2 whole /24 blocks full of bots.

> Regards,
> Andy


Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-11-09 Thread Gene Heskett
On Saturday 09 November 2019 08:59:14 Michael wrote:

> > Rather than use fail2ban for this, I would create an ipset that
> > fail2ban can populate then use that ipset in iptables.
>
> i agree, but:
> > One advantage of this is that you can add/delete ip from the ipset
> > without having to restart fail2ban/iptables.
>
> RTFM
>
> fail2ban allows you to 'unban' an ip address as well:
> > man fail2ban-client
>
> set <JAIL> unbanip <IP>
> manually Unban <IP> in <JAIL>
>
What's this "jail"? The beginners' tut seems to assume we've all had cs101 
thru cs401 and Just Know all the secret handshakes bs already.  Sorry, 
I've been hiding behind dd-wrt for about 2 decades and never had to 
worry about it before.

Besides that, the jail.d subdir of the install is empty. No jail.example 
file to give one an inkling of what it's supposed to be like.  There's 
zero tutorial value in that. I was able, with the help of another 
responder to carve up some iptables rules to stop the DDOS that semrush, 
yandex, bingbot, and 2 or 3 others were bound to do to me.

Understand I have no objections to those folks indexing my site so their 
search engines can find stuff, but to just repeatedly download the whole 
thing, copying it forever, reaching into nooks and crannies I don't even 
link to, using all my upload bandwidth for weeks at a time, will bring 
me to battle stations. And we both will suffer because of their poor 
behavior.

> greetings...


Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-11-09 Thread Curt
On 2019-11-09, john doe  wrote:
>
> Note that using IPs directly is a red herring; you need to use other
> means (UserAgent ...) to identify those bots.

Over at semrush they advise the following (with robots.txt in the top
directory of the server):

 To stop SEMrushBot from crawling your site, add the following rules to your
 "robots.txt" file:

 To block SEMrushBot from crawling your site for web graph of links, add:
 User-agent: SemrushBot
 Disallow: /
 
 Please note that there might be a delay up to two weeks before SEMrushBot
 discovers the changes you made to robots.txt.

 To remove SEMrushBot from crawling your site for different SEO and technical
 issues, add: 
 User-agent: SemrushBot-SA
 Disallow: /

They note that it may require up to two weeks for the cure to take effect.

> By the sound of it, you clearly need to learn the httpd server you are
> using, then if it is not enough, add fail2ban and iptables into the mix.
>
> --
> John Doe
>
>


-- 
“The cradle rocks above an abyss, and common sense tells us that our existence
is but a brief crack of light between two eternities of darkness.” 
"Speak, Memory," Vladimir Nabokov



Re: fail2ban for apache2

2019-11-09 Thread john doe
On 11/9/2019 2:43 PM, Gene Heskett wrote:
> On Saturday 09 November 2019 03:36:49 john doe wrote:
>
>> On 11/9/2019 8:30 AM, Gene Heskett wrote:
>>> I have a list of ipv4's I want fail2ban to block. But amongst the
>>> numerous subdirs for fail2ban, I cannot find one that looks suitable
> >>> to put this list of addresses in so they are blocked forever.  Can
>>> someone more familiar with how fail2ban works give me a hand?  These
>>> are the ipv4 addresses of bingbot, semrush, yandex etc etc that are
>>> DDOSing me by repeatedly downloading my whole site and using up 100%
>>> of my upload bandwidth.
>>>
>>> Thanks all.
>>>
>>> Cheers, Gene Heskett
>>
>> Rather than use fail2ban for this, I would create an ipset that
>> fail2ban can populate then use that ipset in iptables.
>>
>> One advantage of this is that you can add/delete ip from the ipset
>> without having to restart fail2ban/iptables.
>
> I've done that with the help of a previous responder and now have 99% of
> the pigs that ignore my robots.txt blocked. semrush is extremely
> determined and has switched to a 4th address I've not seen before, but
> is no longer DDOSing my site.
>

Then, I don't understand your question, if you have fail2ban populating
an ipset and that ipset is used in iptables.
You can simply add that set of IPs to the ipset manually.

Note that using IPs directly is a red herring; you need to use other
means (UserAgent ...) to identify those bots.
By the sound of it, you clearly need to learn the httpd server you are
using, then if it is not enough, add fail2ban and iptables into the mix.

--
John Doe



Re: fail2ban for apache2

2019-11-09 Thread Andy Smith
Hello,

On Sat, Nov 09, 2019 at 08:43:25AM -0500, Gene Heskett wrote:
> I've done that with the help of a previous responder and now have 99% of 
> the pigs that ignore my robots.txt blocked. semrush is extremely 
> determined and has switched to a 4th address I've not seen before, but 
> is no longer DDOSing my site.

You've repeatedly been advised to block these bots in Apache by
their UserAgent. Have you tried that yet? It would be a lot simpler
than fail2ban or trying to keep up with their IP addresses.

Regards,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: fail2ban for apache2

2019-11-09 Thread Michael

Rather than use fail2ban for this, I would create an ipset that
fail2ban can populate, then use that ipset in iptables.


i agree, but:


One advantage of this is that you can add/delete ip from the ipset
without having to restart fail2ban/iptables.


RTFM

fail2ban allows you to 'unban' an ip address as well:

   > man fail2ban-client
   set <JAIL> unbanip <IP>
   manually Unban <IP> in <JAIL>

greetings...



Re: fail2ban for apache2

2019-11-09 Thread Gene Heskett
On Saturday 09 November 2019 04:01:32 to...@tuxteam.de wrote:

> On Sat, Nov 09, 2019 at 03:36:49AM -0500, Gene Heskett wrote:
> > On Saturday 09 November 2019 02:49:16 mett wrote:
> > > On 2019年11月9日 16:30:57 JST, Gene Heskett  
wrote:
> > > >I have a list of ipv4's I want fail2ban to block. But amongst the
> > > >numerous subdirs for fail2ban, I cannot find one that looks
> > > > suitable to
> > > >
> > > >put this list of addresses in so the are blocked forever.  Can
> > > > someone more familiar with how fail2ban works give me a hand? 
> > > > These are the ipv4 addresses of bingbot, semrush, yandex etc etc
> > > > that are DDOSing me by repeatedly downloading my whole site and
> > > > using up 100% of my upload bandwidth.
> > > >
> > > >Thanks all.
> > > >
> > > >Cheers, Gene Heskett
> > > >--
> > > >"There are four boxes to be used in defense of liberty:
> > > > soap, ballot, jury, and ammo. Please use in that order."
> > > >-Ed Howdershelt (Author)
> > > >If we desire respect for the law, we must first make the law
> > > >respectable.
> > > > - Louis D. Brandeis
> > > >Genes Web page 
> > >
> > > Hi,
> > >
> > > In this case, better to use iptables
> > > directly:
> > >
> > > iptables -I INPUT 14 -s IP.ADD.RE.SS -j DROP
> >
> > root@coyote:action.d$ iptables -I INPUT 14 -s 73.229.203.175 -j DROP
>
>   ^^
>
> This "14" is probably the culprit.
>
> > doesn't work gets:
> > iptables: Index of insertion too big.  Even as low as 8
>
> This states at which position in the chain this rule is supposed
> to be inserted at (the "rulenum" in the man page). If you haven't
> an INPUT chain with at least 13 rules already in it (which I don't
> think you have), then the error message makes sense.
>
> For a first experiment, just leave that "14" out (-I doesn't
> require a rule number and inserts, by default, at the beginning,
> which in general makes sense). I'd try instead:
>
>   iptables -I INPUT -s IP.ADD.RE.SS -j DROP
>
I went back to 2, and built back to 10, which got enough of them to get 
some peace from their DDOSing.

> Cheers
> -- t


Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-11-09 Thread Gene Heskett
On Saturday 09 November 2019 03:36:49 john doe wrote:

> On 11/9/2019 8:30 AM, Gene Heskett wrote:
> > I have a list of ipv4's I want fail2ban to block. But amongst the
> > numerous subdirs for fail2ban, I cannot find one that looks suitable
> > to put this list of addresses in so they are blocked forever.  Can
> > someone more familiar with how fail2ban works give me a hand?  These
> > are the ipv4 addresses of bingbot, semrush, yandex etc etc that are
> > DDOSing me by repeatedly downloading my whole site and using up 100%
> > of my upload bandwidth.
> >
> > Thanks all.
> >
> > Cheers, Gene Heskett
>
> Rather then to use fail2ban for this, I would create un ipset that
> fail2ban can populate then use that ipset in iptables.
>
> One advantage of this is that you can add/delete ip from the ipset
> without having to restart fail2ban/iptables.

I've done that with the help of a previous responder and now have 99% of 
the pigs that ignore my robots.txt blocked. semrush is extremely 
determined and has switched to a 4th address I've not seen before, but 
is no longer DDOSing my site.

Thanks John

> --
> John Doe


Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-11-09 Thread tomas
On Sat, Nov 09, 2019 at 03:36:49AM -0500, Gene Heskett wrote:
> On Saturday 09 November 2019 02:49:16 mett wrote:
> 
> > On 2019年11月9日 16:30:57 JST, Gene Heskett  wrote:
> > >I have a list of ipv4's I want fail2ban to block. But amongst the
> > >numerous subdirs for fail2ban, I cannot find one that looks suitable
> > > to
> > >
> > >put this list of addresses in so they are blocked forever.  Can
> > > someone more familiar with how fail2ban works give me a hand?  These
> > > are the ipv4 addresses of bingbot, semrush, yandex etc etc that are
> > > DDOSing me by repeatedly downloading my whole site and using up 100%
> > > of my upload bandwidth.
> > >
> > >Thanks all.
> > >
> > >Cheers, Gene Heskett
> > >--
> > >"There are four boxes to be used in defense of liberty:
> > > soap, ballot, jury, and ammo. Please use in that order."
> > >-Ed Howdershelt (Author)
> > >If we desire respect for the law, we must first make the law
> > >respectable.
> > > - Louis D. Brandeis
> > >Genes Web page 
> >
> > Hi,
> >
> > In this case, better to use iptables
> > directly:
> >
> > iptables -I INPUT 14 -s IP.ADD.RE.SS -j DROP
> root@coyote:action.d$ iptables -I INPUT 14 -s 73.229.203.175 -j DROP
  ^^

This "14" is probably the culprit.

> doesn't work gets:
> iptables: Index of insertion too big.  Even as low as 8

This states the position in the chain at which this rule is supposed
to be inserted (the "rulenum" in the man page). If you don't have
an INPUT chain with at least 13 rules already in it (which I don't
think you have), then the error message makes sense.

For a first experiment, just leave that "14" out (-I doesn't
require a rule number and inserts, by default, at the beginning,
which in general makes sense). I'd try instead:

  iptables -I INPUT -s IP.ADD.RE.SS -j DROP

Cheers
-- t




Re: fail2ban for apache2

2019-11-09 Thread john doe
On 11/9/2019 8:30 AM, Gene Heskett wrote:
> I have a list of ipv4's I want fail2ban to block. But amongst the
> numerous subdirs for fail2ban, I cannot find one that looks suitable to
> put this list of addresses in so they are blocked forever.  Can someone
> more familiar with how fail2ban works give me a hand?  These are the
> ipv4 addresses of bingbot, semrush, yandex etc etc that are DDOSing me
> by repeatedly downloading my whole site and using up 100% of my upload
> bandwidth.
>
> Thanks all.
>
> Cheers, Gene Heskett
>

Rather than use fail2ban for this, I would create an ipset that
fail2ban can populate, then use that ipset in iptables.

One advantage of this is that you can add/delete an ip from the ipset
without having to restart fail2ban/iptables.
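
For example (a sketch; the set name is arbitrary and the ipset package
needs to be installed):

  # create a set that can hold addresses and networks, hook it into INPUT once
  ipset create badbots hash:net
  iptables -I INPUT -m set --match-set badbots src -j DROP

  # from then on, banning and unbanning is just set maintenance,
  # no iptables reload needed
  ipset add badbots 46.229.168.0/24
  ipset add badbots 220.243.136.25
  ipset del badbots 220.243.136.25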

--
John Doe



Re: fail2ban for apache2

2019-11-09 Thread Gene Heskett
On Saturday 09 November 2019 02:55:45 darb wrote:

> * Gene Heskett wrote:
> > I have a list of ipv4's I want fail2ban to block. But amongst the
> > numerous subdirs for fail2ban, I cannot find one that looks suitable
> > to put this list of addresses in so they are blocked forever.  Can
> > someone more familiar with how fail2ban works give me a hand?  These
> > are the ipv4 addresses of bingbot, semrush, yandex etc etc that are
> > DDOSing me by repeatedly downloading my whole site and using up 100%
> > of my upload bandwidth.
>
> Not sure that fail2ban is the best tool for the job. Where you already
> have a list of IPs that you want to block why not just directly create
> the iptables rules?

just did that, got most of them but semrush apparently has fallback addys 
to use.  But I'm no longer being DDOSed, which was the point.  Thanks.

Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-11-09 Thread Gene Heskett
On Saturday 09 November 2019 02:49:16 mett wrote:

> On 2019年11月9日 16:30:57 JST, Gene Heskett  wrote:
> >I have a list of ipv4's I want fail2ban to block. But amongst the
> >numerous subdirs for fail2ban, I cannot find one that looks suitable
> > to
> >
> >put this list of addresses in so they are blocked forever.  Can
> > someone more familiar with how fail2ban works give me a hand?  These
> > are the ipv4 addresses of bingbot, semrush, yandex etc etc that are
> > DDOSing me by repeatedly downloading my whole site and using up 100%
> > of my upload bandwidth.
> >
> >Thanks all.
> >
> >Cheers, Gene Heskett
> >--
> >"There are four boxes to be used in defense of liberty:
> > soap, ballot, jury, and ammo. Please use in that order."
> >-Ed Howdershelt (Author)
> >If we desire respect for the law, we must first make the law
> >respectable.
> > - Louis D. Brandeis
> >Genes Web page 
>
> Hi,
>
> In this case, better to use iptables
> directly:
>
> iptables -I INPUT 14 -s IP.ADD.RE.SS -j DROP
root@coyote:action.d$ iptables -I INPUT 14 -s 73.229.203.175 -j DROP
doesn't work gets:
iptables: Index of insertion too big.  Even as low as 8

> -where I is for "Insert"
> -14 is the line nber of insertion
> -where s is for "source"
> -where j is for "jump to"
> -also, u can check current table
>  with line-number by issuing:
>  iptables -L -nv --line-numbers
>
returns:
root@coyote:action.d$ iptables -L -nv --line-numbers
Chain INPUT (policy ACCEPT 15M packets, 186G bytes)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 f2b-sshd   tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            multiport dports 22

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 14M packets, 182G bytes)
num   pkts bytes target     prot opt in     out     source               destination

Chain f2b-sshd (1 references)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

> u can even script it for availability
> across reboot;

That was automatic the last time I actually used it.

> by the way
> depending debian version,
> iptables might have been
> replaced by nft.

Stretch, still iptables.
And I got it by starting at 2.
>
> hth!
root@coyote:action.d$ iptables -L -nv --line-numbers
Chain INPUT (policy ACCEPT 32 packets, 3143 bytes)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 f2b-sshd   tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            multiport dports 22
2        0     0 DROP       all  --  *      *       73.229.203.175       0.0.0.0/0
3        0     0 DROP       all  --  *      *       77.88.5.200          0.0.0.0/0
4        0     0 DROP       all  --  *      *       66.249.64.226        0.0.0.0/0
5        0     0 DROP       all  --  *      *       40.77.167.82         0.0.0.0/0
6        0     0 DROP       all  --  *      *       111.225.149.199      0.0.0.0/0
7        0     0 DROP       all  --  *      *       40.77.167.142        0.0.0.0/0
8        0     0 DROP       all  --  *      *       220.243.136.25       0.0.0.0/0
9        0     0 DROP       all  --  *      *       46.229.168.146       0.0.0.0/0
10       0     0 DROP       all  --  *      *       141.8.143.160        0.0.0.0/0

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 28 packets, 1939 bytes)
num   pkts bytes target     prot opt in     out     source               destination

Chain f2b-sshd (1 references)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

That's not all of them, but it's a good start, and I can get lots more ip's from 
the logs.

Thanks a bunch. Now maybe folks interested in running linuxcnc on an rpi4
can get to a preempt-rt kernel or the linuxcnc stuff to run their machinery with.

One last question: does this take the ad.dr.ess.es/24 format? If so, I can block 4 of
the semrush bots in one swell foop that way.

Thanks a bunch, we got most of them in 10 new lines.

Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: fail2ban for apache2

2019-11-08 Thread mett
On 2019年11月9日 16:30:57 JST, Gene Heskett  wrote:
>I have a list of ipv4's I want fail2ban to block. But amongst the 
>numerous subdirs for fail2ban, I cannot find one that looks suitable to
>
>put this list of addresses in so they are blocked forever.  Can someone
>more familiar with how fail2ban works give me a hand?  These are the 
>ipv4 addresses of bingbot, semrush, yandex etc etc that are DDOSing me 
>by repeatedly downloading my whole site and using up 100% of my upload 
>bandwidth.
>
>Thanks all.
>
>Cheers, Gene Heskett
>-- 
>"There are four boxes to be used in defense of liberty:
> soap, ballot, jury, and ammo. Please use in that order."
>-Ed Howdershelt (Author)
>If we desire respect for the law, we must first make the law
>respectable.
> - Louis D. Brandeis
>Genes Web page 

Hi,

In this case, better to use iptables
directly:

iptables -I INPUT 14 -s IP.ADD.RE.SS -j DROP 

-where I is for "Insert"
-14 is the line number of insertion
-where s is for "source"
-where j is for "jump to"
-also, u can check current table 
 with line-number by issuing:
 iptables -L -nv --line-numbers

u can even script it for availability
across reboot;
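
one rough way on Debian with ifupdown (a sketch; the paths are just
conventions, pick your own):

  # once the rules look right, dump them:
  mkdir -p /etc/iptables
  iptables-save > /etc/iptables/rules.v4

  # and reload them before the interface comes up:
  cat > /etc/network/if-pre-up.d/iptables <<'EOF'
  #!/bin/sh
  iptables-restore < /etc/iptables/rules.v4
  EOF
  chmod +x /etc/network/if-pre-up.d/iptables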

by the way
depending debian version,
iptables might have been
replaced by nft.

hth!

Re: fail2ban for apache2

2019-11-08 Thread darb
* Gene Heskett wrote:
> I have a list of ipv4's I want fail2ban to block. But amongst the 
> numerous subdirs for fail2ban, I cannot find one that looks suitable to 
> put this list of addresses in so they are blocked forever.  Can someone 
> more familiar with how fail2ban works give me a hand?  These are the 
> ipv4 addresses of bingbot, semrush, yandex etc etc that are DDOSing me 
> by repeatedly downloading my whole site and using up 100% of my upload 
> bandwidth.
 
Not sure that fail2ban is the best tool for the job. Where you already
have a list of IPs that you want to block, why not just directly create
the iptables rules?




fail2ban for apache2

2019-11-08 Thread Gene Heskett
I have a list of ipv4's I want fail2ban to block. But amongst the 
numerous subdirs for fail2ban, I cannot find one that looks suitable to 
put this list of addresses in so they are blocked forever.  Can someone 
more familiar with how fail2ban works give me a hand?  These are the 
ipv4 addresses of bingbot, semrush, yandex etc etc that are DDOSing me 
by repeatedly downloading my whole site and using up 100% of my upload 
bandwidth.

Thanks all.

Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page