> -----Original Message-----
> From: Bakken, Luke [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, March 06, 2003 4:47 PM
> To: Michael Weber; [EMAIL PROTECTED]
> Subject: RE: Handling race conditions
> 
> 
> It sounds like you need two things:
> 
> 1. A faster way of storing "seen" IPs.
> 2. A lock mechanism to keep perl processes queued up waiting to write
> new iptables entries.
> 
> Having the perl script write a lock file as it is updating the iptables

I have found that when timing is critical, even a lock file doesn't get
written fast enough: the check-for-the-file-then-create-it sequence is
itself a race.  I usually handle these types of things with a
client-server approach.
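(If you do stay with a lock file, perl's built-in flock takes the lock in
the kernel rather than relying on how fast the file itself gets written.  A
minimal sketch; the lock path is my own example, not anything from the
original script:

```perl
#!/usr/bin/perl
# Minimal flock sketch -- the lock is granted by the kernel, so there is
# no window between checking for a lock file and creating it.  The lock
# path below is an assumption for illustration.
use strict;
use warnings;
use Fcntl qw(:flock);

open(my $lock, '>', '/tmp/blockip.lock') or die "open lock: $!";
flock($lock, LOCK_EX) or die "flock: $!";   # blocks until we hold the lock

# ... check the seen-IP list and run iptables here ...

flock($lock, LOCK_UN);                      # closing the handle also releases
close($lock);
```

Every copy of the script that starts while another holds the lock simply
blocks inside flock() until its turn comes.)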

> should be easy - subsequent scripts can wait to ensure that this file is
> gone before validating their IPs and updating iptables.  Storing the IPs
> for quick retrieval is another thing - it sounds as though you are
> re-reading a log file every time to see which IPs have already been
> blocked.  Solutions to this range from a flat file of unique IP addresses
> (simple), to using a database to store seen IP addresses, to having a
> dedicated daemon process to which you send messages when an IP needs to
> be checked/blocked.

That daemon process is the approach I would go with.  I don't think any
solution that relies on file system performance will work in all cases.
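For the in-memory side of such a daemon, a plain hash is all the "seen IP"
store needs.  A sketch; should_block() is a made-up name, not from any of
the scripts discussed here:

```perl
#!/usr/bin/perl
# Sketch: a long-running daemon can keep the seen IPs in a hash instead
# of re-reading the log file on every hit.  should_block() is a made-up
# name; it returns true only the first time a given IP is offered.
use strict;
use warnings;

my %seen;

sub should_block {
    my ($ip) = @_;
    return !$seen{$ip}++;    # post-increment, so only the first call is true
}
```

Because the daemon handles one request at a time, the test-and-set in
should_block() never races with another copy of itself.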

> 
> Luke
> 
> > I have a mail server with swatch examining the log files, looking for
> > root.exe, /winnt/system32, etc.  The idea is that anyone found scanning
> > for root kits on my mail server gets blocked at the mail server and at
> > the firewall with an iptables command.
> > 
> > What I have is swatch executing a perl script whenever a match is found
> > on a known bad-guy request.  That perl script exec's iptables, but only
> > if the same IP has not already been found in the log file.  That way I
> > don't have a bunch of netfilter table entries with the same IP number.
> > 
> > Here's the problem.  Script kiddies hit the server so fast that the
> > perl script can't decide the IP number is unique, log the entry, and
> > update netfilter before 10 more copies of the script fire off.  I wind
> > up with 10-12 entries in less than a second.
> > 
> > Anyone have a way of quickly determining that another copy of myself
> > is running and I need to shut down?  ps -ax | grep <program-name> is
> > far too slow to react to an attack.

Check out "Network Programming with Perl".  It shows how to set up a simple
client-server system, where your mail server calls the client and the client
contacts the server through a socket.  Even if many clients fire at once, the
server accepts the connections one at a time, so the requests are serialized
and the server has enough time to determine whether the IP is a duplicate.
The clients have to wait their turn for the socket, but in the meantime your
mail server is free to move on to other tasks.
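A minimal sketch of the server half of that idea, using a Unix-domain
socket; the socket path, the run_server name, and the iptables arguments
are all my own illustrations, not code from the book:

```perl
#!/usr/bin/perl
# Server-side sketch: accept() hands us one client at a time, so the
# requests serialize themselves no matter how many swatch-spawned
# clients fire at once.  Socket path and sub name are made up.
use strict;
use warnings;
use IO::Socket::UNIX;

my %seen;    # IPs already blocked during this run

sub run_server {
    my ($path, $max) = @_;       # $max: stop after N clients (for testing)
    unlink $path;
    my $server = IO::Socket::UNIX->new(
        Local  => $path,
        Type   => SOCK_STREAM,
        Listen => 5,
    ) or die "listen: $!";

    my $handled = 0;
    while (my $client = $server->accept) {
        chomp(my $ip = <$client> // '');
        if ($ip ne '' && !$seen{$ip}++) {
            # First sighting: block it.  The real script would run:
            # system('iptables', '-A', 'INPUT', '-s', $ip, '-j', 'DROP');
            print "blocking $ip\n";
        }
        close $client;
        last if defined $max && ++$handled >= $max;
    }
}

# run_server('/tmp/blockip.sock');   # runs forever, serving clients
```

The swatch-side client is then just a connect and a print:
IO::Socket::UNIX->new(Peer => $path, Type => SOCK_STREAM), print the IP,
close.  The client returns quickly, so the mail server isn't held up.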

> > 
> > Or, am I being stupid and missing the easy answer?

I've been doing these kinds of timing things for years, and at least perl
makes the answer easier than C.

-Mark

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
