Re: [Simple-evcorr-users] 20th birthday of SEC

2021-03-26 Thread MILLS, ROCKY
Wow! 20 years of SEC!

Superb lightweight tool suitable for many applications and excellent support 
via this user group.

And each year SEC continues to evolve with new capabilities.


Thank you very much Risto!


And many thanks, too, to those who have participated in this SEC user group 
across the years.

High Regards,
Rocky

-Original Message-
From: Brian Parent via Simple-evcorr-users 
 
Sent: Tuesday, March 23, 2021 11:38 AM
To: Frazier, Jon 
Cc: simple-evcorr-users@lists.sourceforge.net
Subject: Re: [Simple-evcorr-users] 20th birthday of SEC

Yes, thanks Risto, and everyone for helping make/keep this super valuable and 
powerful software available.

I've been using it at UCSD for nearly 20 years to detect brute force ssh 
scanning, and feed that info to the security folks to facilitate automated 
campus edge blocking.

Re:
> From: "Frazier, Jon" 
> Date: Tue, 23 Mar 2021 14:34:52 +
> Subject: Re: [Simple-evcorr-users] 20th birthday of SEC
> To: Risto Vaarandi ,
>  "simple-evcorr-users@lists.sourceforge.net"
>  
> 
> Thank you Risto as without you this tool would not be available.
> I know it personally helped me in two different shops to more easily perform 
> certain job requirements.
> Back in the early days there were more discussions on how to use it as well 
> as some benchmarking on number of events over time.
> 
> Regards,
> Jon Frazier
> 
> 
> -Original Message-
> From: Risto Vaarandi 
> Sent: Tuesday, March 23, 2021 9:28 AM
> To: simple-evcorr-users@lists.sourceforge.net
> Subject: [External] [Simple-evcorr-users] 20th birthday of SEC
> 
> 
> hi all,
> 
> on March 23 2001, SEC version 1.0 was released into public domain. I would 
> like to take the opportunity to thank all SEC users for creative discussions 
> in this mailing list during the last two decades. I would also like to thank 
> all people who have suggested new features or supplied software and 
> documentation fixes. I am especially grateful to John P. Rouillard for many 
> design proposals and new ideas that are now part of the SEC code. Finally, my 
> thanks will also go to long term SEC package maintainers for their continuous 
> work during more than a decade -- Jaakko Niemi (Debian and Ubuntu), Stefan 
> Schulze Frielinghaus (RHEL, CentOS and Fedora), Malcolm Lewis (SLE and 
> openSUSE), Okan Demirmen (OpenBSD), and all other package maintainers for 
> platforms I might not be aware of.
> 
> Thank you all!
> 
> risto
> 
> 
> ___
> Simple-evcorr-users mailing list
> Simple-evcorr-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
> 
> 
> 
> 

-- 
Brian Parent
Information Technology Services Department
ITS Computing Infrastructure Operations Group
its-ci-ops-h...@ucsd.edu (team email address for Service Now)
UC San Diego
(858) 534-6090






Re: [Simple-evcorr-users] Accessing A Perl Hash From Pattern1 In Pattern 2

2019-10-04 Thread MILLS, ROCKY
Hi David,

Last night I only tested for a single event with a matching follow-up event.

This morning I tested with two different events for the first pattern and two 
correlating events to follow.  I had been missing the proper event correlation 
keys (desc and desc2) to line up each first event with its matching follow-up 
event.

Here's my update that seems to work (notice the use of $1 and $2 in pattern2 
and the use of %1, %2 and %3 in desc2):

type=pair
ptype=regexp
pattern=User <([^\s]+)>.+IP 
<([0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3})>.+IPv4 Address 
<([0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3})>
desc=Get Name $1 - Global Address $2 - Local Address $3
action=write - tcpsock 10.3.0.85:514 LzEC VPN Address Mapping - User="$1" - 
Global Address ="$2" - Local Address = "$3"%{.nl}
ptype2=regexp
pattern2=Username = $1.+IP = $2.+Duration: 
([0-9]{1,2}h:[0-9]{1,2}m:[0-9]{1,2}s).+xmt: ([0-9]+).+rcv: ([0-9]+)
desc2=Name %1 - Global Address %2 - Add Local Address To Disconnect Message %3
action2=write - tcpsock 10.3.0.85:514 LzEC VPN Disconnect - User="%1" Global 
Address="%2" Local Address="%3" Duration="$1" Xmit Bytes="$2 Rcv 
Bytes="$3"%{.nl}

If you go this route, you'll need to test it further, since once again I only 
tested with a minimal set of test data of my own concoction.

Regards,
Rock

From: MILLS, ROCKY
Sent: Thursday, October 03, 2019 6:50 PM
To: 'simple-evcorr-users@lists.sourceforge.net' 

Subject: Re: [Simple-evcorr-users] Accessing A Perl Hash From Pattern1 In 
Pattern 2

Hi David,

Here's your same rule and same regular expressions using ptype=regexp instead 
of using perlfunc:

type=pair
ptype=regexp
pattern=User <([^\s]+)>.+IP 
<([0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3})>.+IPv4 Address 
<([0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3})>
desc=Get Name - Global Address - Local Address
action=tcpsock 10.3.0.85:514 LzEC VPN Address Mapping - User="$1" - Global 
Address ="$2" - Local Address = "$3"%{.nl};
ptype2=regexp
pattern2=Username = ($1).+IP = ($2).+Duration: 
([0-9]{1,2}h:[0-9]{1,2}m:[0-9]{1,2}s).+xmt: ([0-9]+).+rcv: ([0-9]+)
desc2=Add Local Address To Disconnect Message
action2=tcpsock 10.3.0.85:514 LzEC VPN Disconnect - User="$1" Global 
Address="$2" Local Address="%3" Duration="$3" Xmit Bytes="$4 Rcv 
Bytes="$5"%{.nl};

Notice action2 with local address from the first pattern as %3, and for the 
second pattern, $3 is used for Duration.
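For illustration, the same two-stage matching can be sketched outside SEC 
(Python here, with made-up field values; SEC substitutes the raw $-values 
from the first match into pattern2 before matching, whereas the re.escape 
below is just a defensive touch for this sketch):

```python
import re

# Event 1 (login) and event 2 (logout) -- hypothetical sample lines.
login = 'User <alice> ... IP <198.51.100.7> ... IPv4 Address <10.1.2.3>'
logout = 'Username = alice ... IP = 198.51.100.7 ... Duration: 0h:03m:07s'

# pattern: capture user, global address, local address ($1, $2, $3).
m1 = re.search(r'User <(\S+)>.+IP <([\d.]+)>.+IPv4 Address <([\d.]+)>', login)
user, global_addr, local_addr = m1.groups()  # later usable as %1, %2, %3

# pattern2: the first match's $1/$2 are substituted in as text, so the
# second event only matches for the same user and global address.
p2 = (rf'Username = {re.escape(user)}.+IP = {re.escape(global_addr)}'
      rf'.+Duration: (\S+)')
m2 = re.search(p2, logout)
duration = m2.group(1)  # pattern2's own $1

print(user, global_addr, local_addr, duration)
```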

Regards,
Rock

From: David Thomas [mailto:dtho...@kwiktrip.com]
Sent: Thursday, October 03, 2019 3:35 PM
To: simple-evcorr-users@lists.sourceforge.net
Subject: [Simple-evcorr-users] Accessing A Perl Hash From Pattern1 In Pattern 2

I'm running into an issue with a correlation I'm trying to implement and I'm 
hoping you can help.

Event 1 happens when a user logs into a vpn.  It has the user's name the global 
address and the local address assigned by the vpn.
Event 2 happens when the user logs off the vpn.  It has the user's name, the 
global address, the duration and amount of traffic.

My objective is to get the local address from event 1 and combine it with the 
information from event 2.

I'm using a hash to get the name and both addresses from event 1.  Then in 
pattern 2 I reference that to see if the user name and global address match and 
add the local address from the hash.  What I'm trying now is below.

I'm getting messages from action2 tcp sock so it seems like I'm matching the 
pattern but the values of the hash keys that come from pattern 1 are empty.

Here is an example of what I'm getting:
VPN Disconnect - User="" Global Address="" Local Address="" 
Duration="0h:03m:07s" Xmit Bytes="1689622 Rcv Bytes="34370"

Here is the .sec file I'm currently using.  I'm hoping someone can point out 
what I'm doing wrong.  Thanks!

type=pair
ptype=PerlFunc
pattern=sub { my(%var); \
if ($_[0] !~ /User <([^\s]+)>.+IP 
<([0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3})>.+IPv4 Address 
<([0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3})>/) { return 0; } \
$var{"user"} = $1; \
$var{"global_address"} = $2; \
$var{"local_address"} = $3; \
return \%var; }
desc=Get Name - Global Address - Local Address
action=tcpsock 10.3.0.85:514 LzEC VPN Address Mapping - User="$+{user}" - 
Global Address ="$+{global_address}" - Local Address = 
"$+{local_address}"%{.nl};
ptype2=PerlFunc
pattern2=sub { my(%var); \
if ($_[0] !~ /Username = $+{user}.+IP = $+{global_address}.+Duration: 
([0-9]{1,2}h:[0-9]{1,2}m

Re: [Simple-evcorr-users] Accessing A Perl Hash From Pattern1 In Pattern 2

2019-10-03 Thread MILLS, ROCKY
Hi David,

Here's your same rule and same regular expressions using ptype=regexp instead 
of using perlfunc:

type=pair
ptype=regexp
pattern=User <([^\s]+)>.+IP 
<([0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3})>.+IPv4 Address 
<([0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3})>
desc=Get Name - Global Address - Local Address
action=tcpsock 10.3.0.85:514 LzEC VPN Address Mapping - User="$1" - Global 
Address ="$2" - Local Address = "$3"%{.nl};
ptype2=regexp
pattern2=Username = ($1).+IP = ($2).+Duration: 
([0-9]{1,2}h:[0-9]{1,2}m:[0-9]{1,2}s).+xmt: ([0-9]+).+rcv: ([0-9]+)
desc2=Add Local Address To Disconnect Message
action2=tcpsock 10.3.0.85:514 LzEC VPN Disconnect - User="$1" Global 
Address="$2" Local Address="%3" Duration="$3" Xmit Bytes="$4 Rcv 
Bytes="$5"%{.nl};

Notice action2 with local address from the first pattern as %3, and for the 
second pattern, $3 is used for Duration.

Regards,
Rock

From: David Thomas [mailto:dtho...@kwiktrip.com]
Sent: Thursday, October 03, 2019 3:35 PM
To: simple-evcorr-users@lists.sourceforge.net
Subject: [Simple-evcorr-users] Accessing A Perl Hash From Pattern1 In Pattern 2

I'm running into an issue with a correlation I'm trying to implement and I'm 
hoping you can help.

Event 1 happens when a user logs into a vpn.  It has the user's name the global 
address and the local address assigned by the vpn.
Event 2 happens when the user logs off the vpn.  It has the user's name, the 
global address, the duration and amount of traffic.

My objective is to get the local address from event 1 and combine it with the 
information from event 2.

I'm using a hash to get the name and both addresses from event 1.  Then in 
pattern 2 I reference that to see if the user name and global address match and 
add the local address from the hash.  What I'm trying now is below.

I'm getting messages from action2 tcp sock so it seems like I'm matching the 
pattern but the values of the hash keys that come from pattern 1 are empty.

Here is an example of what I'm getting:
VPN Disconnect - User="" Global Address="" Local Address="" 
Duration="0h:03m:07s" Xmit Bytes="1689622 Rcv Bytes="34370"

Here is the .sec file I'm currently using.  I'm hoping someone can point out 
what I'm doing wrong.  Thanks!

type=pair
ptype=PerlFunc
pattern=sub { my(%var); \
if ($_[0] !~ /User <([^\s]+)>.+IP 
<([0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3})>.+IPv4 Address 
<([0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3})>/) { return 0; } \
$var{"user"} = $1; \
$var{"global_address"} = $2; \
$var{"local_address"} = $3; \
return \%var; }
desc=Get Name - Global Address - Local Address
action=tcpsock 10.3.0.85:514 LzEC VPN Address Mapping - User="$+{user}" - 
Global Address ="$+{global_address}" - Local Address = 
"$+{local_address}"%{.nl};
ptype2=PerlFunc
pattern2=sub { my(%var); \
if ($_[0] !~ /Username = $+{user}.+IP = $+{global_address}.+Duration: 
([0-9]{1,2}h:[0-9]{1,2}m:[0-9]{1,2}s).+xmt: ([0-9]+).+rcv: ([0-9]+)/) { return 
0; } \
$var{"duration"} = $1; \
$var{"xmit_bytes"} = $2; \
$var{"rcv_bytes"} = $3; \
return \%var; }
desc2=Add Local Address To Disconnect Message
action2=tcpsock 10.3.0.85:514 LzEC VPN Disconnect - User="$+{user}" Global 
Address="$+{global_address}" Local Address="$+{local_address}" 
Duration="$+{duration}" Xmit Bytes="$+{xmit_bytes} Rcv 
Bytes="$+{rcv_bytes}"%{.nl};




Re: [Simple-evcorr-users] user poll: changing default values for some command line options

2015-01-19 Thread MILLS, ROCKY
If bufsize=1 becomes the default, would SEC print a warning or an error 
message at rule load time for each rule that uses pattern types like 
RegExp3, NRegExp2, and PerlFunc5?  If so, existing rules could be checked 
in advance for any bufsize=1 issues by running SEC with --testonly and 
searching its output for the related warning messages.
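If such warnings were printed, the pre-check could look something like this 
(hypothetical file names, and the exact warning wording is guesswork on my 
part):

```
sec -conf=rules.sec -input=/var/log/messages -testonly 2>&1 | grep -i bufsize
```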

Regards,
Rock

-Original Message-
From: Risto Vaarandi [mailto:risto.vaara...@seb.ee] 
Sent: Monday, January 19, 2015 8:24 AM
To: simple-evcorr-users@lists.sourceforge.net
Subject: [Simple-evcorr-users] user poll: changing default values for some 
command line options

Hi all,

I am currently working on the 2.7.7 version, and a recent e-mail exchange with 
one of the users has inspired me to think about changing default values for 
--bufsize and --jointbuf/--nojointbuf options.

Currently, the default for --bufsize is 10, which means that SEC keeps the last 
10 lines from input sources in its input buffer, in order to facilitate 
multiline matching. However, many rulesets are written for processing 
single-line events (e.g., from syslog log files), and rulesets for multiline 
events are clearly a minority. In the current pattern matching routines, all of 
the code is written in a generic way for both the single-line and multi-line 
case. Nevertheless, if bufsize=1 were the default, some of the code for the 
single-line case could be factored out and written more efficiently, which 
would allow for some performance gains in the single-line scenario. The 
downside of changing the default from bufsize=10 to bufsize=1 would be the need 
to set --bufsize explicitly on the command line, in order to make pattern types 
like RegExp3, NRegExp2, and PerlFunc5 work. So far, there has rarely been a 
need for this, since --bufsize=10 has been sufficient for most cases.

Also, SEC currently assumes the --jointbuf option by default, which means that 
in the case of multi-line matching all events are stored in the same input 
buffer. Nevertheless, in this case --nojointbuf would make more sense, since it 
creates a separate buffer for each input source, allowing multiline patterns to 
work on data from one source only. And since with bufsize=1 there is no 
difference between --jointbuf and --nojointbuf, the --nojointbuf option would 
be a more reasonable default.
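If these defaults changed, a ruleset relying on multiline matching across a 
joint buffer would need the old behaviour restored explicitly on the command 
line, along the lines of (hypothetical paths, option spelling per the sec(1) 
man page):

```
sec -conf=multiline.sec -input=/var/log/app.log -bufsize=10 -jointbuf
```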

To summarize, I would like to hear user opinions on these matters, and whether 
it would make sense to you to change default values for these command line 
options.

Best regards,
risto




Re: [Simple-evcorr-users] how to get pattern variable $1 to action ?

2014-11-21 Thread MILLS, ROCKY
Hi Andrew,

You can use the 'eval' action to reformat the $1 timestamp.  It's the same Perl 
code, except that you need %% to escape the % of %month_hash:

eval %time ( my $str = $1;\
     my @months=('jan','feb','mar','apr','may','jun','jul','aug','sep','oct','nov','dec');\
     my ($day,$mon,$date,$year,$time) = split(' ',lc($str));\
     my %%month_hash;\
     @month_hash{@months} = (1 .. 12);\
     return "$year-$month_hash{$mon}-$date $time";\
   )
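For a quick sanity check of the conversion logic outside SEC, here is the same 
idea as a stand-alone sketch (Python purely for illustration; like the Perl 
above, it does not zero-pad single-digit months or days):

```python
def convert(ts: str) -> str:
    # "Fri Nov 21 2014 15:04:32" -> "2014-11-21 15:04:32"
    months = ['jan', 'feb', 'mar', 'apr', 'may', 'jun',
              'jul', 'aug', 'sep', 'oct', 'nov', 'dec']
    day, mon, date, year, time = ts.lower().split()
    return f"{year}-{months.index(mon) + 1}-{date} {time}"

print(convert("Fri Nov 21 2014 15:04:32"))  # 2014-11-21 15:04:32
```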

Regards,
Rock

From: andrewarnier [mailto:andrewarn...@gmail.com]
Sent: Friday, November 21, 2014 1:26 AM
To: simple-evcorr-users@lists.sourceforge.net
Subject: [Simple-evcorr-users] how to get pattern variable $1 to action ?

Hi all,
I want to get the trap time, but the trap time format is "Fri Nov 21 2014 
15:04:32".  How can I change the format to "2014-11-21 15:04:32" in my Single 
rule?

I tried to convert the datetime format in my SEC rule, but my rule action can't 
get the variable $1.
Does anyone know what's wrong with my rule, and how to fix it?



type=Single
ptype=Regexp
pattern=(\S+) .1.3.6.1.4.1.3607.2.20.0.430 192.168.11.15 Loss Of Signal in 
(\S+) \(criticalServiceAffecting\),ifIndex=(.+)
desc= CA -15600 Loss of signal events for interface $2($3)
action=lcall %time - ( sub { my $str = '$1';\
my @months 
=('jan','feb','mar','apr','may','jun','jul','aug','sep','oct','nov','dec');\
my ($day,$mon,$date,$year,$time) = split(' ',lc($str));\
my %month_hash;\
@month_hash{@months} = (1 .. 12);\
return $year-$month_hash{$mon}-$date $time; } );shellcmd 
/home/andrew/code/sendmail.sh Loss Of Signal CA-15600 Loss of signal events 
for interface $2($3) %time




cheers,
Andrew


Re: [Simple-evcorr-users] Add new input files on the fly

2014-08-20 Thread MILLS, ROCKY
Hi Natalia,

I recently used these two rules to add a new daily input file (e.g. 
/myappx/log/appx.log.20140820) by creating a link to the daily created log file:

type=Single
continue=dontcont
ptype=regexp
pattern=SEC_STARTUP
desc=$0
action=eval %logfileprefix (return /myappx/log/appx.log;); \
   shellcmd ln -sf %logfileprefix.`date +%%Y%%m%%d` /tmp/mylink-appx.log

type=Calendar
time=0 0 * * *
desc=set up a link for new log file
action=eval %logfileprefix (return /myappx/log/appx.log;); \
   shellcmd ln -sf %logfileprefix.`date +%%Y%%m%%d` /tmp/mylink-appx.log

I used the -intevents arg upon SEC startup with -input /tmp/mylink-appx.log.

Regards,
Rock

From: Natalia Iglesias [mailto:nigles...@lookwisesolutions.com]
Sent: Wednesday, August 20, 2014 3:24 AM
To: simple-evcorr-users@lists.sourceforge.net; 'Risto Vaarandi'
Subject: [Simple-evcorr-users] Add new input files on the fly

Hello everyone,
We are experiencing the following problem: we start SEC once a day at 00:00 
hours, so all files created by that point will be considered as input files for 
SEC rules. However, it is possible that new files are created later on during 
the day. How can we let SEC automatically include them for rule processing 
without stopping/starting SEC (which would have the side effect that previously 
processed input files would be processed again)?

Thanks in advance!

Best regards,

Natalia Iglesias


Re: [Simple-evcorr-users] Nagios and SEC

2014-02-26 Thread MILLS, ROCKY
Hi John,

This reply is not related to your notes below but I am curious about how (and 
why) you're using SEC with Nagios.  Could you elaborate at a high level?  I'm 
not very familiar with Nagios but from my reading it contains a lot of 
functionality.  Doesn't Nagios support SEC-like functionality?

Regards,
Rock

-Original Message-
From: John P. Rouillard [mailto:rou...@cs.umb.edu] 
Sent: Monday, February 24, 2014 7:03 PM
To: simple-evcorr-users@lists.sourceforge.net
Subject: [Simple-evcorr-users] Embedding ; inside of a pipe's string parameter


Hi all:

I am trying to create a passive nagios event. To do this I am using:

   action=pipe '[%u] PROCESS_SERVICE_CHECK_RESULT;myhost;SyslogInfo;1;message %s' \
          ssh nagios use_forced_command

This fails to pass syntax check (sec -testonly). I also tried:

   action=pipe ('[%u] PROCESS_SERVICE_CHECK_RESULT;myhost;SyslogInfo;1;message %s') \
          ssh nagios use_forced_command

and this failed as well. This seems to work:

   action=pipe '([%u] PROCESS_SERVICE_CHECK_RESULT;myhost;SyslogInfo;1;message %s)' \
          ssh nagios use_forced_command

I think it would be worth mentioning this in the pipe action
documentation since the normal rule of wrapping the argument in ()'s
doesn't seem to hold.

--
-- rouilj
John Rouillard
===
My employers don't acknowledge my existence much less my opinions.




Re: [Simple-evcorr-users] Nagios and SEC

2014-02-26 Thread MILLS, ROCKY
John,

Thanks for the detailed info.

I work in a group that has implemented companywide (internal/homegrown, 
non-Nagios, non-SEC) monitoring that is our *NX monitor standard going forward. 
 Before this internal tool (standard) was promoted in our group I briefly 
researched using Nagios as a monitoring framework (as some of our other 
departments do use), but it was too complicated for what we wanted and it 
seemed to require too much training and expertise to maintain.  So we use SEC 
for specialized monitoring (outside of our standard monitoring).  Our standard 
monitoring tools simply don't have the capabilities of what we can easily do 
with SEC in a lightweight and low cost development effort.

Every now and then I think Nagios is going to reach the point where it becomes 
easy to configure and can cost-effectively fulfill complicated and distributed 
monitoring requirements without requiring some Nagios expert(s).  But I don't 
know enough about it to decide or become an advocate for Nagios.  So far we're 
good with what we have (given SEC as a supplement to fill in the gaps).

Seems you use a similar hybrid approach with your monitoring scheme with Nagios 
and SEC.

Appreciatively,
Rock

Aside:  Per the link you provided for the Nagios-SEC manual, it referenced 
Esper as a java based correlation engine.  I've sometimes wondered if there was 
such an event correlation engine that could be used within our java apps to 
handle some more complicated event handling without creating so much code in a 
low cost and lightweight way.  Something high quality yet open-source like SEC. 
 Grin!  You've indirectly given me a lead that I'll check into.  Thanks again.

-Original Message-
From: John P. Rouillard [mailto:rou...@cs.umb.edu] 
Sent: Wednesday, February 26, 2014 3:18 PM
To: simple-evcorr-users@lists.sourceforge.net
Subject: Re: [Simple-evcorr-users] Nagios and SEC


Hi Rocky:

In message 2490B3D57700AD4BA03D09581F3DDFC9014EF4EB@GAALPA1MSGUSR9K.ITServices.sbc.com, MILLS, ROCKY writes:

This reply is not related to your notes below but I am curious about how
(and why) you're using SEC with Nagios.  Could you elaborate at a high
level?  
I'm not very familiar with Nagios but from my reading it contains a 
lot of functionality.  Doesn't Nagios support SEC-like functionality?

In the case you cite, I am doing correlation on log events that is far
beyond what nagios can handle (its log analysis plugins are very
simple). I have a SEC daemon running that generates security alerts
into nagios for notification, escalation etc.

Unrelated to the case you cited, I also have an event broker for
nagios 2 that provides better correlation of nagios events including:

  delay clear of an event until you have had 2 or more successful polls
  (nagios can require a certain number of failing events before
  going into a hard state; sec is needed for the reverse.)

  I never had flap detection working quite as I expected in nagios. I
  disabled it and used the SEC integration instead.

  apply different alarming thresholds to a single service depending on:
      time of day,
      other external events
  (nagios can do some of this on its own, but it requires
  rewriting plugins to be time aware, or creating a different
  service for each set of time-dependent thresholds)

  change the severity of an event according to local requirements, or
  specific types of failures in other services. Nagios's whole
  mechanism for suppressing actions (notifications, polling) on a
  dependent service depends on the critical/warning state of the
  main service. Many plugins return critical for multiple states.
  Say I want to suppress a warning on service A when service B
  fails to respond, but not when service B exceeds a critical
  response time. Both of these events are critical to the operation
  staff and should be paged. If the plugin only has the critical
  state to indicate both the failure modes, nagios can't
  differentiate between service B not responding and service B
  exceeding a threshold.

  Using SEC I can differentiate between the two states and get the
  nuanced dependency I want. If I wanted to do that in pure nagios,
  I would need to run two versions of service check B (B1 and B2)
  where A depends only on B2, which will generate a critical only on
  threshold exceeded and never on not responding. So I need to
  write more plugins, generate more load on the services (since
  both B1 and B2 will have to poll it) etc.

  rewrite the output from a plugin into a readable form when it
  recognizes a particular failure mode (without having to mess with
  the plugin, which is a nightmare of spaghetti code). (e.g. when
  ssh host keys change you get a line of @@@, SEC
  rewrites that to "host key has changed" in the nagios interface)

See also:

  https://www.usenix.org/legacy/events/lisa06/wips/rouillard.pdf

  http

Re: [Simple-evcorr-users] Input field within rule definition

2011-04-07 Thread MILLS, ROCKY (ATTSI)
Risto, et al,

Advantages of an input field within rule sets:

1. Reduce or eliminate command line option inputs,
   allowing input sources to be specified within rule sets.
2. Introduce events at specific points into rule sets,
   without needing to set up sometimes complicated dependencies
   (various contexts, jumps and goto's). Perhaps this would simplify
   some rule sets and be more directly defined than indirect contexts.
3. Allow for a service-defined endpoint. This provides yet another
   mechanism to introduce and control the flow of events (where a rule's
   context would determine whether or not to even read from the input --
   i.e. programmatic input control).
4. Eliminate extraneous perl code to extract the input file/source name.
5. One could change the rule to point to another input and then reload
   rule sets per SEC signals; a new input could be opened after closing
   the old input. This could be useful for several reasons, including
   changing input streams, input-rule versioning, and introducing new
   inputs and new rules without losing rule states.

Since I have no immediate needs (SEC currently supports our input needs very
well), I have not really thought this through, and there may be other
advantages not noted.  It's simply been on my mind to discuss with you and
others.

Regards,
Rock

-Original Message-
From: Risto Vaarandi [mailto:risto.vaara...@seb.ee] 
Sent: Thursday, April 07, 2011 4:27 AM
To: simple-evcorr-users@lists.sourceforge.net
Subject: Re: [Simple-evcorr-users] Input field within rule definition

On 04/05/2011 11:54 PM, MILLS, ROCKY (ATTSI) wrote:
 For discussion only -- not an immediate need to be addressed.

 ~


Well, the 'input' field looks like a synonym to the file context to 
me... Maybe I haven't got all the details for the 'input' field, though.

However, there is one danger related to the use of 'input' that file 
contexts don't involve -- if the locations/names of input files change, 
you have to modify many rules of the rule base. In contrast, with file 
contexts there is no such issue, since the context name does not reflect 
the physical location of the input in the file system.

But again, maybe I haven't picked up every detail of your proposal 
properly... What other benefits would 'input' have over file contexts? I 
have to admit that I'd personally rely on Jump rules and 'goto' continue 
statements.

kind regards,
risto

 For some rule sets I've defined context per multiple files per the
SEC
 command line options.  E.g. -input=/app/myapp/log/my.log=mylog.  In
this
 way I can specify rules per specific files by simply using the defined
 context such as context=mylog.  This works well but every rule
preceding
 the mylog rule will attempt to process input from all input sources
 and I have some rather noisy logs to monitor.  And I know the jump and
 goto rules could also be used to address this.

 As an optimization (I'm guessing but wouldn't really know enough about
 the SEC internals without looking), I'm thinking a rule could have an
 input field such as input=/app/myapp/log/my.log or even non-file types
 of inputs.

 The obvious advantage is that specific inputs would be separated into
 their own streams and target an initial rule.  And given that TakeNext
 was assigned to the rule, I would expect the following rule to process
 the input even if it doesn't have an option= defined.  This would
 allow inputs to effectively be injected midstream into a ruleset
 depending on where the input was placed.

 Given that two rules have input defined, I would expect the sequence
(as
 currently defined) to be followed with the first rule and only passed
 along to the next such rule if TakeNext was defined.  Though I
haven't
 thought this through.

 And of course there are multiple rule files and potential issues and
 decisions that would need to be considered.

 Anyway, just brainstorming and perhaps other folks may recognize some
 other benefits from such an approach.

 Regards,
 Rock








[Simple-evcorr-users] Input field within rule definition

2011-04-05 Thread MILLS, ROCKY (ATTSI)
For discussion only -- not an immediate need to be addressed.

~

For some rule sets I've defined context per multiple files per the SEC
command line options.  E.g. -input=/app/myapp/log/my.log=mylog.  In this
way I can specify rules per specific files by simply using the defined
context such as context=mylog.  This works well but every rule preceding
the mylog rule will attempt to process input from all input sources
and I have some rather noisy logs to monitor.  And I know the jump and
goto rules could also be used to address this.
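
The jump/goto approach can already express most of this; a rough, untested
sketch (the file layout and the "mylog-rules" set name are made up for
illustration):

```
# main rules file: divert lines arriving with the "mylog" input context
# (set up via -input=/app/myapp/log/my.log=mylog) to a dedicated ruleset
type=Jump
context=mylog
ptype=RegExp
pattern=.*
desc=route my.log lines to a dedicated ruleset
cfset=mylog-rules

# top of the mylog rules file: join the set; procallin=no means this file
# sees only events submitted by Jump rules, not the full input stream
type=Options
joincfset=mylog-rules
procallin=no
```

With procallin=no the noisy my.log lines never touch the rules meant for
other inputs, which is close to the per-rule input field sketched above.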

As an optimization (I'm guessing but wouldn't really know enough about
the SEC internals without looking), I'm thinking a rule could have an
input field such as input=/app/myapp/log/my.log or even non-file types
of inputs.

The obvious advantage is that specific inputs would be separated into
their own streams and target an initial rule.  And given that TakeNext
was assigned to the rule, I would expect the following rule to process
the input even if it doesn't have an option= defined.  This would
allow inputs to effectively be injected midstream into a ruleset
depending on where the input was placed.

Given that two rules have input defined, I would expect the sequence (as
currently defined) to be followed with the first rule and only passed
along to the next such rule if TakeNext was defined.  Though I haven't
thought this through.

And of course there are multiple rule files and potential issues and
decisions that would need to be considered.

Anyway, just brainstorming and perhaps other folks may recognize some
other benefits from such an approach.

Regards,
Rock



Re: [Simple-evcorr-users] Invalid SEC context doesn't raise an error

2009-08-12 Thread Mills, Rocky
I vaguely recall that I have one or two multi-word contexts, but it's not
a big deal to change them to single-word contexts.  At first I didn't
understand why those rules worked (since I didn't realize multi-word
contexts were supported), but they clearly did, so I left them alone.
Perhaps I need to review those rules.

I like the idea of printing a deprecated warning identifying such
rules and allowing them to be fixed in a future release.

Regards,
Rock

-Original Message-
From: John P. Rouillard [mailto:rou...@cs.umb.edu] 
Sent: Wednesday, August 12, 2009 10:31 AM
To: Risto Vaarandi
Cc: simple-evcorr-users@lists.sourceforge.net
Subject: Re: [Simple-evcorr-users] Invalid SEC context doesn't raise an
error


In message 450488.31672...@web33002.mail.mud.yahoo.com,
Risto Vaarandi writes:
hi John,

Hi Risto:

after giving it some thought, I realized why the parser currently
allows for multiword context names -- although context actions only
accept a single word name (largely for parsing purposes), the name
could become a multiword name after variable substitution (as you
have mentioned). In order to allow for matching multiword names with
context expressions, the parser currently supports them (for example,
someone might want to create a constant for matching a particular
multiword name).

Right, but the constant would be a single word from the parser's
perspective. It expands to multiple words, but the specification is a
single word.

In order to recall that, I had to look at the code
for some time :) I have to admit, though, that I have never used
multiword names myself.

I claim it's a historical artifact. How about this:

  have the parser recognize a multi-whitespace-delimited word
  and emit a WARNING: Multi-word contexts are deprecated and
  will be removed in a future release.

This should allow SEC upgrades to occur seamlessly and will allow
people using multiword context specification to convert to single word
names by adding _'s or whatever. Like you, I have never used
multi-word context names because a context named that way can't be
used with any of the context actions.

Once the detection/warning is in place, the decision on what to do next:

  remove all support for multiword contexts in context expressions
(since multiword contexts are invalid in all other places)

  create a newer parser and keep the current parser, selecting the
 context parser from the command line with the newer more strict
 parser as the default

  ...

can be deferred until we see how many people complain about the
deprecation warning 8-). Just having the warning would have saved me
hours of effort.

For the purpose of my class, a context specification will always be a
single word 8-).

--
-- rouilj
John Rouillard

===
My employers don't acknowledge my existence much less my opinions.





*

The information transmitted is intended only for the person or entity to which 
it is addressed and may contain confidential, proprietary, and/or privileged 
material. Any review, retransmission, dissemination or other use of, or taking 
of any action in reliance upon this information by persons or entities other 
than the intended recipient is prohibited. If you received this in error, 
please contact the sender and delete the material from all computers. GA623





Re: [Simple-evcorr-users] Logpp and SEC input sources

2009-04-23 Thread Mills, Rocky
Risto,

The numerous types of logs currently needing a single probe are created and 
written to by log4j, so, as you noted, to consolidate the many logs into a 
single file I could configure log4j to write to a single log file.  Or we could 
also publish disparate events to a single JMS queue/topic.  I can then monitor a 
single data stream rather than deal with monitoring so many log files.  The 
problem is that some folks prefer to have a separate log file for the many 
different parts of the application.  We'll figure it out.  Logpp still may be a 
pre-processing monitoring option, if we leave it as is.  We'll see how it ends 
up.

As for using logpp with multiple ssh tail files, we're not using syslog or any 
other logger like log4j for those logs.  And the files are located across 
several servers too.  I think this is a common problem where people often pull 
up multiple xterms and tail -f separate files grepping for specific sets of 
messages.  I was curious if logpp could support input from something other than 
files for a simple central grep of sorts.  If it turned out to be robust, it 
could then feed SEC to generate notifications as problems are detected.

Thanks for the feedback.

Regards,
Rock

-Original Message-

From: Risto Vaarandi [mailto:rvaara...@yahoo.com] 
Sent: Thursday, April 23, 2009 3:23 PM
To: simple-evcorr-users@lists.sourceforge.net; Mills, Rocky
Subject: Re: [Simple-evcorr-users] Logpp and SEC input sources



 From: Mills, Rocky rx4...@att.com
 Subject: [Simple-evcorr-users] Logpp and SEC input sources
 To: simple-evcorr-users@lists.sourceforge.net
 Date: Wednesday, April 22, 2009, 2:23 AM
 Risto, Anyone,
 
 I was considering counting various string matches using SEC across
 numerous (over potentially 20) logs simultaneously.  There should be few
 string matches, but when combined at peak times there could be perhaps
 several thousand extraneous lines to parse per second.  SEC may be able
 to handle it but I recall you noting another utility logpp.  I
 reviewed its man page and it seems to be a good fit to more efficiently
 trim the logs before evaluating them with SEC.

 From the logpp output I need to know its input source.  I haven't tried
 it but I'm thinking there is no glob-like loading of a dynamic list of
 log files via logpp config like this:

 input app-log-input {
   file /app/log/*.log
 }

 So I'm thinking I could build the input part of my logpp config file
 dynamically (before logpp startup) and then load it with each file
 explicitly specified like this:

 input app-logs-input {
   file /app/log/A.log
   file /app/log/B.log
   ...and so on
 }

 My filter's template could then prefix each log line with the filename
 like this:

 filter app-logs-filter {
   regexp something X
   regexp another thing Y
   template filename $~: $0
 }

 Where $~ is the filename and $0 is the log's line of text.

 I could then use SEC to extract the filename from each line and setup my
 counts and such using it.

 Any other solution you'd recommend?

how are these log files created? If they are created by syslogd/syslog-ng, then 
maybe you could set up a single file instead of many, and process this file 
with logpp.

 
 
 Another logpp question (with a similar need for SEC to determine the
 input sources):

 Any preferred/simple way to setup logpp to read multiple ssh inputs?
 For example, could I configure input from ssh m...@serverx tail -f
 /app/log/X.log and ssh m...@servery tail -f /app/log/X.log to be read
 by logpp?  Actually I have potentially 12 separate hosts with ssh inputs
 I'd like to gather into a single event stream to feed SEC such that SEC
 can also extract the input source (hostname in this case).  This is not
 a high volume scenario.  Just curious if logpp could easily consolidate
 distributed logs that could be monitored from a central location.

Have you thought about another scenario -- logpp can also convert non-syslog 
logs into syslog format, and you could have logpp running on 12 hosts for 
sending input events to central host with syslog protocol. If you need to 
encrypt the data exchange, then you could use ssh/stunnel for that. It might be 
somewhat more complex to implement, but the events will be converted to syslog 
format early on and you have the flexibility that comes with syslog-style 
logging.

br,
risto

 
 
 Regards,
 Rock
 
 
 
 

[Simple-evcorr-users] Logpp and SEC input sources

2009-04-21 Thread Mills, Rocky
Risto, Anyone,

I was considering counting various string matches using SEC across
numerous (over potentially 20) logs simultaneously.  There should be few
string matches, but when combined at peak times there could be perhaps
several thousand extraneous lines to parse per second.  SEC may be able
to handle it but I recall you noting another utility logpp.  I
reviewed its man page and it seems to be a good fit to more efficiently
trim the logs before evaluating them with SEC.

From the logpp output I need to know its input source.  I haven't tried
it but I'm thinking there is no glob-like loading of a dynamic list of
log files via logpp config like this:

input app-log-input {
  file /app/log/*.log
}

So I'm thinking I could build the input part of my logpp config file
dynamically (before logpp startup) and then load it with each file
explicitly specified like this:

input app-logs-input {
  file /app/log/A.log
  file /app/log/B.log
  ...and so on
}

My filter's template could then prefix each log line with the filename
like this:

filter app-logs-filter {
  regexp something X
  regexp another thing Y
  template filename $~: $0
}

Where $~ is the filename and $0 is the log's line of text.

I could then use SEC to extract the filename from each line and setup my
counts and such using it.
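
With the filename prefix in place, a SEC counting rule could key on it; a
rough, untested sketch (the pattern text, window, and threshold values are
made up for illustration):

```
# count per-file occurrences of "something X"; $1 captures the file name
# that the logpp template prepended to each line
type=SingleWithThreshold
ptype=RegExp
pattern=^(\S+): .*something X
desc=something X seen in $1
action=write - threshold reached for "something X" in $1
window=60
thresh=10
```

Because desc= includes $1, SEC keeps a separate counting operation per
source file, so one noisy log can't mask another.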

Any other solution you'd recommend?


Another logpp question (with a similar need for SEC to determine the
input sources):

Any preferred/simple way to setup logpp to read multiple ssh inputs?
For example, could I configure input from ssh m...@serverx tail -f
/app/log/X.log and ssh m...@servery tail -f /app/log/X.log to be read
by logpp? Actually I have potentially 12 separate hosts with ssh inputs
I'd like to gather into a single event stream to feed SEC such that SEC
can also extract the input source (hostname in this case).  This is not
a high volume scenario.  Just curious if logpp could easily consolidate
distributed logs that could be monitored from a central location.


Regards,
Rock






Re: [Simple-evcorr-users] detecting LDAP authentication failures (long)

2009-04-15 Thread Mills, Rocky
Instead of adding values to a context you could save the values in a
Perl hash, formatting as you go along.

For example (not tested):

Rule action collecting IP per connection:

  action=eval %ip_msg ($ldap_conn{$1} = "from IP address $2"; return $ldap_conn{$1};)

Rule action collecting UID per connection (notice the concatenation period
before the '=' sign):

  action=eval %uid_msg ($ldap_conn{$1} .= " per user $2"; return $ldap_conn{$1};)

Success rule action:

  action=eval %success_msg (my $msg = "conn=$1 authentication succeeded $ldap_conn{$1}"; delete $ldap_conn{$1}; return $msg;)

Failure rule action:

  action=eval %failure_msg (my $msg = "conn=$1 authentication failed $ldap_conn{$1}"; delete $ldap_conn{$1}; return $msg;)

 

You'd save the preferred log's timestamp somewhere in there. 
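
The same hash-and-emit pattern, transliterated into a standalone Python
sketch against the LDAP access log lines quoted below (the regexes are
illustrative and untested against a real directory server):

```python
import re

conn = {}  # per-connection scratchpad, keyed by conn id (mirrors %ldap_conn)

def process(line):
    """Return a one-line auth event when a user BIND result is seen, else None."""
    # new connection: remember the client IP
    m = re.search(r'conn=(\d+) op=-1 msgId=-1 - .*LDAP connection from (\S+) to', line)
    if m:
        conn[m.group(1)] = {'ip': m.group(2)}
        return None
    # user BIND: remember the uid (the initial anonymous BIND has an empty dn)
    m = re.search(r'conn=(\d+) .* BIND dn=uid=([^,]+),', line)
    if m and m.group(1) in conn:
        conn[m.group(1)]['uid'] = m.group(2)
        return None
    # BIND RESULT (tag=97): emit one line and forget the connection
    m = re.search(r'conn=(\d+) .* RESULT err=(\d+) tag=97', line)
    if m and 'uid' in conn.get(m.group(1), {}):
        info = conn.pop(m.group(1))
        verdict = 'succeeded' if m.group(2) == '0' else 'failed'
        return '[conn=%s] Authentication %s for %s from %s' % (
            m.group(1), verdict, info['uid'], info['ip'])
    return None
```

Deleting the hash entry on the result line plays the same role as the
delete $ldap_conn{$1} above: no per-connection state is left behind.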

 

Regards,

Rock

 

 

From: Don Faulkner [mailto:dfaulk...@pobox.com] 
Sent: Monday, April 13, 2009 12:58 PM
To: Simple-evcorr-users@lists.sourceforge.net
Subject: [Simple-evcorr-users] detecting LDAP authentication failures
(long)

 

Good morning all.

 

I'm working with an older version of an LDAP server that doesn't support
syslog in any form. As a result I'm having to read through the LDAP
server's access logs. I'm trying to detect successful and failed
authentication attempts and then write an event to syslog (so our
central loghost can read it).

 

What I want out are syslog entries that look more or less like this:

 

Mar 7 04:30:50 ldap-server ldap: [conn=14758663] Authentication
succeeded for username1 from 1.1.1.2

Mar 7 04:43:43 ldap-server ldap: [conn=14758706] Authentication failed
for username2 from 1.1.1.3

 

 

Here's my problem. I can find the conn#, the ip, the username, and
detect success/failure. I'm currently doing that by dumping all that
info into a context in NAME=value pairs. To write it out, I've had to
call an external perl script to parse the context dump and return a
reasonable one-line string. There has to be a better way.

 

I'd appreciate any advice.  Below, I've listed a sample success and
failure, as well as the rules I'm currently using.

 

==

 

 

Here's a successful authentication (note that err=0):

[07/Mar/2009:04:31:50 -0600] conn=14758663 op=-1 msgId=-1 - fd=53
slot=53 LDAP connection from 1.1.1.2 to 1.1.1.1

[07/Mar/2009:04:31:50 -0600] conn=14758663 op=0 msgId=1 - BIND dn=
method=128 version=3

[07/Mar/2009:04:31:50 -0600] conn=14758663 op=0 msgId=1 - RESULT err=0
tag=97 nentries=0 etime=0 dn=

[07/Mar/2009:04:31:50 -0600] conn=14758663 op=1 msgId=2 - SRCH
base=ou=myou,o=domain.com scope=2 filter=(uid=username1) attrs=ALL

[07/Mar/2009:04:31:50 -0600] conn=14758663 op=1 msgId=2 - RESULT err=0
tag=101 nentries=1 etime=0

[07/Mar/2009:04:31:50 -0600] conn=14758663 op=2 msgId=3 - ABANDON
targetop=NOTFOUND msgid=2

[07/Mar/2009:04:31:50 -0600] conn=14758663 op=3 msgId=4 - BIND
dn=uid=username1,ou=myou,o=domain.com method=128 version=3

[07/Mar/2009:04:31:50 -0600] conn=14758663 op=3 msgId=4 - RESULT err=0
tag=97 nentries=0 etime=0 dn=uid=username1,ou=myou,o=domain.com

[07/Mar/2009:04:31:50 -0600] conn=14758663 op=4 msgId=5 - UNBIND

[07/Mar/2009:04:31:50 -0600] conn=14758663 op=4 msgId=-1 - closing - U1

[07/Mar/2009:04:31:50 -0600] conn=14758663 op=-1 msgId=-1 - closed.

 

Here's an unsuccessful authentication (note that err=49):

[07/Mar/2009:04:43:43 -0600] conn=14758706 op=-1 msgId=-1 - fd=91
slot=91 LDAP connection from 1.1.1.3 to 1.1.1.1

[07/Mar/2009:04:43:43 -0600] conn=14758706 op=0 msgId=1 - BIND dn=
method=128 version=3

[07/Mar/2009:04:43:43 -0600] conn=14758706 op=0 msgId=1 - RESULT err=0
tag=97 nentries=0 etime=0 dn=

[07/Mar/2009:04:43:43 -0600] conn=14758706 op=1 msgId=2 - SRCH
base=ou=myou,o=domain.com scope=2 filter=(uid=username2) attrs=ALL

[07/Mar/2009:04:43:43 -0600] conn=14758706 op=1 msgId=2 - RESULT err=0
tag=101 nentries=1 etime=0

[07/Mar/2009:04:43:43 -0600] conn=14758706 op=2 msgId=3 - ABANDON
targetop=NOTFOUND msgid=2

[07/Mar/2009:04:43:43 -0600] conn=14758706 op=3 msgId=4 - BIND
dn=uid=username2,ou=myou,o=domain.com method=128 version=3

[07/Mar/2009:04:43:43 -0600] conn=14758706 op=3 msgId=4 - RESULT err=49
tag=97 nentries=0 etime=0

[07/Mar/2009:04:43:43 -0600] conn=14758706 op=4 msgId=5 - UNBIND

[07/Mar/2009:04:43:43 -0600] conn=14758706 op=4 msgId=-1 - closing - U1

[07/Mar/2009:04:43:43 -0600] conn=14758706 op=-1 msgId=-1 - closed.

 

 

I've almost got this. Here's the ruleset so far:

 

# notice the beginning of a connection.

# create a context named for the conn#, add timestamp and source ip.

type=single

continue=takenext

ptype=regexp

pattern=\[([^ ]+) .*\] conn=(\d+) .* LDAP connection from (\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}) to

desc=LDAP session opened from $3

action=create ldap_conn_$2;\

add ldap_conn_$2 LDAP_STAMP=$1;\

add ldap_conn_$2 LDAP_IP=$3; 

 

# notice the bind attempt, add the uid to the context.

type=single

continue=takenext

ptype=regexp

pattern=conn=(\d+) .*BIND 

Re: [Simple-evcorr-users] Q - Post-hoc, non-realtime logfile processing

2009-04-02 Thread Mills, Rocky
Jeroen,

Perhaps some simulation/analysis could be done without modifying SEC
internals but by changing how you input events into SEC and scaling back
the times you specified in your rules.

You could setup a reader-feeder program that reads your logs and feeds
SEC the events with delays/sleeps in between each log line as needed.
In this way I'd think most of your rules should work (except some of
your calendar rules).  Though then your analysis would take as long as the
actual time span the logs cover.  Not sure whether that's acceptable.

Another option, again without modifying SEC internals, is to take your
rules with time-specific properties and make them non-time-sensitive by
removing the time element from the rule (where you can).  Or if you
can't remove the time element you could scale the times down by 10, for
example, so a 300 second value would then be changed to 30 seconds in its
rule and your reader-feeder program would only sleep one tenth of the
time (as described above) between events being sent to SEC.
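
A minimal sketch of such a reader-feeder (the 10x compression factor follows
the scaling example above; in practice the output would be piped into a SEC
instance reading standard input, e.g. with --input=-):

```python
import sys
import time

SCALE = 0.1  # compress time 10x, matching rules whose windows were divided by 10

def feed(lines, gaps, write=sys.stdout.write, sleep=time.sleep):
    """Replay recorded log lines with scaled inter-line delays.

    lines -- log lines to emit (e.g. piped into SEC's stdin)
    gaps  -- original seconds elapsed between consecutive lines
    write/sleep are injectable so the timing logic can be exercised quickly
    """
    for i, line in enumerate(lines):
        if i:
            sleep(gaps[i - 1] * SCALE)
        write(line + "\n")
```

Computing the gaps from the syslog timestamps of consecutive lines is left
out here, but would be the natural way to drive this.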

Perhaps for some of the scenarios you're interested to analyze or
simulate with SEC this may apply.

Regards,
Rock


-Original Message-
From: Jeroen Scheerder [mailto:j...@on2it.net] 
Sent: Tuesday, March 31, 2009 4:51 AM
To: simple-evcorr-users@lists.sourceforge.net
Subject: [Simple-evcorr-users] Q - Post-hoc, non-realtime logfile
processing

Hi,

I'm a relative newcomer to SEC.  I've been exploring it with good  
results so far.

Yet there's one thing.  SEC timestamps the lines it reads with the
current time.  This is excellent for real-time analysis, but for later
analysis that's not so hot.

Syslog files are timestamped, and I'd like to use these timestamps  
instead of $time = time().  Has anybody done this before, and will  
Pair/PairWithWindow work if I modify the read_line function to extract  
timestamps from loglines?

Or is this a Very Bad Idea for some or other reason?


Regards, Jeroen.
-- 
Jeroen Scheerder
ON2IT B.V.
Steenweg 17 B
4181 AJ WAARDENBURG
T: +31 418-653818 | F: +31 418-653716
W: www.on2it.nl   | E: jeroenscheer...@on2it.net


