In message
<cab3_bpmc5elbs9-az_fytb87ykoeq+6sguk4pvv9da9nxq8...@mail.gmail.com>,
"Justin J. Novack" writes:

>Great idea, however, now all 432 ports on my device would send out an email
>on flap, rather than the 60 important ones.  This would be perfect if an
>entire switch needed friendly names.

I think adding a 

  context = system_name_$2

or

  context =($perl_hash{$2})

to your rules will fix that nicely. If the contexts/hash entries
aren't defined, the rules don't fire. Since you would be checking
whether the switch port has a pretty name, you can also simplify the
actions by not emptying the variables first, since there will always
be a value if the action triggers.
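
To make that concrete, a rule along these lines might do it. This is
only an untested sketch: the pattern, the %main::portname hash, and
the mail address are made up for illustration, and the hash is
assumed to be populated elsewhere (one way to load it at startup is
sketched further down).

  # Hypothetical rule: $1 = switch, $2 = port. The Perl code in the
  # context field is only true when $2 has an entry in the (assumed)
  # %main::portname hash, so unnamed ports never trigger the actions.
  # The quotes around $2 keep port names like Gi0/42 from being
  # parsed as a Perl expression after substitution.
  type=Single
  ptype=RegExp
  pattern=(\S+) link down trap for interface (\S+)
  context=($main::portname{"$2"})
  desc=interface $2 down on $1
  action=lcall %name $2 -> ( sub { $main::portname{$_[0]} } ); \
         pipe 'Interface $2 ($1) is down' /bin/mail -s "ERROR %name" netops@example.com

The lcall is just one way to pull the friendly name out of the hash
so that a later action in the same list can refer to it as %name; if
the raw $1/$2 values are enough for the mail, the pipe alone will do.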

You could also move this to the action's shell command:

  pipe ... 'var=`grep $2 /file/mapping` && /bin/mail ... -s "ERROR $$var "' 

(I think $$ is the substitution token that puts a single literal $
into the resulting string.)  But if you are ignoring that many ports,
the fork/exec overhead of all those shells that start up only to find
no mapping entry and exit seems excessive.
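
Purely as an illustration, here is roughly what that pipe'd command
turns into once SEC has substituted $2 (hard-coded below as Gi0/42, a
made-up port), assuming /file/mapping holds "port friendly-name"
pairs, one per line:

  # Hypothetical expansion of the shell command, with $2 already
  # substituted.  grep's exit status drives the &&, so nothing is
  # mailed for a port with no mapping entry; ${line#* } strips the
  # leading port field, leaving only the friendly name.
  line=`grep '^Gi0/42 ' /file/mapping` && \
    echo "Interface Gi0/42 is flapping" | \
    /bin/mail -s "ERROR ${line#* }" netops@example.com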

>As for David's suggestion, this would also be the case, however, I could
>error out (silently) if it doesn't match something in the hash.  I would
>still need to call a shellcmd, I don't just email, I also trigger additional
>alerts like sounds and phones with the shellcmd announce.php, I'm happy to
>call that separately.  At that point, I might as well just offload EVERY
>event to different perl files and fail silently if the switch/port
>combination is not in a hash/map.
>
>Are these ways any safer(?) or less performance intensive than 60+ rules?

Using David's method or mine is a lot less intensive than 60+ rules,
because the majority of the computational effort in most SEC
installations is the pattern/regexp matching for the rules. Assuming
any of the servers you care about can error at random, you will on
average have to compare and fail to match 30 regexps (half of the 60)
before finding the one that matches. If the port reporting the error
doesn't match any server, you have to do all 60 regexp comparisons
just to reach that conclusion.

With the methods David and I suggest, it's one regexp match to
extract the data and then a lookup to see what has to be done next:
much, much less computational effort.
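
For completeness, the lookup table itself can be built once when SEC
starts, for example with something like the following untested
sketch. It assumes sec is run with --intevents (so the SEC_STARTUP
internal event is generated), that /file/mapping holds "port
friendly-name" pairs one per line, and that %main::portname is the
hash the earlier context check consults:

  # Hypothetical startup rule: read /file/mapping into %main::portname.
  # Ports missing from the file never get a hash entry, so rules gated
  # on the hash stay silent for them.
  type=Single
  ptype=SubStr
  pattern=SEC_STARTUP
  context=SEC_INTERNAL_EVENT
  desc=load port to friendly-name mapping
  action=lcall %ret -> ( sub { %main::portname = (); \
           open(my $fh, '<', '/file/mapping') or return 0; \
           while (my $line = <$fh>) { \
             chomp($line); \
             my($port, $name) = split(' ', $line, 2); \
             $main::portname{$port} = $name if defined($name); \
           } \
           close($fh); \
           return 1; } )

If I remember the internal events correctly, extending the pattern to
also match SEC_RESTART would re-read the mapping on a full restart as
well.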

>My initial thought was to write a template and seed file (ala Section 4.2
>http://sixshooter.v6.thrupoint.net/SEC-examples/article-part2.html#SECPERFORMANCE)

On that page, look at the performance difference between 1 rule and
50 rules in 'Table 2. SEC Performance With Data Processed Through
syslogd'. With the same input, the 50-rule case takes 2-5 times the
wall clock run time and roughly 10-18 times the CPU time spent in
user mode.

Also see section 4 "Strategies to improve performance" of
http://www.cs.umb.edu/~rouilj/sec/sec_paper_full.pdf linked from
http://www.cs.umb.edu/~rouilj/sec/.

>and just deal with adding a line (for each friendly named port) and
>recompiling the rules file every time I want to change.

If your port churn rate is low enough you can do this. But remember
that every time you recompile the rules file, you destroy any pending
correlation operations that came from rules in that file. There are
ways around that, but they really aren't worth the trouble if there
is another way to accomplish the same thing.

--
                                -- rouilj
John Rouillard
===========================================================================
My employers don't acknowledge my existence much less my opinions.
