In message <>,
Richard Ostrochovský writes:
>Hello friends,
>I have SEC monitoring over 50 log files with various correlations, and it
>is consuming 100% of a single CPU (luckily on a 10-CPU machine, so the
>whole system is not affected, as SEC is a single-CPU application).
>This could mean that SEC cannot keep up with processing all rules, and I
>am curious what the possible effects are: increasing delays
>(first in, processing, first out), skipping some lines from input files,
>or anything else (?).
>And how do I troubleshoot this and find the bottlenecks? I can see the
>quantity of log messages per context or per log file in sec.dump, which is
>one indicator. Are there other indicators? Is it possible to somehow also
>see the processing times of patterns (per rule)?

I have found two operations that are large users of cpu in SEC.

1) If you have a lot of contexts, context lookups can eat cpu. I
   solved a problem where sec used 50% of a cpu by actively deleting
   contexts as soon as I could rather than letting them time out. You
   can use the USR1 signal (IIRC, see the manpage) to get a dump of all
   contexts.  I don't remember how many I had but it was a lot. Once I
   started actively deleting contexts the cpu dropped to 1-2% with
   ~100-500 events per second through the system.
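As an illustration of active deletion (the sshd session tracking and the
SSH_SESSION_ context name here are made up, not from a real ruleset), a
rule like this tears down a per-session context at logout instead of
waiting for it to expire:

    # Delete the per-user session context as soon as the session ends,
    # rather than letting it linger until its timeout fires.
    type=Single
    ptype=RegExp
    pattern=sshd\[\d+\]: session closed for user (\S+)
    desc=drop session context for $1 at logout
    action=delete SSH_SESSION_$1

You can check how many contexts you are carrying by sending SEC the USR1
signal (kill -USR1 <pid>) and counting the context entries in the dump file.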

2) Inefficient regexps that require a lot of backtracking can kill
   your cpu.  I don't know how you would find these.
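For what it's worth, the classic culprit is a nested quantifier, where the
regexp engine can try exponentially many ways to split the same text before
giving up. A sketch of the kind of rewrite that helps (both patterns are
invented for illustration):

    # Backtracking-prone: (\w+\s*)+ can split a long non-matching line
    # in exponentially many ways before the match finally fails.
    pattern=^(\w+\s*)+timeout$

    # Same intent without the nested quantifier, so the engine scans
    # the line once:
    pattern=^[\w\s]*timeout$

When you don't need anchoring or capture groups at all, ptype=SubStr is
cheaper still than any regexp.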

You don't say how your rulesets are constructed, but if you have, say,
100 rules and the most often hit rules are at positions 99 and 100, you
are going to waste a lot of time applying regexps (even if they are
efficient). Again, a dump file will tell you which rules are being
hit. Move the most used rules so they are applied earlier in your
rulesets. Also you can create rules so they are applied in a tree-like
structure. Look at the manpage for rulesets. This structure allows you
to have a layer of filter rules (e.g. look for ssh in the log line)
and only apply the ssh rulesets to that log event. This can cut down
dramatically on the number of regexps that have to be applied to each
event.
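The tree-like structure looks roughly like this (file names and the "sshd"
set name are placeholders): in the main ruleset a Jump rule routes ssh
events to a dedicated set, and the target file opts out of seeing all
input, so its regexps only ever run against forwarded events:

    # main.sec -- cheap filter applied to every event
    type=Jump
    ptype=SubStr
    pattern=sshd
    desc=hand ssh events to the sshd ruleset
    cfset=sshd

    # sshd.sec -- only receives events forwarded by the Jump rule above
    type=Options
    joincfset=sshd
    procallin=no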

Also, you can parse the event once in one rule and then use more efficient
means to trigger further rules (matching on a subset of the original event).
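For example (the SSH_FAIL event name and thresholds are invented), a RegExp
rule can parse the line once and re-emit a normalized synthetic event, which
downstream rules then match with a cheap SubStr pattern instead of
re-running the full regexp:

    # Parse once, emit a compact synthetic event back into the input.
    type=Single
    ptype=RegExp
    pattern=sshd\[\d+\]: Failed password for (\S+) from (\S+)
    desc=normalize ssh auth failure
    action=event SSH_FAIL $1 $2

    # Downstream rule: substring match, no regexp engine needed.
    type=SingleWithThreshold
    ptype=SubStr
    pattern=SSH_FAIL
    desc=many ssh auth failures
    action=write - repeated ssh auth failures
    window=60
    thresh=10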

I mentioned as an offhand remark to Risto a profile mode that would
count not only every rule that led to an action, but every time a
rule executed its regular expression. Having some sort of profile mode
(not to be run in production) would help identify these sorts of
hot spots.
Good luck and when you figure it out let us know how you found and
fixed it.

                                -- rouilj
John Rouillard
My employers don't acknowledge my existence much less my opinions.

Simple-evcorr-users mailing list