hi Richard,

if CPU utilization has reached 100%, no rules or log file events are
skipped, but SEC is simply unable to process events at their arrival rate
and falls behind. If your events include timestamps, you would probably see
events with past timestamps in the dump file (among other data, the SEC
dump file reports the last processed event for each input file). As for
debugging the reasons for high CPU utilization, I would recommend having a
look at the rule match statistics and making sure that the rules with the
most matches appear at the top of their rule files (if possible). However,
current versions of SEC do not report the CPU time spent on matching each
pattern against events.
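
In case it helps, here is a minimal sketch of generating and inspecting the
dump file; it assumes SEC was started with --pid=/var/run/sec.pid (adjust
the pid file path to your setup) and that the dump file is /tmp/sec.dump
(the location can be set with the --dump option):

  # ask the running SEC process to write its state to the dump file
  kill -USR1 $(cat /var/run/sec.pid)

  # among other data, the dump file contains match counts for each rule
  # and the last processed event for each input file
  less /tmp/sec.dump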

Just out of curiosity -- how many rules do you currently have in your rule
base, and are all these rules connected to each other? How many events are
you currently receiving per second? Also, do all 50 input files contain the
same event types (e.g., httpd events) that need to be processed by all
rules? If this is not the case, and each input file contains different
events which are processed by different rules, I would strongly recommend
considering a hierarchical setup for your rule files. The principles of a
hierarchical setup are described in the official SEC documentation, for
example: http://simple-evcorr.github.io/man.html#lbBE. Also, there is a
recent paper which provides a relevant example:
https://ristov.github.io/publications/cogsima15-sec-web.pdf.
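
To give a rough idea of what a hierarchical setup looks like, here is a
minimal sketch (the file names, the sshd pattern, and the rule set name are
just made-up examples): a Jump rule in the main rule file submits matching
events to a separate rule set, and an Options rule in the second file joins
that rule set and, with procallin=no, keeps the file from processing events
that arrive directly from input sources:

  # main.sec -- route sshd events to the "sshd" rule set
  type=Jump
  ptype=RegExp
  pattern=sshd\[\d+\]:
  desc=submit sshd events to the sshd rule set
  cfset=sshd

  # sshd.sec -- processes only events submitted by Jump rules
  type=Options
  joincfset=sshd
  procallin=no
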
In addition, you could also consider running several instances of SEC for
your input files. For example, if some input files contain messages from a
specific application which are processed by a few specific rule files, a
separate SEC process could be started for handling these messages with the
given rule files. In that way, it might be possible to divide the rule
files and input files into several independent groups, and having a
separate SEC process for each group allows the load to be balanced across
several CPUs.
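
As a rough sketch of the multi-process approach (the rule file directories
and log file paths below are made-up examples):

  # start a separate SEC process for each independent group of
  # rule files and input files
  sec --detach --conf='httpd-rules/*.sec' --input='/var/log/httpd/*.log'
  sec --detach --conf='mail-rules/*.sec' --input='/var/log/maillog'

Both --conf and --input accept file patterns, so each process picks up only
the rule files and input files of its own group.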

hope this helps,
risto

On Wed, 25 Mar 2020 at 17:07, Richard Ostrochovský
(<richard.ostrochov...@gmail.com>) wrote:

> Hello friends,
>
> I have SEC monitoring over 50 log files with various correlations, and it
> is consuming 100% of a single CPU (luckily on a 10-CPU machine, so the
> whole system is not affected, as SEC is a single-CPU application).
>
> This could mean that SEC does not manage to process all rules, and I am
> curious what the possible effects are: increasing delays (first in,
> processing, first out), skipping some lines from input files, or anything
> else (?).
>
> And how can I troubleshoot and find bottlenecks? I can see quantities of
> log messages per context or log file in sec.dump, which is some indicator.
> Are there also other indicators? Is it possible, somehow, to also see the
> processing times of patterns (per rule)?
>
> Thank you in advance.
>
> Richard
_______________________________________________
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
