Hi Risto,

On 03/25/2015 09:21 AM, Risto Vaarandi wrote:
hi Leonard,

2015-03-25 2:49 GMT+02:00 Leonard Lawton <leonard.law...@gmail.com>:
Hi Risto,

On 03/24/2015 03:37 PM, Risto Vaarandi wrote:
hi Leonard,

when sec is connected to a syslog server over a pipe, there is always a theoretical chance of data loss, since syslog servers usually write to pipes in a non-blocking way. If bytes are written to the pipe at a faster rate than the reader is able to fetch them, the pipe will eventually become full, and subsequent writes into the pipe will fail with data loss. This problem was more frequent on older platforms where pipes could only accommodate 4KB. However, on more recent Linux (and other) platforms pipes can take significantly more bytes (like 256KB), so occasional data transfer peaks can be handled seamlessly.
This is a RHEL 5.10 machine with 3GB of RAM; it's a VM running on a lightly loaded R620 (pretty beefy hardware). I do notice that CPU usage of SEC (not syslog-ng) is pegged at 99% fairly often.

 Having CPU utilization of 99% is a sign that sec is overloaded. It is not a multithreaded tool and thus can't go above 100%. Also, from your description below I understand that you have more than 400 rules, which is not the smallest rule base. Have you considered running several instances of sec, dividing the rules between those instances, and using syslog-ng to distribute the load between them?
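To give a rough idea of that setup, syslog-ng could split the event stream between two sec instances along the following lines (this is only a sketch -- the source name s_local, the per-instance config file names and the sshd-based split are placeholders, not taken from your actual setup):

# route sshd events to one sec instance and everything else to a second one
filter f_sshd     { program("sshd"); };
filter f_not_sshd { not program("sshd"); };

destination d_sec_sshd {
        program("/usr/local/sbin/sec.pl -input=- -conf=/etc/sec-sshd.conf -log=/var/log/sec-sshd.log");
};
destination d_sec_other {
        program("/usr/local/sbin/sec.pl -input=- -conf=/etc/sec-other.conf -log=/var/log/sec-other.log");
};

log { source(s_local); filter(f_sshd);     destination(d_sec_sshd); };
log { source(s_local); filter(f_not_sshd); destination(d_sec_other); };

Each instance then only sees the messages that are relevant to its own rule files.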

Also, if you would like to keep sec running as one process, would it be possible to arrange rules hierarchically, so that a smaller number of first-layer Jump rules would recognize different event classes, and forward them to relevant rulesets only? For example, you could have one file which looks like this:

type=Jump 
ptype=RegExp 
pattern=sshd\[\d+\]: 
desc=route sshd events
cfset=sshd-events 

type=Jump 
ptype=RegExp 
pattern=ntpd\[\d+\]: 
desc=route ntpd events
cfset=ntpd-events 

You can then have a separate rule file for the sshd events which always begins with this Options rule:

type=Options 
procallin=no 
joincfset=sshd-events

This Options rule indicates that the following rules in the file will only receive input from Jump rules. In other words, events which have not matched the Jump rule with the regular expression sshd\[\d+\]: will not be matched against the sshd rules in the given file. Using this strategy of directing events to relevant rules only could reduce the CPU load quite a bit.
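For instance, the sshd rule file could continue after the Options rule with ordinary rules such as the following (a purely illustrative example -- the pattern and the write action are not taken from your rule base):

type=Single
ptype=RegExp
pattern=sshd\[\d+\]: Failed password for (\S+) from (\S+)
desc=failed ssh login for user $1 from $2
action=write - $0

These rules are only tried against events that the corresponding Jump rule has already classified as sshd events.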
If you don't want to have multiple sec processes and haven't tried this approach yet, I'd recommend it.
I can happily have more than one SEC process running; I think I overlooked the documentation on how to achieve this -- can you point me to that? After I try that, I'll also attempt to implement the rule change recommendation.

kind regards,
risto




You mentioned an event rate of 100 messages per second -- is this the rate of messages for the rule you have included in your post, or the overall message rate for all rules? If it's the overall rate, sec should be able to handle this easily, but a lot depends on the actual configuration. Just out of curiosity, what is the total number of rules in your rule set, how many matches do they typically produce per minute (or hour), and do the rules run expensive actions (like calling computationally expensive Perl functions with 'lcall' or 'eval' actions)? Also, you mentioned 15 hours of reporting data within 15 minutes -- does this mean that past data covering a 15-hour time frame are submitted to sec for processing within 15 minutes?
The rate is the amount of events coming to syslog-ng, which should in turn go to SEC; so it is the overall rate for all rules. I have 433 rules. In regards to the 15 hours, that's correct (I should have worded it better). For example, at 7PM exactly I'd expect to get 10 syslog messages destined for 10 different clamav rules, but from 7PM-7:15PM I'd expect a total of 200 messages for the same clamav rules. During this timeframe I would easily get several thousand non-clamav related messages for processing.

Also, may I offer you a small piece of advice on starting sec from syslog-ng -- I'd recommend including the --notail option in the sec command line, since this ensures that sec exits when syslog-ng closes its end of the data transfer pipe. Otherwise, orphaned instances may stay around in the process table and consume system resources. There is also a FAQ entry which provides a small syslog-ng config example and an explanation of the issue: http://simple-evcorr.sourceforge.net/FAQ.html#3
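Applied to the destination from your post below, that would look roughly like this (only a sketch, reusing your existing paths and the t_fp template):

destination log_watch {
         program("/usr/local/sbin/sec.pl -input=\"-\" -conf /etc/sec.conf -debug=5 -log=/var/log/sec.log -dump=/tmp/sec.dump --notail"
         template(t_fp));
};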
I have not seen this happen before, but I'll look into this.


kind regards,
risto


2015-03-24 22:59 GMT+02:00 Leonard Lawton <leonard.law...@gmail.com>:
I'm using syslog-ng to pipe events to SEC 2.7.4. At times, during what
seem to be higher volumes of syslog traffic (maybe 100 log
messages/sec), I don't see SEC taking action on some rules (the matches
are not making it into the SEC logfile). I do not have any rate
limiting set up for said rules.

Here's an example of a rule that seems to be processed intermittently:

type=Single
continue=DontCont
ptype=RegExp
pattern=\S+\s+\S+\s+\S+\s+(\S+).domain.com clamscan: Time: (\S+) sec .* - <user.notice>
desc=$0
action="" /usr/local/zabbix/

The above rule might receive about 15 hours of reporting data within a 15
minute period. Additionally, there are no other rules that would match
this (trying to rule out a window).

Syslog-ng config:

destination log_watch {
         program("/usr/local/sbin/sec.pl -input=\"-\" -conf /etc/sec.conf -debug=5 -log=/var/log/sec.log -dump=/tmp/sec.dump"
         template(t_fp));
};






