On Tue, 23 Feb 2016, David Lang wrote:
> My intention is to create log rate metrics for different types of event
> (eg ASA connection build/break rates etc.) and filtering before sending
> forward to the central log collection point.
I actually do this sort of thing via omprog, sending logs to sec (the Simple
Event Correlator)
I create an output format that contains just the data I need to create rate
metrics on in an easy to parse format (pipe delimited), and then I have a
simple sec ruleset that parses the input and accumulates values into perl
hashes. I then have a calendar rule in sec that outputs and resets the
counters every minute, producing per-minute rate data for various things.
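The accumulate-and-dump cycle that sec performs is easy to sketch outside sec; here's a minimal Python rendition of the same idea (the names and the sample data are just illustrative, not part of the sec config below):

```python
from collections import Counter

# Accumulate one counter per programname, as the sec pattern rule does.
counts = Counter()

def ingest(programname):
    counts[programname] += 1

# Once a minute, emit "count name" lines and reset -- the calendar rule.
def dump_and_reset():
    lines = [f"{n} {name}" for name, n in counts.items()]
    counts.clear()
    return lines

for p in ["sshd", "sshd", "cron"]:
    ingest(p)
print(dump_and_reset())   # ['2 sshd', '1 cron']
```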
the sec ruleset looks horribly ugly at first glance, but it's really the same
thing repeated for each type of data (I feed it hostname, fromhost-ip and
programname)
I'll post the full configs (including the sec config) when I get into the
office tomorrow.
now, if you already need redis for something else, that's fine. But if you
thought you needed it just to get rate info, there are other, arguably
simpler or at least lighter-weight options.
as far as ASA build/break rates, you can use the %ASA-#-#### program IDs, or
you can do like I do and parse/tag the logs with mmnormalize, producing a
nice, parsed JSON of the important info that is well suited for going into
Elasticsearch or doing further analysis on, and also tagging messages so that
all the different build variations yield the same tag, as do all the break
variations, etc.
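To illustrate the tagging idea without a full mmnormalize rulebase: collapse the various ASA build/teardown message IDs onto one tag each. The ID list and tag names here are my own illustrative picks; check your own ASA logs for the complete set:

```python
import re

# Map the different build/teardown message IDs onto a single tag each,
# so the rate counting sees one "build" stream and one "break" stream.
TAGS = {
    "302013": "asa-build",    # Built ... TCP connection
    "302015": "asa-build",    # Built ... UDP connection
    "302014": "asa-break",    # Teardown TCP connection
    "302016": "asa-break",    # Teardown UDP connection
}

MSG_ID = re.compile(r"%ASA-\d-(\d{6}):")

def tag(line):
    m = MSG_ID.search(line)
    return TAGS.get(m.group(1)) if m else None

print(tag("%ASA-6-302013: Built inbound TCP connection 1234 ..."))  # asa-build
```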
simplifying things a little bit (one parameter instead of a pipe delimited list)
in /etc/rsyslog.conf
$template counter,"%programname%\n"

action(type="omprog" queue.type="FixedArray" name="sec-counters"
       binary="/usr/bin/sec --conf=/etc/sec/source-summary --intevents --intcontexts --dump=/tmp/dumpfile.sec-counters --notail --debug=2 --input -"
       template="counter" hup.signal="USR2")
in /etc/sec/source-summary

type=single
ptype=perlfunc
context=[!SEC_INTERNAL_EVENT]
pattern=sub { $counter{$_[0]}++; return 1; }
desc=itemcount
action=none

type=calendar
time=* * * * *
desc=dump stats every min
action=lcall %o -> ( sub { @out=(); foreach $i (keys %counter) { \
       push @out, ($counter{$i}.' '.$i); } %counter=(); return @out; } ); \
       write /var/log/counter-summary-messages %o
I currently rotate the files on my central server every minute, so each rotated
summary file only holds one minute of summary data. I'm thinking about modifying
this to only rotate this file daily, and add HHMM before the count. This will
take adding standard perl time functions to extract the hour and minute.
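The HHMM-prefixed variant I'm describing would produce lines like the following (a Python sketch of the output format only, not the actual sec action; function and argument names are mine):

```python
import time

def summary_lines(counts, when=None):
    # Prefix each "count name" line with HHMM so one daily file
    # still yields per-minute data points.
    t = time.localtime(when) if when is not None else time.localtime()
    hhmm = time.strftime("%H%M", t)
    return [f"{hhmm} {n} {name}" for name, n in counts.items()]

# Fixed timestamp so the example is reproducible.
print(summary_lines({"sshd": 2, "cron": 1}, when=0))
```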
to handle multiple pipe-delimited fields, the pattern rule above expands into:
type=single
ptype=perlfunc
context=[!SEC_INTERNAL_EVENT]
pattern=sub { @item = split(/\|/, $_[0]); $server{$item[0]}++; $relay{$item[1]}++; \
       if ($item[2] =~ /ASA-/) { $i = 'cisco-fw' } else { $i = $item[2] } ; \
       $program{$i}++; return 1; }
desc=itemcount
action=none
and then the action for the calendar event repeats the two statements for each
type of counter you have (in my case, server, relay, program)
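In Python terms, the multi-counter version amounts to this (field order follows my template: hostname, fromhost-ip, programname; the 'cisco-fw' collapsing is the same as in the perlfunc):

```python
from collections import Counter

server, relay, program = Counter(), Counter(), Counter()

def ingest(line):
    # line is "hostname|fromhost-ip|programname", as emitted by the template
    host, fromip, prog = line.split("|")
    server[host] += 1
    relay[fromip] += 1
    # collapse all the ASA message variants into one bucket
    program["cisco-fw" if "ASA-" in prog else prog] += 1

ingest("web1|10.0.0.5|%ASA-6-302013")
ingest("web1|10.0.0.5|sshd")
print(dict(program))   # {'cisco-fw': 1, 'sshd': 1}
```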
David Lang
_______________________________________________
rsyslog mailing list
http://lists.adiscon.net/mailman/listinfo/rsyslog