On Sat, May 7, 2016 12:29 am, Predrag Punosevac wrote:
> Michael Shirk wrote:
>
>> On May 23, 2015 10:42, "Predrag Punosevac" <[email protected]>
>> wrote:
>> >
> >> > 5. Finally, I am open to simpler ideas. Any opinions on
> >> > sysutils/logfmon? Is it possible to visualize logfmon's output
> >> > on the web?
>> >
>> > Best,
>> > Predrag Punosevac
>> >
>>
> >> There is another aspect of log analysis tools that bothers me the
> >> most: why must we risk system security to review log files?
>>
> >> Any of the tools that "work well" open you up to web
> >> vulnerabilities, or cost money in the case of Splunk. I have not
> >> had time to work on it, but I would like to create a tool that
> >> avoids all of the issues of running a web service or requiring
> >> Java.
>>
> >> My interest is in UNIX system logs and IDS/IPS events, with full
> >> packet captures. The simplest form I have used is automated
> >> processing of IDS events, firewall logs, and full pcap data as
> >> static files shared on a webserver. I would be interested in a CLI
> >> log viewer with ncurses, or scripted output (maybe using pipecut to
> >> process data as you search for what you want in the simplest UNIX
> >> way).
>>
>> --
>> Michael Shirk
>> Daemon Security, Inc.
>> http://www.daemon-security.com
>
>
> I am resurrecting this old thread I started almost a year ago in an
> attempt to learn how other OpenBSD users are managing their centralized
> logging servers. I also wanted to revisit the issues raised by
> Mr. Shirk.
>
> Namely, the problem I am trying to solve seems very common. I am
> running a centralized logging server (syslog-ng) on an OpenBSD host.
> This server receives log files from my heterogeneous network,
> consisting of OpenBSD machines (running syslogd), Red Hat machines
> (rsyslog), and FreeBSD machines running the FreeBSD version of
> syslogd. I noticed that sending log files generates lots of traffic
> on my monitoring server, in part because I am recording lots of
> noise like
>
> last message repeated 10 times
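>
> (I could probably drop those at the receiver with a syslog-ng filter
> along these lines. This is an untested sketch; the filter name and
> the s_net source and d_hosts destination are assumed to be defined
> elsewhere in syslog-ng.conf:
>
>     filter f_not_repeated {
>         not message("last message repeated");
>     };
>
>     log {
>         source(s_net);             # network source, assumed
>         filter(f_not_repeated);
>         destination(d_hosts);      # per-host destination, assumed
>     };
>
> That only trims one kind of noise, though.)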
>
> The next problem is properly rotating, archiving, and deleting the
> monthly directories containing the log files of all my servers. For
> example, the directory
>
> /var/log/syslog-ng/HOSTS/2016-05
>
> contains log files of all my servers for this month. That is not too
> useful. Storing them per day would probably be better, but having
> fewer log files covering just the important things would be even
> better.
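>
> (Expiring the old monthly directories themselves is easy enough with
> cron and find(1); a sketch, with the retention period picked
> arbitrarily:
>
>     # crontab: delete per-month directories older than ~6 months
>     0 2 * * * find /var/log/syslog-ng/HOSTS -mindepth 1 -maxdepth 1 \
>         -type d -mtime +180 -exec rm -r {} +
>
> The hard part is deciding what is worth keeping in the first place.)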
>
> Log files are useless unless some kind of analytics is run on them.
> I would like to be able to do real-time monitoring for anomalies
> using a daemon. The following seem like obvious anomalies:
>
> 1. SMART errors (I am a big data/machine learning guy, so I want to
> replace failing HDDs in a timely fashion), even though the SMART
> daemon already sends a separate e-mail
>
> 2. failing hardware (sensors, IPMI, mcelog)
>
> 3. firewall logs
>
> 4. IDS/IPS events
>
>
> A daemon should be able to send me an e-mail every couple of hours
> containing as little noise as possible.
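>
> (In the simplest case that could just be cron mailing a digest, e.g.
>
>     0 */4 * * * /usr/local/bin/log-digest | mail -s "log digest" root
>
> where log-digest is a hypothetical summarizing script. The delivery
> is trivial; producing a low-noise digest is the hard part.)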
>
> So far I have found in ports the following daemons:
>
> 1. security/logsurfer (a package exists only for i386, and I use amd64)
>
> 2. sysutils/logfmon (From looking at /etc/logfmon.conf, it appears to
> be written to monitor log files on a single OpenBSD machine running
> syslogd. I don't see how I could monitor entire syslog-ng directories.)
>
> 3. I noticed that syslog daemons do not work very well with SQL
> databases as storage backends. For example, LibreNMS has an interface
> for displaying and searching syslog files (manually, which makes it
> useless). On top of that, MariaDB has to be restarted quite
> frequently.
>
> 4. I am not sure what to think of ELK anymore. The more I learn, the
> less I like it.
>
> 5. Finally, I stumbled upon echofish:
>
> https://echothrust.github.io/echofish/
>
> which seems to repeat the old pattern: using an SQL database as a
> backend and providing a UI for searching messages (I can do that
> using grep), but no e-mail notification when trouble is found.
>
>
> What am I missing here? How do people monitor their log files in
> real time? That would seem to be such an obvious topic for people who
> care about security.
>
> Predrag
>

Note: I don't centralize my logs, and I don't do realtime monitoring.  I
don't have a NOC, and I'm not on call 24x7, so most of the time there is
no one to respond anyway.  I run a couple dozen servers hosting a handful
of internal tools.  Maybe there is something you can learn from my
experience anyway.

I looked into ELK and found you had to teach it how to parse the logs to
extract useful information.  A coworker set it up for another service and
did none of that, so it was no different than just 'cat'ing everything
into one big log file.  I also tried fluentd and it was the same.  I
figured if I have to teach the tool how to read a log, I could just write
my own thing to read the logs.  So I rolled my own.  I use a perl script
to mask out the stuff I don't care about, keeping track of how many times
each line was seen.  I get a report of new log lines not in the ignore
list and how many times they were seen (there is some scrubbing of unique
data like PIDs and session IDs so I get a useful count), plus any ignored
lines that didn't fall into the expected range of counts.  Pushing that
into a database for historic info and visualization wouldn't be too hard.
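
A stripped-down sketch of the idea (not my actual script; the scrub
patterns and the ignore list below are only placeholders):

    #!/usr/bin/perl
    # Collapse log lines into counts by scrubbing volatile fields,
    # then report anything that is not on the ignore list.
    use strict;
    use warnings;

    # Placeholder ignore patterns; the real list grows over time.
    my @ignore = (
        qr/cron.*session (?:opened|closed)/,
        qr/sshd.*Connection closed/,
    );

    my %count;
    while (my $line = <>) {
        chomp $line;
        $line =~ s/^\w{3} [ \d]\d \d\d:\d\d:\d\d \S+ //; # timestamp+host
        $line =~ s/\[\d+\]/[PID]/g;                      # mask PIDs
        $line =~ s/session \S+/session <ID>/g;           # mask session IDs
        next if grep { $line =~ $_ } @ignore;
        $count{$line}++;
    }

    # New (unignored) lines, most frequent first.
    for my $line (sort { $count{$b} <=> $count{$a} } keys %count) {
        printf "%6d  %s\n", $count{$line}, $line;
    }

Run it from cron over the day's logs and pipe the output to mail(1);
counting the ignored lines and flagging out-of-range counts is a small
extension of the same loop.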

I followed the ideas found here:
http://undeadly.org/cgi?action=article&sid=20091215120606
http://www.ranum.com/security/computer_security/papers/ai/
And much more info:
http://www.ranum.com/security/computer_security/archives/logging-notes.pdf

Tim.
