Hi,

I just committed some updates for the livestatus module.
(Background: the livestatus module saves logging data in a sqlite database
file instead of flat text files. The problem was that the database file
would grow and grow, especially if your Shinken installation produces lots
of log messages. Until now, your only option was to remove older data and
shrink the database file with a "delete from logs where time < ...;
vacuum;".
This was of course not an acceptable way to handle the logging.)
Now there will be multiple sqlite database files, one for each day. It works
like this:
- Every day at 00:05 the log events of yesterday (and maybe the days before)
will be transferred to a separate database (located in an archive
directory) called livestatus-<year>-<month>-<day>.db.
- After the transfer, the events will be deleted from the main database file.
(The reason why this is done at 00:05 and not at 00:00: in a distributed
environment it can happen that log broks with a timestamp of 23:5* only
arrive at the broker host at 00:0*. So we wait a bit to catch all those
events dating from the past day.)
- Whenever a request for logs is received, the livestatus module decides
from the request's timestamp limits which database files contain the data
needed to fulfill it and mounts them accordingly. A rough sketch of the
nightly rotation follows below.
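
To make the mechanism more concrete, here is a minimal sketch of what such a
nightly rotation could look like. It is not the module's actual code; the
function name and the "logs" table with its unix "time" column are
assumptions for illustration only:

import os
import sqlite3
import time

def rotate_logs(database_file, archive_path=None):
    # Move every log row older than today 00:00 into a per-day archive file
    # named livestatus-<year>-<month>-<day>.db in the archive directory.
    if archive_path is None:
        archive_path = os.path.join(os.path.dirname(database_file), "archives")
    if not os.path.exists(archive_path):
        os.makedirs(archive_path)

    # Local midnight of today; everything before it belongs to past days.
    midnight = time.mktime(time.strptime(time.strftime("%Y-%m-%d"), "%Y-%m-%d"))

    conn = sqlite3.connect(database_file)
    cur = conn.cursor()
    # Which past days actually have rows in the main database?
    days = [row[0] for row in cur.execute(
        "SELECT DISTINCT date(time, 'unixepoch', 'localtime') "
        "FROM logs WHERE time < ?", (midnight,))]

    for day in days:  # day looks like '2011-10-03'
        archive = os.path.join(archive_path, "livestatus-%s.db" % day)
        cur.execute("ATTACH DATABASE ? AS archive", (archive,))
        cur.execute("CREATE TABLE IF NOT EXISTS archive.logs AS "
                    "SELECT * FROM logs WHERE 0")
        cur.execute("INSERT INTO archive.logs SELECT * FROM logs "
                    "WHERE date(time, 'unixepoch', 'localtime') = ?", (day,))
        cur.execute("DELETE FROM logs "
                    "WHERE date(time, 'unixepoch', 'localtime') = ?", (day,))
        conn.commit()
        cur.execute("DETACH DATABASE archive")

    conn.isolation_level = None  # VACUUM must run outside a transaction
    cur.execute("VACUUM")        # shrink the main file after the move
    conn.close()

Answering a logs request then only requires attaching the archive files whose
date falls between the request's timestamp limits.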

In the livestatus module section of shinken-specific.cfg there is the
parameter "database_file", which tells the module where to create the (main)
sqlite database.
Now there is also a parameter "archive_path". It specifies the directory
where the database files with the log data from yesterday and older will be
created. This parameter is not mandatory; the default is
dirname(database_file) + "/archives".
The livestatus module will create the directory for you, so there is no need
for a mkdir.
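
For illustration, a module definition could then look roughly like this (the
paths and the host/port values are only examples, not defaults):

define module {
    module_name      Livestatus
    module_type      livestatus
    host             *
    port             50000
    database_file    /usr/local/shinken/var/livestatus.db
    archive_path     /usr/local/shinken/var/archives
}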

The parameter max_logs_age is no longer used and will be ignored. You are
responsible for the housekeeping and the removal of old archive files
yourself. In the future this parameter may be used to automatically delete
database files which contain ancient log data.
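
Until then, a few lines of cron-driven Python (or a simple find command) are
enough for the housekeeping. A sketch, assuming a 90-day retention and the
livestatus-*.db naming from above (both are just examples):

import glob
import os
import time

def purge_old_archives(archive_path, max_age_days=90):
    # Delete archive databases whose modification time is older than the
    # chosen retention period.
    cutoff = time.time() - max_age_days * 86400
    for dbfile in glob.glob(os.path.join(archive_path, "livestatus-*.db")):
        if os.path.getmtime(dbfile) < cutoff:
            os.remove(dbfile)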

Several people already have very large databases. They will be split up
automatically when you start the broker process. But, as this can take a
really long time (during which the broker is blocked), I recommend splitting
the database manually (please make a backup of your database file first).
In contrib/livestatus you find the script splitlivelogs.py, which you call
with:
contrib/livestatus/splitlivelogs.py --database ....../var/livestatus.db

It will then create the archives directory and move the log events into
their own data files on a daily basis.
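
Afterwards the directory layout should look something like this (the dates
are of course only an example):

var/
    livestatus.db
    archives/
        livestatus-2011-10-01.db
        livestatus-2011-10-02.db
        livestatus-2011-10-03.db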

Gerhard


