hi Richard,

That's an interesting question. Dump files in JSON format have been
supported only across a few recent versions (from 2.8.0 to 2.8.2) and
don't have a long history behind them, but so far their format has stayed
the same. As for dump files in text format, there have been a number of
changes since the early versions of SEC, but most of these changes have
involved adding sections with new data to the dump file, not changing the
format of existing information. I did a quick test with version 2.6.2
(released in 2012) and version 2.8.2 (the most recent version), and the
only difference which can influence parsing is how rule usage statistics
are represented. With version 2.6.2, a single space character separates
the words in each printed line, and rule usage statistics look as follows:

Statistics for the rules from test.sec
(loaded at Tue Feb  4 17:54:34 2020)
------------------------------------------------------------
Rule 1 at line 1 (test) has matched 1 events
Rule 2 at line 7 (test) has matched 0 events
Rule 3 at line 16 (test) has matched 0 events
Rule 4 at line 23 (test) has matched 0 events

However, in version 2.8.2 the numerals in rule usage statistics are
space-padded on the left, and the width of each numeric field is determined
by the number of characters in the largest numeral (that format change was
introduced in 2013 for readability reasons:
https://sourceforge.net/p/simple-evcorr/mailman/message/30185995/). For
example, if you have three rules that have matched 1, 823 and 7865 times,
the first numeral is printed as "   1", the second one as " 823", and the
third one as "7865". Here is an example of rule usage statistics data from
version 2.8.2:

Statistics for the rules from test.sec
(loaded at Tue Feb  4 17:53:30 2020)
------------------------------------------------------------
Rule 1 line  1 matched 1 events (test)
Rule 2 line  7 matched 0 events (test)
Rule 3 line 16 matched 0 events (test)
Rule 4 line 23 matched 0 events (test)
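
If you need to parse both layouts from a monitoring script, here is a
minimal sketch (the regular expression and function name are my own
illustration, not part of SEC) that tolerates both the 2.6.2 and the 2.8.2
wording shown above:

```python
import re

# Illustrative helper (not part of SEC): extract (rule number, line
# number, match count) tuples from the rule usage statistics section
# of a textual dump file.  The regex accepts both layouts shown above:
#   2.6.2: "Rule 1 at line 1 (test) has matched 1 events"
#   2.8.2: "Rule 1 line  1 matched 1 events (test)"
STATS_LINE = re.compile(
    r'^Rule\s+(\d+)\s+(?:at\s+)?line\s+(\d+)\s+'
    r'(?:\(.*\)\s+has\s+)?matched\s+(\d+)\s+events'
)

def parse_rule_stats(lines):
    """Return a list of (rule, line, count) tuples for matching lines."""
    stats = []
    for line in lines:
        m = STATS_LINE.match(line)
        if m:
            stats.append(tuple(int(g) for g in m.groups()))
    return stats
```

With the sample data above, parse_rule_stats() would yield (1, 1, 1) for
rule 1 in either format; non-matching lines (headers, timestamps) are
simply skipped.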

The other differences between the dump files amount to some extra data
that is printed by version 2.8.2 (for example, the effective user and
group IDs of the SEC process). So one can say that the format of the
textual dump file has not gone through major changes during the last 7-8
years, and there are no plans for such changes in upcoming versions.
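
As a side note, if you do opt for the JSON format at some point, parsing
becomes trivial in most languages; here is a minimal Python sketch (the
path and function name below are illustrative, not part of SEC):

```python
import json

# Illustrative sketch: load a JSON-format dump file produced with the
# --dumpfjson option.  The path is an example; the top-level section
# names in the returned dictionary depend on the SEC version.
def load_dump(path):
    with open(path) as f:
        return json.load(f)

# Example usage:
#   dump = load_dump("/tmp/sec.dump.1580210466")
#   print(sorted(dump))   # list the top-level sections
```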

kind regards,
risto


Hi Risto,
>
> thank you for the positive answer - a dumping period in minutes is
> enough, it is not needed many times per second. And also for the useful
> tips - I already noticed the JSON option, just considering that the
> default is maybe more universally usable, because extra modules (JSON.pm)
> may not be installed or installable on all systems (not only due to
> technical reasons, but also security policies - more code auditing
> needed). Maybe the question also could be how frequent changes of the
> dump file format are during SEC development, which could break monitoring
> implemented for a particular version of SEC.
>
> Richard
>
> On Tue, 28 Jan 2020 at 12:40, Risto Vaarandi <risto.vaara...@gmail.com>
> wrote:
>
>> hi Richard,
>>
>> as I understand from your post, you would like to create SEC dump files
>> periodically, in order to monitor the performance of SEC based on these
>> dump files. Let me first provide some comments on the performance related
>> question. Essentially, the creation of a dump file involves a pass over
>> all major internal data structures, so that summaries about them can be
>> written into the dump file. In fact, SEC traverses all internal data
>> structures for housekeeping purposes anyway once a second (you can change
>> this interval with the --cleantime command line option). Therefore, if
>> you re-create the dump file after reasonable time intervals (say, after a
>> couple of minutes), it wouldn't noticeably increase CPU consumption
>> (things would be different if dumps were generated many times a second).
>>
>> For production use, I would offer a couple of recommendations. Firstly,
>> you could consider generating dump files in JSON format, which might make
>> their parsing easier. That feature was requested a couple of years ago
>> specifically for monitoring purposes (
>> https://sourceforge.net/p/simple-evcorr/mailman/message/36334142/), and
>> you can activate the JSON format for the dump file with the --dumpfjson
>> command line option. You could also consider using the --dumpfts option,
>> which adds a numeric timestamp suffix to dump file names (e.g.,
>> /tmp/sec.dump.1580210466). SEC does not overwrite the dump file if it
>> already exists, so having timestamps in file names avoids the need for
>> dump file rotation (you would still need to delete old dump files from
>> time to time, though).
>>
>> Hopefully this information is helpful.
>>
>> kind regards,
>> risto
>>
>> Richard Ostrochovský (<richard.ostrochov...@gmail.com>) wrote on Mon,
>> 27 January 2020 at 22:04:
>>
>>> Greetings, friends,
>>>
>>> there is an idea to monitor high-volume event flows (per rules or log
>>> files, e.g. in bytes per time unit) from dump files created by *periodic
>>> dumping*.
>>>
>>> The question is whether this is recommended for *production use* - the
>>> answer depends at least on whether dump creation somehow slows down or
>>> stops SEC processing, or whether its overhead is insignificant (also for
>>> thousands of rules and complex configurations).
>>>
>>> Are there some best practices for (or experiences with) performance
>>> self-monitoring and tuning of SEC?
>>>
>>> Thank you.
>>>
>>> Richard
>>> _______________________________________________
>>> Simple-evcorr-users mailing list
>>> Simple-evcorr-users@lists.sourceforge.net
>>> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
>>>
>>