On Thu, February 20, 2014 16:16, Alan McKinnon wrote:
> On 20/02/2014 11:16, Yuri K. Shatroff wrote:
>>
>>
>> 20.02.2014 09:24, Canek Peláez Valdés wrote:
>>> [ snip ]
>>>> but I do not see the point, beyond being a nice gimmick.
>>>
>>> Well, I *do* see a point. Many points, actually. You want the logs for
>>> SSH, from February 12 to February 15? Done:
>>>
>>> journalctl --since=2014-02-12 --until=2014-02-15 -u sshd.service
>>>
>>> No grep. No cat. No hunting for logrotated logs (the journal rotates
>>> its logs automatically, and searches across all available logs). You
>>> can have second-precision intervals.
>>>
>>> Also, the binary format that the journal uses is indexed (hence the
>>> binary part); therefore, the search is O(log n), not O(n). With a log
>>> of a million entries, that's about 20 steps.
>>>
>>> Perhaps it's just a gimmick to you. For me it is a really useful
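
(As an aside: if I read the journalctl man page correctly, the
second-precision intervals would look something like

journalctl --since="2014-02-12 08:00:00" --until="2014-02-12 08:00:30" -u sshd.service

and log2 of a million is indeed roughly 20, so the "about 20 steps" for
a binary search over an indexed journal checks out.)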
>>
>> Clearly, it's reinventing a wheel. All that indexing stuff and O(log(n)),
>> if really needed, is easily achieved with databases.
>> Not using cat and grep is not something one'd boast about; rather it is,
>> again, a waste of resources to recreate already existing tools.
>> BTW, I wonder if anyone really does have logs with millions of lines in
>> one single file, not split into files by date, service etc.; if not, the
>> whole O(n) issue is moot.
>
> I have logs like that. It's not an uncommon scenario.

I've seen log directories containing a few hundred MB of logs on a test
environment with a single user doing just one thing.
Fortunately, there was a single file which indicated which of the 200+
files contained the actual error message I was looking for.

>> I believe it would be a 5-minute job to add the capability of printing
>> the last N log entries for a service to `rc-service status`, using cat,
>> grep and the like.
>
>
> No, that will not work easily for all definitions of easily.
>
> rc-something has zero control over where the logs go and no standard
> method to provide "hints" to the logger. Gentoo ships syslog* configs
> that basically stick everything in messages, where grepping them out is
> a PITA. I usually rewrite that config more to my taste and needs and
> rc-service cannot know what I did. So the idea fails at step 1 as the
> code does not know where the logs are.
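
True; with the stock config about the best a script could do is guess,
something along the lines of

grep sshd /var/log/messages | tail -n 20

(path assumed), and that breaks the moment someone points syslog
somewhere else.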

Would journald know where they are?
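
If it does, then something along the lines of

journalctl -u sshd.service -n 20

should already cover the "last N entries for a service" case, since the
journal tags every entry with the unit that produced it; at least that
is how I read the journalctl man page.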

>> Not reinventing wheels. Not spending super-talented,
>> super-highly-paid developers' time on doing tasks one had already done
>> about 30 years ago. I believe not having this option is simply due to
>> its uselessness.
>
> 30 years ago we had isolated stand-alone machines with nothing like
> the logging needs we have today. Whilst I agree with you that systemd's
> logging tools may not be the solution, I can assure you (as someone who
> has to deal with this shit) that syslogging in the modern world is a mess.
>
> Try this: Decide you cannot afford Splunk, so do it yourself. Now get
> your Apache access logs into the same searchable database your other
> stuff is in, and do it in such a way that you can SELECT what you want
> out in obvious ways.
>
> Repeat for every other app you have that logs stuff. Remember to find
> the really important logs which are usually sitting in /opt/ and
> produced by Log4Perl or something equally abominable.

Replace "perl" with a different 4-letter word denoting a language commonly
used for enterprise applications supported on multiple platforms and you
get what I have to deal with.

One of those keeps its most commonly needed logs in 4 or 5 locations, and
that can easily end up being a lot more, depending on how it is used. A
script to find all of those would need admin-level access to the
application itself just to query the information needed to locate the
logfiles.

Another application I worked with in the past had 20+ locations, a few of
which contained 100+ logfiles after a few days of use. At least 5 of those
didn't even have timestamps.

For those, a clever utility would be useful, but if I could write that,
I'd use those AI-routines to take over the world ;)

--
Joost
