On Thu, Feb 20, 2014 at 3:16 AM, Yuri K. Shatroff <yks-...@yandex.ru> wrote:
>
>
> On 20.02.2014 09:24, Canek Peláez Valdés wrote:
>>
>> [ snip ]
>>
>>> but I do not see the point, beyond as a nice gimmick.
>>
>>
>> Well, I *do* see a point. Many points, actually. You want the logs for
>> SSH, from February 12 to February 15? Done:
>>
>> journalctl --since=2014-02-12 --until=2014-02-15 -u sshd.service
>>
>> No grep. No cat. No hunting for logrotated logs (the journal rotates
>> its logs automatically, and searches across all available logs). You
>> can have second-precision intervals.
>>
>> Also, the binary format that the journal uses is indexed (hence the
>> binary part); therefore, the search is O(log n), not O(n). With a log
>> of a million entries, that's about 20 steps.
>>
>> Perhaps it's just a gimmick to you. For me, it's a really useful tool.
>
>
> Clearly, it's reinventing a wheel.

Where I come from, doing something that takes O(n) in O(log n) is not
reinventing the wheel; but OK, see it that way if you want to. Simply
don't use it.
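
For the curious, the step count is easy to sketch in plain POSIX shell
(the numbers are illustrative; this is back-of-the-envelope arithmetic,
not journal internals):

```shell
# A binary search over an index of n entries takes about ceil(log2 n)
# steps; a linear scan takes up to n. For n = 1,000,000:
n=1000000
steps=0
while [ "$n" -gt 1 ]; do
  n=$(( (n + 1) / 2 ))   # halve the remaining search range
  steps=$((steps + 1))
done
echo "$steps"            # prints 20
```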

> All that indexing stuff and O(log(n)) if
> really needed is easily achieved with databases.

The journal is a specialized database for logs.
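
In that spirit, journal queries read a lot like database filtering; a
few illustrative invocations (the unit name and PID here are examples,
and these need a running systemd journal to actually return anything):

```shell
# Match on indexed journal fields, much like a WHERE clause:
journalctl _SYSTEMD_UNIT=sshd.service _PID=1234

# Combine a priority filter with a time range:
journalctl -p err --since=yesterday

# Structured output, for consumption by other tools:
journalctl -u sshd.service -o json
```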

> Not using cat and grep is not something one'd boast; rather, again, a waste
> of resources to recreate already existing tools.

Are those *your* resources? If not, what's the problem?

> BTW, I wonder if anyone does really have logs with millions of lines in one
> single file, not split into files by date, service etc, so that the whole
> O(n) issue is moot.

Oh boy, you haven't worked much in enterprise, have you?

Also, even if *one* machine doesn't have logs with a million lines
(which I *have* seen, in real life, in *production*, but whatever), the
journal can send (automatically, of course, if so configured) its logs
to a central server. So you can coalesce the logs from *all* your
network in a single place, and with the journal you can merge them when
doing queries. Again, everything in O(log n).
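
A sketch of what that setup can look like (the hostname is an example,
and this assumes the journal's upload/remote forwarding components are
installed; port 19532 is the collector's default):

```shell
# /etc/systemd/journal-upload.conf on each client machine:
#
#   [Upload]
#   URL=http://logcentral.example.com:19532
#
# On the machine holding the collected journals, --merge interleaves
# entries from all available journal files into a single query:
journalctl --merge --since=2014-02-12 --until=2014-02-15 -u sshd.service
```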

So right now I have a little server with logs of ~75,000 lines. If I
had 20 such servers (nothing weird in enterprise; many would call that
a really small operation), that would be logs of 1,500,000 lines. With
the journal, you could check *all* your servers with a single command,
and all the queries would still be done in O(log n).

So, yeah, moot.

> Well, maybe it'd be nice to have a collection of log management tools
> all-in-one but beyond that I don't see any advantages of systemd-journald.

Then, again, don't use it.

>> Its raison d'être is the new features it brings.
>
>
> I didn't notice any new features. It's not features that are new, but just a
> new implementation of old features in a more obtrusive way IMO.

Again, O(n) vs. O(log n). Coalescing logs from different machines. A
single powerful tool with well-defined semantics to query the logs.

So, yeah, no new features.

>>> Additionally, the use of "tail -f" and "grep" allows me to check the logs
>>> real-time for debugging purposes.
>>
>>
>> journalctl -f
>>
>> Checks the logs in real time. Again, [1].
>
>
> Again, a brand new Wheel(c)

I never said that was a new feature. Roeleveld said that he could use
"tail -f" and grep, as if that were not possible with the journal. I
was showing him that it can be done with the journal too.

>> systemctl status apache2.service
>>
>> (see [2]) will print the status of the Apache web server, and also
>> the last lines from its logs. You can control how many lines. You
>> can also check with the journal, as I showed above.
>
>
> I believe it would be a 5-minutes job to add the capability of printing last
> N log entries for a service to `rc-service status`. Using cat, grep and the
> like. Not reinventing wheels. Not spending super-talented super-highly paid
> developers' time on doing tasks one had done about 30 years ago. I believe,
> not having this option is due to its simple uselessness.

Others have chimed in on the infeasibility of this claim. However, if
you don't want to use the journal, and can emulate everything it does
in 5 minutes, then don't use the journal and write your little shell
scripts in 5 minutes.

I'd rather spend those 5 minutes watching cats with Wolverine claws on
YouTube, and let the journal do its thing. But that's me.

> This way I really wonder if at some point the super talented systemd
> programmers decide that all shell tools are obsolete and every program
> should know how to index or filter or tail its output in its own, though,
> open, binary format. I can't get rid of the idea that systemd uses the MS
> Windows approach whatever you say about its open source.

Again, the journal can export output (and really fast, since it has
everything indexed) that is 100% identical to the output of any other
logger. And you can use shell, grep, and sed on it to your heart's
desire.
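
A quick illustration, using canned lines in place of real journalctl
output so the sketch is self-contained (on a real system you would pipe
journalctl itself, e.g. journalctl -u sshd.service | grep ...):

```shell
# Canned sample standing in for the journal's default text output:
sample='Feb 12 10:00:01 host sshd[123]: Accepted publickey for operator
Feb 12 10:00:05 host sshd[124]: Failed password for invalid user admin
Feb 12 10:00:09 host sshd[125]: Failed password for root'

# Plain text pipes through grep like any other log:
matches=$(printf '%s\n' "$sample" | grep -c 'Failed password')
echo "$matches"   # prints 2
```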

But if you don't want to, then don't use the journal. Nobody is
forcing it on you.

Regards.
-- 
Canek Peláez Valdés
Posgrado en Ciencia e Ingeniería de la Computación
Universidad Nacional Autónoma de México
