On Fri, 07.09.12 18:59, Holger Winkelmann (h...@travelping.com) wrote:

> Hi,
>
> We are just wondering how scalable the journal will be on a single
> host. This means in terms of number of journal files and the maintenance
> of the index to query the journal. Think of a server with a lot of syslog
> activities.
Well, I am not aware of anybody having done measurements on this recently, but I am also not aware of anybody running into scalability issues so far. We recently made some changes that should make things much more scalable (for example, we dropped a lot of unnecessary checksum verification of data objects and replaced it with an explicit verification tool).

In general, all the algorithms for indexing and for looking things up should be O(log(n)), where n is the number of journal entries. However, things scale O(m), where m is the number of journal files. That means lookups should continue to be fast as journal files grow large, but interleaving many journal files can get slow. [You might end up with many journal files as a result of using the networked/container journal logic (where you get a set of files for each machine), or because you configured the journal to rotate frequently.]

Note that the journal code is not particularly optimized. There is a lot of low-hanging fruit to pick here that would bring quick speed improvements should that become necessary. The data structures should all be well designed with regard to performance, but the code accessing them could certainly benefit from more optimization; so far the goal was to get things right, not necessarily to optimize the hell out of it.

Hope this is useful, even though vague,

Lennart

--
Lennart Poettering - Red Hat, Inc.
_______________________________________________
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel
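[Editor's note: the complexity argument above can be sketched with a toy model. This is not the actual systemd journal code or API — all names below are illustrative. Each journal "file" is modeled as a sorted list of entry timestamps: seeking within one file is a binary search, O(log n), while stepping through m interleaved files examines every file's cursor, O(m) per entry.]

```python
import bisect

def seek_in_file(entries, ts):
    """Binary-search one sorted journal 'file': O(log n) per seek."""
    return bisect.bisect_left(entries, ts)

def next_entry(files, cursors):
    """Return the earliest pending entry across m 'files': O(m) per step.

    files:   list of sorted timestamp lists (one per journal file)
    cursors: current read position in each file (mutated in place)
    """
    best = None
    for i, (entries, pos) in enumerate(zip(files, cursors)):
        if pos < len(entries):
            if best is None or entries[pos] < files[best][cursors[best]]:
                best = i
    if best is None:
        return None          # all files exhausted
    ts = files[best][cursors[best]]
    cursors[best] += 1
    return ts

# Interleaving two files yields entries in global timestamp order,
# but every step pays a cost linear in the number of files.
files = [[1, 3, 5], [2, 4]]
cursors = [0, 0]
merged = []
while (e := next_entry(files, cursors)) is not None:
    merged.append(e)
print(merged)                          # [1, 2, 3, 4, 5]
print(seek_in_file([1, 3, 5], 3))      # 1
```

A real implementation would keep the per-file heads in a heap to get O(log m) steps instead of O(m), which is one of the "low-hanging fruit" optimizations the mail alludes to.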