Hi,

An update: The version I'm using (SLES 15 SP6, systemd-254.24) spends most of 
its time in fstat() calls, and I really wonder what is going on there (note 
that there are fewer than 20 files to process):

An example (strace -c summary):



% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 99.94   64.453476           1  41230003           fstat
  0.04    0.023192           1     14234           write
  0.01    0.007399          29       253           munmap
  0.00    0.002771           6       432           mmap
  0.00    0.000908           5       166        17 openat
  0.00    0.000549           2       200           fcntl
  0.00    0.000528           5       103           fstatfs
  0.00    0.000262           8        32           mprotect
  0.00    0.000120           2        49           close
  0.00    0.000112           3        35           read
  0.00    0.000033           2        14           getrandom
  0.00    0.000016           4         4           brk
  0.00    0.000012           6         2         1 statfs
  0.00    0.000011           1         6         4 prctl
  0.00    0.000011           1         6           getdents64
...

So fstat is called 41 million times. Probably more often than the whole journal 
has blocks...
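A back-of-envelope check of that claim, assuming a 4 KiB filesystem block 
size and taking the 2.0G journal size from the boot message quoted below:

```shell
# Sanity check: 2.0 GiB journal / 4 KiB blocks vs. 41,230,003 fstat calls.
# The 4096-byte block size is an assumption.
awk 'BEGIN {
    blocks = 2.0 * 1024^3 / 4096
    printf "journal blocks:        %d\n", blocks
    printf "fstat calls per block: %.0f\n", 41230003 / blocks
}'
```

That works out to about 524,288 blocks, i.e. roughly 79 fstat() calls per 
4 KiB block of the journal.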

Kind regards,
Ulrich Windl

From: systemd-devel <systemd-devel-boun...@lists.freedesktop.org> On Behalf Of 
Windl, Ulrich
Sent: Wednesday, April 9, 2025 11:48 AM
To: systemd-devel@lists.freedesktop.org
Subject: [EXT] [systemd-devel] slowness reading persistent journal

Hi!

In SLES 15 SP6 (systemd-254.24-150600.4.28.1.x86_64) I noticed that 
collecting logs took quite a long time, so I checked:
# time journalctl |wc -l
2705448

real    2m23.008s
user    1m25.666s
sys     0m57.389s
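Since sys time dominates, here is a sketch of how one might separate 
journalctl's formatting cost from raw read throughput (the options and the 
journal path are assumptions, not verified on this system):

```shell
# How much of the time is output formatting? "-o cat" prints only the
# message text; "-q" suppresses informational messages:
time journalctl -q -o cat > /dev/null

# For comparison, raw read throughput over the same journal files
# (path taken from the boot message below; needs root):
time cat /var/log/journal/*/*.journal > /dev/null
```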

The journal is persistent and lives on a Btrfs subvolume, on a device 
provided by VMware and backed by high-performance SSD SAN storage:
# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda      8:0    0   40G  0 disk
├─sda1   8:1    0  512M  0 part /boot/efi
├─sda2   8:2    0   21G  0 part /root
│                               /srv
│                               /usr/local
│                               /var
│                               /opt
│                               /tmp
│                               /boot/grub2/x86_64-efi
│                               /boot/grub2/i386-pc
│                               /.snapshots
│                               /
├─sda3   8:3    0 10.7G  0 part /home
└─sda4   8:4    0  7.8G  0 part [SWAP]
# hdparm -t /dev/sda2

/dev/sda2:
Timing buffered disk reads: 1680 MB in  3.00 seconds = 559.48 MB/sec
# hdparm -T /dev/sda2

/dev/sda2:
Timing cached reads:   18612 MB in  2.00 seconds = 9322.49 MB/sec

During boot, this statistic had been logged:
System Journal (/var/log/journal/41d6ce7a6cfc467297b777b8d87f071d) is 2.0G, max 
2.1G, 103.1M free.

Obvious question: Why is it so slow?
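One way to start answering that (a sketch; assumes strace is installed and 
ptrace is permitted) is a per-syscall summary of the run:

```shell
# Per-syscall time/count summary across journalctl and any children (-f):
strace -f -c journalctl --no-pager > /dev/null

# If one syscall dominates, trace it alone to see its arguments
# (the log path /tmp/journal-fstat.log is arbitrary):
strace -f -e trace=fstat -o /tmp/journal-fstat.log journalctl --no-pager > /dev/null
head /tmp/journal-fstat.log
```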

Kind regards,
Ulrich Windl
