** Changed in: linux (Ubuntu Focal)
Status: Triaged => Won't Fix
** Changed in: linux (Ubuntu)
Status: Triaged => Won't Fix
--
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
I performed a bisect between the 5.4 and 5.15 kernels. The performance
regression was introduced by a stable update in 5.15.57 by the following
commit:
62b4db57eefec ("x86/entry: Add kernel IBRS implementation")
This commit applies the kernel IBRS mitigation for Spectre v2. IBRS
stands for Indirect Branch Restricted Speculation.
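As a quick way to confirm which Spectre v2 mitigation a running kernel has selected (and hence whether IBRS is active), the standard sysfs vulnerabilities interface can be queried; the fallback message below is only for hosts that do not expose that interface.

```shell
# Report the active Spectre v2 mitigation. On affected 5.15 kernels with
# the commit above, the output mentions "IBRS" on IBRS-capable CPUs.
f=/sys/devices/system/cpu/vulnerabilities/spectre_v2
if [ -r "$f" ]; then
  mitigation=$(cat "$f")
else
  mitigation="spectre_v2 sysfs entry not available on this host"
fi
echo "$mitigation"
```

For A/B benchmarking only, the mitigation can be disabled by booting with the `spectre_v2=off` (or the broader `mitigations=off`) kernel parameter; note that this turns off a security mitigation and is not suitable for production.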
** Changed in: linux (Ubuntu)
Assignee: (unassigned) => Joseph Salisbury (jsalisbury)
** Changed in: linux (Ubuntu Focal)
Assignee: (unassigned) => Joseph Salisbury (jsalisbury)
** Changed in: linux (Ubuntu Focal)
Importance: Undecided => Medium
** Changed in: linux (Ubuntu)
Work is still ongoing on reproducing this issue in a more predictable
environment.
--
https://bugs.launchpad.net/bugs/2042564
Title:
Performance regression in the 5.15
We are still looking into this issue. While we can reproduce the test
case and see a difference in performance, the delta is not as
significant and our results have not been very consistent. I'm taking
the approach of setting up a more comprehensive test environment to run
more tests faster.
I have reproduced with @kamalmostafa's updated script with a separate
disk too. I see no segfault and no EXT4 errors; the performance
regression is still present, but not as great as in my previous tests.
```
### Ubuntu 20.04 with 5.4 kernel and data disk
ubuntu@cloudimg:~$ sudo fio
```
I have identified the cause of the fio/glusterfs crash and the EXT4-fs
errors: the test procedure runs fio write tests on /dev/sda (which is
in fact the rootfs itself). I modified Phil's launch script to create a
smaller 10GB boot disk for the rootfs and a separate 50GB "junk" disk
(/dev/sdb)
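The scratch-disk change can be sketched as below; the image file name and the qemu device IDs are illustrative, not taken from Phil's actual script.

```shell
# Create a separate 50G scratch disk image so fio targets /dev/sdb
# rather than the rootfs on /dev/sda (names/sizes illustrative).
if command -v qemu-img >/dev/null 2>&1; then
  qemu-img create -f qcow2 junk-disk.qcow2 50G
  created=yes
else
  echo "qemu-img not installed; skipping image creation"
  created=no
fi
# The image would then be attached alongside the boot disk, e.g.:
#   -drive file=junk-disk.qcow2,if=none,id=junk-disk,format=qcow2 \
#   -device scsi-hd,drive=junk-disk
echo "created=$created"
```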
I am NOT seeing the same on 22.04
```
ubuntu@cloudimg:~$ uname --kernel-release
5.15.0-87-generic
ubuntu@cloudimg:~$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
```
@kamalmostafa indeed yes. I had missed this.
See below for the output from 20.04 with 5.15 kernel.
```
ubuntu@cloudimg:~$ sudo fio --ioengine=libaio --blocksize=4k --readwrite=write \
  --filesize=40G --end_fsync=1 --iodepth=128 --direct=1 --group_reporting \
  --numjobs=8 --name=fiojob1
```
Interesting finding:
Using Phil's setup outlined in Comment #5 I am able to reproduce the
performance problem (e.g. 665MiB/s with stock v5.4 kernel versus
581MiB/s after upgrading to v5.15). But I also note that with the v5.15
kernel, fio always gets a segfault in libglusterfs.so while running
Note for testers: I had to modify Phil's launch-qcow2-image-qemu-40G.sh
script as follows. Without this change qemu drops me into the UEFI
Interactive Shell.
```
diff launch-qcow2-image-qemu-40G.sh.orig launch-qcow2-image-qemu-40G.sh
194c194
< -device scsi-hd,drive=boot-disk \
---
> -device
```
Providing exact reproducer steps using QEMU locally.
Launch script:
https://gist.github.com/philroche/8242106415ef35b446d7e625b6d60c90
Cloud image used for testing:
http://cloud-images.ubuntu.com/minimal/releases/focal/release/
```
# Download the VM launch script
wget
```
Google have provided a non-synthetic `fio` impact report:
> Performance was severely degraded when accessing Persistent Volumes
provided by Portworx/PureStorage.
--
I'm currently bisecting the releases and kernels, using releases between
5.4 and 5.15 on an LXD VM. Releases 5.4 through 5.13 exhibit similar
speeds, with 5.15 showing a drop-off. The numbers aren't quite the same
as above, but that's likely due to setup:
5.4: 744MiB/s average
5.11: 785MiB/s
For reference, I also tried on a 22.04 cloud image with 5.15 kernel
```
ubuntu@cloudimg:~$ sudo fio --ioengine=libaio --blocksize=4k --readwrite=write \
  --filesize=40G --end_fsync=1 --iodepth=128 --direct=1 --group_reporting \
  --numjobs=8 --name=fiojob1 --filename=/dev/sda
fiojob1: (g=0): rw=write,
```