[Kernel-packages] [Bug 2047153] Re: Intel TPM: tpm_crb: probe of MSFT0101:00 failed with error 378

2024-01-14 Thread Dan Podeanu
Agreed, fixed in linux-generic-hwe-22.04 6.5.0.14.14~22.04.7 -
resolving.
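
For anyone tracking this: a minimal way to pick up the fix and verify it,
assuming the 22.04 HWE kernel stack named above (adjust for your flavour):

# apt update && apt install --only-upgrade linux-generic-hwe-22.04
# apt-cache policy linux-generic-hwe-22.04   # expect >= 6.5.0.14.14~22.04.7
# reboot
# dmesg | grep -i tpm_crb                    # probe error should be gone
# ls -l /dev/tpm0 /dev/tpmrm0                # device nodes present again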

** Changed in: linux-signed-hwe-6.2 (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-signed-hwe-6.2 in Ubuntu.
https://bugs.launchpad.net/bugs/2047153

Title:
  Intel TPM: tpm_crb: probe of MSFT0101:00 failed with error 378

Status in linux-signed-hwe-6.2 package in Ubuntu:
  Fix Released


To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-signed-hwe-6.2/+bug/2047153/+subscriptions




[Kernel-packages] [Bug 2047153] [NEW] Intel TPM: tpm_crb: probe of MSFT0101:00 failed with error 378

2023-12-21 Thread Dan Podeanu
Public bug reported:

Linux kernel 6.2.0-37 introduces a regression which breaks Intel TPM
detection at boot. This was reported in the upstream kernel, and a fix
is available.

https://bugzilla.kernel.org/show_bug.cgi?id=217804
https://github.com/torvalds/linux/commit/8f7f35e5aa6f2182eabcfa3abef4d898a48e9aa8

# dmesg|grep -i tpm
[0.00] efi: ACPI=0x7564 ACPI 2.0=0x75640014 TPMFinalLog=0x7560f000 SMBIOS=0x75cc8000 SMBIOS 3.0=0x75cc7000 MEMATTR=0x6e49a018 ESRT=0x6e8b3198 RNG=0x754fe018 TPMEventLog=0x6e2a2018
[0.014972] ACPI: TPM2 0x7550 4C (v04 ALASKA A M I 0001 AMI  )
[0.015001] ACPI: Reserving TPM2 table memory at [mem 0x7550-0x7550004b]
[0.391710] tpm_crb: probe of MSFT0101:00 failed with error 378
[1.119989] ima: No TPM chip found, activating TPM-bypass!
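
On an affected kernel the chip never registers, so there is nothing under
/sys/class/tpm. A quick check (a sketch; paths assume the tpm_crb driver):

# uname -r                  # affected: 6.2.0-37 and later 6.2 builds
# ls /sys/class/tpm/        # empty while the probe fails
# ls /dev/tpm0 2>/dev/null || echo "no TPM device node"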

ProblemType: Bug
DistroRelease: Ubuntu 22.04
Package: linux-image-6.2.0-39-generic 6.2.0-39.40~22.04.1
ProcVersionSignature: Ubuntu 6.2.0-39.40~22.04.1-generic 6.2.16
Uname: Linux 6.2.0-39-generic x86_64
ApportVersion: 2.20.11-0ubuntu82.5
Architecture: amd64
CasperMD5CheckResult: pass
CloudArchitecture: x86_64
CloudID: none
CloudName: none
CloudPlatform: none
CloudSubPlatform: config
Date: Fri Dec 22 00:01:29 2023
InstallationDate: Installed on 2023-04-09 (256 days ago)
InstallationMedia: Ubuntu-Server 22.04.2 LTS "Jammy Jellyfish" - Release amd64 
(20230217.1)
ProcEnviron:
 TERM=xterm-256color
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=
 LANG=en_US.UTF-8
 SHELL=/bin/bash
SourcePackage: linux-signed-hwe-6.2
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: linux-signed-hwe-6.2 (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug jammy uec-images

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-signed-hwe-6.2 in Ubuntu.
https://bugs.launchpad.net/bugs/2047153

Title:
  Intel TPM: tpm_crb: probe of MSFT0101:00 failed with error 378

Status in linux-signed-hwe-6.2 package in Ubuntu:
  New



[Kernel-packages] [Bug 1969482] Re: zfs-2.1.4+ sru

2022-07-28 Thread Dan Podeanu
The updated kernel was released, and it fixes the performance
regression. Thank you!

# uname -a
Linux arpa 5.15.0-43-generic #46-Ubuntu SMP Tue Jul 12 10:30:17 UTC 2022 x86_64 
x86_64 x86_64 GNU/Linux

# zfs --version
zfs-2.1.4-0ubuntu0.1
zfs-kmod-2.1.4-0ubuntu0.1

# grep . /sys/module/icp/parameters/*impl*
/sys/module/icp/parameters/icp_aes_impl:cycle [fastest] generic x86_64 aesni 
/sys/module/icp/parameters/icp_gcm_impl:cycle [fastest] avx generic pclmulqdq 

# dd if=14GBfile.tmp of=/dev/null bs=1M
13411+1 records in
13411+1 records out
14062902185 bytes (14 GB, 13 GiB) copied, 12.6139 s, 1.1 GB/s

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1969482

Title:
  zfs-2.1.4+ sru

Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Jammy:
  Fix Released

Bug description:
  [Impact]

   * Upstream stable point-release update with bug fixes, performance
  fixes, and newer-kernel support; these are already in the OEM kernel
  and will be needed in future HWE kernels.

  [Test Plan]

   * autopkgtest pass

   * kernel regression zfs testsuite pass

   * zsys integration test pass

  [Where problems could occur]

   * The stable branches maintain API/ABI compatibility. Certain bugfixes
  do change userspace-visible behaviour, either succeeding where
  operations previously failed, or returning errors where they previously
  succeeded in error. For example, there are changes when unlinking files
  on full volumes, changes to fallocate behaviour, etc. Overall these are
  minor corner cases, and the bugfixes correct the behaviour to what is
  universally expected and how things behave on other filesystems (e.g.
  ext4).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1969482/+subscriptions




[Kernel-packages] [Bug 1977699] Re: zfs icp has deselected all optimized aes & gcm impls

2022-07-15 Thread Dan Podeanu
*** This bug is a duplicate of bug 1969482 ***
https://bugs.launchpad.net/bugs/1969482

** This bug has been marked a duplicate of bug 1969482
   zfs-2.1.4+ sru

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1977699

Title:
  zfs icp has deselected all optimized aes & gcm impls

Status in linux package in Ubuntu:
  Incomplete
Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  In an upgrade from Jammy kernel 5.15.0-27-generic to 5.15.0-35-generic
  on x86_64 (AMD threadripper pro 39x5wx series), a 40x performance
  regression in the first read of cached writes to an encrypted dataset
  revealed that zfs is no longer configured to choose any
  implementations from advanced instruction sets:

  $ grep . /sys/module/icp/parameters/*impl*
  /sys/module/icp/parameters/icp_aes_impl:cycle [fastest] generic x86_64
  /sys/module/icp/parameters/icp_gcm_impl:cycle [fastest] generic

  With correct configuration, the output should read as follows:

  $ grep . /sys/module/icp/parameters/*impl*
  /sys/module/icp/parameters/icp_aes_impl:cycle [fastest] generic x86_64 aesni
  /sys/module/icp/parameters/icp_gcm_impl:cycle [fastest] avx generic pclmulqdq

  The immediate ill effect is the use of gcm_generic_mul instead of
  the dedicated instruction, consuming 50% CPU and slowing reads of data
  cached in RAM to less than 20% of what they would be even when reading
  directly from disk.
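
  Both parameters are runtime-writable, so the active selection (the
  bracketed entry) can be inspected and pinned without rebooting. A
  sketch (on an affected build the accelerated entries are missing from
  the list entirely, so they cannot be selected):

  $ grep . /sys/module/icp/parameters/*impl*
  $ echo aesni | sudo tee /sys/module/icp/parameters/icp_aes_impl
  $ echo avx | sudo tee /sys/module/icp/parameters/icp_gcm_impl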

  openzfs recently changed its configure process to detect CPU features
  differently, to adapt to the kernel API change. It seems that the
  upstream kernel changes that unexport the needed symbols and the
  downstream openzfs changes that stop using them were not cherry-picked
  in sync.

  https://github.com/openzfs/zfs/pull/13147
  https://github.com/openzfs/zfs/pull/13236

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1977699/+subscriptions




[Kernel-packages] [Bug 1978347] Re: horrible IO degradation with encrypted zfs root on kernels past 5.15.0-27

2022-07-15 Thread Dan Podeanu
*** This bug is a duplicate of bug 1969482 ***
https://bugs.launchpad.net/bugs/1969482

** This bug is no longer a duplicate of bug 1977699
   zfs icp has deselected all optimized aes & gcm impls
** This bug has been marked a duplicate of bug 1969482
   zfs-2.1.4+ sru

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1978347

Title:
  horrible IO degradation with encrypted zfs root on kernels past
  5.15.0-27

Status in linux package in Ubuntu:
  Confirmed
Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  IO on encrypted zfs root has fallen off a cliff in kernel versions
  after 5.15.0-27 (the degradation is observed since version 5.15.0-30,
  also seen on -33 and -37, -25 and -27 work like a charm). Heavy usage
  almost hangs a new laptop (building large singularity images or
  synthetic testing with dd).

  I have confirmed that things are working as expected on a default +
  upgraded Ubuntu 22.04 desktop LVM+LUKS installation using another NVMe
  SSD on the same laptop. There seems to be a regression when using the
  native zfs encryption (did aes-ni acceleration get turned off?)

  How to reproduce:

  - install Ubuntu 22.04 desktop from the ISO; don't install web updates;
  check "use ZFS" and encryption
  - sudo apt update && sudo apt install dstat htop
  - create a dataset with compression disabled so that dd actually writes
  things to disk

  * sudo zfs create rpool/dummy
  * sudo zfs set compress=off rpool/dummy
  * sudo chown -R myusername. /dummy

  - start dstat and htop in the background (show kernel threads in the htop
  config)
  - dd if=/dev/zero of=/dummy/bigfile bs=1M count=16384 (a flush-aware
  variant is sketched after these steps)

  - sudo apt upgrade and reboot on the latest kernel, repeat
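
  A side note on the dd step: dd prints its rate before dirty data is
  flushed, so the figure can flatter the device. A variant that includes
  the flush in the timing (conv=fsync is standard in GNU dd):

  - dd if=/dev/zero of=/dummy/bigfile bs=1M count=16384 conv=fsync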

  Expected: some cpu load, dstat reports write speeds about as much as
  the SSD can sustain (2.9-3GiB/s with a 2TiB Samsung 970 EVO Plus for a
  16GiB write test, 1.4GiB/s for a few seconds then 800MiB/s sustained
  for whatever WD 512GiB model I had laying around).

  Observed on versions -30 and later: 700% or more system CPU load,
  mostly in z_wr_iss threads; writes top out at around 150-180 MiB/s and
  the system becomes somewhat unresponsive. Reads are also degraded but I
  have not benchmarked them. Booting the system and launching apps seems
  about normal due to the low IO load.

  ProblemType: Bug
  DistroRelease: Ubuntu 22.04
  Package: linux-image-5.15.0-37-generic 5.15.0-37.39
  ProcVersionSignature: Ubuntu 5.15.0-37.39-generic 5.15.35
  Uname: Linux 5.15.0-37-generic x86_64
  NonfreeKernelModules: zfs zunicode zcommon znvpair zavl icp
  ApportVersion: 2.20.11-0ubuntu82.1
  Architecture: amd64
  AudioDevicesInUse:
   USERPID ACCESS COMMAND
   /dev/snd/controlC0:  laperlej   4100 F pulseaudio
  CRDA: N/A
  CasperMD5CheckResult: pass
  CurrentDesktop: ubuntu:GNOME
  Date: Fri Jun 10 15:52:12 2022
  HibernationDevice: RESUME=none
  InstallationDate: Installed on 2022-05-10 (31 days ago)
  InstallationMedia: Ubuntu 22.04 LTS "Jammy Jellyfish" - Release amd64 
(20220419)
  MachineType: HP HP EliteBook 850 G8 Notebook PC
  ProcFB: 0 i915drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/BOOT/ubuntu_rgwvzq@/vmlinuz-5.15.0-37-generic 
root=ZFS=rpool/ROOT/ubuntu_rgwvzq ro quiet splash vt.handoff=1
  RelatedPackageVersions:
   linux-restricted-modules-5.15.0-37-generic N/A
   linux-backports-modules-5.15.0-37-generic  N/A
   linux-firmware 20220329.git681281e4-0ubuntu3.2
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 01/11/2022
  dmi.bios.release: 8.0
  dmi.bios.vendor: HP
  dmi.bios.version: T76 Ver. 01.08.00
  dmi.board.name: 8846
  dmi.board.vendor: HP
  dmi.board.version: KBC Version 30.37.00
  dmi.chassis.type: 10
  dmi.chassis.vendor: HP
  dmi.ec.firmware.release: 48.55
  dmi.modalias: 
dmi:bvnHP:bvrT76Ver.01.08.00:bd01/11/2022:br8.0:efr48.55:svnHP:pnHPEliteBook850G8NotebookPC:pvr:rvnHP:rn8846:rvrKBCVersion30.37.00:cvnHP:ct10:cvr:sku4V1S3UP#ABL:
  dmi.product.family: 103C_5336AN HP EliteBook
  dmi.product.name: HP EliteBook 850 G8 Notebook PC
  dmi.product.sku: 4V1S3UP#ABL
  dmi.sys.vendor: HP

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1978347/+subscriptions




[Kernel-packages] [Bug 1969482] Re: zfs-2.1.4+ sru

2022-07-14 Thread Dan Podeanu
Got it, thank you. It sounds like testing encrypted zfs performance
could perhaps be added to the validation suite for kernels.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1969482

Title:
  zfs-2.1.4+ sru

Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Jammy:
  Fix Released



[Kernel-packages] [Bug 1969482] Re: zfs-2.1.4+ sru

2022-07-13 Thread Dan Podeanu
@dmitri

Thank you! Quick question: the Jammy kernel version was just bumped, but
it still does not include the 2.1.4 zfs module:

# uname -a
Linux arpa 5.15.0-41-generic #44-Ubuntu SMP Wed Jun 22 14:20:53 UTC 2022 x86_64 
x86_64 x86_64 GNU/Linux

# zfs --version
zfs-2.1.4-0ubuntu0.1
zfs-kmod-2.1.2-1ubuntu3
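
One way to see exactly which module version the running kernel loaded (a
sketch; this assumes the in-tree zfs.ko from the Ubuntu kernel packages):

# modinfo zfs | grep -iE '^(version|vermagic)'
# dpkg -S "$(modinfo -n zfs)"   # map the loaded .ko back to its package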

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1969482

Title:
  zfs-2.1.4+ sru

Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Jammy:
  Fix Released



[Kernel-packages] [Bug 1978347] Re: horrible IO degradation with encrypted zfs root on kernels past 5.15.0-27

2022-06-14 Thread Dan Podeanu
** Also affects: zfs-linux (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1978347

Title:
  horrible IO degradation with encrypted zfs root on kernels past
  5.15.0-27

Status in linux package in Ubuntu:
  Confirmed
Status in zfs-linux package in Ubuntu:
  New



[Kernel-packages] [Bug 1978347] Re: horrible IO degradation with encrypted zfs root on kernels past 5.15.0-27

2022-06-14 Thread Dan Podeanu
For more context, openssl using aes-256-cbc (not aes-256-gcm, which
"openssl speed" does not support without -evp) appears to behave
identically between the two kernels, and faster than with aes-ni
disabled; therefore aes-ni appears to be enabled in both.

# uname -a
Linux aero 5.15.0-37-generic #39-Ubuntu SMP Wed Jun 1 19:16:45 UTC 2022 x86_64 
x86_64 x86_64 GNU/Linux

# openssl speed aes-256-cbc
Doing aes-256-cbc for 3s on 16 size blocks: 186683511 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 64 size blocks: 50188595 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 256 size blocks: 12770138 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 1024 size blocks: 3204571 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 8192 size blocks: 400190 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 16384 size blocks: 199936 aes-256-cbc's in 3.00s
version: 3.0.2
built on: Thu May  5 08:04:52 2022 UTC
options: bn(64,64)
compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -Wa,--noexecstack -g 
-O2 -ffile-prefix-map=/build/openssl-Ke3YUO/openssl-3.0.2=. -flto=auto 
-ffat-lto-objects -flto=auto -ffat-lto-objects -fstack-protector-strong 
-Wformat -Werror=format-security -DOPENSSL_TLS_SECURITY_LEVEL=2 
-DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_BUILDING_OPENSSL 
-DNDEBUG -Wdate-time -D_FORTIFY_SOURCE=2
CPUINFO: OPENSSL_ia32cap=0x7ffef3eb:0x818d39ef7eb
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
aes-256-cbc    995645.39k  1070690.03k  1089718.44k  1093826.90k  1092785.49k  1091917.14k

# uname -a
Linux aero 5.15.0-27-generic #28-Ubuntu SMP Thu Apr 14 04:55:28 UTC 2022 x86_64 
x86_64 x86_64 GNU/Linux

# openssl speed aes-256-cbc
Doing aes-256-cbc for 3s on 16 size blocks: 186867669 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 64 size blocks: 50246758 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 256 size blocks: 12765344 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 1024 size blocks: 3204376 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 8192 size blocks: 399400 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 16384 size blocks: 200109 aes-256-cbc's in 3.00s
version: 3.0.2
built on: Thu May  5 08:04:52 2022 UTC
options: bn(64,64)
compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -Wa,--noexecstack -g 
-O2 -ffile-prefix-map=/build/openssl-Ke3YUO/openssl-3.0.2=. -flto=auto 
-ffat-lto-objects -flto=auto -ffat-lto-objects -fstack-protector-strong 
-Wformat -Werror=format-security -DOPENSSL_TLS_SECURITY_LEVEL=2 
-DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_BUILDING_OPENSSL 
-DNDEBUG -Wdate-time -D_FORTIFY_SOURCE=2
CPUINFO: OPENSSL_ia32cap=0x7ffef3eb:0x818d39ef7eb
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
aes-256-cbc    996627.57k  1071930.84k  1089309.35k  1093760.34k  1090628.27k  1092861.95k

# OPENSSL_ia32cap= openssl speed aes-256-cbc
Doing aes-256-cbc for 3s on 16 size blocks: 20600986 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 64 size blocks: 5866219 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 256 size blocks: 1517023 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 1024 size blocks: 885706 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 8192 size blocks: 112410 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 16384 size blocks: 56135 aes-256-cbc's in 3.00s
version: 3.0.2
built on: Thu May  5 08:04:52 2022 UTC
options: bn(64,64)
compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -Wa,--noexecstack -g 
-O2 -ffile-prefix-map=/build/openssl-Ke3YUO/openssl-3.0.2=. -flto=auto 
-ffat-lto-objects -flto=auto -ffat-lto-objects -fstack-protector-strong 
-Wformat -Werror=format-security -DOPENSSL_TLS_SECURITY_LEVEL=2 
-DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_BUILDING_OPENSSL 
-DNDEBUG -Wdate-time -D_FORTIFY_SOURCE=2
CPUINFO: OPENSSL_ia32cap=0x400:0x0 env:
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
aes-256-cbc    109871.93k   125146.01k   129452.63k   302320.98k   306954.24k   306571.95k

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1978347

Title:
  horrible IO degradation with encrypted zfs root on kernels past
  5.15.0-27

Status in linux package in Ubuntu:
  Confirmed


[Kernel-packages] [Bug 1978347] Re: horrible IO degradation with encrypted zfs root on kernels past 5.15.0-27

2022-06-14 Thread Dan Podeanu
Test script "zfs-test.sh"

- "zfs-test.sh enc" creates a 2x8 GB ZFS mirror pool, backed by two files in 
tmpfs, followed by a ZFS encrypted filesystem using the default aes-256-gcm
- "zfs-test.sh" creates a 2x8 GB ZFS mirror pool, backed by two files in tmpfs, 
followed by a ZFS unencrypted filesystem

** Attachment added: "zfs-test.sh"
   
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1978347/+attachment/5597249/+files/zfs-test.sh
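
The attachment itself is not reproduced above, so here is a rough sketch
of such a script, under the assumptions in the description (pool name,
passphrase, sizes and mountpoint are invented; run as root; destroys a
pool named "testpool"):

#!/bin/sh
# zfs-test.sh [enc] -- reconstruction sketch, not the attached original
set -e
mkdir -p /tmp/zpool /tmp/mount
mount -t tmpfs -o size=17G tmpfs /tmp/zpool        # RAM-backed store
truncate -s 8G /tmp/zpool/vdev1 /tmp/zpool/vdev2   # two 8 GB file vdevs
zpool create -f testpool mirror /tmp/zpool/vdev1 /tmp/zpool/vdev2
if [ "$1" = "enc" ]; then
    # encryption=on defaults to aes-256-gcm; passphrase read from stdin
    echo "testpassphrase" | zfs create -o encryption=on \
        -o keyformat=passphrase -o compression=off \
        -o mountpoint=/tmp/mount testpool/fs
else
    zfs create -o compression=off -o mountpoint=/tmp/mount testpool/fs
fi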

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1978347

Title:
  horrible IO degradation with encrypted zfs root on kernels past
  5.15.0-27

Status in linux package in Ubuntu:
  Confirmed



[Kernel-packages] [Bug 1978347] Re: horrible IO degradation with encrypted zfs root on kernels past 5.15.0-27

2022-06-14 Thread Dan Podeanu
Output from "apport-cli --save 5.15.0-37-generic.apport -p linux --file-
bug"

** Attachment added: "5.15.0-37-generic.apport"
   
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1978347/+attachment/5597248/+files/5.15.0-37-generic.apport

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1978347

Title:
  horrible IO degradation with encrypted zfs root on kernels past
  5.15.0-27

Status in linux package in Ubuntu:
  Confirmed



[Kernel-packages] [Bug 1978347] Re: horrible IO degradation with encrypted zfs root on kernels past 5.15.0-27

2022-06-14 Thread Dan Podeanu
I can confirm 100% repro of this bug, on several systems.

Data for a Xeon Silver 4215R on Supermicro X11SPi-TF. The only change
between 5.15.0-37-generic and 5.15.0-27-generic is booting the same
machine with a different kernel.

Write to encrypted ramdisk:
- 5.15.0-37-generic: 186 MB/s
- 5.15.0-27-generic: 1.2 GB/s (6.5x faster)

Read from encrypted RAIDz1 on 8 x ST16000NM000J, arc cache cold:
- 5.15.0-37-generic: 61.1 MB/s
- 5.15.0-27-generic: 490 MB/s (8x faster)

It only appears to affect ZFS when encryption is enabled:

Write to unencrypted ramdisk:
- 5.15.0-37-generic: 1.6 GB/s
- 5.15.0-27-generic: 1.6 GB/s (identical)

I am getting very similar data on a Xeon E3-1240 v6 / Supermicro X11SSM-F
and an EPYC 7282 / Supermicro H12SSL-i.

I am attaching a test script and output from "apport-cli --save 5.15.0-37-generic.apport -p linux --file-bug"


Methodology below:

# uname -a
Linux aero 5.15.0-37-generic #39-Ubuntu SMP Wed Jun 1 19:16:45 UTC 2022 x86_64 
x86_64 x86_64 GNU/Linux

# Create encrypted ZFS on ramdisk using the attached "zfs-test.sh enc"

Write speed to encrypted ramdisk:

# dd if=/dev/zero of=/tmp/mount/zero bs=1M
dd: error writing '/tmp/mount/zero': No space left on device
7432+0 records in
7431+0 records out
7792885760 bytes (7.8 GB, 7.3 GiB) copied, 41.9707 s, 186 MB/s


# Create unencrypted ZFS on ramdisk using the attached "zfs-test.sh"

Write speed to unencrypted ramdisk:

# dd if=/dev/zero of=/tmp/mount/zero bs=1M
dd: error writing '/tmp/mount/zero': No space left on device
7439+0 records in
7438+0 records out
7799308288 bytes (7.8 GB, 7.3 GiB) copied, 4.93472 s, 1.6 GB/s


Read speed from encrypted RAIDz1 on 8 x ST16000NM000J, arc cache cold:

# dd if=/storage/kits/ubuntu-20.04.1-desktop-amd64.iso of=/dev/null bs=1M
2656+0 records in
2656+0 records out
2785017856 bytes (2.8 GB, 2.6 GiB) copied, 45.6045 s, 61.1 MB/s


# uname -a
Linux aero 5.15.0-27-generic #28-Ubuntu SMP Thu Apr 14 04:55:28 UTC 2022 x86_64 
x86_64 x86_64 GNU/Linux

# Create encrypted ZFS on ramdisk using the attached "zfs-test.sh enc"

Write speed to encrypted ramdisk:

# dd if=/dev/zero of=/tmp/mount/zero bs=1M
dd: error writing '/tmp/mount/zero': No space left on device
7433+0 records in
7432+0 records out
7793016832 bytes (7.8 GB, 7.3 GiB) copied, 6.28478 s, 1.2 GB/s


# Create unencrypted ZFS on ramdisk using the attached "zfs-test.sh"

Write speed to unencrypted ramdisk:

# dd if=/dev/zero of=/tmp/mount/zero bs=1M
dd: error writing '/tmp/mount/zero': No space left on device
7439+0 records in
7438+0 records out
7799308288 bytes (7.8 GB, 7.3 GiB) copied, 4.76863 s, 1.6 GB/s

Read speed from encrypted RAIDz1 on 8 x ST16000NM000J, arc cache cold:

# dd if=/storage/kits/ubuntu-20.04.1-desktop-amd64.iso of=/dev/null bs=1M
2656+0 records in
2656+0 records out
2785017856 bytes (2.8 GB, 2.6 GiB) copied, 5.68203 s, 490 MB/s

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1978347

Title:
  horrible IO degradation with encrypted zfs root on kernels past
  5.15.0-27

Status in linux package in Ubuntu:
  Confirmed
