Public bug reported:

I have two systems currently using native ZFS encryption: a laptop and a
server.

The laptop has run ZFS for quite some time (over a year) with a natively
encrypted pool, originally configured as a stripe, and had been working
fine. Recently it began reporting pool errors. I converted the stripe to
a mirror, and zpool scrub found and repaired some errors. One day I came
out to find the system unresponsive and unable to boot. While restoring
the system from backups via a live USB, errors were reported in a system
file that had not been reported previously.

The laptop (UbuntuZFS pool) was running Plucky, previously upgraded from
Oriole, and was up to date with updates.

~$ sudo zpool status -v
  pool: UbuntuZFS
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Aug 10 07:57:22 2025
1.86T / 1.86T scanned, 1.22T / 1.86T issued at 169M/s
1.24T resilvered, 65.65% done, 01:06:04 to go
remove: Removal of vdev 1 copied 970G in 0h48m, completed on Sun Aug 10 07:56:59 2025
25.3M memory used for removed device mappings
config:

NAME                                 STATE     READ WRITE CKSUM
UbuntuZFS                            ONLINE       0     0     0
  mirror-2                           ONLINE       0     0     0
    nvme1n1p4                        ONLINE       0     0    16  (resilvering)
    nvme-CT4000P3PSSD8_2240E671E0D3  ONLINE       0     0    16  (resilvering)

errors: Permanent errors have been detected in the following files:

        <metadata>:<0x30f>
        <metadata>:<0x394>


ZFS has finished a resilver:

   eid: 1093
class: resilver_finish
  host: fafnir
  time: 2025-08-10 11:51:46-0400
  pool: UbuntuZFS
state: ONLINE
status: One or more devices has experienced an error resulting in data
    corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
    entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/m....
  scan: resilvered 1.89T in 03:54:24 with 15 errors on Sun Aug 10 11:51:46 2025
remove: Removal of vdev 1 copied 970G in 0h48m, completed on Sun Aug 10 07:56:59 2025
    25.3M memory used for removed device mappings
config:

    NAME                                 STATE     READ WRITE CKSUM
    UbuntuZFS                            ONLINE       0     0     0
      mirror-2                           ONLINE       0     0     0
        nvme1n1p4                        ONLINE       0     0    97
        nvme-CT4000P3PSSD8_2240E671E0D3  ONLINE       0     0    97

errors: 15 data errors, use '-v' for a list

root@fafnir:~# zpool status -v
  pool: UbuntuZFS
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/m....
  scan: resilvered 1.89T in 03:54:24 with 15 errors on Sun Aug 10 11:51:46 2025
remove: Removal of vdev 1 copied 970G in 0h48m, completed on Sun Aug 10 07:56:59 2025
25.3M memory used for removed device mappings
config:

NAME                                 STATE     READ WRITE CKSUM
UbuntuZFS                            ONLINE       0     0     0
  mirror-2                           ONLINE       0     0     0
    nvme1n1p4                        ONLINE       0     0    97
    nvme-CT4000P3PSSD8_2240E671E0D3  ONLINE       0     0    97

errors: Permanent errors have been detected in the following files:

        <metadata>:<0x205>
        <metadata>:<0x30a>
        <metadata>:<0x40c>
        <metadata>:<0x30f>
        <metadata>:<0x12f>
        <metadata>:<0x14d>
        <metadata>:<0x534f>
        <metadata>:<0x150>
        <metadata>:<0x284>
        <metadata>:<0x386>
        <metadata>:<0x394>
        <metadata>:<0xa2>
        <metadata>:<0x1a2>
        <metadata>:<0xfd>

root@fafnir:~# zpool scrub -e UbuntuZFS 
root@fafnir:~# zpool status
  pool: UbuntuZFS
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/m....
 scrub: scrubbed 0 error blocks in 0 days 00:00:00 on Sun Aug 10 17:04:02 2025
  scan: resilvered 1.89T in 03:54:24 with 15 errors on Sun Aug 10 11:51:46 2025
remove: Removal of vdev 1 copied 970G in 0h48m, completed on Sun Aug 10 07:56:59 2025
25.3M memory used for removed device mappings
config:

NAME                                 STATE     READ WRITE CKSUM
UbuntuZFS                            ONLINE       0     0     0
  mirror-2                           ONLINE       0     0     0
    nvme1n1p4                        ONLINE       0     0    97
    nvme-CT4000P3PSSD8_2240E671E0D3  ONLINE       0     0    97

errors: No known data errors
root@fafnir:~# zpool clear UbuntuZFS 
root@fafnir:~# zpool status
  pool: UbuntuZFS
 state: ONLINE
 scrub: scrubbed 0 error blocks in 0 days 00:00:00 on Sun Aug 10 17:04:02 2025
  scan: resilvered 1.89T in 03:54:24 with 15 errors on Sun Aug 10 11:51:46 2025
remove: Removal of vdev 1 copied 970G in 0h48m, completed on Sun Aug 10 07:56:59 2025
25.3M memory used for removed device mappings
config:

NAME                                 STATE     READ WRITE CKSUM
UbuntuZFS                            ONLINE       0     0     0
  mirror-2                           ONLINE       0     0     0
    nvme1n1p4                        ONLINE       0     0     0
    nvme-CT4000P3PSSD8_2240E671E0D3  ONLINE       0     0     0

errors: No known data errors


root@fafnir:~# zpool status
  pool: UbuntuZFS
 state: ONLINE
  scan: scrub repaired 0B in 01:39:22 with 0 errors on Sun Aug 10 18:43:49 2025
remove: Removal of vdev 1 copied 970G in 0h48m, completed on Sun Aug 10 07:56:59 2025
25.3M memory used for removed device mappings
config:

NAME                                 STATE     READ WRITE CKSUM
UbuntuZFS                            ONLINE       0     0     0
  mirror-2                           ONLINE       0     0     0
    nvme1n1p4                        ONLINE       0     0     0
    nvme-CT4000P3PSSD8_2240E671E0D3  ONLINE       0     0     0

errors: No known data errors
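
The recovery sequence above (error scrub with `zpool scrub -e`, then `zpool clear`, then a verifying full scrub) can be scripted. A minimal sketch; the pool commands are commented out because they need the live pool, so the health check here runs against captured status output:

```shell
# Decide pool health from captured 'zpool status' output.
check_errors() {
    # Succeeds when the summary line reports no known data errors.
    grep -q '^errors: No known data errors' "$1"
}

# On the live system these would be the real commands:
# zpool scrub -e UbuntuZFS            # rescan only the known error blocks
# zpool clear UbuntuZFS               # reset READ/WRITE/CKSUM counters
# zpool status -v UbuntuZFS > status.txt

# Sample captured output, matching the healthy case above:
cat > status.txt <<'EOF'
errors: No known data errors
EOF

if check_errors status.txt; then
    echo "pool reports clean"
else
    echo "errors remain"
fi
```

With the sample file this prints `pool reports clean`.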


Eight days later, the system quit. Importing the pool from the live USB
reported the following:

root@ubuntu:/mnt# zpool status -v
  pool: UbuntuZFS
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/m....
  scan: scrub canceled on Mon Aug 18 03:59:03 2025
remove: Removal of vdev 1 copied 970G in 0h48m, completed on Sun Aug 10 11:56:59 2025
25.3M memory used for removed device mappings
config:

NAME                                 STATE     READ WRITE CKSUM
UbuntuZFS                            ONLINE       0     0     0
  mirror-2                           ONLINE       0     0     0
    nvme0n1p4                        ONLINE       0     0    10
    nvme-CT4000P3PSSD8_2240E671E0D3  ONLINE       0     0    10

errors: Permanent errors have been detected in the following files:

        UbuntuZFS/root@autosnap_2025-08-15_16:00:05_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-01_02:30:01_monthly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_11:00:02_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        /mnt/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_20:00:01_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-12_00:00:07_daily:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_22:00:15_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-17_03:00:00_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_07:00:15_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-17_00:00:15_daily:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-01_11:45:02_monthly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-07_00:00:01_daily:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@syncoid_fafnir_2025-08-16:01:39:51-GMT-04:00:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-04_00:00:00_daily:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_02:00:13_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-09_00:00:04_daily:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-17_00:00:15_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_04:00:01_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_13:00:13_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-06_00:00:01_daily:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_18:00:15_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-15_00:00:07_daily:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-15_17:00:06_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_23:00:03_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_01:00:03_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-08_00:37:43_daily:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_05:00:02_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_06:00:06_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-05_00:00:06_daily:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_09:00:15_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-11_00:00:15_daily:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_19:00:15_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-13_00:00:05_daily:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_14:00:02_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-17_01:00:15_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-17_02:00:05_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-03_00:00:01_daily:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_17:00:15_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_15:00:02_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_21:00:07_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-10_00:00:01_daily:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-15_18:00:00_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_00:00:01_daily:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_12:00:08_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-15_21:00:13_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-15_22:00:05_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_08:00:10_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_10:00:01_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-15_23:00:01_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-15_20:00:02_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_00:00:01_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_16:00:08_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-15_19:00:06_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-14_00:00:07_daily:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
        UbuntuZFS/root@autosnap_2025-08-16_03:00:11_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73


At this point I wiped and reinstalled the system. 

Regarding the server: it was built recently, on 25 July 2025, with 24.04
LTS. It is in quasi-production, as I'm migrating services to it. Its
system pool began reporting errors recently.

root@nidhoggur:~# zfs -V
zfs-2.2.2-0ubuntu9.4
zfs-kmod-2.2.2-0ubuntu9.2


 pool: HIWRITE
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/m....
  scan: scrub in progress since Fri Aug 15 11:41:35 2025
852G / 1.31T scanned at 870M/s, 322G / 1.31T issued at 328M/s
0B repaired, 23.89% done, 00:53:15 to go
config:

NAME                                                   STATE     READ WRITE CKSUM
HIWRITE                                                ONLINE       0     0     0
  mirror-0                                             ONLINE       0     0     0
    ata-KINGSTON_SEDC600M1920G_50026B7686B10211-part3  ONLINE       0     0     0
    ata-KINGSTON_SEDC600M1920G_50026B7686B103A2-part3  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        HIWRITE/var/vmail:<0x1>
        HIWRITE/root@autosnap_2025-08-15_12:32:06_monthly:<0x1>
        HIWRITE/var/vmail@autosnap_2025-08-15_12:32:06_monthly:<0x1>
        HIWRITE/var/vmail@syncoid_nidhoggur_2025-08-15:11:37:26-GMT-04:00:<0x1>
        HIWRITE/root@syncoid_nidhoggur_2025-08-15:11:37:16-GMT-04:00:<0x1>
        HIWRITE/root@syncoid_nidhoggur_2025-08-15:10:20:28-GMT-04:00:<0x1>
        HIWRITE/root@autosnap_2025-08-15_12:32:06_daily:<0x1>
        HIWRITE/var/vmail@autosnap_2025-08-15_12:32:06_daily:<0x1>
        HIWRITE/root@autosnap_2025-08-15_15:00:26_hourly:<0x1>
        HIWRITE/var/vmail@autosnap_2025-08-15_15:00:26_hourly:<0x1>
        HIWRITE/root:<0x1>
        HIWRITE/root@syncoid_nidhoggur_2025-08-15:08:26:19-GMT-04:00:<0x1>
        HIWRITE/root@syncoid_nidhoggur_2025-08-15:11:37:23-GMT-04:00:<0x1>
        HIWRITE/root@syncoid_nidhoggur_2025-08-15:10:24:56-GMT-04:00:<0x1>
        HIWRITE/root@autosnap_2025-08-15_14:00:27_hourly:<0x1>
        HIWRITE/var/vmail@autosnap_2025-08-15_14:00:27_hourly:<0x1>
        HIWRITE/root@autosnap_2025-08-15_13:00:29_hourly:<0x1>
        HIWRITE/var/vmail@autosnap_2025-08-15_13:00:29_hourly:<0x1>
        HIWRITE/root@autosnap_2025-08-15_12:32:06_hourly:<0x1>
        HIWRITE/var/vmail@autosnap_2025-08-15_12:32:06_hourly:<0x1>
        HIWRITE/root@syncoid_nidhoggur_2025-08-15:10:27:19-GMT-04:00:<0x1>


After deleting the affected snapshots, I was able to get the pool to report clean:

root@nidhoggur:/mnt/nest/storage# zpool status -v HIWRITE
  pool: HIWRITE
 state: ONLINE
 scrub: scrubbed 81 error blocks in 0 days 00:00:00 on Fri Aug 15 15:14:49 2025
config:

NAME                                                   STATE     READ WRITE CKSUM
HIWRITE                                                ONLINE       0     0     0
  mirror-0                                             ONLINE       0     0     0
    ata-KINGSTON_SEDC600M1920G_50026B7686B10211-part3  ONLINE       0     0     0
    ata-KINGSTON_SEDC600M1920G_50026B7686B103A2-part3  ONLINE       0     0     0

errors: No known data errors
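
For reference, the snapshots I destroyed were the ones named in the permanent-errors list. A sketch of how they can be pulled out of captured `zpool status -v` output; the `zfs destroy` itself is commented out since it is destructive, and the sample names are taken from the listing above:

```shell
# Captured 'zpool status -v' error listing (sample, from the output above).
cat > status.txt <<'EOF'
errors: Permanent errors have been detected in the following files:

        HIWRITE/var/vmail:<0x1>
        HIWRITE/root@autosnap_2025-08-15_12:32:06_daily:<0x1>
        HIWRITE/root@autosnap_2025-08-15_15:00:26_hourly:<0x1>
EOF

# Keep only snapshot entries (they contain '@') and strip the trailing
# ':<0xNN>' object id; dataset-level entries like HIWRITE/var/vmail are skipped.
snaps=$(sed -n 's/^[[:space:]]*\([^[:space:]]*@[^[:space:]]*\):<0x[0-9a-f]*>$/\1/p' status.txt)
printf '%s\n' "$snaps"

# for s in $snaps; do zfs destroy "$s"; done   # destructive: run with care
```

Since only the trailing `:<0x..>` is stripped, the syncoid snapshot names that themselves contain colons are still handled correctly.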

The phantom errors then came back:

root@nidhoggur:~# zpool status -v
  pool: HIWRITE
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/m....
  scan: scrub repaired 0B in 01:06:12 with 1 errors on Sun Aug 17 00:36:56 2025
config:

NAME                                                   STATE     READ WRITE CKSUM
HIWRITE                                                ONLINE       0     0     0
  mirror-0                                             ONLINE       0     0     0
    ata-KINGSTON_SEDC600M1920G_50026B7686B10211-part3  ONLINE       0     0     0
    ata-KINGSTON_SEDC600M1920G_50026B7686B103A2-part3  ONLINE       0     0     0
    ata-KINGSTON_SEDC600M1920G_50026B7686E895B4-part3  ONLINE       0     0     2

errors: Permanent errors have been detected in the following files:


<no errors reported>


These errors prevent Syncoid from backing up the pool. Other pools are
currently backing up just fine, as are remote systems backing up to an
ARCHIVE pool on the same host (nidhoggur).

Sending HIWRITE snaps. Starting at 2025-08-17-03-53...
Sending incremental HIWRITE@syncoid_nidhoggur_2025-08-17:03:52:01-GMT-04:00 ... 
syncoid_nidhoggur_2025-08-17:03:53:01-GMT-04:00 (~ 4 KB):
Resuming interrupted zfs send/receive from HIWRITE/home to 
NEST-ARCHIVE-ZFS/HIWRITE3/home (~ 41 KB remaining):
warning: cannot send 
'HIWRITE/home@syncoid_nidhoggur_2025-08-17:03:04:11-GMT-04:00': Input/output 
error
cannot receive resume stream: checksum mismatch or incomplete stream.
Partially received snapshot is saved.
A resuming stream can be generated on the sending system by running:
    zfs send -t 
1-11ae4cac69-110-789c636064000310a501c49c50360710a715e5e7a69766a6304081286b8be299b5ab4215806c762475f94959a9c925103e0860c8a7a515a79630c001489e0d493ea9b224b59801551e597f493ec4150f936d249ebd573b158124cf0996cf4bcc4d6560f0f00c0ff20c71d5cfc8cf4d7528aecc4bcecf4c89cfcb4cc9c84f4f2f2d8a37323032d535b0d03534b73230b63230b13234d475f70dd105b20c0c60760300f3ec2a48
CRITICAL ERROR:  zfs send  -t 
1-11ae4cac69-110-789c636064000310a501c49c50360710a715e5e7a69766a6304081286b8be299b5ab4215806c762475f94959a9c925103e0860c8a7a515a79630c001489e0d493ea9b224b59801551e597f493ec4150f936d249ebd573b158124cf0996cf4bcc4d6560f0f00c0ff20c71d5cfc8cf4d7528aecc4bcecf4c89cfcb4cc9c84f4f2f2d8a37323032d535b0d03534b73230b63230b13234d475f70dd105b20c0c60760300f3ec2a48
 | mbuffer  -q -s 128k -m 16M | pv -p -t -e -r -b -s 42608 |  zfs receive  -s 
-F 'NEST-ARCHIVE-ZFS/HIWRITE3/home' 2>&1 failed: 256 at /usr/sbin/syncoid line 
637.
Sending incremental 
HIWRITE/root@syncoid_nidhoggur_2025-08-16:03:06:04-GMT-04:00 ... 
syncoid_nidhoggur_2025-08-17:03:53:20-GMT-04:00 (~ 1.9 GB):
warning: cannot send 
'HIWRITE/root@syncoid_nidhoggur_2025-08-16:03:07:04-GMT-04:00': Input/output 
error
cannot receive incremental stream: most recent snapshot of 
NEST-ARCHIVE-ZFS/HIWRITE3/root does not
match incremental source
mbuffer: error: outputThread: error writing to <stdout> at offset 0x40000: 
Broken pipe
mbuffer: warning: error during output to <stdout>: Broken pipe
CRITICAL ERROR:  zfs send  -I 
'HIWRITE/root'@'syncoid_nidhoggur_2025-08-16:03:06:04-GMT-04:00' 
'HIWRITE/root'@'syncoid_nidhoggur_2025-08-17:03:53:20-GMT-04:00' | mbuffer  -q 
-s 128k -m 16M | pv -p -t -e -r -b -s 2066744904 |  zfs receive  -s -F 
'NEST-ARCHIVE-ZFS/HIWRITE3/root' 2>&1 failed: 256 at /usr/sbin/syncoid line 889.
Sending incremental HIWRITE/var@syncoid_nidhoggur_2025-08-17:03:52:21-GMT-04:00 
... syncoid_nidhoggur_2025-08-17:03:53:21-GMT-04:00 (~ 3.8 MB):
Sending incremental 
HIWRITE/var/vmail@syncoid_nidhoggur_2025-08-17:03:52:40-GMT-04:00 ... 
syncoid_nidhoggur_2025-08-17:03:53:41-GMT-04:00 (~ 2.1 MB):
Finished at 2025-08-17-03-54
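
To narrow down which snapshot actually trips the Input/output error, one approach is to walk the snapshot list and attempt a send of each to /dev/null until one fails. A sketch, assuming the list was captured with `zfs list -H -t snapshot -o name`; the send itself is commented out since it needs the live pool:

```shell
# Sample captured snapshot list (names taken from the log above).
cat > snaps.txt <<'EOF'
HIWRITE/root@syncoid_nidhoggur_2025-08-16:03:06:04-GMT-04:00
HIWRITE/root@syncoid_nidhoggur_2025-08-16:03:07:04-GMT-04:00
EOF

while IFS= read -r snap; do
    echo "checking $snap"
    # zfs send "$snap" >/dev/null || { echo "I/O error at $snap"; break; }
done < snaps.txt
```

Destroying the offending snapshot may then let syncoid fall back to an earlier common snapshot for the incremental.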

The laptop uses NVMe drives. The server's HIWRITE pool uses enterprise
SSDs. The other, currently unaffected pools on the server use Seagate
IronWolf drives.

The server is still up, and I'm willing to assist with investigation for
a while. At some point, though, I will take it down and rebuild the
HIWRITE pool.

Server LSB:

root@nidhoggur:/var/log# lsb_release -rd
No LSB modules are available.
Description:    Ubuntu 24.04.3 LTS
Release:        24.04

I don't believe this is a hardware problem, but it is suspicious that it
happened on two systems within a relatively short period, one of which
had been operating fine for over a year.

ProblemType: Bug
DistroRelease: Ubuntu 24.04
Package: zfsutils-linux 2.2.2-0ubuntu9
ProcVersionSignature: Ubuntu 6.8.0-31.31-generic 6.8.1
Uname: Linux 6.8.0-31-generic x86_64
NonfreeKernelModules: nvidia_modeset nvidia zfs
ApportVersion: 2.28.1-0ubuntu2
Architecture: amd64
CasperMD5CheckResult: unknown
CurrentDesktop: ubuntu:GNOME
Date: Tue Aug 19 10:09:59 2025
SourcePackage: zfs-linux
UpgradeStatus: No upgrade log present (probably fresh install)
modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission denied: '/etc/sudoers.d/zfs']

** Affects: zfs-linux (Ubuntu)
     Importance: Undecided
         Status: New


** Tags: amd64 apport-bug noble

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/2120951

Title:
  Possible ZFS encryption bug resulting in pool corruption

Status in zfs-linux package in Ubuntu:
  New

          
UbuntuZFS/root@autosnap_2025-08-16_12:00:08_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
          
UbuntuZFS/root@autosnap_2025-08-15_21:00:13_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
          
UbuntuZFS/root@autosnap_2025-08-15_22:00:05_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
          
UbuntuZFS/root@autosnap_2025-08-16_08:00:10_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
          
UbuntuZFS/root@autosnap_2025-08-16_10:00:01_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
          
UbuntuZFS/root@autosnap_2025-08-15_23:00:01_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
          
UbuntuZFS/root@autosnap_2025-08-15_20:00:02_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
          
UbuntuZFS/root@autosnap_2025-08-16_00:00:01_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
          
UbuntuZFS/root@autosnap_2025-08-16_16:00:08_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
          
UbuntuZFS/root@autosnap_2025-08-15_19:00:06_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
          
UbuntuZFS/root@autosnap_2025-08-14_00:00:07_daily:/usr/lib/x86_64-linux-gnu/libcap.so.2.73
          
UbuntuZFS/root@autosnap_2025-08-16_03:00:11_hourly:/usr/lib/x86_64-linux-gnu/libcap.so.2.73

  
  At this point I wiped and reinstalled the system. 

  Regarding the Server: it was built recently, on 25 July 2025, with
  24.04 LTS. It is in quasi-production, as I'm moving services to it.
  The system pool recently began reporting errors.

  root@nidhoggur:~# zfs -V
  zfs-2.2.2-0ubuntu9.4
  zfs-kmod-2.2.2-0ubuntu9.2

  
   pool: HIWRITE
   state: ONLINE
  status: One or more devices has experienced an error resulting in data
  corruption.  Applications may be affected.
  action: Restore the file in question if possible.  Otherwise restore the
  entire pool from backup.
     see: https://openzfs.github.io/openzfs-docs/m....
    scan: scrub in progress since Fri Aug 15 11:41:35 2025
  852G / 1.31T scanned at 870M/s, 322G / 1.31T issued at 328M/s
  0B repaired, 23.89% done, 00:53:15 to go
  config:

  NAME                                                   STATE     READ WRITE CKSUM
  HIWRITE                                                ONLINE       0     0     0
    mirror-0                                             ONLINE       0     0     0
      ata-KINGSTON_SEDC600M1920G_50026B7686B10211-part3  ONLINE       0     0     0
      ata-KINGSTON_SEDC600M1920G_50026B7686B103A2-part3  ONLINE       0     0     0

  errors: Permanent errors have been detected in the following files:

          HIWRITE/var/vmail:<0x1>
          HIWRITE/root@autosnap_2025-08-15_12:32:06_monthly:<0x1>
          HIWRITE/var/vmail@autosnap_2025-08-15_12:32:06_daily:<0x1>
          HIWRITE/var/vmail@syncoid_nidhoggur_2025-08-15:11:37:26-GMT-04:00:<0x1>
          HIWRITE/root@syncoid_nidhoggur_2025-08-15:11:37:16-GMT-04:00:<0x1>
          HIWRITE/root@syncoid_nidhoggur_2025-08-15:10:20:28-GMT-04:00:<0x1>
          HIWRITE/root@autosnap_2025-08-15_12:32:06_daily:<0x1>
          HIWRITE/var/vmail@autosnap_2025-08-15_12:32:06_daily:<0x1>
          HIWRITE/root@autosnap_2025-08-15_15:00:26_hourly:<0x1>
          HIWRITE/var/vmail@autosnap_2025-08-15_15:00:26_hourly:<0x1>
          HIWRITE/root:<0x1>
          HIWRITE/root@syncoid_nidhoggur_2025-08-15:08:26:19-GMT-04:00:<0x1>
          HIWRITE/root@syncoid_nidhoggur_2025-08-15:11:37:23-GMT-04:00:<0x1>
          HIWRITE/root@syncoid_nidhoggur_2025-08-15:10:24:56-GMT-04:00:<0x1>
          HIWRITE/root@autosnap_2025-08-15_14:00:27_hourly:<0x1>
          HIWRITE/var/vmail@autosnap_2025-08-15_14:00:27_hourly:<0x1>
          HIWRITE/root@autosnap_2025-08-15_13:00:29_hourly:<0x1>
          HIWRITE/var/vmail@autosnap_2025-08-15_13:00:29_hourly:<0x1>
          HIWRITE/root@autosnap_2025-08-15_12:32:06_hourly:<0x1>
          HIWRITE/var/vmail@autosnap_2025-08-15_12:32:06_hourly:<0x1>
          HIWRITE/root@syncoid_nidhoggur_2025-08-15:10:27:19-GMT-04:00:<0x1>

  
  After deleting snapshots, I was able to get the pool to report clear:

  root@nidhoggur:/mnt/nest/storage# zpool status -v HIWRITE
    pool: HIWRITE
   state: ONLINE
   scrub: scrubbed 81 error blocks in 0 days 00:00:00 on Fri Aug 15 15:14:49 2025
  config:

  NAME                                                   STATE     READ WRITE CKSUM
  HIWRITE                                                ONLINE       0     0     0
    mirror-0                                             ONLINE       0     0     0
      ata-KINGSTON_SEDC600M1920G_50026B7686B10211-part3  ONLINE       0     0     0
      ata-KINGSTON_SEDC600M1920G_50026B7686B103A2-part3  ONLINE       0     0     0

  errors: No known data errors
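
  For anyone in the same state, the snapshot cleanup can be scripted by
  parsing the "Permanent errors" list out of `zpool status -v`. A rough
  sketch, assuming the output format shown above; the leading `echo`
  makes it a dry run, and metadata entries such as `<metadata>:<0x30f>`
  contain no `@` so they are skipped automatically:

```shell
#!/bin/sh
# Sketch: collect snapshot names (dataset@snap:<object>) from the
# "Permanent errors" section of `zpool status -v` and destroy them.
# Assumes the output format shown above; remove the leading `echo`
# once the printed list looks right.
zpool status -v HIWRITE |
  awk '/Permanent errors/ {f=1; next} f && /@/ {print $1}' |
  sed 's/:<0x[0-9a-f]*>$//' |
  while read -r snap; do
    echo zfs destroy "$snap"
  done
```

  After the destroys, a `zpool clear` plus a fresh `zpool scrub` is what
  got the pool to report clean here.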

  The phantom errors then came back: permanent errors are reported, but
  no affected files are listed:

  root@nidhoggur:~# zpool status -v
    pool: HIWRITE
   state: ONLINE
  status: One or more devices has experienced an error resulting in data
  corruption.  Applications may be affected.
  action: Restore the file in question if possible.  Otherwise restore the
  entire pool from backup.
     see: https://openzfs.github.io/openzfs-docs/m....
    scan: scrub repaired 0B in 01:06:12 with 1 errors on Sun Aug 17 00:36:56 2025
  config:

  NAME                                                   STATE     READ WRITE CKSUM
  HIWRITE                                                ONLINE       0     0     0
    mirror-0                                             ONLINE       0     0     0
      ata-KINGSTON_SEDC600M1920G_50026B7686B10211-part3  ONLINE       0     0     0
      ata-KINGSTON_SEDC600M1920G_50026B7686B103A2-part3  ONLINE       0     0     0
      ata-KINGSTON_SEDC600M1920G_50026B7686E895B4-part3  ONLINE       0     0     2

  errors: Permanent errors have been detected in the following files:

  <no errors reported>

  
  These errors prevent Syncoid from backing up the pool. Other pools
  currently back up just fine, as do remote systems backing up to an
  ARCHIVE pool on the same system (nidhoggur).

  Sending HIWRITE snaps. Starting at 2025-08-17-03-53...
  Sending incremental HIWRITE@syncoid_nidhoggur_2025-08-17:03:52:01-GMT-04:00 ... syncoid_nidhoggur_2025-08-17:03:53:01-GMT-04:00 (~ 4 KB):
  Resuming interrupted zfs send/receive from HIWRITE/home to NEST-ARCHIVE-ZFS/HIWRITE3/home (~ 41 KB remaining):
  warning: cannot send 'HIWRITE/home@syncoid_nidhoggur_2025-08-17:03:04:11-GMT-04:00': Input/output error
  cannot receive resume stream: checksum mismatch or incomplete stream.
  Partially received snapshot is saved.
  A resuming stream can be generated on the sending system by running:
      zfs send -t 1-11ae4cac69-110-789c636064000310a501c49c50360710a715e5e7a69766a6304081286b8be299b5ab4215806c762475f94959a9c925103e0860c8a7a515a79630c001489e0d493ea9b224b59801551e597f493ec4150f936d249ebd573b158124cf0996cf4bcc4d6560f0f00c0ff20c71d5cfc8cf4d7528aecc4bcecf4c89cfcb4cc9c84f4f2f2d8a37323032d535b0d03534b73230b63230b13234d475f70dd105b20c0c60760300f3ec2a48
  CRITICAL ERROR:  zfs send  -t 1-11ae4cac69-110-789c636064000310a501c49c50360710a715e5e7a69766a6304081286b8be299b5ab4215806c762475f94959a9c925103e0860c8a7a515a79630c001489e0d493ea9b224b59801551e597f493ec4150f936d249ebd573b158124cf0996cf4bcc4d6560f0f00c0ff20c71d5cfc8cf4d7528aecc4bcecf4c89cfcb4cc9c84f4f2f2d8a37323032d535b0d03534b73230b63230b13234d475f70dd105b20c0c60760300f3ec2a48 | mbuffer  -q -s 128k -m 16M | pv -p -t -e -r -b -s 42608 |  zfs receive  -s -F 'NEST-ARCHIVE-ZFS/HIWRITE3/home' 2>&1 failed: 256 at /usr/sbin/syncoid line 637.
  Sending incremental HIWRITE/root@syncoid_nidhoggur_2025-08-16:03:06:04-GMT-04:00 ... syncoid_nidhoggur_2025-08-17:03:53:20-GMT-04:00 (~ 1.9 GB):
  warning: cannot send 'HIWRITE/root@syncoid_nidhoggur_2025-08-16:03:07:04-GMT-04:00': Input/output error
  cannot receive incremental stream: most recent snapshot of NEST-ARCHIVE-ZFS/HIWRITE3/root does not match incremental source
  mbuffer: error: outputThread: error writing to <stdout> at offset 0x40000: Broken pipe
  mbuffer: warning: error during output to <stdout>: Broken pipe
  CRITICAL ERROR:  zfs send  -I 'HIWRITE/root'@'syncoid_nidhoggur_2025-08-16:03:06:04-GMT-04:00' 'HIWRITE/root'@'syncoid_nidhoggur_2025-08-17:03:53:20-GMT-04:00' | mbuffer  -q -s 128k -m 16M | pv -p -t -e -r -b -s 2066744904 |  zfs receive  -s -F 'NEST-ARCHIVE-ZFS/HIWRITE3/root' 2>&1 failed: 256 at /usr/sbin/syncoid line 889.
  Sending incremental HIWRITE/var@syncoid_nidhoggur_2025-08-17:03:52:21-GMT-04:00 ... syncoid_nidhoggur_2025-08-17:03:53:21-GMT-04:00 (~ 3.8 MB):
  Sending incremental HIWRITE/var/vmail@syncoid_nidhoggur_2025-08-17:03:52:40-GMT-04:00 ... syncoid_nidhoggur_2025-08-17:03:53:41-GMT-04:00 (~ 2.1 MB):
  Finished at 2025-08-17-03-54
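
  A side note on the stuck resume state above: `zfs receive -A` is the
  documented way, on the receiving side, to abort and discard a
  partially received resumable stream, which should let a fresh send be
  attempted once the source snapshot is readable again. A sketch; the
  log path is hypothetical, the dataset name is from the log above, and
  the leading `echo` makes the destructive step a dry run:

```shell
#!/bin/sh
# Sketch: pull the resume token out of a saved syncoid log (path is
# hypothetical), then discard the stuck partial receive on the target
# so a fresh send can be retried.
LOG=/var/log/syncoid-hiwrite.log        # hypothetical log location
token=$(grep -o '1-[0-9a-f-]*' "$LOG" | head -n 1)
echo "resume token: $token"
# `zfs receive -A` aborts an interrupted resumable receive and frees
# the saved partial state (remove the echo to actually run it):
echo zfs receive -A NEST-ARCHIVE-ZFS/HIWRITE3/home
```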

  The laptop is using NVMe drives. The server's HIWRITE pool is using
  enterprise SSDs. The other, currently unaffected pools on the server
  are using Seagate IronWolf drives.

  The server is still up, and I'm willing to assist with investigation
  for a while. At some point, though, I will be taking it down and
  reconstructing the HIWRITE pool.

  Server LSB:

  root@nidhoggur:/var/log# lsb_release -rd
  No LSB modules are available.
  Description:  Ubuntu 24.04.3 LTS
  Release:      24.04

  I don't believe this is a hardware problem, but it is suspicious that
  it happened on two systems within a relatively short period, one of
  which had been operating fine for over a year.

  ProblemType: Bug
  DistroRelease: Ubuntu 24.04
  Package: zfsutils-linux 2.2.2-0ubuntu9
  ProcVersionSignature: Ubuntu 6.8.0-31.31-generic 6.8.1
  Uname: Linux 6.8.0-31-generic x86_64
  NonfreeKernelModules: nvidia_modeset nvidia zfs
  ApportVersion: 2.28.1-0ubuntu2
  Architecture: amd64
  CasperMD5CheckResult: unknown
  CurrentDesktop: ubuntu:GNOME
  Date: Tue Aug 19 10:09:59 2025
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/2120951/+subscriptions

