Hello Colin, yes, this is still an open issue:

Linux wopr 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux


Apr 22 19:10:03 wopr zed[12576]: eid=8352 class=history_event pool_guid=0xB3B099B638F02EEF
Apr 22 19:10:03 wopr kernel: VERIFY(size != 0) failed
Apr 22 19:10:03 wopr kernel: PANIC at range_tree.c:304:range_tree_find_impl()
Apr 22 19:10:03 wopr kernel: Showing stack for process 12577
Apr 22 19:10:03 wopr kernel: CPU: 8 PID: 12577 Comm: receive_writer Tainted: P           O     4.15.0-91-generic #92-Ubuntu
Apr 22 19:10:03 wopr kernel: Hardware name: Supermicro SSG-6038R-E1CR16L/X10DRH-iT, BIOS 2.0 12/17/2015
Apr 22 19:10:03 wopr kernel: Call Trace:
Apr 22 19:10:03 wopr kernel:  dump_stack+0x6d/0x8e
Apr 22 19:10:03 wopr kernel:  spl_dumpstack+0x42/0x50 [spl]
Apr 22 19:10:03 wopr kernel:  spl_panic+0xc8/0x110 [spl]
Apr 22 19:10:03 wopr kernel:  ? __switch_to_asm+0x41/0x70
Apr 22 19:10:03 wopr kernel:  ? abd_iter_map+0xa/0x90 [zfs]
Apr 22 19:10:03 wopr kernel:  ? dbuf_dirty+0x43d/0x850 [zfs]
Apr 22 19:10:03 wopr kernel:  ? getrawmonotonic64+0x43/0xd0
Apr 22 19:10:03 wopr kernel:  ? getrawmonotonic64+0x43/0xd0
Apr 22 19:10:03 wopr kernel:  ? dmu_zfetch+0x49a/0x500 [zfs]
Apr 22 19:10:03 wopr kernel:  ? getrawmonotonic64+0x43/0xd0
Apr 22 19:10:03 wopr kernel:  ? dmu_zfetch+0x49a/0x500 [zfs]
Apr 22 19:10:03 wopr kernel:  ? mutex_lock+0x12/0x40
Apr 22 19:10:03 wopr kernel:  ? dbuf_rele_and_unlock+0x1a8/0x4b0 [zfs]
Apr 22 19:10:03 wopr kernel:  range_tree_find_impl+0x88/0x90 [zfs]
Apr 22 19:10:03 wopr kernel:  ? spl_kmem_zalloc+0xdc/0x1a0 [spl]
Apr 22 19:10:03 wopr kernel:  range_tree_clear+0x4f/0x60 [zfs]
Apr 22 19:10:03 wopr kernel:  dnode_free_range+0x11f/0x5a0 [zfs]
Apr 22 19:10:03 wopr kernel:  dmu_object_free+0x53/0x90 [zfs]
Apr 22 19:10:03 wopr kernel:  dmu_free_long_object+0x9f/0xc0 [zfs]
Apr 22 19:10:03 wopr kernel:  receive_freeobjects.isra.12+0x7a/0x100 [zfs]
Apr 22 19:10:03 wopr kernel:  receive_writer_thread+0x6d2/0xa60 [zfs]
Apr 22 19:10:03 wopr kernel:  ? set_curr_task_fair+0x2b/0x60
Apr 22 19:10:03 wopr kernel:  ? spl_kmem_free+0x33/0x40 [spl]
Apr 22 19:10:03 wopr kernel:  ? kfree+0x165/0x180
Apr 22 19:10:03 wopr kernel:  ? receive_free.isra.13+0xc0/0xc0 [zfs]
Apr 22 19:10:03 wopr kernel:  thread_generic_wrapper+0x74/0x90 [spl]
Apr 22 19:10:03 wopr kernel:  kthread+0x121/0x140
Apr 22 19:10:03 wopr kernel:  ? __thread_exit+0x20/0x20 [spl]
Apr 22 19:10:03 wopr kernel:  ? kthread_create_worker_on_cpu+0x70/0x70
Apr 22 19:10:03 wopr kernel:  ret_from_fork+0x35/0x40
Apr 22 19:12:56 wopr kernel: INFO: task txg_quiesce:2265 blocked for more than 120 seconds.
Apr 22 19:12:56 wopr kernel:       Tainted: P           O     4.15.0-91-generic #92-Ubuntu
Apr 22 19:12:56 wopr kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 22 19:12:56 wopr kernel: txg_quiesce     D    0  2265      2 0x80000000
Apr 22 19:12:56 wopr kernel: Call Trace:
Apr 22 19:12:56 wopr kernel:  __schedule+0x24e/0x880
Apr 22 19:12:56 wopr kernel:  schedule+0x2c/0x80
Apr 22 19:12:56 wopr kernel:  cv_wait_common+0x11e/0x140 [spl]
Apr 22 19:12:56 wopr kernel:  ? wait_woken+0x80/0x80
Apr 22 19:12:56 wopr kernel:  __cv_wait+0x15/0x20 [spl]
Apr 22 19:12:56 wopr kernel:  txg_quiesce_thread+0x2cb/0x3d0 [zfs]
Apr 22 19:12:56 wopr kernel:  ? txg_delay+0x1b0/0x1b0 [zfs]
Apr 22 19:12:56 wopr kernel:  thread_generic_wrapper+0x74/0x90 [spl]
Apr 22 19:12:56 wopr kernel:  kthread+0x121/0x140
Apr 22 19:12:56 wopr kernel:  ? __thread_exit+0x20/0x20 [spl]
Apr 22 19:12:56 wopr kernel:  ? kthread_create_worker_on_cpu+0x70/0x70
Apr 22 19:12:56 wopr kernel:  ret_from_fork+0x35/0x40
Apr 22 19:12:56 wopr kernel: INFO: task zfs:12482 blocked for more than 120 seconds.
Apr 22 19:12:56 wopr kernel:       Tainted: P           O     4.15.0-91-generic #92-Ubuntu
Apr 22 19:12:56 wopr kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 22 19:12:56 wopr kernel: zfs             D    0 12482  12479 0x80000080
Apr 22 19:12:56 wopr kernel: Call Trace:
Apr 22 19:12:56 wopr kernel:  __schedule+0x24e/0x880
Apr 22 19:12:56 wopr kernel:  schedule+0x2c/0x80
Apr 22 19:12:56 wopr kernel:  cv_wait_common+0x11e/0x140 [spl]
Apr 22 19:12:56 wopr kernel:  ? wait_woken+0x80/0x80
Apr 22 19:12:56 wopr kernel:  __cv_wait+0x15/0x20 [spl]
Apr 22 19:12:56 wopr kernel:  dmu_recv_stream+0xa51/0xef0 [zfs]
Apr 22 19:12:56 wopr kernel:  zfs_ioc_recv_impl+0x306/0x1100 [zfs]
Apr 22 19:12:56 wopr kernel:  ? dbuf_rele+0x36/0x40 [zfs]
Apr 22 19:12:56 wopr kernel:  zfs_ioc_recv_new+0x33d/0x410 [zfs]
Apr 22 19:12:56 wopr kernel:  ? spl_kmem_alloc_impl+0xe5/0x1a0 [spl]
Apr 22 19:12:56 wopr kernel:  ? spl_vmem_alloc+0x19/0x20 [spl]
Apr 22 19:12:56 wopr kernel:  ? nv_alloc_sleep_spl+0x1f/0x30 [znvpair]
Apr 22 19:12:56 wopr kernel:  ? nv_mem_zalloc.isra.0+0x2e/0x40 [znvpair]
Apr 22 19:12:56 wopr kernel:  ? nvlist_xalloc.part.2+0x50/0xb0 [znvpair]
Apr 22 19:12:56 wopr kernel:  zfsdev_ioctl+0x451/0x610 [zfs]
Apr 22 19:12:56 wopr kernel:  do_vfs_ioctl+0xa8/0x630
Apr 22 19:12:56 wopr kernel:  ? __audit_syscall_entry+0xbc/0x110
Apr 22 19:12:56 wopr kernel:  ? syscall_trace_enter+0x1da/0x2d0
Apr 22 19:12:56 wopr kernel:  SyS_ioctl+0x79/0x90
Apr 22 19:12:56 wopr kernel:  do_syscall_64+0x73/0x130
Apr 22 19:12:56 wopr kernel:  entry_SYSCALL_64_after_hwframe+0x3d/0xa2
Apr 22 19:12:56 wopr kernel: RIP: 0033:0x7f3c5a2d55d7
Apr 22 19:12:56 wopr kernel: RSP: 002b:00007ffcf28d05d8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Apr 22 19:12:56 wopr kernel: RAX: ffffffffffffffda RBX: 0000000000005a46 RCX: 00007f3c5a2d55d7
Apr 22 19:12:56 wopr kernel: RDX: 00007ffcf28d05f0 RSI: 0000000000005a46 RDI: 0000000000000006
Apr 22 19:12:56 wopr kernel: RBP: 00007ffcf28d05f0 R08: 00007f3c5a5aae20 R09: 0000000000000000
Apr 22 19:12:56 wopr kernel: R10: 000055c7fedf4010 R11: 0000000000000246 R12: 00007ffcf28d3c20
Apr 22 19:12:56 wopr kernel: R13: 0000000000000006 R14: 000055c7fedfbf10 R15: 000000000000000c
Apr 22 19:12:56 wopr kernel: INFO: task receive_writer:12577 blocked for more than 120 seconds.
Apr 22 19:12:56 wopr kernel:       Tainted: P           O     4.15.0-91-generic #92-Ubuntu
Apr 22 19:12:56 wopr kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 22 19:12:56 wopr kernel: receive_writer  D    0 12577      2 0x80000080
Apr 22 19:12:56 wopr kernel: Call Trace:
Apr 22 19:12:56 wopr kernel:  __schedule+0x24e/0x880
Apr 22 19:12:56 wopr kernel:  schedule+0x2c/0x80
Apr 22 19:12:56 wopr kernel:  spl_panic+0xfa/0x110 [spl]
Apr 22 19:12:56 wopr kernel:  ? abd_iter_map+0xa/0x90 [zfs]
Apr 22 19:12:56 wopr kernel:  ? dbuf_dirty+0x43d/0x850 [zfs]
Apr 22 19:12:56 wopr kernel:  ? getrawmonotonic64+0x43/0xd0
Apr 22 19:12:56 wopr kernel:  ? getrawmonotonic64+0x43/0xd0
Apr 22 19:12:56 wopr kernel:  ? dmu_zfetch+0x49a/0x500 [zfs]
Apr 22 19:12:56 wopr kernel:  ? getrawmonotonic64+0x43/0xd0
Apr 22 19:12:56 wopr kernel:  ? dmu_zfetch+0x49a/0x500 [zfs]
Apr 22 19:12:56 wopr kernel:  ? mutex_lock+0x12/0x40
Apr 22 19:12:56 wopr kernel:  ? dbuf_rele_and_unlock+0x1a8/0x4b0 [zfs]
Apr 22 19:12:56 wopr kernel:  range_tree_find_impl+0x88/0x90 [zfs]
Apr 22 19:12:56 wopr kernel:  ? spl_kmem_zalloc+0xdc/0x1a0 [spl]
Apr 22 19:12:56 wopr kernel:  range_tree_clear+0x4f/0x60 [zfs]
Apr 22 19:12:56 wopr kernel:  dnode_free_range+0x11f/0x5a0 [zfs]
Apr 22 19:12:56 wopr kernel:  dmu_object_free+0x53/0x90 [zfs]
Apr 22 19:12:56 wopr kernel:  dmu_free_long_object+0x9f/0xc0 [zfs]
Apr 22 19:12:56 wopr kernel:  receive_freeobjects.isra.12+0x7a/0x100 [zfs]
Apr 22 19:12:56 wopr kernel:  receive_writer_thread+0x6d2/0xa60 [zfs]
Apr 22 19:12:56 wopr kernel:  ? set_curr_task_fair+0x2b/0x60
Apr 22 19:12:56 wopr kernel:  ? spl_kmem_free+0x33/0x40 [spl]
Apr 22 19:12:56 wopr kernel:  ? kfree+0x165/0x180
Apr 22 19:12:56 wopr kernel:  ? receive_free.isra.13+0xc0/0xc0 [zfs]
Apr 22 19:12:56 wopr kernel:  thread_generic_wrapper+0x74/0x90 [spl]
Apr 22 19:12:56 wopr kernel:  kthread+0x121/0x140
Apr 22 19:12:56 wopr kernel:  ? __thread_exit+0x20/0x20 [spl]
Apr 22 19:12:56 wopr kernel:  ? kthread_create_worker_on_cpu+0x70/0x70
Apr 22 19:12:56 wopr kernel:  ret_from_fork+0x35/0x40
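
For reference, the assertion that fires is the zero-size sanity check at the top of range_tree_find_impl(). A rough paraphrase of the 0.7/0.8-era module/zfs/range_tree.c (exact line numbers, locking asserts and field names vary between releases) is:

    static range_seg_t *
    range_tree_find_impl(range_tree_t *rt, uint64_t start, uint64_t size)
    {
            range_seg_t rsearch;
            uint64_t end = start + size;

            /* the VERIFY(size != 0) that panics in the trace above */
            VERIFY(size != 0);

            rsearch.rs_start = start;
            rsearch.rs_end = end;
            return (avl_find(&rt->rt_root, &rsearch, NULL));
    }

So the receive_writer thread handling a freeobjects record (receive_freeobjects -> dmu_free_long_object -> dnode_free_range -> range_tree_clear, per the stack above) ends up querying the dnode's range tree with a zero-length range, trips the VERIFY, and the receive and the txg_quiesce/zfs ioctl threads then block behind it.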



And the syncoid output:

# SSH_AUTH_SOCK=/tmp/ssh-EAphgaS9vNJE/agent.3449 SSH_AGENT_PID=3484 syncoid --recursive --skip-parent --create-bookmark --recvoptions="u" rpool syncoid@wopr:srv/backups/millbarge/rpool
Sending incremental rpool/ROOT@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:01:55:31 (~ 26 KB):
27.7KiB 0:00:00 [ 220KiB/s] 103%
Resuming interrupted zfs send/receive from rpool/ROOT/ubuntu to srv/backups/millbarge/rpool/ROOT/ubuntu (~ UNKNOWN remaining):
cannot resume send: 'rpool/ROOT/ubuntu@autosnap_2020-01-25_00:00:01_daily' used in the initial send no longer exists
cannot receive: failed to read from stream
WARN: resetting partially receive state because the snapshot source no longer exists
Sending incremental rpool/ROOT/ubuntu@before_backup ... syncoid_millbarge_2020-04-23:01:55:52 (~ 28.6 GB):
cannot restore to srv/backups/millbarge/rpool/ROOT/ubuntu@autosnap_2020-02-01_00:00:01_monthly: destination already exists
mbuffer: error: outputThread: error writing to <stdout> at offset 0x3b9e0000: Broken pipe
mbuffer: warning: error during output to <stdout>: Broken pipe
1.16GiB 0:00:12 [94.3MiB/s] 4%
CRITICAL ERROR:  zfs send  -I 'rpool/ROOT/ubuntu'@'before_backup' 'rpool/ROOT/ubuntu'@'syncoid_millbarge_2020-04-23:01:55:52' | pv -s 30731359288 | lzop  | mbuffer  -q -s 128k -m 16M 2>/dev/null | ssh     -S /tmp/syncoid-syncoid-syncoid@wopr-1587606930 syncoid@wopr ' mbuffer  -q -s 128k -m 16M 2>/dev/null | lzop -dfc | sudo zfs receive -u  -s -F '"'"'srv/backups/millbarge/rpool/ROOT/ubuntu'"'"' 2>&1' failed: 256 at /usr/sbin/syncoid line 786.
Sending incremental rpool/home@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:01:56:19 (~ 56 KB):
36.1KiB 0:00:00 [ 294KiB/s] 63%
Sending incremental rpool/home/root#'autosnap_2020-01-28_21:30:02_frequently' ... autosnap_2020-02-01_00:00:01_monthly (~ UNKNOWN):
 677KiB 0:00:00 [43.1MiB/s]
Sending incremental rpool/home/root@autosnap_2020-02-01_00:00:01_monthly ... syncoid_millbarge_2020-04-23:01:56:28 (~ 844.4 MB):
 850MiB 0:00:11 [75.7MiB/s] 100%
Sending incremental rpool/home/sarnold#'autosnap_2020-01-28_21:30:02_frequently' ... autosnap_2020-02-01_00:00:01_monthly (~ UNKNOWN):
2.83GiB 0:00:29 [96.8MiB/s]
Sending incremental rpool/home/sarnold@autosnap_2020-02-01_00:00:01_monthly ... syncoid_millbarge_2020-04-23:01:56:56 (~ 45.7 GB):
49.5GiB 0:09:33 [88.3MiB/s] 108%
Sending incremental rpool/swap@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:02:07:05 (~ 184.9 MB):
 193MiB 0:00:00 [ 282MiB/s] 104%
Sending incremental rpool/tmp@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:02:07:24 (~ 13.6 MB):
14.1MiB 0:00:00 [40.0MiB/s] 103%
Sending incremental rpool/usr#'autosnap_2020-01-28_21:30:02_frequently' ... autosnap_2020-02-01_00:00:01_monthly (~ UNKNOWN):
 624 B 0:00:00 [90.9KiB/s]
Sending incremental rpool/usr@autosnap_2020-02-01_00:00:01_monthly ... syncoid_millbarge_2020-04-23:02:07:39 (~ 49 KB):
50.9KiB 0:00:00 [ 190KiB/s] 101%
Sending incremental rpool/usr/local#'autosnap_2020-01-28_21:30:02_frequently' ... autosnap_2020-02-01_00:00:01_monthly (~ UNKNOWN):
6.79MiB 0:00:00 [ 241MiB/s]
Sending incremental rpool/usr/local@autosnap_2020-02-01_00:00:01_monthly ... syncoid_millbarge_2020-04-23:02:07:58 (~ 2.0 MB):
1.60MiB 0:00:00 [4.69MiB/s] 79%
Sending incremental rpool/var@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:02:08:20 (~ 26 KB):
27.7KiB 0:00:00 [ 171KiB/s] 103%
Sending incremental rpool/var/cache@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:02:08:33 (~ 1013.1 MB):
1013MiB 0:00:18 [55.4MiB/s] 100%
Sending incremental rpool/var/lib@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:02:08:54 (~ 26 KB):
27.7KiB 0:00:00 [ 125KiB/s] 103%
Sending incremental rpool/var/lib/AccountsService@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:02:09:06 (~ 56 KB):
36.1KiB 0:00:00 [ 232KiB/s] 63%
Sending incremental rpool/var/lib/docker@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:02:09:18 (~ 54 KB):
34.9KiB 0:00:00 [ 230KiB/s] 64%
INFO: Sending oldest full snapshot rpool/var/lib/lxd@syncoid_millbarge_2020-04-23:02:09:30 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [3.96MiB/s] 105%
INFO: Sending oldest full snapshot rpool/var/lib/lxd/containers@syncoid_millbarge_2020-04-23:02:09:30 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [5.74MiB/s] 105%
INFO: Sending oldest full snapshot rpool/var/lib/lxd/custom@syncoid_millbarge_2020-04-23:02:09:31 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [3.95MiB/s] 105%
INFO: Sending oldest full snapshot rpool/var/lib/lxd/deleted@syncoid_millbarge_2020-04-23:02:09:31 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [4.32MiB/s] 105%
INFO: Sending oldest full snapshot rpool/var/lib/lxd/deleted/containers@syncoid_millbarge_2020-04-23:02:09:31 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [4.37MiB/s] 105%
INFO: Sending oldest full snapshot rpool/var/lib/lxd/deleted/custom@syncoid_millbarge_2020-04-23:02:09:32 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [4.86MiB/s] 105%
INFO: Sending oldest full snapshot rpool/var/lib/lxd/deleted/images@syncoid_millbarge_2020-04-23:02:09:32 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [5.40MiB/s] 105%
INFO: Sending oldest full snapshot rpool/var/lib/lxd/deleted/virtual-machines@syncoid_millbarge_2020-04-23:02:09:33 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [5.02MiB/s] 105%
INFO: Sending oldest full snapshot rpool/var/lib/lxd/images@syncoid_millbarge_2020-04-23:02:09:33 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [5.09MiB/s] 105%
INFO: Sending oldest full snapshot rpool/var/lib/lxd/virtual-machines@syncoid_millbarge_2020-04-23:02:09:33 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [5.25MiB/s] 105%
Sending incremental rpool/var/lib/nfs@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:02:09:34 (~ 54 KB):
34.9KiB 0:00:00 [ 236KiB/s] 64%
Sending incremental rpool/var/lib/schroot@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:02:09:46 (~ 394.5 MB):
 402MiB 0:00:11 [34.8MiB/s] 101%
INFO: Sending oldest full snapshot rpool/var/lib/schroot/chroots@syncoid_millbarge_2020-04-23:02:10:02 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [4.89MiB/s] 105%
Resuming interrupted zfs send/receive from rpool/var/log to srv/backups/millbarge/rpool/var/log (~ 57.9 MB remaining):
58.2MiB 0:00:00 [ 318MiB/s] 100%

-- 
https://bugs.launchpad.net/bugs/1861235

Title:
  zfs recv PANIC at range_tree.c:304:range_tree_find_impl()
