Can you make the crash dumps available? I only see text output attached to the bug.
Any particular reason that you're creating a pool on top of an iSCSI device which is backed by a zvol on a pool on the same machine? Have you tried it using two different machines?

--matt

On Wed, Feb 12, 2014 at 4:26 AM, Franz Schober <[email protected]> wrote:
> Hi,
>
> I have a kernel panic on a zfs send that is reproducible in a few
> command lines.
>
> At first the zfs send and the whole zpool block for some minutes, then
> the system panics.
>
> The stack trace of the zfs send process while it is blocked (the system
> here is OmniOS r151.006y):
>
> stack pointer for thread ffffff014a8283c0: ffffff0004cfb810
> [ ffffff0004cfb810 _resume_from_idle+0xf1() ]
>   ffffff0004cfb840 swtch+0x141()
>   ffffff0004cfb880 cv_wait+0x70(ffffff014f7125d2, ffffff014f712598)
>   ffffff0004cfb8d0 txg_wait_synced+0x83(ffffff014f712400, e2e)
>   ffffff0004cfb9e0 dsl_sync_task+0x187(ffffff0148ecc080, fffffffff79ca0f0, fffffffff79ca190, ffffff0004cfbaf0, 1)
>   ffffff0004cfbb50 dsl_dataset_user_release_tmp+0xa5(ffffff014f712400, 32, ffffff0150774b90)
>   ffffff0004cfbb90 dsl_dataset_user_release_onexit+0xa2(ffffff0150774a80)
>   ffffff0004cfbbd0 zfs_onexit_destroy+0x43(ffffff0154991a58)
>   ffffff0004cfbc00 zfs_ctldev_destroy+0x18(ffffff0154991a58, 4)
>   ffffff0004cfbc60 zfsdev_close+0x89(ac00000004, 403, 2, ffffff014e6b8ad8)
>   ffffff0004cfbc90 dev_close+0x31(ac00000004, 403, 2, ffffff014e6b8ad8)
>   ffffff0004cfbce0 device_close+0xd8(ffffff0154bab880, 403, ffffff014e6b8ad8)
>   ffffff0004cfbd70 spec_close+0x17b(ffffff0154bab880, 403, 1, 0, ffffff014e6b8ad8, 0)
>   ffffff0004cfbdf0 fop_close+0x61(ffffff0154bab880, 403, 1, 0, ffffff014e6b8ad8, 0)
>   ffffff0004cfbe30 closef+0x5e(ffffff014f360950)
>   ffffff0004cfbea0 closeandsetf+0x398(8, 0)
>   ffffff0004cfbec0 close+0x13(8)
>   ffffff0004cfbf10 _sys_sysenter_post_swapgs+0x149()
>
> After that the kernel panics with:
>
> ffffff00062e1c40 PANIC <NONE> 1
> 0xffffff01492e7040
> apic_send_ipi+0x73
> send_dirint+0x18
> poke_cpu+0x2a
> cpu_wakeup+0x9f
> apix_do_softint_prolog+0x59
> 0
> panic
> mutex_vector_enter+0x367
> idm_conn_event+0x35
> idm_ini_conn_disconnect+0x18
> iscsi_timeout_checks+0x1eb
> iscsi_wd_thread+0x28
> iscsi_threads_entry+0x16
> taskq_thread+0x2d0
> thread_start+8
>
> (the crash file is available at
> https://www.illumos.org/issues/4589#change-11778)
>
> Here are the commands to reproduce this issue:
>
> zfs create -V 2G rpool/vol1
> stmfadm create-lu /dev/zvol/rdsk/rpool/vol1
> stmfadm add-view 600144F000000000000052F8B1390001   # your LU here
> svcadm enable -s stmf
> svcadm enable -s iscsi/target
> itadm create-target
> iscsiadm add discovery-address 10.1.0.155:3260
> iscsiadm modify discovery --sendtargets enable
> zpool create test c3t600144F000000000000052F8B1390001d0   # your device name here
>
> zfs create test/testds
> dd if=/dev/zero of=/test/testds/file1 bs=1M count=50
> dd if=/dev/zero of=/test/testds/file2 bs=1M count=50
> zfs snapshot test/testds@test
> zfs send -R test/testds@test | pv > /dev/null
>
> On OmniOS r151008j the system crashed already on the zpool create in
> the first test:
>
> ffffff0008c84c40 PANIC <NONE> 1
> param_preset
> mutex_panic+0x73
> mutex_vector_enter+0x367
> idm_conn_event+0x35
> idm_ini_conn_disconnect+0x18
> iscsi_timeout_checks+0x1eb
> iscsi_wd_thread+0x28
> iscsi_threads_entry+0x16
> taskq_thread+0x2d0
> thread_start+8
>
> (the crash file is also available in the issue tracker mentioned above)
>
> On a second test the result was the same as on r151.006y.
>
> Thank you very much for your help.
>
> Thx,
> Franz
>
> _______________________________________________
> developer mailing list
> [email protected]
> http://lists.open-zfs.org/mailman/listinfo/developer
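
For the two-machine test Matt asks about, the quoted commands could be split as sketched below. This is only a sketch built from the commands in Franz's report: the LU GUID, the 10.1.0.155 discovery address, and the c3t...d0 device name are values from his machine and stand in as placeholders here; on a real two-host setup the GUID comes from stmfadm create-lu on host A and the device name from what the initiator on host B actually sees.

```shell
# Hypothetical two-machine split of the reproduction above.
# Host A (iSCSI target, assumed address 10.1.0.155) serves the zvol;
# host B (iSCSI initiator) builds the "test" pool on the exported LU.
# All GUIDs, addresses, and device names are placeholders from the report.

# --- on host A (target) ---
zfs create -V 2G rpool/vol1                          # backing zvol
stmfadm create-lu /dev/zvol/rdsk/rpool/vol1          # note the GUID it prints
stmfadm add-view 600144F000000000000052F8B1390001    # GUID from create-lu
svcadm enable -s stmf
svcadm enable -s iscsi/target
itadm create-target

# --- on host B (initiator) ---
iscsiadm add discovery-address 10.1.0.155:3260       # host A's address
iscsiadm modify discovery --sendtargets enable
zpool create test c3t600144F000000000000052F8B1390001d0   # device as seen on B
zfs create test/testds
dd if=/dev/zero of=/test/testds/file1 bs=1M count=50
dd if=/dev/zero of=/test/testds/file2 bs=1M count=50
zfs snapshot test/testds@test
zfs send -R test/testds@test | pv > /dev/null        # does it still hang/panic?
```

If the send completes cleanly with target and initiator on separate hosts, that would point at the loopback arrangement (initiator and target sharing one kernel, with the iSCSI pool ultimately backed by rpool) rather than zfs send itself.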
