We have three test servers running SmartOS. Hosts B and C have zpools zp01 and
zp02 respectively, and host A can see the disks of both pools (the disks of
zp01 are connected to HBA ports on A and B; the disks of zp02 are connected to
HBA ports on A and C).

To test zpool import/export I run the following loop (ssh A/B/C here stands
for running the command on the respective host):

for i in {1..100}
do
    # move zp01 from B to A and back
    ssh B "zpool export -f zp01"
    ssh A "zpool import -o cachefile=none zp01"
    ssh A "zpool export -f zp01"
    ssh B "zpool import -o cachefile=none zp01"
    # move zp02 from C to A and back
    ssh C "zpool export -f zp02"
    ssh A "zpool import -o cachefile=none zp02"
    ssh A "zpool export -f zp02"
    ssh C "zpool import -o cachefile=none zp02"
done
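
To keep the loop from blocking forever when an import wedges, each import step
can be wrapped with a small watchdog; this is only a sketch (the helper name,
the 300-second timeout and the pgrep pattern are illustrative):

TIMEOUT=300

import_with_watchdog() {
    host=$1
    pool=$2
    ssh "$host" "zpool import -o cachefile=none $pool" &
    pid=$!                          # PID of the local ssh, not the remote zpool
    i=0
    while [ "$i" -lt "$TIMEOUT" ]; do
        kill -0 "$pid" 2>/dev/null || return 0    # import finished
        sleep 1
        i=$((i + 1))
    done
    echo "zpool import of $pool on $host appears hung" >&2
    # on the host itself, find the stuck import and dump its user stack
    # (the kernel stack can then be pulled with mdb -k as shown below)
    ssh "$host" 'pstack $(pgrep -f "^zpool import")'
    return 1
}

e.g. "import_with_watchdog A zp01" would replace the corresponding
"ssh A ..." import line in the loop above.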

After a few iterations, zpool import hangs. This can be reproduced
consistently. Is this a known issue? Should I file a bug report?

I have captured the stack of the hanging zpool import process with pstack and
mdb; the output is below.

Thanks,

Youzhong
----------------------------------------------------------------------------------------
# pstack 29207
29207:  zpool import -o cachefile=none zp01

# mdb -ke '0t29207::pid2proc | ::walk thread | ::findstack -v'
stack pointer for thread ffffff3289f39040: ffffff01ed3041e0
[ ffffff01ed3041e0 _resume_from_idle+0xf4() ]
  ffffff01ed304210 swtch+0x141()
  ffffff01ed304250 cv_wait+0x70(ffffff32330f901e, ffffff32330f9020)
  ffffff01ed304380 vmem_xalloc+0x630(ffffff32330f9000, 6000, 1000, 0, 0, 0, 0, ffffffff00000100)
  ffffff01ed3043f0 vmem_alloc+0x137(ffffff32330f9000, 6000, 100)
  ffffff01ed304510 segkp_get_internal+0x11b(fffffffffbc33760, 5000, e, ffffff01ed304528, 0)
  ffffff01ed304570 segkp_cache_get+0x103(1)
  ffffff01ed304610 thread_create+0x544(0, 0, fffffffffbaf35b0, ffffff32c9520b40, 0, fffffffffbc30540, ffffff0100000002, ffffffff0000003c)
  ffffff01ed304660 taskq_thread_create+0x108(ffffff32c9520b40)
  ffffff01ed304710 taskq_create_common+0x1a7(fffffffff7e7e238, 0, 32, 3c, a, 7fffffff, fffffffffbc30540, ffffff3200000000, ffffff0100040008)
  ffffff01ed304770 taskq_create+0x50(fffffffff7e7e238, 32, 3c, a, 7fffffff, 8)
  ffffff01ed3047b0 metaslab_group_create+0x96(ffffff36fbca70a8, ffffff328784d000)
  ffffff01ed304860 vdev_alloc+0x54a(ffffff32a78fe000, ffffff01ed304928, ffffff3285910a80, ffffff3284e74540, a, 0)
  ffffff01ed304900 spa_config_parse+0x48(ffffff32a78fe000, ffffff01ed304928, ffffff3285910a80, ffffff3284e74540, a, 0)
  ffffff01ed3049a0 spa_config_parse+0xda(ffffff32a78fe000, ffffff01ed304a18, ffffff36fbca7f88, 0, 0, 0)
  ffffff01ed304a90 spa_load_impl+0xf4(ffffff32a78fe000, d5c8b305012c90c8, ffffff32d3417d30, 3, 0, 1, ffffff01ed304ad8)
  ffffff01ed304b30 spa_load+0x14e(ffffff32a78fe000, 3, 0, 1)
  ffffff01ed304b80 spa_tryimport+0xaa(ffffff3286740180)
  ffffff01ed304bd0 zfs_ioc_pool_tryimport+0x51(ffffff335c22a000)
  ffffff01ed304c80 zfsdev_ioctl+0x4a7(5a00000000, 5a06, 804258c, 100003, ffffff32578b3458, ffffff01ed304e68)
  ffffff01ed304cc0 cdev_ioctl+0x39(5a00000000, 5a06, 804258c, 100003, ffffff32578b3458, ffffff01ed304e68)
  ffffff01ed304d10 spec_ioctl+0x60(ffffff3284335d80, 5a06, 804258c, 100003, ffffff32578b3458, ffffff01ed304e68, 0)
  ffffff01ed304da0 fop_ioctl+0x55(ffffff3284335d80, 5a06, 804258c, 100003, ffffff32578b3458, ffffff01ed304e68, 0)
  ffffff01ed304ec0 ioctl+0x9b(3, 5a06, 804258c)
  ffffff01ed304f10 _sys_sysenter_post_swapgs+0x149()


