http://defect.opensolaris.org/bz/show_bug.cgi?id=2206

           Summary: tests/functional/snapshot/snapshot_013_pos hangs - zfs
                    destroy
    Classification: Development
           Product: zfs-crypto
           Version: unspecified
          Platform: Other
        OS/Version: Solaris
            Status: NEW
          Severity: major
          Priority: P2
         Component: other
        AssignedTo: darrenm at opensolaris.org
        ReportedBy: hua.tang at sun.com
         QAContact: hua.tang at sun.com
                CC: zfs-crypto-discuss at opensolaris.org
   Estimated Hours: 0.0


Test log:
Test_Case_Start| 25446 tests/functional/snapshot/setup | 01:48:18 0 |
stdout| 01:48:18 NOTE: zpool create -f testpool.25348 c1t4d0
stdout| 01:48:18 NOTE: /usr/sbin/zpool create -o keysource=raw,file:///export/home/zfs-tests/proto/suites/zfs/etc/raw_key_file  -f testpool.25348 c1t4d0
stdout| 01:48:25 SUCCESS: zpool create -f testpool.25348 c1t4d0
stdout| 01:48:25 NOTE: zfs create testpool.25348/testfs.25348
stdout| 01:48:25 NOTE: /usr/sbin/zfs create -o encryption=on testpool.25348/testfs.25348
stdout| 01:48:26 SUCCESS: zfs create testpool.25348/testfs.25348
stdout| 01:48:26 NOTE: zfs set mountpoint=/testdir25348 testpool.25348/testfs.25348
stdout| 01:48:26 SUCCESS: zfs set mountpoint=/testdir25348 testpool.25348/testfs.25348
stdout| 01:48:26 NOTE: zfs create testpool.25348/testctr25348
stdout| 01:48:26 NOTE: /usr/sbin/zfs create -o encryption=on testpool.25348/testctr25348
stdout| 01:48:27 SUCCESS: zfs create testpool.25348/testctr25348
stdout| 01:48:27 NOTE: zfs set canmount=off testpool.25348/testctr25348
stdout| 01:48:27 SUCCESS: zfs set canmount=off testpool.25348/testctr25348
stdout| 01:48:27 NOTE: zfs create testpool.25348/testctr25348/testfs1.25348
stdout| 01:48:27 NOTE: /usr/sbin/zfs create -o encryption=on testpool.25348/testctr25348/testfs1.25348
stdout| 01:48:27 SUCCESS: zfs create testpool.25348/testctr25348/testfs1.25348
stdout| 01:48:27 NOTE: zfs set mountpoint=/testdir1.25348 testpool.25348/testctr25348/testfs1.25348
stdout| 01:48:28 SUCCESS: zfs set mountpoint=/testdir1.25348 testpool.25348/testctr25348/testfs1.25348
stdout| 01:48:28 NOTE: zfs create -V 1gb testpool.25348/testvol25348
stdout| 01:48:28 NOTE: /usr/sbin/zfs create -o encryption=on -V 1gb testpool.25348/testvol25348
stdout| 01:48:34 SUCCESS: zfs create -V 1gb testpool.25348/testvol25348
Test_Case_End| 25446 tests/functional/snapshot/setup | PASS | 01:48:34 0 |
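
For quick reference, the setup above boils down to roughly the following sequence (a sketch, not the actual test script: the generic names testpool/testfs/testctr/testvol stand in for the PID-suffixed names in the log, /path/to/raw_key_file stands in for the key-file path shown above, c1t4d0 is the disk used in this run, and file_write is the test suite's helper binary; every dataset comes out encrypted because the zfs-crypto test wrappers add -o encryption=on):

# Pool with a raw key source, as created by the crypto test wrappers.
zpool create -o keysource=raw,file:///path/to/raw_key_file -f testpool c1t4d0

# Filesystem, container, nested filesystem, and a 1gb volume.
zfs create -o encryption=on testpool/testfs
zfs set mountpoint=/testdir testpool/testfs
zfs create -o encryption=on testpool/testctr
zfs set canmount=off testpool/testctr
zfs create -o encryption=on testpool/testctr/testfs1
zfs set mountpoint=/testdir1 testpool/testctr/testfs1
zfs create -o encryption=on -V 1gb testpool/testvol

# snapshot_013_pos then populates the filesystem and snapshots everything
# recursively, as the second log below shows.
typeset -i i=0
while (( i < 10 )); do
        file_write -o create -f /testdir/file$i -b 8192 -c 20 -d $i
        (( i += 1 ))
done
zfs snapshot -r testpool@testsnap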

Test_Case_Start| 27916 tests/functional/snapshot/snapshot_013_pos | 01:49:54 0 |
stdout| 01:49:54 ASSERTION: Verify snapshots from 'snapshot -r' can be used for zfs send/recv
stdout| 01:49:54 NOTE: Populate the /testdir25348 directory (prior to snapshot)
stdout| 01:49:54 NOTE: file_write -o create -f /testdir25348/file0 -b 8192 -c 20 -d 0
stdout| 01:49:54 SUCCESS: file_write -o create -f /testdir25348/file0 -b 8192 -c 20 -d 0
stdout| 01:49:54 NOTE: file_write -o create -f /testdir25348/file1 -b 8192 -c 20 -d 1
stdout| 01:49:54 SUCCESS: file_write -o create -f /testdir25348/file1 -b 8192 -c 20 -d 1
stdout| 01:49:54 NOTE: file_write -o create -f /testdir25348/file2 -b 8192 -c 20 -d 2
stdout| 01:49:54 SUCCESS: file_write -o create -f /testdir25348/file2 -b 8192 -c 20 -d 2
stdout| 01:49:54 NOTE: file_write -o create -f /testdir25348/file3 -b 8192 -c 20 -d 3
stdout| 01:49:54 SUCCESS: file_write -o create -f /testdir25348/file3 -b 8192 -c 20 -d 3
stdout| 01:49:54 NOTE: file_write -o create -f /testdir25348/file4 -b 8192 -c 20 -d 4
stdout| 01:49:54 SUCCESS: file_write -o create -f /testdir25348/file4 -b 8192 -c 20 -d 4
stdout| 01:49:54 NOTE: file_write -o create -f /testdir25348/file5 -b 8192 -c 20 -d 5
stdout| 01:49:54 SUCCESS: file_write -o create -f /testdir25348/file5 -b 8192 -c 20 -d 5
stdout| 01:49:54 NOTE: file_write -o create -f /testdir25348/file6 -b 8192 -c 20 -d 6
stdout| 01:49:54 SUCCESS: file_write -o create -f /testdir25348/file6 -b 8192 -c 20 -d 6
stdout| 01:49:54 NOTE: file_write -o create -f /testdir25348/file7 -b 8192 -c 20 -d 7
stdout| 01:49:54 SUCCESS: file_write -o create -f /testdir25348/file7 -b 8192 -c 20 -d 7
stdout| 01:49:54 NOTE: file_write -o create -f /testdir25348/file8 -b 8192 -c 20 -d 8
stdout| 01:49:54 SUCCESS: file_write -o create -f /testdir25348/file8 -b 8192 -c 20 -d 8
stdout| 01:49:54 NOTE: file_write -o create -f /testdir25348/file9 -b 8192 -c 20 -d 9
stdout| 01:49:55 SUCCESS: file_write -o create -f /testdir25348/file9 -b 8192 -c 20 -d 9
stdout| 01:49:55 NOTE: zfs snapshot -r testpool.25348@testsnap25348
stdout| 01:50:01 SUCCESS: zfs snapshot -r testpool.25348@testsnap25348
stdout| 01:50:01 NOTE: Performing local cleanup via log_onexit (cleanup)
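
The log ends here: the test never returns from cleanup. The cleanup routine recursively destroys the datasets created above, and the stack below shows the zfs process it spawned, /usr/sbin/zfs destroy -r testpool.25348/testctr25348/testfs.25348 (presumably the destination dataset of the test's send/recv step), blocked in the kernel.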

Stack info (kmdb session on the hung system):

# mdb -K

Welcome to kmdb
Loaded modules: [ scsi_vhci crypto cpc uppc neti sd ptm ufs unix 
cpu_ms.AuthenticAMD.15 mpt zfs krtld sppp nca hook lofs genunix ip logindmux 
nsmb usba specfs pcplusmp md nfs random cpu.generic sctp arp smbsrv ]
[1]> ::ptree
fffffffffbc26770  sched
     ffffff02d1dd3a48  fsflush
     ffffff02d1dd46a8  pageout
     ffffff02d1dd5308  init
          ffffff02d1dcf008  ypbind
          ffffff02d61a2c98  devfsadm
          ffffff02d61a2038  nfsmapid
          ffffff02d61a1340  nfs4cbd
          ffffff02d61a06e0  lockd
          ffffff02d61af330  statd
          ffffff02d61b6328  java
          ffffff02d61ab550  sendmail
          ffffff02d61a8338  sendmail
          ffffff02d1dc7318  rmvolmgr
          ffffff02d1dc4198  hald
               ffffff02d61a9030  hald-runner
                    ffffff02d1dc4df8  hald-addon-stora
                    ffffff02d1dc8c70  hald-addon-cpufr
                    ffffff02d61b4a68  hald-addon-netwo
          ffffff02d61b0c88  dtlogin
               ffffff02d61a51b8  dtlogin
                    ffffff02d61a76d8  dtgreet
               ffffff02d61a4558  Xorg
          ffffff02d61ae6d0  fmd
          ffffff02d1dc5a58  intrd
          ffffff02d61ac1b0  sshd
               ffffff02d619e1c0  sshd
                    ffffff02d61b0028  sshd
                         ffffff02d61a6a78  bash
                              ffffff02dfdfe568  tail
               ffffff02d1dba020  sshd
                    ffffff02d619ee20  sshd
                         ffffff02d1dc1c78  bash
                              ffffff02dfe00a88  stf_execute
                                   ffffff02dfdfd908  stf_execute
                                        ffffff02dfdfcca8  stf_timeout
                                             ffffff02d619bca0  stf_jnl_context
                                                  ffffff02dfe016e8  snapshot_013_pos
                                                       ffffff02d61a38f8  zfs
                                                            ffffff02ec6c6570  zfs
          ffffff02d61ada70  syslogd
          ffffff02d61b2548  inetd
          ffffff02d61b18e8  automountd
               ffffff02d1dbde00  automountd
          ffffff02d61b31a8  cron
          ffffff02d1dcfc68  smcboot
               ffffff02d1dcbdf0  smcboot
               ffffff02d1dbea60  smcboot
          ffffff02d1dce310  avahi-daemon-bri
          ffffff02d1dcca50  utmpd
          ffffff02d1dbac80  keyserv
          ffffff02d1dbb8e0  dbus-daemon
          ffffff02d1dd2de8  rpcbind
          ffffff02d1dc3538  in.routed
          ffffff02d1dca530  in.ndpd
          ffffff02d1dc66b8  picld
          ffffff02d1dbf6c0  powerd
          ffffff02d1dc8010  syseventd
          ffffff02d1dcd6b0  kcfd
          ffffff02d1dc0320  nscd
          ffffff02d1dcb190  dlmgmtd
          ffffff02d1dd1528  svc.configd
          ffffff02d1dd2188  svc.startd
               ffffff02d619fa80  sh
                    ffffff02d61b3e08  mdb
               ffffff02d1dc98d0  sac
                    ffffff02d1dd08c8  ttymon
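
The tree shows the test-harness chain (stf_execute -> stf_timeout -> stf_jnl_context -> snapshot_013_pos) with two nested zfs processes underneath; the innermost one, ffffff02ec6c6570, is the hung destroy examined next.
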
[1]> ffffff02ec6c6570::walk thread
ffffff03671a1bc0
[1]> ffffff03671a1bc0::threadlist -v
            ADDR             PROC              LWP CLS PRI            WCHAN
ffffff03671a1bc0 ffffff02ec6c6570 ffffff02e021d8d0   1  59 ffffff049d52e462
  PC: _resume_from_idle+0xf1    CMD: 
  /usr/sbin/zfs destroy -r testpool.25348/testctr25348/testfs.25348
  stack pointer for thread ffffff03671a1bc0: ffffff000f6ca9b0
  [ ffffff000f6ca9b0 _resume_from_idle+0xf1() ]
    swtch+0x17f()
    cv_wait+0x61()
    zfs`txg_wait_synced+0x81()
    zfs`dsl_sync_task_group_wait+0xf1()
    zfs`dsl_sync_task_do+0x61()
    zfs`dsl_dataset_destroy+0x291()
    zfs`dmu_objset_destroy+0x50()
    zfs`zfs_ioc_destroy+0x42()
    zfs`zfsdev_ioctl+0x140()
    cdev_ioctl+0x48()
    specfs`spec_ioctl+0x86()
    fop_ioctl+0x7b()
    ioctl+0x174()
    sys_syscall32+0x101()

[1]>

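Reading the stack bottom-up: the destroy ioctl (zfs_ioc_destroy -> dmu_objset_destroy -> dsl_dataset_destroy) dispatches a DSL sync task and then blocks in txg_wait_synced() until the transaction group carrying that task has synced. Since cv_wait() never returns, the txg apparently never completes, so the next thing to check is the pool's txg_sync thread. A possible follow-up in the same kmdb session (a sketch only; availability of the ::stacks dcmd depends on the build, and <thread-addr> is a placeholder for the thread address it reports):

[1]> ::stacks -m zfs
[1]> ::stacks -c txg_sync_thread
[1]> <thread-addr>::findstack -v

The first command summarizes all kernel threads currently in zfs code; the second narrows to threads with txg_sync_thread on their stack; ::findstack -v then prints that thread's full stack, to see what the sync thread is itself waiting on.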