@ahrens the failure was originally observed on the OpenZFS Jenkins pipelines, for instance:
http://jenkins.open-zfs.org/blue/organizations/jenkins/openzfs%2Fopenzfs/detail/PR-510/8/pipeline

The PR description is just a stripped-down version of the failing test, meant to demonstrate the race condition; sorry I wasn't clear. Full output when the test fails (a short sketch of the racing sequence follows the log):

```
loli@openindiana:~$ export DISKS='c4t0d0 c4t1d0 c4t2d0'
loli@openindiana:~$ ppriv -s EIP=basic -e /opt/zfs-tests/bin/zfstest -c /opt/zfs-tests/runfiles/illumos-8940.run
Test: /opt/zfs-tests/tests/functional/removal/remove_mirror (run as root) [05:41] [FAIL]
Test: /opt/zfs-tests/tests/functional/removal/cleanup (run as root) [00:00] [PASS]

Results Summary
FAIL       1
PASS       1

Running Time:   00:05:41
Percent passed: 50.0%
Log directory:  /var/tmp/test_results/20180205T182003
loli@openindiana:~$ cat /var/tmp/test_results/20180205T182003/log
Test: /opt/zfs-tests/tests/functional/removal/remove_mirror (run as root) [05:41] [FAIL]
18:20:04.91 SUCCESS: mkfile 536870912 /tmp/dsk1
18:21:36.54 SUCCESS: mkfile 536870912 /tmp/dsk2
18:22:43.59 SUCCESS: mkfile 536870912 /tmp/dsk3
18:25:32.92 SUCCESS: zpool create -f testpool /tmp/dsk1 mirror /tmp/dsk2 /tmp/dsk3
18:25:33.31 SUCCESS: zfs create testpool/testfs
18:25:34.89 SUCCESS: zfs set mountpoint=/testdir testpool/testfs
18:25:34.90 SUCCESS: default_setup_noexit /tmp/dsk1 mirror /tmp/dsk2 /tmp/dsk3
18:25:37.04 SUCCESS: zpool remove testpool /tmp/dsk1
18:25:37.52 SUCCESS: zpool remove testpool /tmp/dsk2 exited 1
18:25:37.53 invalid vdev specification use '-f' to override the following errors: /tmp/dsk1 is part of active pool 'testpool'
18:25:37.53 ERROR: zpool add testpool /tmp/dsk1 exited 1
18:25:37.54 NOTE: Performing test-fail callback (/opt/zfs-tests/callbacks/zfs_dbgmsg)
18:25:37.73 =================================================================
18:25:37.73  Tailing last 200 lines of zfs_dbgmsg log
18:25:37.73 =================================================================
18:25:41.97 spa=rpool async request task=1
18:25:41.97 txg 24711 open pool version 5000; software version 5000/5; uts  5.11 master-0-g8a051e3a96 i86pc
18:25:41.97 txg 6 create pool version 5000; software version 5000/5; uts openindiana 5.11 master-0-g8a051e3a96 i86pc
18:25:41.97 command: zpool create -f testpool /tmp/dsk1 mirror /tmp/dsk2 /tmp/dsk3
18:25:41.97 txg 6 create testpool/testfs (id 48) 
18:25:41.97 ioctl create
18:25:41.97 command: zfs create testpool/testfs
18:25:41.97 txg 8 set testpool/testfs (id 48) mountpoint=/testdir
18:25:41.97 command: zfs set mountpoint=/testdir testpool/testfs
18:25:41.97 starting removal thread for vdev 0 (ffffff0319db1980) in txg 18 im_obj=53
18:25:41.97 txg 18 vdev remove started testpool vdev 0 /tmp/dsk1
18:25:41.97 copying 7 segments for metaslab 0
18:25:41.97 copying 0 segments for metaslab 1
18:25:41.97 copying 0 segments for metaslab 2
18:25:41.97 copying 0 segments for metaslab 3
18:25:41.97 copying 0 segments for metaslab 4
18:25:41.97 copying 0 segments for metaslab 5
18:25:41.97 copying 0 segments for metaslab 6
18:25:41.97 copying 0 segments for metaslab 7
18:25:41.97 copying 0 segments for metaslab 8
18:25:41.97 copying 0 segments for metaslab 9
18:25:41.97 copying 0 segments for metaslab 10
18:25:41.97 copying 0 segments for metaslab 11
18:25:41.97 copying 0 segments for metaslab 12
18:25:41.97 copying 0 segments for metaslab 13
18:25:41.97 copying 0 segments for metaslab 14
18:25:41.97 copying 0 segments for metaslab 15
18:25:41.97 copying 0 segments for metaslab 16
18:25:41.97 copying 0 segments for metaslab 17
18:25:41.97 copying 0 segments for metaslab 18
18:25:41.97 copying 0 segments for metaslab 19
18:25:41.97 copying 0 segments for metaslab 20
18:25:41.97 copying 0 segments for metaslab 21
18:25:41.97 copying 0 segments for metaslab 22
18:25:41.97 copying 0 segments for metaslab 23
18:25:41.97 copying 0 segments for metaslab 24
18:25:41.97 copying 0 segments for metaslab 25
18:25:41.97 copying 0 segments for metaslab 26
18:25:41.97 copying 0 segments for metaslab 27
18:25:41.97 copying 0 segments for metaslab 28
18:25:41.97 copying 0 segments for metaslab 29
18:25:41.97 copying 0 segments for metaslab 30
18:25:41.97 txg 19: wrote 7 entries to indirect mapping obj 53; max offset=0x13800
18:25:41.97 command: zpool remove testpool /tmp/dsk1
18:25:41.97 finishing device removal for vdev 0 in txg 25
18:25:41.97 txg 25 vdev remove completed testpool vdev 0
18:25:42.01 =================================================================
18:25:42.01  End of zfs_dbgmsg log
18:25:42.01 =================================================================
18:25:42.01 NOTE: Performing local cleanup via log_onexit (cleanup)
18:25:44.13 SUCCESS: rm -rf /testdir
18:25:44.19 SUCCESS: rm -f /tmp/dsk1 mirror /tmp/dsk2 /tmp/dsk3
Test: /opt/zfs-tests/tests/functional/removal/cleanup (run as root) [00:00] [PASS]
loli@openindiana:~$ 
```
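The interesting part is in the dbgmsg tail: the evacuation of /tmp/dsk1 only completes in txg 25 ("txg 25 vdev remove completed"), i.e. after the test has already tried to `zpool add` the same file back, so the add still sees /tmp/dsk1 as part of the pool. A minimal ksh sketch of the racing sequence, assuming the test suite's `log_must`/`log_mustnot` helpers and a `wait_for_removal`-style polling helper (the helper name here is illustrative, not necessarily what the final fix will use):

```
# Sketch only: `zpool remove` starts an asynchronous evacuation and returns
# before the copy finishes, so immediately re-adding the same device can race
# with the in-flight removal.
log_must zpool remove testpool /tmp/dsk1      # removal proceeds in the background
log_mustnot zpool remove testpool /tmp/dsk2   # expected to fail (mirror child)

# Without a wait here, `zpool add` can still see /tmp/dsk1 as part of the pool.
log_must wait_for_removal testpool            # poll until the removal is no longer active

log_must zpool add testpool /tmp/dsk1         # safe once the evacuation has completed
```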

@prashks the changes to the `delphix.run` runfile were just to avoid wasting time and resources while stress-testing `remove_mirror`.
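In case it helps, the stress-testing can be approximated with a simple loop over the removal runfile; this loop is just a sketch, not the actual `delphix.run` change:

```
# Re-run only the removal runfile until the race reproduces.
export DISKS='c4t0d0 c4t1d0 c4t2d0'
i=0
while ppriv -s EIP=basic -e /opt/zfs-tests/bin/zfstest \
    -c /opt/zfs-tests/runfiles/illumos-8940.run
do
    i=$((i + 1))
    echo "iteration $i passed"
done
echo "race reproduced after $i passing iterations"
```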
