Re: [zfs-discuss] Trouble testing hot spares

2009-10-21 Thread Richard Elling

On Oct 21, 2009, at 5:18 PM, Ian Allison wrote:


Hi,

I've been looking at a raidz using OpenSolaris snv_111b and I've
come across something I don't quite understand. I have 5 disks
(fixed-size disk images defined in VirtualBox) in a raidz
configuration, with 1 disk marked as a spare. The disks are 100 MB in
size and I wanted to simulate data corruption on one of them and watch
the hot spare kick in, but when I do


dd if=/dev/zero of=/dev/c10t0d0 ibs=1024 count=102400


Should be: of=/dev/dsk/c10t0d0s0
ZFS tries to hide the slice from you, but it really confuses people by
trying not to be confusing.
 -- richard
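
(A minimal sketch of the corrected test, assuming the pool and device
names above; the offsets are illustrative for a 100 MB disk.)

  # Overwrite a region in the middle of the slice. Seeking past the
  # start leaves the front ZFS labels intact, so the damage surfaces
  # as checksum errors rather than a faulted device.
  dd if=/dev/zero of=/dev/dsk/c10t0d0s0 bs=1024 seek=51200 count=10240
  sync

  # Force ZFS to re-read and verify every block in the pool.
  zpool scrub datapool

  # Checksum errors against c10t0d0 should now appear in the CKSUM column.
  zpool status -v datapool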



The pool remains perfectly healthy:

  pool: datapool
 state: ONLINE
 scrub: scrub completed after 0h0m with 0 errors on Wed Oct 21 17:12:11 2009
config:

        NAME         STATE     READ WRITE CKSUM
        datapool     ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c10t0d0  ONLINE       0     0     0
            c10t1d0  ONLINE       0     0     0
            c10t2d0  ONLINE       0     0     0
            c10t3d0  ONLINE       0     0     0
        spares
          c10t4d0    AVAIL

errors: No known data errors
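
(For reference, a pool with this layout would presumably have been
created along these lines; a sketch, assuming default options.)

  zpool create datapool raidz c10t0d0 c10t1d0 c10t2d0 c10t3d0 spare c10t4d0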


I don't understand the output; I thought I should see CKSUM errors
against c10t0d0. I tried exporting/importing the pool and scrubbing
it in case this was a cache thing, but nothing changes.


I've tried this on all the disks in the pool with the same result,
and the datasets in the pool are uncorrupted. I guess I'm
misunderstanding something fundamental about ZFS; can anyone help me
out and explain?


-Ian.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] Trouble testing hot spares

2009-10-21 Thread Victor Latushkin

On Oct 22, 2009, at 4:18, Ian Allison i...@pims.math.ca wrote:


Hi,

I've been looking at a raidz using OpenSolaris snv_111b and I've
come across something I don't quite understand. I have 5 disks
(fixed-size disk images defined in VirtualBox) in a raidz
configuration, with 1 disk marked as a spare. The disks are 100 MB in
size and I wanted to simulate data corruption on one of them and watch
the hot spare kick in, but when I do


dd if=/dev/zero of=/dev/c10t0d0 ibs=1024 count=102400

The pool remains perfectly healthy.


Try of=/dev/rdsk/c10t0d0s0 and see what happens.

Victor
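
(The rdsk path is the raw device, so the writes bypass the buffer
cache and hit the disk immediately. If the overwrite takes and the
spare engages, the test can be wound back with something like the
following; a sketch, assuming c10t0d0 is the damaged disk and
c10t4d0 the spare.)

  # Either resilver the damaged disk in place from raidz parity
  # and return the hot spare to the AVAIL list ...
  zpool replace datapool c10t0d0
  zpool detach datapool c10t4d0

  # ... or make the spare permanent by detaching the damaged disk.
  zpool detach datapool c10t0d0

  # Clear the accumulated error counters.
  zpool clear datapool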


  pool: datapool
 state: ONLINE
 scrub: scrub completed after 0h0m with 0 errors on Wed Oct 21 17:12:11 2009
config:

        NAME         STATE     READ WRITE CKSUM
        datapool     ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c10t0d0  ONLINE       0     0     0
            c10t1d0  ONLINE       0     0     0
            c10t2d0  ONLINE       0     0     0
            c10t3d0  ONLINE       0     0     0
        spares
          c10t4d0    AVAIL

errors: No known data errors


I don't understand the output; I thought I should see CKSUM errors
against c10t0d0. I tried exporting/importing the pool and scrubbing
it in case this was a cache thing, but nothing changes.


I've tried this on all the disks in the pool with the same result,
and the datasets in the pool are uncorrupted. I guess I'm
misunderstanding something fundamental about ZFS; can anyone help me
out and explain?


-Ian.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
