Re: [zfs-discuss] raidz replace issue

2009-09-14 Thread Mark J Musante

On Sat, 12 Sep 2009, Jeremy Kister wrote:


scrub: resilver in progress, 0.12% done, 108h42m to go
 [...]
          raidz1        DEGRADED     0     0     0
            c3t8d0      ONLINE       0     0     0
            c5t8d0      ONLINE       0     0     0
            c3t9d0      ONLINE       0     0     0
            replacing   DEGRADED     0     0     0
              c5t9d0s0/o  UNAVAIL    0     0     0  cannot open
              c5t9d0      ONLINE     0     0     0

woohoo!  i've never had to use either s0 or s0/o, but hey, i'm happy.


Glad to see it's working.  I opened CR 6881631 to track this issue.


Regards,
markm


Re: [zfs-discuss] raidz replace issue

2009-09-13 Thread Mark J Musante


The device is listed with s0; did you try using c5t9d0s0 as the name?
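
(i.e., presumably something along the lines of:

   # zpool replace nfspool c5t9d0s0 c5t9d0

with the pool name taken from the zpool status output quoted below.)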

On 12 Sep, 2009, at 17.44, Jeremy Kister wrote:


[sorry for the cross post to solarisx86]

One of the disks i had in a raidz configuration on a Sun V40z with
Solaris 10u5 died.  I took the bad disk out, replaced it, and issued
'zpool replace pool c5t9d0'.  The resilver process started, and before
it was done i rebooted the system.
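
(For reference, the usual sequence would be roughly to issue the replace
and then let the resilver run to completion before rebooting -- something
like:

   # zpool replace pool c5t9d0
   # zpool status pool        (repeat until the scrub line reports
                               "resilver completed")

rather than rebooting mid-resilver.)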


now, the raidz is all upset:

# zpool status
  pool: pool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: resilver completed with 0 errors on Sat Sep 12 17:19:57 2009
config:

        NAME            STATE     READ WRITE CKSUM
        nfspool         DEGRADED     0     0     0
          raidz1        ONLINE       0     0     0
            c3t4d0      ONLINE       0     0     0
            c5t4d0      ONLINE       0     0     0
            c3t5d0      ONLINE       0     0     0
            c5t5d0      ONLINE       0     0     0
          raidz1        DEGRADED     0     0     0
            c3t8d0      ONLINE       0     0     0
            c5t8d0      ONLINE       0     0     0
            c3t9d0      ONLINE       0     0     0
            c5t9d0s0/o  UNAVAIL      0     0     0  cannot open
          raidz1        ONLINE       0     0     0
            c3t10d0     ONLINE       0     0     0
            c5t10d0     ONLINE       0     0     0
            c3t11d0     ONLINE       0     0     0
            c5t11d0     ONLINE       0     0     0
        spares
          c3t15d0       AVAIL
          c3t14d0       AVAIL
          c5t14d0       AVAIL

# zpool replace nfspool c5t9d0 c5t9d0
cannot replace c5t9d0 with c5t9d0: no such device in pool
# suex zpool replace nfspool c5t9d0 c5t14d0
cannot replace c5t9d0 with c5t14d0: no such device in pool


Any clues on what to do here ?

--

Jeremy Kister
http://jeremy.kister.net./








Regards,
markm




Re: [zfs-discuss] raidz replace issue

2009-09-12 Thread Jeremy Kister

On 9/12/2009 9:41 PM, Mark J Musante wrote:

The device is listed with s0; did you try using c5t9d0s0 as the name?


I didn't -- I never used s0 when setting up the zpool -- it changed to
s0 after the reboot.  But in either case, it's a good thought:



# zpool replace nfspool c5t9d0s0 c5t9d0
cannot replace c5t9d0s0 with c5t9d0: no such device in pool
# suex zpool replace nfspool c5t9d0s0 c5t9d0s0
cannot replace c5t9d0s0 with c5t9d0s0: no such device in pool

but no luck.

FYI, there are many more disks than what i showed in my previous example;
i just didn't think it was relevant to include them all in the email to the
list.  They're all working fine and are just more raidz1s, but i'll surely
post the entire output of zpool status if anyone wants.



--

Jeremy Kister
http://jeremy.kister.net./


Re: [zfs-discuss] raidz replace issue

2009-09-12 Thread Jeremy Kister

On 9/12/2009 10:33 PM, Mark J. Musante wrote:

That could be a bug with the status output. Could you try zdb -l on one of
the good drives and see if the label for c5t9d0 has /old appended?  If so,
you may be able to replace the drive by using c5t9d0s0/old as the name.


oops, i just realized i took this thread off list.  i hope you don't mind me
putting it back on -- mea culpa.

the data is below my sig.  but we may not need it..

# zpool replace nfspool c5t9d0s0/old c5t9d0
cannot replace c5t9d0s0/old with c5t9d0: no such device in pool
# zpool replace nfspool c5t9d0s0/o c5t9d0
#

hey now!!

# sleep 600
# zpool status
  [...]
 scrub: resilver in progress, 0.12% done, 108h42m to go
  [...]
          raidz1        DEGRADED     0     0     0
            c3t8d0      ONLINE       0     0     0
            c5t8d0      ONLINE       0     0     0
            c3t9d0      ONLINE       0     0     0
            replacing   DEGRADED     0     0     0
              c5t9d0s0/o  UNAVAIL    0     0     0  cannot open
              c5t9d0      ONLINE     0     0     0

woohoo!  i've never had to use either s0 or s0/o, but hey, i'm happy.
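
(For anyone who hits the same thing: the "s0/o" appears to be the shorthand
zfs uses in status output for the old half of an interrupted replacement --
the labels record it as .../old -- so the name to feed to 'zpool replace'
can be read straight out of the pool rather than guessed, e.g.:

   # zpool status nfspool | grep c5t9d0
   # zdb -l /dev/dsk/c3t8d0s0 | grep c5t9d0

where the second command assumes a healthy disk from the same raidz, as in
the label dump below.)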

Thanks for your help.

--

Jeremy Kister
http://jeremy.kister.net./


## on a disk that's online in a raidz1:

# zdb -l /dev/dsk/c3t8d0s0

LABEL 0

    version=4
    name='nfspool'
    state=0
    txg=13112019
    pool_guid=16867309821638598147
    top_guid=16762401194364239721
    guid=4372736765277861814
    vdev_tree
        type='raidz'
        id=9
        guid=16762401194364239721
        nparity=1
        metaslab_array=327
        metaslab_shift=31
        ashift=9
        asize=1199947382784
        children[0]
                type='disk'
                id=0
                guid=4372736765277861814
                path='/dev/dsk/c3t8d0s0'
                devid='id1,s...@scompaq__bd3008856c__tp1012070593/a'
                whole_disk=1
                DTL=429
        children[1]
                type='disk'
                id=1
                guid=246503143867597614
                path='/dev/dsk/c5t8d0s0'
                devid='id1,s...@x0e1100eb0f79/a'
                whole_disk=1
                DTL=428
        children[2]
                type='disk'
                id=2
                guid=12776584137217099681
                path='/dev/dsk/c3t9d0s0'
                devid='id1,s...@x0e1100eb0f0e/a'
                whole_disk=1
                DTL=427
        children[3]
                type='disk'
                id=3
                guid=10802333971928443637
                path='/dev/dsk/c5t9d0s0/old'
                whole_disk=1
                DTL=4722

LABEL 1

    version=4
    name='nfspool'
    state=0
    txg=13112019
    pool_guid=16867309821638598147
    top_guid=16762401194364239721
    guid=4372736765277861814
    vdev_tree
        type='raidz'
        id=9
        guid=16762401194364239721
        nparity=1
        metaslab_array=327
        metaslab_shift=31
        ashift=9
        asize=1199947382784
        children[0]
                type='disk'
                id=0
                guid=4372736765277861814
                path='/dev/dsk/c3t8d0s0'
                devid='id1,s...@scompaq__bd3008856c__tp1012070593/a'
                whole_disk=1
                DTL=429
        children[1]
                type='disk'
                id=1
                guid=246503143867597614
                path='/dev/dsk/c5t8d0s0'
                devid='id1,s...@x0e1100eb0f79/a'
                whole_disk=1
                DTL=428
        children[2]
                type='disk'
                id=2
                guid=12776584137217099681
                path='/dev/dsk/c3t9d0s0'
                devid='id1,s...@x0e1100eb0f0e/a'
                whole_disk=1
                DTL=427
        children[3]
                type='disk'
                id=3
                guid=10802333971928443637
                path='/dev/dsk/c5t9d0s0/old'
                whole_disk=1
                DTL=4722

LABEL 2

    version=4
    name='nfspool'
    state=0
    txg=13112019
    pool_guid=16867309821638598147
    top_guid=16762401194364239721
    guid=4372736765277861814
    vdev_tree
        type='raidz'
        id=9
        guid=16762401194364239721
        nparity=1
        metaslab_array=327
        metaslab_shift=31
        ashift=9
        asize=1199947382784
        children[0]
                type='disk'
                id=0
                guid=4372736765277861814
                path='/dev/dsk/c3t8d0s0'
                devid='id1,s...@scompaq__bd3008856c__tp1012070593/a'
                whole_disk=1
                DTL=429
        children[1]
                type='disk'
                id=1