[zfs-discuss] Replacing root pool disk

2012-04-12 Thread Peter Wood
Hi,

I was following the instructions in the ZFS Troubleshooting Guide on how to
replace a disk in the root pool on an x86 system. I'm using OpenIndiana, ZFS
pool v.28 with a mirrored system rpool. The replacement disk is brand new.

root:~# zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
  scan: resilvered 17.6M in 0h0m with 0 errors on Wed Apr 11 17:45:16 2012
config:

NAME                         STATE     READ WRITE CKSUM
rpool                        DEGRADED     0     0     0
  mirror-0                   DEGRADED     0     0     0
    c2t5000CCA369C55DB8d0s0  OFFLINE      0   126     0
    c2t5000CCA369D5231Cd0s0  ONLINE       0     0     0

errors: No known data errors
root:~#

I'm not very familiar with Solaris partitions and slices, so somewhere in
the format/partition commands I must have made a mistake, because when I
try to replace the disk I'm getting the following error:

root:~# zpool replace rpool c2t5000CCA369C55DB8d0s0 c2t5000CCA369C89636d0s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c2t5000CCA369C89636d0s0 overlaps with
/dev/dsk/c2t5000CCA369C89636d0s2
root:~#

I used -f and it worked, but I was wondering: is there a way to completely
reset the new disk? Remove all partitions and start from scratch.

Thank you
Peter


Re: [zfs-discuss] Replacing root pool disk

2012-04-12 Thread Cindy Swearingen

Hi Peter,

The root pool disk labeling/partitioning is not so easy.

I don't know which OpenIndiana release this is, but a previous
Solaris release had a bug that caused the error message below,
and the workaround is exactly what you did: use the -f option.

We don't yet have an easy way to clear a disk label, but
sometimes I just create a new test pool on the problematic disk,
destroy the pool, and start over with a more coherent label.
This doesn't work in all scenarios. Some people use the dd command
to wipe an existing label, but you must use it carefully.
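
For example, a minimal sketch of that test-pool trick on the replacement
disk from this thread (destructive -- double-check the device name first):

# create and destroy a throwaway pool; zpool writes a fresh label
zpool create -f testpool c2t5000CCA369C89636d0
zpool destroy testpool

One caveat: creating a pool on the whole disk writes an EFI label, and a
root pool needs an SMI (VTOC) label, so for a root pool disk you would
still relabel with format -e afterward.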

Thanks,

Cindy



Re: [zfs-discuss] Replacing root pool disk

2012-04-12 Thread Cindy Swearingen

Actually, since I answer the root pool partitioning questions
at least 52 times per year, I provided the steps for clearing
the existing partitions in step 9, here:

http://docs.oracle.com/cd/E23823_01/html/817-5093/disksxadd-2.html#disksxadd-40

How to Create a Disk Slice for a ZFS Root File System

Step 9 uses the format -> disk -> partition -> modify sequence and
sets the free hog partition to slice 0. Then, you press Return for
each existing slice to zero it out. This creates one large
slice 0.
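
Roughly, the interactive session looks like this (a sketch; the prompts
are paraphrased from memory and vary a bit by release):

format
... select the new disk from the menu ...
format> partition
partition> modify
Select partitioning base:
        0. Current partition table (original)
        1. All Free Hog
Choose base (enter number) [0]? 1
Free Hog partition[6]? 0
... press Return at each remaining slice prompt to leave it at zero ...
... answer yes to the remaining prompts and label the disk when asked ...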

cs



Re: [zfs-discuss] Replacing root pool disk

2012-04-12 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.

hi
By default, the disk partition s2 covers the whole disk; this was fine
for UFS for a long time. ZFS does not like this overlap, so you just
need to run format and delete s2,

or use s2 and delete all the other partitions.
(By default, when you run format/fdisk, it creates s2 covering the whole
disk and s7 for boot, so there is also an overlap between s2 and s7 :-()
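
You can print the VTOC first to see exactly which slices overlap (using
the replacement disk from this thread as an example):

prtvtoc /dev/rdsk/c2t5000CCA369C89636d0s2

Slices whose first-sector and sector-count ranges cover the same region
of the disk are the overlapping ones.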


ZFS root needs to fix this problem of requiring a slice rather than the
whole disk (the device without the s? suffix).

my 2c
regards


Re: [zfs-discuss] Replacing root pool disk

2012-04-12 Thread Anh Quach
This might help, too: 

http://constantin.glez.de/blog/2011/03/how-set-zfs-root-pool-mirror-oracle-solaris-11-express





Re: [zfs-discuss] Replacing root pool disk

2012-04-12 Thread Roberto Waltman

Cindy Swearingen wrote:


We don't yet have an easy way to clear a disk label, ...


dd if=/dev/zero of=... on the first and last 10% (roughly) of the disk
has worked fine for me.
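
For example, a sketch only: the device below is the replacement disk from
this thread, p0 is the whole physical disk on x86, dd against the raw disk
is destructive, and SIZE_MB is a placeholder you must set to the disk's
size in MB first (look it up with format or prtvtoc):

DISK=/dev/rdsk/c2t5000CCA369C89636d0p0

# zero the first 100 MB (old VTOC/EFI label and slice starts)
dd if=/dev/zero of=$DISK bs=1048576 count=100

# zero the last 100 MB too; an EFI label keeps a backup copy
# at the end of the disk
dd if=/dev/zero of=$DISK bs=1048576 seek=$((SIZE_MB - 100)) count=100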


--
Roberto Waltman


Re: [zfs-discuss] Replacing root pool disk

2012-04-12 Thread Peter Wood
Thank you all for the replies. I'll try the suggested solutions.

--
Peter


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss