Actually, since I answer root pool partitioning questions at least
52 times per year, I documented the steps for clearing the existing
partitions in step 9 here:

How to Create a Disk Slice for a ZFS Root File System

Step 9 uses the format-->disk-->partition-->modify option and
sets the free hog space to slice 0. Then, you press Return at
the prompt for each existing slice to zero it out. This leaves
one large slice 0.
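For reference, the step 9 session looks roughly like the transcript
below. The disk name is just an example, and the exact menu text and
default free-hog slice number vary a little between releases:

```
root:~# format
# ... select the replacement disk from the menu ...
format> partition
partition> modify
Select partitioning base:
        0. Current partition table (original)
        1. All Free Hog
Choose base (enter number) [0]? 1
Free Hog partition[6]? 0
# press Return at each remaining slice prompt to give it size 0;
# slice 0 (the free hog) absorbs all remaining space
partition> label
partition> quit
```

The final `label` step is what actually writes the new partition
table to the disk; without it the changes are discarded.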


On 04/12/12 11:48, Cindy Swearingen wrote:
Hi Peter,

The root pool disk labeling/partitioning is not so easy.

I don't know which OpenIndiana release this is, but in a previous
Solaris release we had a bug that caused the error message below,
and the workaround is exactly what you did: use the -f option.

We don't yet have an easy way to clear a disk label, but
sometimes I just create a new test pool on the problematic disk,
destroy the pool, and start over with a more coherent label.
This doesn't work in all scenarios. Some people use the dd command
to wipe an existing label, but you must use it carefully.
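A typical dd invocation zeroes the start of the disk, where the
VTOC/EFI label and the front pair of ZFS vdev labels live. The
device name below is a placeholder, and this is destructive: it
destroys the label and makes any data on the disk unreachable, so
triple-check the target device first:

```
# WARNING: wipes the disk label -- be certain of the device name
root:~# dd if=/dev/zero of=/dev/rdsk/cXtYdZ bs=1024k count=1
```

Note that ZFS also keeps two backup labels at the end of the disk,
so a stale pool can still be recognized until those are overwritten
or the disk is relabeled.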



On 04/12/12 11:35, Peter Wood wrote:

I was following the instructions in ZFS Troubleshooting Guide on how to
replace a disk in the root pool on x86 system. I'm using OpenIndiana,
ZFS pool v.28 with mirrored system rpool. The replacement disk is
brand new.

root:~# zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
  scan: resilvered 17.6M in 0h0m with 0 errors on Wed Apr 11 17:45:16 2012
config:

        NAME                         STATE     READ WRITE CKSUM
        rpool                        DEGRADED     0     0     0
          mirror-0                   DEGRADED     0     0     0
            c2t5000CCA369C55DB8d0s0  OFFLINE      0   126     0
            c2t5000CCA369D5231Cd0s0  ONLINE       0     0     0

errors: No known data errors

I'm not very familiar with Solaris partitions and slices, so somewhere
in the format/partition commands I must have made a mistake, because
when I try to replace the disk I get the following error:

root:~# zpool replace rpool c2t5000CCA369C55DB8d0s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c2t5000CCA369C89636d0s0 overlaps with

I used -f and it worked, but I was wondering whether there is a way to
completely "reset" the new disk: remove all partitions and start from
scratch.

Thank you

zfs-discuss mailing list
