This might help, too: 

http://constantin.glez.de/blog/2011/03/how-set-zfs-root-pool-mirror-oracle-solaris-11-express



On Apr 12, 2012, at 11:04 AM, Cindy Swearingen wrote:

> Actually, since I answer the root pool partitioning questions
> at least 52 times per year, I provided the steps for clearing
> the existing partitions in step 9, here:
> 
> http://docs.oracle.com/cd/E23823_01/html/817-5093/disksxadd-2.html#disksxadd-40
> 
> How to Create a Disk Slice for a ZFS Root File System
> 
> Step 9 uses the format-->disk-->partition-->modify option and
> sets the free hog space to slice 0. Then, you press return for
> each existing slice to zero them out. This creates one large
> slice 0.
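> 
> For what it's worth, that dialog looks roughly like this (a sketch
> only - the exact prompts vary by release and disk, and the device
> name is just the new disk from this thread):
> 
> # format
> ... select c2t5000CCA369C89636d0 from the disk menu ...
> format> partition
> partition> modify
> ... choose the "All Free Hog" partitioning base ...
> Free Hog partition[6]? 0
> ... press Return at each remaining slice so it stays at 0 cylinders,
> which leaves all of the space in one large slice 0 ...
> partition> label
> partition> quit
> format> quit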
> 
> cs
> 
> On 04/12/12 11:48, Cindy Swearingen wrote:
>> Hi Peter,
>> 
>> The root pool disk labeling/partitioning is not so easy.
>> 
>> I don't know which OpenIndiana release this is, but a previous
>> Solaris release had a bug that caused the error message below, and
>> the workaround is exactly what you did: use the -f option.
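>> 
>> (That is, something along these lines - a sketch using the device
>> names from your mail:
>> 
>> zpool replace -f rpool c2t5000CCA369C55DB8d0s0 c2t5000CCA369C89636d0s0
>> )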
>> 
>> We don't yet have an easy way to clear a disk label, but
>> sometimes I just create a new test pool on the problematic disk,
>> destroy the pool, and start over with a more coherent label.
>> This doesn't work for all scenarios. Some people use the dd command
>> to wipe an existing label, but you must use it carefully.
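>> 
>> For example, something like this (a sketch only, using the new disk
>> from this thread - double-check the device name first, because both
>> commands destroy whatever is on that disk):
>> 
>> # create and destroy a throwaway pool to get a fresh label
>> zpool create -f testpool c2t5000CCA369C89636d0
>> zpool destroy testpool
>> 
>> # or zero the start of the disk, where the label and partition
>> # table live (p0 is the whole disk on x86)
>> dd if=/dev/zero of=/dev/rdsk/c2t5000CCA369C89636d0p0 bs=1024k count=10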
>> 
>> Thanks,
>> 
>> Cindy
>> 
>> On 04/12/12 11:35, Peter Wood wrote:
>>> Hi,
>>> 
>>> I was following the instructions in the ZFS Troubleshooting Guide on
>>> how to replace a disk in the root pool on an x86 system. I'm using
>>> OpenIndiana, ZFS pool v.28 with a mirrored system rpool. The
>>> replacement disk is brand new.
>>> 
>>> root:~# zpool status
>>> pool: rpool
>>> state: DEGRADED
>>> status: One or more devices has experienced an unrecoverable error. An
>>> attempt was made to correct the error. Applications are
>>> unaffected.
>>> action: Determine if the device needs to be replaced, and clear the
>>> errors
>>> using 'zpool clear' or replace the device with 'zpool replace'.
>>> see: http://www.sun.com/msg/ZFS-8000-9P
>>> scan: resilvered 17.6M in 0h0m with 0 errors on Wed Apr 11 17:45:16 2012
>>> config:
>>> 
>>>         NAME                         STATE     READ WRITE CKSUM
>>>         rpool                        DEGRADED     0     0     0
>>>           mirror-0                   DEGRADED     0     0     0
>>>             c2t5000CCA369C55DB8d0s0  OFFLINE      0   126     0
>>>             c2t5000CCA369D5231Cd0s0  ONLINE       0     0     0
>>> 
>>> errors: No known data errors
>>> root:~#
>>> 
>>> I'm not very familiar with Solaris partitions and slices, so somewhere
>>> in the format/partition commands I must have made a mistake, because
>>> when I try to replace the disk I get the following error:
>>> 
>>> root:~# zpool replace rpool c2t5000CCA369C55DB8d0s0
>>> c2t5000CCA369C89636d0s0
>>> invalid vdev specification
>>> use '-f' to override the following errors:
>>> /dev/dsk/c2t5000CCA369C89636d0s0 overlaps with
>>> /dev/dsk/c2t5000CCA369C89636d0s2
>>> root:~#
>>> 
>>> I used -f and it worked, but I was wondering: is there a way to
>>> completely "reset" the new disk - remove all partitions and start from
>>> scratch?
>>> 
>>> Thank you
>>> Peter
>>> 
>>> 

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
