You should just be able to:

1. Physically replace the failed disk.
2. Run "cfgadm -c configure <device>" to bring the new disk online (only
   needed if you haven't put "set sata:sata_auto_online=1" in /etc/system).
3. Run "zpool replace <pool> <device>" to start resilvering the data onto
   the new disk.
4. Finally, run "/boot/solaris/bin/update_grub" to make sure the second
   disk is bootable.
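As a rough sketch, assuming the failed disk shows up as c1t1d0 in a pool
named rpool on SATA port sata0/1 (all hypothetical names -- substitute
your own from "cfgadm -al" and "zpool status"), the sequence might look
like:

```shell
# After physically swapping the failed disk:

# Bring the new disk online (skip if sata_auto_online=1 is set):
cfgadm -c configure sata0/1

# Tell ZFS to rebuild the mirror onto the new disk:
zpool replace rpool c1t1d0

# Watch the resilver progress:
zpool status rpool

# Once resilvered, make the replacement disk bootable:
/boot/solaris/bin/update_grub
```

"zpool status" will show the resilver in progress and report when the
pool is healthy again.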


I am not sure where the /boot/solaris/bin/update_grub command is
documented, although I know it is. The full syntax of zpool is documented
in "man zpool", as is cfgadm in "man cfgadm". I would recommend setting
that entry in /etc/system. There is another setting which will start a
resilver as soon as a new drive is attached, but I don't think I would
recommend it. I like the safety of being able to make sure the disk I put
in is the one I think it is.
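For reference, the /etc/system entry mentioned above is a single line
(a reboot is required for it to take effect):

```
set sata:sata_auto_online=1
```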

Andrew Hettinger
http://Prominic.NET  ||  [EMAIL PROTECTED]
Tel:  866.339.3169 (toll free) -or- +1.217.356.2888 x.110 (int'l)
Fax: 866.372.3356 (toll free) -or- +1.217.356.3356            (int'l)
Mobile direct: 1.217.621.2540
CompTIA A+, CompTIA Network+, MCP

[EMAIL PROTECTED] wrote on 09/13/2008 10:54:01 AM:

> 2008/9/11 Jonathan Loran:
> > Zfs mirrors have the same advantages as with raidz(2):  You get zfs
> > checksums and self healing.  Also you can increase your pool size by
> > adding bigger disks one side of the mirror at a time.  Perhaps some
> > hardware controllers can do this too, but I'm not familiar with any.
>
> Thanks Jon (and Andrew) - I do see your point. Do you know if it's
> documented anywhere how to recover a system when the primary disk
> fails? Eg, steps to follow.
> _______________________________________________
> storage-discuss mailing list
> [email protected]
> http://mail.opensolaris.org/mailman/listinfo/storage-discuss