I'd recommend getting a second 80GB disk and mirroring your root as well.
UFS+SDS for root (don't forget a live upgrade slice) and ZFS for the other
disks.
Probably RAID-Z, as you don't have enough disks for 1+0 to be interesting.
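A minimal sketch of that layout on the ZFS side (the pool name "tank" and the c1tXd0 device names are assumptions; adjust for your hardware, and note the commands are only run if zpool is present):

```shell
#!/bin/sh
# Sketch only: "tank" and the c1tXd0 devices are hypothetical names.
if command -v zpool >/dev/null 2>&1; then
  # Three-disk raidz: survives one disk failure, usable capacity of two disks.
  zpool create tank raidz c1t2d0 c1t3d0 c1t4d0
  zpool status tank
else
  echo "zpool not found; sketch only"
fi
```

The UFS root mirror would be handled separately with SDS/SVM (metadb, metainit, metaroot), leaving a spare slice free for live upgrade.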
Paul
This message posted from opensolaris.org
There isn't a global hot spare, but you can add a hot spare to multiple pools.
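To illustrate (pool names pool1/pool2 and disk c1t9d0 are hypothetical): listing the same disk as a spare in several pools approximates a global hot spare, since whichever pool faults first claims it.

```shell
#!/bin/sh
# Sketch only: pool and disk names are hypothetical.
if command -v zpool >/dev/null 2>&1; then
  # The same physical disk may be added as a spare to more than one pool.
  zpool add pool1 spare c1t9d0
  zpool add pool2 spare c1t9d0
  zpool status pool1 pool2
else
  echo "zpool not found; sketch only"
fi
```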
Paul
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Paul Kraus wrote:
In the ZFS case I could replace the disk and the zpool would resilver automatically. I could also take the removed disk and put it into the second system and have it recognize the zpool (and that it was missing half of a mirror), and the data was all there.
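The two operations described above can be sketched as follows (pool name "tank" and device names are assumptions, not taken from the original post):

```shell
#!/bin/sh
# Sketch only: "tank", c1t2d0, and c1t5d0 are hypothetical names.
if command -v zpool >/dev/null 2>&1; then
  # Replace a failed mirror half; ZFS resilvers onto the new disk automatically.
  zpool replace tank c1t2d0 c1t5d0

  # On a second machine, import the pool from the surviving disk; it comes
  # up DEGRADED (missing half of the mirror) but with the data intact.
  zpool import -f tank
  zpool status tank
else
  echo "zpool not found; sketch only"
fi
```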
I have a machine connected to an HDS with a corrupted pool.
While running zpool import -nfFX on the pool, it spawns a large number of
zfsdle processes, and eventually the machine hangs for 20-30 seconds and spits
out error messages:
zfs: [ID 346414 kern.warning] WARNING: Couldn't create process
bash-4.0# ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 10
stack size (kbytes, -s) 10240
cpu time
I'm surprised at the number as well.
Running it again, I'm seeing it jump fairly high just before the fork errors:
bash-4.0# ps -ef | grep zfsdle | wc -l
20930
(the next run of ps failed due to the fork error).
So maybe it is running out of processes.
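One way to count the processes without the grep matching itself (the mdb query for the system-wide process cap is Solaris-specific and shown only as a comment):

```shell
#!/bin/sh
# Count zfsdle processes; the [z] bracket trick stops grep matching itself.
count=$(ps -ef | grep '[z]fsdle' | wc -l)
echo "zfsdle processes: $count"
# On Solaris, compare against the system-wide process cap:
#   echo 'max_nprocs/D' | mdb -k
```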
ZFS file data from ::memstat just went
Alas, even moving the file out of the way and rebooting the box (to guarantee a
clean state) didn't work:
-bash-4.0# zpool import -nfFX hds1
-bash-4.0# echo $?
1
Do you need to be able to read all the labels for each disk in the array in
order to recover?
From zdb -l on one of the disks:
Rather than hacking something like that together, he could use a Disk on Module
(http://en.wikipedia.org/wiki/Disk_on_module) or something like
http://www.tomshardware.com/news/nanoSSD-Drive-Elecom-Japan-SATA,8538.html
(which I suspect may be a DOM, but I've not poked around sufficiently to check).
Paul