For my latest test I set up a stripe of two mirrors with one hot
spare, like so:

    zpool create -f -m /export/zmir zmir mirror c0t0d0 c3t2d0 \
        mirror c3t3d0 c3t4d0 spare c3t1d0

I spun down c3t2d0 and c3t4d0 simultaneously, and while the system
kept running (my tar over NFS barely hiccuped), the hot spare was
never attached in place of either failed disk.
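For reference, the spare's state is visible directly in the pool
status; it should read INUSE once the spare has been attached, or
AVAIL if it never kicked in:

    # check pool health and whether the spare was ever attached
    zpool status zmir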
Eric Schrock wrote:

On Tue, Dec 12, 2006 at 07:53:32AM -0800, Jim Hranicky wrote:
> I know I can attach it via the zpool commands, but is there a way to
> kickstart the attachment process if it fails to attach automatically
> upon disk failure?

Yep. Just do a 'zpool replace zmir target spare'. This is what the
FMA agent does on your behalf when it detects the failure.
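With the pool above, and assuming c3t2d0 is the faulted disk and
c3t1d0 the designated spare, the sequence would presumably look
something like:

    # swap the hot spare in for the failed disk
    zpool replace zmir c3t2d0 c3t1d0

    # later: either detach the spare to return it to the spare
    # list once c3t2d0 is healthy and resilvered ...
    zpool detach zmir c3t1d0

    # ... or detach the failed disk to make the spare permanent
    # zpool detach zmir c3t2d0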
On Tue, Dec 12, 2006 at 02:08:57PM -0500, James F. Hranicky wrote:

Sure, but that's what I want to avoid. The FMA agent should do this
by itself, but it's not, so I guess I'm just wondering why, or if
there's a good way to get it to do so. If this happens in the middle
of the night, I don't want to have to do the replacement by hand.
On Tue, Dec 12, 2006 at 02:38:22PM -0500, James F. Hranicky wrote:

    Dec 11 14:42:32.1271 1319464e-7a8c-e65b-962e-db386e90f7f2 ZFS-8000-D3
      100%  fault.fs.zfs.device
            Problem in: zfs://pool=2646e20c1cb0a9d0/vdev=724c128cdbc17745
               Affects:
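That looks like 'fmdump' output; assuming so, the full verbose record
for the event can be pulled by its UUID:

    # dump the verbose fault record for the event above
    fmdump -v -u 1319464e-7a8c-e65b-962e-db386e90f7f2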
Eric Schrock wrote:

Hmmm, it means that we correctly noticed that the device had failed,
but for whatever reason the ZFS FMA agent didn't correctly replace
the drive. I am cleaning up the hot spare behavior as we speak, so I
will try to reproduce this.
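In the meantime it may be worth confirming that the ZFS diagnosis
engine is loaded at all; 'zfs-diagnosis' is a guess at the module
name, which may vary between builds:

    # list the loaded FMA modules and look for the ZFS engine
    fmadm config | grep zfs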
James F. Hranicky wrote:

Ok, great. Well, as long as I know it's being worked on.