On 04/17/12 01:00 PM, Jim Klimov wrote:
> 2012-04-17 14:47, Matt Keenan wrote:
>> - or is it possible that one of the devices being a USB device is
>> causing the failure? I don't know.


> Might be, I've got little experience with those beside LiveUSB
> imagery ;)

>> My reason for splitting the pool was so I could attach the clean USB
>> rpool to another laptop, simply attach the disk from the new laptop,
>> let it resilver, run installgrub on the new laptop disk device, and
>> boot it up, and I would be back in action.
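For reference, the recovery plan described above might look roughly like this; the device names (c1t0d0s0 for the USB disk, c0t0d0s0 for the new internal disk) and the pool name usbpool are placeholders for this sketch, not taken from the actual systems:

```shell
# On the old laptop: split the USB half off the mirrored rpool
# into its own single-disk pool (assumes rpool is a two-way mirror).
zpool split rpool usbpool

# On the new laptop: import the pool from the USB disk, then attach
# the new internal disk as a mirror side and let it resilver.
zpool import usbpool
zpool attach usbpool c1t0d0s0 c0t0d0s0   # existing USB device, then new disk
zpool status usbpool                     # watch resilver progress

# Once resilvering completes, make the new internal disk bootable.
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
```

After this the USB disk could be detached with `zpool detach` once the internal disk is confirmed bootable.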

> If the USB disk split-off were to work, I'd rather try booting
> the laptop off the USB disk, if BIOS permits, or I'd boot off
> a LiveCD/LiveUSB (if Solaris 11 has one - or from installation
> media and break out into a shell) and try to import the rpool
> from the USB disk and then attach the laptop's disk to it to resilver.

This is exactly what I am doing: I booted the new laptop into a LiveCD, imported the USB pool, and am zpool-replacing the old laptop disk device, which is in a degraded state, with the new laptop disk device (after partitioning it to keep the Windows install).
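Assuming hypothetical device names (c1t0d0s0 for the absent old internal disk, c0t0d0s0 for the new one), that import-and-replace step might look like:

```shell
# From the LiveCD on the new laptop: force-import the USB rpool
# (it was last used on another host), then replace the missing
# old internal disk with the new laptop's disk.
zpool import -f rpool
zpool replace rpool c1t0d0s0 c0t0d0s0

# Resilvering onto the new disk shows up here.
zpool status rpool
```

Unlike `zpool attach`, `zpool replace` swaps the new device in for the old one, so the pool stays a two-way mirror (USB disk plus new internal disk) rather than growing a third side.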


>> As a workaround I'm trying to simply attach my USB rpool to the new
>> laptop and use zpool replace to effectively replace the offline device
>> with the new laptop disk device. So far so good, 12% resilvered, so
>> fingers crossed this will work.

> Won't this overwrite the USB disk with the new laptop's (empty)
> disk? The way you describe it...

No, the offline disk in this instance is the old laptop's internal disk; the online device is the USB drive.


>> As an aside, I have noticed that the old laptop would not boot if
>> the USB part of the mirror was not attached; a successful boot could
>> only be achieved when both mirror devices were online. Is this a
>> known issue with ZFS? A bug?

> Shouldn't be, as mirrors are meant to protect against disk failures.
> What was your rpool's "failmode" zpool-level attribute?
> It might have some relevance, but it should define the kernel's
> reaction to "catastrophic failures" of the pool, and loss of one
> side of a mirror IMHO should not be one. Try failmode=continue and
> see if that helps the rpool, to be certain. I think that's what the
> installer should have done.

Exactly what I would have thought; ZFS should actually help here, not hinder. From what I can see the default failmode as set by the installer is "wait", which is exactly what is happening when I attempt to boot.

Just tried setting zpool failmode=continue and unfortunately it still fails to boot; failmode=wait is definitely the default.
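For completeness, checking and changing the property discussed above would be along these lines (the pool name rpool is assumed):

```shell
# Show the current failmode setting (wait | continue | panic).
zpool get failmode rpool

# Switch to continue so failed I/O returns EIO instead of blocking.
zpool set failmode=continue rpool
```

Note that failmode only governs behaviour on catastrophic pool failure; it would not be expected to affect a healthy degraded mirror, which matches the result above.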

cheers

Matt

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
