Hi all,
I have a 5 drive RAIDZ volume with data that I'd like to recover.
The long story runs roughly:
1) The volume was running fine under FreeBSD on motherboard SATA controllers.
2) Two drives were moved to a HP P411 SAS/SATA controller
3) I *think* the HP controllers wrote some volume
On Thu, Jun 14, 2012 at 09:56:43AM +1000, Daniel Carosone wrote:
On Tue, Jun 12, 2012 at 03:46:00PM +1000, Scott Aitken wrote:
Hi all,
Hi Scott. :-)
I have a 5 drive RAIDZ volume with data that I'd like to recover.
Yeah, still..
I tried using Jeff Bonwick's labelfix binary
On Fri, Jun 15, 2012 at 10:54:34AM +0200, Stefan Ring wrote:
Have you also mounted the broken image as /dev/lofi/2?
Yep.
Wouldn't it be better to just remove the corrupted device? This worked
just fine in my case.
Hi Stefan,
when you say remove the device, I assume you mean simply
On Sat, Jun 16, 2012 at 08:54:05AM +0200, Stefan Ring wrote:
when you say remove the device, I assume you mean simply make it unavailable
for import (I can't remove it from the vdev).
Yes, that's what I meant.
root@openindiana-01:/mnt# zpool import -d /dev/lofi
  pool: ZP-8T-RZ1-01
and then you can 'zpool replace' the new disk into the pool perhaps?
Gregg Wonderly
On 6/16/2012 2:02 AM, Scott Aitken wrote:
On Sat, Jun 16, 2012 at 08:54:05AM +0200, Stefan Ring wrote:
when you say remove the device, I assume you mean simply make it unavailable for import (I
On Sat, Jun 16, 2012 at 09:58:40AM -0500, Gregg Wonderly wrote:
On Jun 16, 2012, at 9:49 AM, Scott Aitken wrote:
On Sat, Jun 16, 2012 at 09:09:53AM -0500, Gregg Wonderly wrote:
Use 'dd' to replicate as much of lofi/2 as you can onto another device, and then cable that into place
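For what it's worth, a minimal sketch of that dd step (all paths here are hypothetical, and plain files stand in for /dev/lofi/2 and the replacement device so the commands are safe to try; on a real failing disk you'd point if= at the device node):

```shell
# Stand-in for the source device (in reality: if=/dev/lofi/2 or the raw disk).
dd if=/dev/urandom of=/tmp/lofi2.img bs=1k count=128 2>/dev/null

# Clone it, continuing past read errors (conv=noerror) and padding any
# unreadable blocks with zeros (conv=sync) so byte offsets stay aligned --
# important for ZFS, since the labels live at fixed offsets.
dd if=/tmp/lofi2.img of=/tmp/clone.img bs=64k conv=noerror,sync 2>/dev/null

ls -l /tmp/clone.img
```

Note that conv=sync pads each short read up to the block size, so a large bs= can inflate the copy around bad sectors; some people prefer a small bs= (or GNU ddrescue) for badly damaged media.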
Hi all,
this is a follow-up to some help I was soliciting with my corrupted pool.
The short story is that, for various reasons, I have no confidence in the
quality of the labels on 2 of the 5 drives in my RAIDZ array.
There is even a possibility that one drive carries the label of another (a
mirroring accident).
Hi all,
I know the easiest answer to this question is don't do it in the first
place, and if you do, you should have a backup; however, I'll ask it
regardless.
Is there a way to back up the ZFS metadata on each member device of a pool
to another device (possibly non-ZFS)?
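One rough sketch of such a backup, assuming the usual on-disk layout of four 256 KiB vdev labels (L0 and L1 at the front of the device, L2 and L3 in the last 512 KiB). Paths are hypothetical, a plain file stands in for the member device, and `stat -c %s` is the GNU form (FreeBSD would use `stat -f %z`):

```shell
# Stand-in "member device": a 10 MiB file (in reality: a raw disk/partition).
DEV=/tmp/member.img
dd if=/dev/zero of="$DEV" bs=1M count=10 2>/dev/null

SIZE=$(stat -c %s "$DEV")   # device size in bytes
L=262144                    # each ZFS vdev label is 256 KiB

# Front labels L0 and L1:
dd if="$DEV" of=/tmp/label0 bs=$L count=1 2>/dev/null
dd if="$DEV" of=/tmp/label1 bs=$L count=1 skip=1 2>/dev/null
# Back labels L2 and L3 (the last 512 KiB of the device):
dd if="$DEV" of=/tmp/label2 bs=$L count=1 skip=$(( SIZE / L - 2 )) 2>/dev/null
dd if="$DEV" of=/tmp/label3 bs=$L count=1 skip=$(( SIZE / L - 1 )) 2>/dev/null
```

The back-label arithmetic only lands cleanly when the device size is a multiple of 256 KiB; on real disks it usually isn't, so you'd compute the byte offsets (size minus 512 KiB and size minus 256 KiB) rather than whole-block skips. Restoring labels this way is also only safe if nothing on the pool has changed since the copy was taken, since the labels embed transaction-group state.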
I have recently read a