Hi All.
I have a pool (3 disks, raidz1). I recabled the disks, and now some of the
disks in the pool are not available (cannot open). Reverting the cabling is
not possible. Can I recover data from this pool?
Thanks.
Hi,
I'm not very familiar with manipulating zfs.
This is what happened:
I have an osol 2009.06 system on which I have some files that I need to
recover. Due to my ignorance and blind testing, I have managed to get
this system to be unbootable... I know, my own fault.
So now I have a second
You need the -R option to zpool import. Try the procedure documented here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Resolving_ZFS_Mount_Point_Problems_That_Prevent_Successful_Booting
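In outline, that procedure boots from install media and imports the pool
under an alternate root, so the pool's mountpoints can be inspected and
fixed without colliding with the running system. A minimal sketch, assuming
the default pool name rpool and /a as the alternate root (names will differ
per system):
# zpool import -f -R /a rpool
# zfs list -r rpool
Then correct the offending mountpoint with 'zfs set mountpoint=...', export
the pool, and reboot.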
-- richard
On Mar 3, 2010, at 2:32 PM, Erwin Panen wrote:
Hi,
I'm not
Erwin Panen wrote:
Hi,
I'm not very familiar with manipulating zfs.
This is what happened:
I have an osol 2009.06 system on which I have some files that I need
to recover. Due to my ignorance and blind testing, I have managed to
get this system to be unbootable... I know, my own fault.
So
Richard, thanks for replying;
I seem to have complicated matters:
I shut down the system (past midnight here :-) ) and, seeing your reply
come in, fired it up again to test further.
The system wouldn't come up anymore (it dropped me into a maintenance
shell) as it would try to import both rpool systems (I
Erwin Panen wrote:
Richard, thanks for replying;
I seem to have complicated matters:
I shut down the system (past midnight here :-) ) and, seeing your reply
come in, fired it up again to test further.
The system wouldn't come up anymore (it dropped me into a maintenance
shell) as it would try to import both
Ian, thanks for replying.
I'll give cfgadm | grep sata a go in a minute.
At the mo I've rebooted from the 2009.06 LiveCD. Of course I can't import
rpool because it's a newer zfs version :-(
Any way to update the zfs version on a running LiveCD?
Thanks for helping out!
Erwin
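(For reference, cfgadm | grep sata output looks roughly like this; the
ap_ids and disk names below are illustrative, not from this box:
# cfgadm | grep sata
sata0/0::dsk/c7t0d0   disk        connected   configured    ok
sata0/1               sata-port   empty       unconfigured  ok
A port that should hold a disk but shows empty/unconfigured is the one to
chase; 'cfgadm -c configure sata0/1' attaches it.)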
Ian Collins wrote:
Erwin Panen wrote:
Ian, thanks for replying.
I'll give cfgadm | grep sata a go in a minute.
At the mo I've rebooted from the 2009.06 LiveCD. Of course I can't import
rpool because it's a newer zfs version :-(
Any way to update the zfs version on a running LiveCD?
No, if you can get a failsafe
Matthew Angelo wrote:
Hi there,
Is there a way to get as much data as possible off an existing
slightly corrupted zpool? I have a 2-disk stripe which I'm moving to
new storage. I will be moving it to a ZFS mirror; however, at the
moment I'm having problems with ZFS panicking the system
Hi there,
Is there a way to get as much data as possible off an existing slightly
corrupted zpool? I have a 2-disk stripe which I'm moving to new storage. I
will be moving it to a ZFS mirror; however, at the moment I'm having problems
with ZFS panicking the system during a send | recv.
I don't
2009/3/27 Matthew Angelo bang...@gmail.com:
Doing an $( ls -lR | grep -i "IO Error" ) returns roughly 10-15 files which
are affected.
If ls works then tar, cpio, etc. should work.
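A hedged sketch of that kind of salvage copy, assuming the damaged pool's
filesystem is mounted at /oldpool and the new mirror at /newpool (both paths
are placeholders); read errors on the known-bad files get logged rather than
stopping the copy:
# cd /oldpool && find . -depth -print | cpio -pdm /newpool 2>/tmp/salvage.errors
or, with tar:
# (cd /oldpool && tar cf - .) | (cd /newpool && tar xf -)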
I was wondering if this ever made it into zfs as a fix for bad labels?
On Wed, 7 May 2008, Jeff Bonwick wrote:
Yes, I think that would be useful. Something like 'zpool revive'
or 'zpool undead'. It would not be completely general-purpose --
in a pool with multiple mirror devices, it could only
I'm wondering if this bug is fixed and if not, what is the bug number:
If your entire pool consisted of a single mirror of two disks, A and B,
and you detached B at some point in the past, you *should* be able to
recover the pool as it existed when you detached B. However, I just
tried
Jeff, sorry this is so late. Thanks for the labelfix binary. I would like to
have one compiled for SPARC. I tried compiling your source code but it threw
up many errors. I'm not a programmer and reading the source code means
absolutely nothing to me. One error was:
cc labelfix.c
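(A guess at the cause, hedged: labelfix.c includes the private header
sys/vdev_impl.h, which isn't shipped under /usr/include, so compiling needs
-I pointing into an ON source tree; the path below is hypothetical:
# cc -I/path/to/onnv/usr/src/uts/common/fs/zfs -o labelfix labelfix.c )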
Hello Darren,
Tuesday, May 6, 2008, 11:16:25 AM, you wrote:
DJM Great tool, any chance we can have it integrated into zpool(1M) so that
DJM it can find and fixup on import detached vdevs as new pools ?
I remember some posts from a long time ago about 'zpool split', so one could
split a pool in two
Darren J Moffat wrote:
| Great tool, any chance we can have it integrated into zpool(1M) so that
| it can find and fixup on import detached vdevs as new pools ?
|
| I'd think it would be reasonable to extend the meaning of
| 'zpool import -D' to list
Yes, I think that would be useful. Something like 'zpool revive'
or 'zpool undead'. It would not be completely general-purpose --
in a pool with multiple mirror devices, it could only work if
all replicas were detached in the same txg -- but for the simple
case of a single top-level mirror vdev,
Jeff Bonwick wrote:
Yes, I think that would be useful. Something like 'zpool revive'
or 'zpool undead'.
Why a new subcommand, when 'zpool import' already has '-D' to revive
destroyed pools?
It would not be completely general-purpose --
in a pool with multiple mirror devices, it could only work
Hello Cyril,
Sunday, May 4, 2008, 11:34:28 AM, you wrote:
CP On Sun, May 4, 2008 at 11:42 AM, Jeff Bonwick [EMAIL PROTECTED] wrote:
Oh, and here's the source code, for the curious:
CP [snipped]
label_write(fd, offsetof(vdev_label_t, vl_uberblock),
1ULL
Great tool, any chance we can have it integrated into zpool(1M) so that
it can find and fixup on import detached vdevs as new pools ?
I'd think it would be reasonable to extend the meaning of
'zpool import -D' to list detached vdevs as well as destroyed pools.
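For comparison, what -D does today (hedged, per zpool(1M); 'tank' is a
placeholder pool name):
# zpool import -D
# zpool import -Df tank
The first form lists destroyed pools whose labels are still intact; the
second imports one of them. The proposal above would have the listing also
show importable detached vdevs.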
--
Darren J Moffat
Oh, you're right! Well, that will simplify things! All we have to do
is convince a few bits of code to ignore ub_txg == 0. I'll try a
couple of things and get back to you in a few hours...
Jeff
On Fri, May 02, 2008 at 03:31:52AM -0700, Benjamin Brumaire wrote:
Hi,
while diving deeply in
OK, here you go. I've successfully recovered a pool from a detached
device using the attached binary. You can verify its integrity
against the following MD5 hash:
# md5sum labelfix
ab4f33d99fdb48d9d20ee62b49f11e20 labelfix
It takes just one argument -- the disk to repair:
# ./labelfix
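(The snippet is cut off here; presumably the argument is the raw device of
the detached disk. A hypothetical invocation, using the slice discussed
later in this thread:
# ./labelfix /dev/rdsk/c0d1s4 )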
Oh, and here's the source code, for the curious:
#include <devid.h>
#include <dirent.h>
#include <errno.h>
#include <libintl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>
#include <stddef.h>
#include <sys/vdev_impl.h>
/*
* Write a label block with a ZBT
On Sun, May 4, 2008 at 11:42 AM, Jeff Bonwick [EMAIL PROTECTED] wrote:
Oh, and here's the source code, for the curious:
[snipped]
label_write(fd, offsetof(vdev_label_t, vl_uberblock),
1ULL << UBERBLOCK_SHIFT, ub);
label_write(fd, offsetof(vdev_label_t,
Oh, and here's the source code, for the curious:
The forensics project will be all over this, I hope, and wrap it up in a
nice command line tool.
-mg
Well, thanks to your program, I could recover the data on the detached disk.
Now I'm copying the data to other disks and will resilver it inside the pool.
Warm words aren't enough to express how I feel. This community is great.
Thank you very much.
bbr
Hi,
while diving deeply into zfs in order to recover data, I found that every
uberblock in label0 has the same ub_rootbp and a zeroed ub_txg. Does it
mean only ub_txg was touched while detaching?
Hoping that is the case, I modified ub_txg in one uberblock to match the txg
from the label
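A hedged way to eyeball those uberblocks from the shell: per the on-disk
label layout, label 0's uberblock ring starts 128K into the vdev, in 1K
slots, and each valid slot starts with ub_magic 0x00bab10c (ub_txg is the
third 8-byte word of a slot). Using the same device as the dd command in
the next message:
# dd if=/dev/dsk/c0d1s4 bs=1024 iseek=128 count=128 2>/dev/null | od -A d -t x8 | grep bab10c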
Benjamin Brumaire wrote:
I tried to calculate it assuming only the uberblock is relevant.
#dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=168 | digest -a sha256
168+0 records in
168+0 records out
710306650facf818e824db5621be394f3b3fe934107bdfc861bbc82cb9e1bbf3
Is this on SPARC or x86?
ZFS stores
It is on x86.
Does it mean that I have to split the output from digest into 4 words (each
8 bytes) and reverse each before comparing with the stored value?
bbr
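That is exactly the little-endian wrinkle: ZFS keeps the checksum as four
uint64 words, each word being the big-endian reading of its 8-byte group,
and on x86 those words land on disk in native little-endian order, so each
group looks byte-reversed next to digest output. A hedged C sketch of the
conversion (the hash is the one posted above; nothing ZFS-specific linked in):

#include <stdio.h>
#include <stdint.h>

int
main(void)
{
	/* digest output from the earlier post, as hex */
	const char *hex =
	    "710306650facf818e824db5621be394f"
	    "3b3fe934107bdfc861bbc82cb9e1bbf3";
	uint8_t d[32];
	int i, w, b;

	for (i = 0; i < 32; i++) {
		unsigned int byte;
		(void) sscanf(hex + 2 * i, "%2x", &byte);
		d[i] = (uint8_t)byte;
	}
	/*
	 * Each stored word is the big-endian value of its 8-byte group;
	 * on little-endian x86 the raw on-disk bytes of each word appear
	 * reversed relative to the digest output.
	 */
	for (w = 0; w < 4; w++) {
		uint64_t word = 0;
		for (b = 0; b < 8; b++)
			word = (word << 8) | d[w * 8 + b];
		printf("zc_word[%d] = 0x%016llx\n", w,
		    (unsigned long long)word);
	}
	return (0);
}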
If your entire pool consisted of a single mirror of two disks, A and B,
and you detached B at some point in the past, you *should* be able to
recover the pool as it existed when you detached B. However, I just
tried that experiment on a test pool and it didn't work. I will
investigate further
Jeff, thank you very much for taking the time to look at this.
My entire pool consisted of a single mirror of two slices on different disks,
A and B. I attached a third slice on disk C, waited for the resilver, and
then detached it. Now disks A and B have burned and I have only disk C at hand.
bbr
Urgh. This is going to be harder than I thought -- not impossible,
just hard.
When we detach a disk from a mirror, we write a new label to indicate
that the disk is no longer in use. As a side effect, this zeroes out
all the old uberblocks. That's the bad news -- you have no uberblocks.
The
If I understand you correctly the steps to follow are:
- read each sector (is dd bs=512 count=1 skip=n enough?)
- decompress it (any tools implementing the lzjb algorithm? see the sketch after this list)
- size = 1024?
- structure might be objset_phys_t?
- take the oldest birth time as the root block
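On the lzjb question: I'm not aware of a standalone tool, but the algorithm
is tiny. Below is a hedged sketch of a decompressor transcribed from the ON
sources (usr/src/uts/common/fs/zfs/lzjb.c), wrapped in a minimal driver; the
512-byte-in/1024-byte-out sizes just mirror the step list above and are
assumptions, and the added bounds checks are mine:

#include <stdio.h>
#include <stdint.h>

#define	MATCH_BITS	6
#define	MATCH_MIN	3
#define	OFFSET_MASK	((1 << (16 - MATCH_BITS)) - 1)

/* decompress s_len bytes at src into d_len bytes at dst; -1 on bad input */
static int
lzjb_decompress(const uint8_t *src, uint8_t *dst, size_t s_len, size_t d_len)
{
	const uint8_t *s = src, *s_end = src + s_len;
	uint8_t *d = dst, *d_end = dst + d_len;
	uint8_t copymap = 0;
	int copymask = 1 << 7;	/* forces a copymap reload on the first pass */

	while (d < d_end) {
		if ((copymask <<= 1) == (1 << 8)) {
			if (s >= s_end)
				return (-1);
			copymask = 1;
			copymap = *s++;
		}
		if (copymap & copymask) {
			/* two-byte item: 6-bit match length, 10-bit offset */
			int mlen, offset;
			uint8_t *cpy;

			if (s + 2 > s_end)
				return (-1);
			mlen = (s[0] >> (8 - MATCH_BITS)) + MATCH_MIN;
			offset = ((s[0] << 8) | s[1]) & OFFSET_MASK;
			s += 2;
			if ((cpy = d - offset) < dst)
				return (-1);
			while (--mlen >= 0 && d < d_end)
				*d++ = *cpy++;
		} else {
			/* literal byte */
			if (s >= s_end)
				return (-1);
			*d++ = *s++;
		}
	}
	return (0);
}

int
main(int argc, char **argv)
{
	uint8_t in[512], out[1024];	/* sector in, objset_phys_t-sized out */
	size_t n;
	FILE *fp;

	if (argc != 2 || (fp = fopen(argv[1], "rb")) == NULL) {
		(void) fprintf(stderr, "usage: lzjbcat <sector-file>\n");
		return (1);
	}
	n = fread(in, 1, sizeof (in), fp);
	(void) fclose(fp);
	if (lzjb_decompress(in, out, n, sizeof (out)) != 0) {
		(void) fprintf(stderr, "not valid lzjb (or truncated)\n");
		return (1);
	}
	(void) fwrite(out, 1, sizeof (out), stdout);
	return (0);
}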
Hi,
my system (Solaris b77) was physically destroyed and I lost data saved in a
zpool mirror. The only thing left is a detached vdev from the pool. I'm aware
that the uberblock is gone and that I can't import the pool. But I still hope
there is a way or a tool (like tct