[zfs-discuss] Recovering data

2010-04-20 Thread eXeC001er
Hi All.

I have a pool (3 disks, raidz1). I recabled the disks, and now some of the
disks in the pool are not available (cannot open). Going back to the old
cabling is not possible. Can I recover the data from this pool?

Thanks.
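A commonly suggested first step in this situation (not from this thread, and
assuming the disks themselves are healthy and only their device paths changed)
is to export the pool and import it again so ZFS rescans /dev/dsk for the
moved devices; a minimal sketch, with tank standing in for the real pool name:

# zpool export tank
# zpool import tank

If the pool is too far gone to export, a plain zpool import with no arguments
at least lists which devices ZFS can still find; with only one disk missing, a
raidz1 pool should still import in DEGRADED state.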
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] recovering data - howto mount rpool to newpool?

2010-03-03 Thread Erwin Panen

Hi,

I'm not very familiar with manipulating zfs.
This is what happened:
I have an osol 2009.06 system on which I have some files that I need to 
recover. Due to my ignorance and blind testing, I have managed to make 
this system unbootable... I know, my own fault.


So now I have a second osol 2009.06 machine. Of course this system has 
the same user and home-directory structure.

I've added the hard disk from system 1 to system 2.
The zpool was not exported when system 1 was shut down.
Of course both contain the standard rpool. As far as I have read, I 
should be able to import rpool as newpool.

-
zpool import -f rpool newpool
cannot mount 'export': directory is not empty
cannot mount 'export/home':directory is not empty
cannot mount 'export/home/erwin':directory is not empty


So I end up with /newpool containing boot and etc dirs.

How can I work around this problem? Mount to a different mountpoint?

Thanks for helping out!

Erwin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data - howto mount rpool to newpool?

2010-03-03 Thread Richard Elling
You need the -R option to zpool import.  Try the procedure documented here: 
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Resolving_ZFS_Mount_Point_Problems_That_Prevent_Successful_Booting
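A minimal sketch of the kind of import that procedure describes (/mnt here is
just an example of an alternate root):

# zpool import -f -R /mnt rpool newpool

With -R, all of the pool's datasets are mounted relative to /mnt, so the
imported pool's export/home hierarchy no longer collides with the directories
that already exist on the running system.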

 -- richard

On Mar 3, 2010, at 2:32 PM, Erwin Panen wrote:

 Hi,
 
 I'm not very familiar with manipulating zfs.
 This is what happened:
 I have an osol 2009.06 system on which I have some files that I need to 
 recover. Due to my ignorance and blindly testing, I have managed to get this 
 system to be unbootable... I know, my own fault.
 
 So now I have a second osol 2009.06 machine. Off course this system has the 
 same user and homedir structural settings.
 I've added the harddisk from system 1 to system 2.
 The zfspool was not exported at shutdown of system 1.
 Of course both contain the standard rpool. As far as my reading has learned 
 me, I should be able to import rpool to newpool.
 -
 zpool import -f rpool newpool
 cannot mount 'export': directory is not empty
 cannot mount 'export/home':directory is not empty
 cannot mount 'export/home/erwin':directory is not empty
 
 
 So I end up with /newpool containing boot and etc dirs.
 
 How can I work around this problem? Mount to different mountpoint?
 
 Thanks for helping out!
 
 Erwin
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 16-18, 2010)




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data - howto mount rpool to newpool?

2010-03-03 Thread Ian Collins

Erwin Panen wrote:

Hi,

I'm not very familiar with manipulating zfs.
This is what happened:
I have an osol 2009.06 system on which I have some files that I need 
to recover. Due to my ignorance and blindly testing, I have managed to 
get this system to be unbootable... I know, my own fault.


So now I have a second osol 2009.06 machine. Off course this system 
has the same user and homedir structural settings.

I've added the harddisk from system 1 to system 2.
The zfspool was not exported at shutdown of system 1.
Of course both contain the standard rpool. As far as my reading has 
learned me, I should be able to import rpool to newpool.

-
zpool import -f rpool newpool
cannot mount 'export': directory is not empty

Try adding the -R option to change the root directory.

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data - howto mount rpool to newpool?

2010-03-03 Thread Erwin Panen

Richard, thanks for replying;
I seem to have complicated matters:
I shut down the system (past midnight here :-) ) and, seeing your reply 
come in, fired it up again to test further.
The system wouldn't come up anymore (it dumped me in a maintenance shell) 
as it would try to import both rpools (I guess).
So I powered down, disconnected the 2nd disk, and rebooted. So far 
so good, the system comes up.
Then I reconnected the 2nd disk (it's a SATA disk) but the system will not 
see it.

/var/adm/messages shows this:
---
er...@mars:/var/adm$ tail -f messages
Mar  3 23:55:30 mars  SATA device detected at port 0
Mar  3 23:55:30 mars sata: [ID 663010 kern.info] /p...@0,0/pci1849,5...@9 :
Mar  3 23:55:30 mars sata: [ID 761595 kern.info]    SATA disk device at port 0
Mar  3 23:55:30 mars sata: [ID 846691 kern.info]    model WDC WD800JD-75JNC0
Mar  3 23:55:30 mars sata: [ID 693010 kern.info]    firmware 06.01C06
Mar  3 23:55:30 mars sata: [ID 163988 kern.info]    serial number WD-WMAM96632208
Mar  3 23:55:30 mars sata: [ID 594940 kern.info]    supported features:
Mar  3 23:55:30 mars sata: [ID 981177 kern.info]     28-bit LBA, DMA, SMART self-test
Mar  3 23:55:30 mars sata: [ID 514995 kern.info]    SATA Gen1 signaling speed (1.5Gbps)
Mar  3 23:55:30 mars sata: [ID 349649 kern.info]    capacity = 15625 sectors

-
I also have this output from an earlier session:
-
er...@mars:~# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
  0. c4d0 <DEFAULT cyl 6525 alt 2 hd 255 sec 63>
     /p...@0,0/pci-...@6/i...@0/c...@0,0
  1. c6t0d0 <DEFAULT cyl 9722 alt 2 hd 255 sec 63>
     /p...@0,0/pci1849,5...@9/d...@0,0
Specify disk (enter its number):
---
So I know the sata disk is /dev/dsk/c6t0d0

How would I proceed to get this fixed?

Thanks for helping out!

Erwin

Richard Elling wrote:
You need the -R option to zpool import.  Try the procedure documented here: 
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Resolving_ZFS_Mount_Point_Problems_That_Prevent_Successful_Booting


 -- richard

On Mar 3, 2010, at 2:32 PM, Erwin Panen wrote:

  

Hi,

I'm not very familiar with manipulating zfs.
This is what happened:
I have an osol 2009.06 system on which I have some files that I need to 
recover. Due to my ignorance and blindly testing, I have managed to get this 
system to be unbootable... I know, my own fault.

So now I have a second osol 2009.06 machine. Off course this system has the 
same user and homedir structural settings.
I've added the harddisk from system 1 to system 2.
The zfspool was not exported at shutdown of system 1.
Of course both contain the standard rpool. As far as my reading has learned me, 
I should be able to import rpool to newpool.
-
zpool import -f rpool newpool
cannot mount 'export': directory is not empty
cannot mount 'export/home':directory is not empty
cannot mount 'export/home/erwin':directory is not empty


So I end up with /newpool containing boot and etc dirs.

How can I work around this problem? Mount to different mountpoint?

Thanks for helping out!

Erwin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 16-18, 2010)




  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data - howto mount rpool to newpool?

2010-03-03 Thread Ian Collins

Erwin Panen wrote:

Richard, thanks for replying;
I seem to have complicated matters:
I shutdown the system (past midnight here :-) )and seeing your reply 
come in, fired it up again to further test.
The system wouldn't come up anymore (dumped in maintenance shell) as 
it would try to import both rpool systems (I guess)
So I powered down, and disconnected the 2nd disk, and rebooted. So far 
so good, system comes up.
Then I reconnected the 2nd disk (it's a sata) but the system will not 
see it.

/var/adm/messages shows this:
---
er...@mars:/var/adm$ tail -f messages
Mar  3 23:55:30 mars  SATA device detected at port 0
Mar  3 23:55:30 mars sata: [ID 663010 kern.info] /p...@0,0/pci1849,5...@9 :
Mar  3 23:55:30 mars sata: [ID 761595 kern.info]SATA disk 
device at port 0
Mar  3 23:55:30 mars sata: [ID 846691 kern.info]model WDC 
WD800JD-75JNC0

Mar  3 23:55:30 mars sata: [ID 693010 kern.info]firmware 06.01C06
Mar  3 23:55:30 mars sata: [ID 163988 kern.info]serial 
number  WD-WMAM96632208
Mar  3 23:55:30 mars sata: [ID 594940 kern.info]supported 
features:
Mar  3 23:55:30 mars sata: [ID 981177 kern.info] 28-bit LBA, 
DMA, SMART self-test
Mar  3 23:55:30 mars sata: [ID 514995 kern.info]SATA Gen1 
signaling speed (1.5Gbps)
Mar  3 23:55:30 mars sata: [ID 349649 kern.info]capacity = 
15625 sectors


Assuming your system supports hot swap, what does cfgadm | grep sata show?

You should be able to use cfgadm -c configure sataX/Y to configure an 
attached, but unconfigured drive.
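A hypothetical session (sata1/0 is a placeholder -- use whatever attachment 
point the first command reports for that disk):

# cfgadm | grep sata
# cfgadm -c configure sata1/0

Once the attachment point shows up as configured, the disk should reappear in 
format and be available for zpool import.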


Or you could use failsafe boot and import/rename the old rpool.

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data - howto mount rpool to newpool?

2010-03-03 Thread Erwin Panen

Ian, thanks for replying.
I'll give cfgadm | grep sata a go in a minute.
At the moment I've rebooted from the 2009.06 livecd. Of course I can't import 
rpool because it's a newer zfs version :-(


Is there any way to update the zfs version on a running livecd?

Thanks for helping out!

Erwin

Ian Collins wrote:

Erwin Panen wrote:

Richard, thanks for replying;
I seem to have complicated matters:
I shutdown the system (past midnight here :-) )and seeing your reply 
come in, fired it up again to further test.
The system wouldn't come up anymore (dumped in maintenance shell) as 
it would try to import both rpool systems (I guess)
So I powered down, and disconnected the 2nd disk, and rebooted. So 
far so good, system comes up.
Then I reconnected the 2nd disk (it's a sata) but the system will not 
see it.

/var/adm/messages shows this:
---
er...@mars:/var/adm$ tail -f messages
Mar  3 23:55:30 mars  SATA device detected at port 0
Mar  3 23:55:30 mars sata: [ID 663010 kern.info] 
/p...@0,0/pci1849,5...@9 :
Mar  3 23:55:30 mars sata: [ID 761595 kern.info]SATA disk 
device at port 0
Mar  3 23:55:30 mars sata: [ID 846691 kern.info]model WDC 
WD800JD-75JNC0
Mar  3 23:55:30 mars sata: [ID 693010 kern.info]firmware 
06.01C06
Mar  3 23:55:30 mars sata: [ID 163988 kern.info]serial 
number  WD-WMAM96632208
Mar  3 23:55:30 mars sata: [ID 594940 kern.info]supported 
features:
Mar  3 23:55:30 mars sata: [ID 981177 kern.info] 28-bit LBA, 
DMA, SMART self-test
Mar  3 23:55:30 mars sata: [ID 514995 kern.info]SATA Gen1 
signaling speed (1.5Gbps)
Mar  3 23:55:30 mars sata: [ID 349649 kern.info]capacity = 
15625 sectors


Assuming your system supports hot swap, what does cfgadm | grep sata 
show?


You should be able to use cfgadm -c configure sataX/Y to configure 
an attached, but unconfigured drive.


Or you could use failsafe boot and import/rename the old rpool.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data - howto mount rpool to newpool?

2010-03-03 Thread Ian Collins

Erwin Panen wrote:

Ian, thanks for replying.
I'll give cfgadm | grep sata a go in a minute.
At the mo I've rebooted from 2009.06 livecd. Of course I can't import 
rpool because it's a newer zfs version :-(


Any way to update zfs version on a running livecd?

No, if you can get a failsafe session to boot, use that.  Or download a 
more recent liveCD from genunix.org!
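Once booted from a newer liveCD (or a failsafe session), something along these 
lines should work -- a sketch only, with /mnt as an example altroot and 
<numeric-id> a placeholder for the id shown in the import listing:

# zpool import
# zpool import -f -R /mnt <numeric-id> newpool

Importing by the pool's numeric id sidesteps the ambiguity of having two pools 
both named rpool, and -R keeps the imported mountpoints from clashing with the 
running system.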


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recovering data from a corrupted zpool

2009-03-27 Thread Richard Elling

Matthew Angelo wrote:

Hi there,

Is there a way to get as much data as possible off an existing 
slightly corrupted zpool?  I have a 2 disk stripe which I'm moving to 
new storage.  I will be moving it to a ZFS Mirror, however at the 
moment I'm having problems with ZFS Panic'ing the system during a send 
| recv.


Set your failmode to continue.  If you don't have a failmode property,
then you'll want to try a later version of Solaris.
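A minimal sketch, assuming a ZFS version new enough to have the property (the 
pool name apps comes from the zpool status output quoted below):

# zpool get failmode apps
# zpool set failmode=continue apps

With failmode=continue, a catastrophic device failure returns EIO to the 
application instead of suspending the pool or panicking the machine, which is 
what makes an emergency copy possible.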

I blogged about some techniques to deal with broken files a while back...
http://blogs.sun.com/relling/entry/holy_smokes_a_holey_file
http://blogs.sun.com/relling/entry/dd_tricks_for_holey_files
http://blogs.sun.com/relling/entry/more_on_holey_files

NB: if it is the receive that is panicking, then please file a bug.
-- richard



I don't know exactly how much data is valid.  Everything appears to 
run as expected and applications aren't crashing.


Doing an $( ls -lR | grep -i IO Error ) returns roughly 10-15 files 
which are affected.Luckily, these files ls is returning aren't 
super critical.


Is it possible to tell ZFS to do a emergency copy as much valid data 
off this file system?  

I've tried disabling checkums on the corrupted source zpool.   But 
even still, once ZFS runs into an error the zpool is FAULTED and the 
kernel panic's and the system crashes.   Is it possible to tell the 
zpool to ignore any errors and continue without faulting the zpool?


We have a backup of the data, which is 2 months old.  Is it slightly 
possible to bring this backup online, and 'sync as much as it can' 
between the two volumes?  Could this just be a rsync job?


Thanks



[root]# zpool status -v apps
  pool: apps
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        apps        ONLINE       0     0   120
          c1t1d0    ONLINE       0     0    60
          c1t2d0    ONLINE       0     0     0
          c1t3d0    ONLINE       0     0    60

errors: Permanent errors have been detected in the following files:

        apps:<0x0>
        <0x1d2>:<0x0>



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Recovering data from a corrupted zpool

2009-03-26 Thread Matthew Angelo
Hi there,

Is there a way to get as much data as possible off an existing slightly
corrupted zpool?  I have a 2-disk stripe which I'm moving to new storage.  I
will be moving it to a ZFS mirror; however, at the moment I'm having problems
with ZFS panicking the system during a send | recv.

I don't know exactly how much data is valid.  Everything appears to run as
expected and applications aren't crashing.

Doing an ls -lR | grep -i "IO Error" returns roughly 10-15 files which
are affected.  Luckily, the files ls is flagging aren't super critical.

Is it possible to tell ZFS to do an emergency copy of as much valid data as
possible off this file system?

I've tried disabling checksums on the corrupted source zpool.  But even
so, once ZFS runs into an error the zpool is FAULTED, the kernel
panics and the system crashes.  Is it possible to tell the zpool to ignore
any errors and continue without faulting the pool?

We have a backup of the data, which is 2 months old.  Is it perhaps
possible to bring this backup online and 'sync as much as it can' between
the two volumes?  Could this just be an rsync job?

Thanks



[root]# zpool status -v apps
  pool: apps
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        apps        ONLINE       0     0   120
          c1t1d0    ONLINE       0     0    60
          c1t2d0    ONLINE       0     0     0
          c1t3d0    ONLINE       0     0    60

errors: Permanent errors have been detected in the following files:

        apps:<0x0>
        <0x1d2>:<0x0>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recovering data from a corrupted zpool

2009-03-26 Thread Fajar A. Nugraha
2009/3/27 Matthew Angelo bang...@gmail.com:
 Doing an $( ls -lR | grep -i IO Error ) returns roughly 10-15 files which
 are affected.

If ls works then tar, cpio, etc. should work.
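A hedged sketch of that approach (paths are examples only):

# cd /apps
# tar cf - . | ( cd /newpool/apps && tar xpf - )

tar will usually complain about the 10-15 unreadable files and move on, which 
is what you want when salvaging as much as possible; rsync likewise carries on 
past unreadable files and reports them at the end.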
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-11-07 Thread Krzys
I was wondering if this ever made it into zfs as a fix for bad labels?

On Wed, 7 May 2008, Jeff Bonwick wrote:

 Yes, I think that would be useful.  Something like 'zpool revive'
 or 'zpool undead'.  It would not be completely general-purpose --
 in a pool with multiple mirror devices, it could only work if
 all replicas were detached in the same txg -- but for the simple
 case of a single top-level mirror vdev, or a clean 'zpool split',
 it's actually pretty straightforward.

 Jeff

 On Tue, May 06, 2008 at 11:16:25AM +0100, Darren J Moffat wrote:
 Great tool, any chance we can have it integrated into zpool(1M) so that
 it can find and fixup on import detached vdevs as new pools ?

 I'd think it would be reasonable to extend the meaning of
 'zpool import -D' to list detached vdevs as well as destroyed pools.

 --
 Darren J Moffat
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-10-10 Thread MC
I'm wondering if this bug is fixed and if not, what is the bug number:

 If your entire pool consisted of a single mirror of
 two disks, A and B,
 and you detached B at some point in the past, you
 *should* be able to
 recover the pool as it existed when you detached B.
  However, I just
 tried that experiment on a test pool and it didn't
 work.  

PS: Thanks for helping that guy (just a fellow user) out :)
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-10-09 Thread Ron Halstead
Jeff, sorry this is so late. Thanks for the labelfix binary. I would like to 
have one compiled for SPARC. I tried compiling your source code but it threw 
many errors. I'm not a programmer and reading the source code means 
absolutely nothing to me. One error was:
 cc labelfix.c
labelfix.c, line 1: #include directive missing file name

Many more of those plus others. Which compiler did you use? I tried gcc and 
SUNWspro with the same results. This tool would really be handy at work as 
almost all of our Solaris 10 machines have mirrored zpools for data.

Hope you can help.

--ron
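For what it's worth, that first error is almost certainly an artifact of how 
the source came through the list archive: the angle brackets (and the & and << 
characters) were stripped from the posting, so line 1 really needs to read 
#include <devid.h>, and the other includes, the address-of operators and the 
shift need restoring the same way (see the corrected listing further down in 
this archive). The choice of compiler is not the problem.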
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-08 Thread Robert Milkowski
Hello Darren,

Tuesday, May 6, 2008, 11:16:25 AM, you wrote:

DJM Great tool, any chance we can have it integrated into zpool(1M) so that
DJM it can find and fixup on import detached vdevs as new pools ?

I remember some posts from a long time ago about 'zpool split', so one could
split a pool in two (assuming the pool is mirrored).


-- 
Best regards,
 Robert Milkowskimailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-08 Thread Jesus Cea

Darren J Moffat wrote:
| Great tool, any chance we can have it integrated into zpool(1M) so that
| it can find and fixup on import detached vdevs as new pools ?
|
| I'd think it would be reasonable to extend the meaning of
| 'zpool import -D' to list detached vdevs as well as destroyed pools.

+inf :-)

- --
Jesus Cea Avion _/_/  _/_/_/_/_/_/
[EMAIL PROTECTED] - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:[EMAIL PROTECTED] _/_/_/_/  _/_/_/_/_/
~   _/_/  _/_/_/_/  _/_/  _/_/
Things are not so easy  _/_/  _/_/_/_/  _/_/_/_/  _/_/
My name is Dump, Core Dump   _/_/_/_/_/_/  _/_/  _/_/
El amor es poner tu felicidad en la felicidad de otro - Leibniz
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-07 Thread Jeff Bonwick
Yes, I think that would be useful.  Something like 'zpool revive'
or 'zpool undead'.  It would not be completely general-purpose --
in a pool with multiple mirror devices, it could only work if
all replicas were detached in the same txg -- but for the simple
case of a single top-level mirror vdev, or a clean 'zpool split',
it's actually pretty straightforward.

Jeff

On Tue, May 06, 2008 at 11:16:25AM +0100, Darren J Moffat wrote:
 Great tool, any chance we can have it integrated into zpool(1M) so that 
 it can find and fixup on import detached vdevs as new pools ?
 
 I'd think it would be reasonable to extend the meaning of
 'zpool import -D' to list detached vdevs as well as destroyed pools.
 
 --
 Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-07 Thread Darren J Moffat
Jeff Bonwick wrote:
 Yes, I think that would be useful.  Something like 'zpool revive'
 or 'zpool undead'.  

Why a new subcommand when 'zpool import' already has '-D' to revive destroyed 
pools?

  It would not be completely general-purpose --
 in a pool with multiple mirror devices, it could only work if
 all replicas were detached in the same txg -- but for the simple
 case of a single top-level mirror vdev, or a clean 'zpool split',
 it's actually pretty straightforward.

zpool split is the functionality needed - take a side of a mirror and make 
a new unmirrored pool from it.

However I think many people are likely to attempt 'zpool detach' because 
of experience with volume managers such as SVM (ODS, LVM, whatever you 
want to call it this week), where you type 'metadetach'.  Though of 
course that won't work in the case where there is actually a stripe of 
mirrors, so 'zpool split' is needed to deal with the non-trivial case anyway.
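For what it's worth, a split subcommand along these lines was later added to 
ZFS; a hedged usage sketch (pool names are examples):

# zpool split tank tank2

which detaches one side of each mirror in tank and turns those disks into a 
new, importable pool called tank2 -- exactly the 'take a side of a mirror' 
case described above.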

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-06 Thread Robert Milkowski
Hello Cyril,

Sunday, May 4, 2008, 11:34:28 AM, you wrote:

CP On Sun, May 4, 2008 at 11:42 AM, Jeff Bonwick [EMAIL PROTECTED] wrote:
 Oh, and here's the source code, for the curious:


CP [snipped]


 label_write(fd, offsetof(vdev_label_t, vl_uberblock),
 1ULL << UBERBLOCK_SHIFT, ub);

 label_write(fd, offsetof(vdev_label_t, vl_vdev_phys),
 VDEV_PHYS_SIZE, &vl.vl_vdev_phys);


CP Jeff,

CP is it enough to overwrite only one label ? Isn't there four of them ?


If the checksum is OK, IIRC the last one (the one with the most recent
timestamp) is going to be used.
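A handy way to check what the labels of a device actually contain is zdb; a 
hedged example (the device path is a placeholder):

# zdb -l /dev/rdsk/c0d1s4

which prints the configuration nvlist stored in each of the four labels, so 
you can see which labels still carry a valid configuration.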




-- 
Best regards,
 Robert Milkowski   mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-06 Thread Darren J Moffat
Great tool - any chance we can have it integrated into zpool(1M) so that 
it can find and, on import, fix up detached vdevs as new pools?

I'd think it would be reasonable to extend the meaning of
'zpool import -D' to list detached vdevs as well as destroyed pools.

--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-04 Thread Jeff Bonwick
Oh, you're right!  Well, that will simplify things!  All we have to do
is convince a few bits of code to ignore ub_txg == 0.  I'll try a
couple of things and get back to you in a few hours...

Jeff

On Fri, May 02, 2008 at 03:31:52AM -0700, Benjamin Brumaire wrote:
 Hi,
 
 while diving deeply in zfs in order to recover data I found that every 
 uberblock in label0 does have the same ub_rootbp and a zeroed ub_txg. Does it 
 means only ub_txg was touch while detaching?  
 
 Hoping  it is the case, I modified ub_txg from one uberblock to match the tgx 
 from the label and now I try to  calculate the new SHA256 checksum but I 
 failed. Can someone explain what I did wrong? And of course how to do it 
 correctly?
 
 bbr
 
 
 The example is from a valid uberblock which belongs an other pool.
 
 Dumping the active uberblock in Label 0:
 
 # dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=1024 | od -x 
 1024+0 records in
 1024+0 records out
 000 b10c 00ba   0009   
 020 8bf2    8eef f6db c46f 4dcc
 040 bba8 481a   0001   
 060 05e6 0003   0001   
 100 05e6 005b   0001   
 120 44e9 00b2   0001  0703 800b
 140        
 160     8bf2   
 200 0018    a981 2f65 0008 
 220 e734 adf2 037a  cedc d398 c063 
 240 da03 8a6e 26fc 001c    
 260        
 *
 0001720     7a11 b10c da7a 0210
 0001740 3836 20fb e2a7 a737 a947 feed 43c5 c045
 0001760 82a8 133d 0ba7 9ce7 e5d5 64e2 2474 3b03
 0002000
 
 Checksum is at pos 01740 01760
 
 I try to calculate it assuming only uberblock is relevant. 
 #dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=168 | digest -a sha256
 168+0 records in
 168+0 records out
 710306650facf818e824db5621be394f3b3fe934107bdfc861bbc82cb9e1bbf3
 
 Helas not matching  :-(
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-04 Thread Jeff Bonwick
OK, here you go.  I've successfully recovered a pool from a detached
device using the attached binary.  You can verify its integrity
against the following MD5 hash:

# md5sum labelfix
ab4f33d99fdb48d9d20ee62b49f11e20  labelfix

It takes just one argument -- the disk to repair:

# ./labelfix /dev/rdsk/c0d1s4

If all goes according to plan, your old pool should be importable.
If you do a zpool status -v, it will complain that the old mirrors
are no longer there.  You can clean that up by detaching them:

# zpool detach mypool <guid>

where <guid> is the long integer that zpool status -v reports
as the name of the missing device.

Good luck, and please let us know how it goes!

Jeff

On Sat, May 03, 2008 at 10:48:34PM -0700, Jeff Bonwick wrote:
 Oh, you're right!  Well, that will simplify things!  All we have to do
 is convince a few bits of code to ignore ub_txg == 0.  I'll try a
 couple of things and get back to you in a few hours...
 
 Jeff
 
 On Fri, May 02, 2008 at 03:31:52AM -0700, Benjamin Brumaire wrote:
  Hi,
  
  while diving deeply in zfs in order to recover data I found that every 
  uberblock in label0 does have the same ub_rootbp and a zeroed ub_txg. Does 
  it means only ub_txg was touch while detaching?  
  
  Hoping  it is the case, I modified ub_txg from one uberblock to match the 
  tgx from the label and now I try to  calculate the new SHA256 checksum but 
  I failed. Can someone explain what I did wrong? And of course how to do it 
  correctly?
  
  bbr
  
  
  The example is from a valid uberblock which belongs an other pool.
  
  Dumping the active uberblock in Label 0:
  
  # dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=1024 | od -x 
  1024+0 records in
  1024+0 records out
  000 b10c 00ba   0009   
  020 8bf2    8eef f6db c46f 4dcc
  040 bba8 481a   0001   
  060 05e6 0003   0001   
  100 05e6 005b   0001   
  120 44e9 00b2   0001  0703 800b
  140        
  160     8bf2   
  200 0018    a981 2f65 0008 
  220 e734 adf2 037a  cedc d398 c063 
  240 da03 8a6e 26fc 001c    
  260        
  *
  0001720     7a11 b10c da7a 0210
  0001740 3836 20fb e2a7 a737 a947 feed 43c5 c045
  0001760 82a8 133d 0ba7 9ce7 e5d5 64e2 2474 3b03
  0002000
  
  Checksum is at pos 01740 01760
  
  I try to calculate it assuming only uberblock is relevant. 
  #dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=168 | digest -a sha256
  168+0 records in
  168+0 records out
  710306650facf818e824db5621be394f3b3fe934107bdfc861bbc82cb9e1bbf3
  
  Helas not matching  :-(
   
   
  This message posted from opensolaris.org
  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


labelfix
Description: Binary data
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-04 Thread Jeff Bonwick
Oh, and here's the source code, for the curious:

#include <devid.h>
#include <dirent.h>
#include <errno.h>
#include <libintl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>
#include <stddef.h>

#include <sys/vdev_impl.h>

/*
 * Write a label block with a ZBT checksum.
 */
static void
label_write(int fd, uint64_t offset, uint64_t size, void *buf)
{
        zio_block_tail_t *zbt, zbt_orig;
        zio_cksum_t zc;

        zbt = (zio_block_tail_t *)((char *)buf + size) - 1;
        zbt_orig = *zbt;

        /* Seed the embedded checksum with the label offset, then checksum. */
        ZIO_SET_CHECKSUM(&zbt->zbt_cksum, offset, 0, 0, 0);

        zio_checksum(ZIO_CHECKSUM_LABEL, &zc, buf, size);

        VERIFY(pwrite64(fd, buf, size, offset) == size);

        *zbt = zbt_orig;
}

int
main(int argc, char **argv)
{
        int fd;
        vdev_label_t vl;
        nvlist_t *config;
        uberblock_t *ub = (uberblock_t *)vl.vl_uberblock;
        uint64_t txg;
        char *buf;
        size_t buflen;

        /* Read label 0 from the device and unpack its config nvlist. */
        VERIFY(argc == 2);
        VERIFY((fd = open(argv[1], O_RDWR)) != -1);
        VERIFY(pread64(fd, &vl, sizeof (vdev_label_t), 0) ==
            sizeof (vdev_label_t));
        VERIFY(nvlist_unpack(vl.vl_vdev_phys.vp_nvlist,
            sizeof (vl.vl_vdev_phys.vp_nvlist), &config, 0) == 0);
        VERIFY(nvlist_lookup_uint64(config, ZPOOL_CONFIG_POOL_TXG, &txg) == 0);
        VERIFY(txg == 0);
        VERIFY(ub->ub_txg == 0);
        VERIFY(ub->ub_rootbp.blk_birth != 0);

        /* Resurrect the uberblock txg from the root blkptr's birth txg. */
        txg = ub->ub_rootbp.blk_birth;
        ub->ub_txg = txg;

        VERIFY(nvlist_remove_all(config, ZPOOL_CONFIG_POOL_TXG) == 0);
        VERIFY(nvlist_add_uint64(config, ZPOOL_CONFIG_POOL_TXG, txg) == 0);
        buf = vl.vl_vdev_phys.vp_nvlist;
        buflen = sizeof (vl.vl_vdev_phys.vp_nvlist);
        VERIFY(nvlist_pack(config, &buf, &buflen, NV_ENCODE_XDR, 0) == 0);

        /* Write the repaired uberblock and config back to label 0. */
        label_write(fd, offsetof(vdev_label_t, vl_uberblock),
            1ULL << UBERBLOCK_SHIFT, ub);

        label_write(fd, offsetof(vdev_label_t, vl_vdev_phys),
            VDEV_PHYS_SIZE, &vl.vl_vdev_phys);

        fsync(fd);

        return (0);
}

Jeff

On Sun, May 04, 2008 at 01:21:27AM -0700, Jeff Bonwick wrote:
 OK, here you go.  I've successfully recovered a pool from a detached
 device using the attached binary.  You can verify its integrity
 against the following MD5 hash:
 
 # md5sum labelfix
 ab4f33d99fdb48d9d20ee62b49f11e20  labelfix
 
 It takes just one argument -- the disk to repair:
 
 # ./labelfix /dev/rdsk/c0d1s4
 
 If all goes according to plan, your old pool should be importable.
 If you do a zpool status -v, it will complain that the old mirrors
 are no longer there.  You can clean that up by detaching them:
 
 # zpool detach mypool guid
 
 where guid is the long integer that zpool status -v reports
 as the name of the missing device.
 
 Good luck, and please let us know how it goes!
 
 Jeff
 
 On Sat, May 03, 2008 at 10:48:34PM -0700, Jeff Bonwick wrote:
  Oh, you're right!  Well, that will simplify things!  All we have to do
  is convince a few bits of code to ignore ub_txg == 0.  I'll try a
  couple of things and get back to you in a few hours...
  
  Jeff
  
  On Fri, May 02, 2008 at 03:31:52AM -0700, Benjamin Brumaire wrote:
   Hi,
   
   while diving deeply in zfs in order to recover data I found that every 
   uberblock in label0 does have the same ub_rootbp and a zeroed ub_txg. 
   Does it means only ub_txg was touch while detaching?  
   
   Hoping  it is the case, I modified ub_txg from one uberblock to match the 
   tgx from the label and now I try to  calculate the new SHA256 checksum 
   but I failed. Can someone explain what I did wrong? And of course how to 
   do it correctly?
   
   bbr
   
   
   The example is from a valid uberblock which belongs an other pool.
   
   Dumping the active uberblock in Label 0:
   
   # dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=1024 | od -x 
   1024+0 records in
   1024+0 records out
   000 b10c 00ba   0009   
   020 8bf2    8eef f6db c46f 4dcc
   040 bba8 481a   0001   
   060 05e6 0003   0001   
   100 05e6 005b   0001   
   120 44e9 00b2   0001  0703 800b
   140        
   160     8bf2   
   200 0018    a981 2f65 0008 
   220 e734 adf2 037a  cedc d398 c063 
   240 da03 8a6e 26fc 001c    
   260        
   *
   0001720     7a11 b10c da7a 0210
   0001740 3836 20fb e2a7 a737 a947 feed 43c5 c045
   0001760 82a8 133d 0ba7 9ce7 e5d5 64e2 2474 3b03
   0002000
   
   Checksum is at pos 01740 01760
   
   I try to calculate it assuming only uberblock is relevant. 
   #dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=168 | digest -a sha256
   168+0 records in
   168+0 records out
   710306650facf818e824db5621be394f3b3fe934107bdfc861bbc82cb9e1bbf3
   
   

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-04 Thread Cyril Plisko
On Sun, May 4, 2008 at 11:42 AM, Jeff Bonwick [EMAIL PROTECTED] wrote:
 Oh, and here's the source code, for the curious:


[snipped]


 label_write(fd, offsetof(vdev_label_t, vl_uberblock),
 1ULL << UBERBLOCK_SHIFT, ub);

 label_write(fd, offsetof(vdev_label_t, vl_vdev_phys),
 VDEV_PHYS_SIZE, &vl.vl_vdev_phys);


Jeff,

is it enough to overwrite only one label? Aren't there four of them?


 fsync(fd);

 return (0);
  }


-- 
Regards,
 Cyril
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-04 Thread Mario Goebbels
 Oh, and here's the source code, for the curious:

The forensics project will be all over this, I hope, and wrap it up in a
nice command line tool.

-mg
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-04 Thread Benjamin Brumaire
Well, thanks to your program I could recover the data on the detached disk. Now 
I'm copying the data to other disks and resilvering it inside the pool.

Warm words aren't enough to express how I feel. This community is great. Thank 
you very much.

bbr
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-02 Thread Benjamin Brumaire
Hi,

while diving deeply into zfs in order to recover data, I found that every 
uberblock in label0 has the same ub_rootbp and a zeroed ub_txg. Does that 
mean only ub_txg was touched while detaching?

Hoping that is the case, I modified ub_txg in one uberblock to match the txg 
from the label, and now I'm trying to calculate the new SHA256 checksum, but I 
failed. Can someone explain what I did wrong? And of course how to do it 
correctly?

bbr


The example is from a valid uberblock which belongs to another pool.

Dumping the active uberblock in Label 0:

# dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=1024 | od -x 
1024+0 records in
1024+0 records out
000 b10c 00ba   0009   
020 8bf2    8eef f6db c46f 4dcc
040 bba8 481a   0001   
060 05e6 0003   0001   
100 05e6 005b   0001   
120 44e9 00b2   0001  0703 800b
140        
160     8bf2   
200 0018    a981 2f65 0008 
220 e734 adf2 037a  cedc d398 c063 
240 da03 8a6e 26fc 001c    
260        
*
0001720     7a11 b10c da7a 0210
0001740 3836 20fb e2a7 a737 a947 feed 43c5 c045
0001760 82a8 133d 0ba7 9ce7 e5d5 64e2 2474 3b03
0002000

Checksum is at pos 01740 01760

I tried to calculate it assuming only the uberblock is relevant.
#dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=168 | digest -a sha256
168+0 records in
168+0 records out
710306650facf818e824db5621be394f3b3fe934107bdfc861bbc82cb9e1bbf3

Alas, not matching  :-(
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-02 Thread Darren J Moffat
Benjamin Brumaire wrote:
 I try to calculate it assuming only uberblock is relevant. 
 #dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=168 | digest -a sha256
 168+0 records in
 168+0 records out
 710306650facf818e824db5621be394f3b3fe934107bdfc861bbc82cb9e1bbf3

Is this on SPARC or x86 ?

ZFS stores the SHA256 checksum in 4 words in big endian format, see

http://src.opensolaris.org/source/xref/zfs-crypto/gate/usr/src/uts/common/fs/zfs/sha256.c


-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-02 Thread Benjamin Brumaire
It is on x86.

Does it mean that I have to split the output from digest into 4 words (8 bytes 
each) and byte-reverse each one before comparing with the stored value?

bbr
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-04-29 Thread Jeff Bonwick
If your entire pool consisted of a single mirror of two disks, A and B,
and you detached B at some point in the past, you *should* be able to
recover the pool as it existed when you detached B.  However, I just
tried that experiment on a test pool and it didn't work.  I will
investigate further and get back to you.  I suspect it's perfectly
doable, just currently disallowed due to some sort of error check
that's a little more conservative than necessary.  Keep that disk!

Jeff

On Mon, Apr 28, 2008 at 10:33:32PM -0700, Benjamin Brumaire wrote:
 Hi,
 
 my system (solaris b77) was physically destroyed and i loosed data saved in a 
 zpool mirror. The only thing left is a dettached vdev from the pool. I'm 
 aware that uberblock is gone and that i can't import the pool. But i still 
 hope their is a way or a tool (like tct http://www.porcupine.org/forensics/) 
 i can go too recover at least partially some data)
 
 thanks in advance for any hints.
 
 bbr
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-04-29 Thread Benjamin Brumaire
Jeff thank you very much for taking time to look at this.

My entire pool consisted of a single mirror of two slices on different disks, A 
and B. I attached a third slice on disk C, waited for the resilver, and then 
detached it. Now disks A and B have burned and I have only disk C at hand.

bbr
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-04-29 Thread Jeff Bonwick
Urgh.  This is going to be harder than I thought -- not impossible,
just hard.

When we detach a disk from a mirror, we write a new label to indicate
that the disk is no longer in use.  As a side effect, this zeroes out
all the old uberblocks.  That's the bad news -- you have no uberblocks.

The good news is that the uberblock only contains one field that's hard
to reconstruct: ub_rootbp, which points to the root of the block tree.
The root block *itself* is still there -- we just have to find it.

The root block has a known format: it's a compressed objset_phys_t,
almost certainly one sector in size (could be two, but very unlikely
because the root objset_phys_t is highly compressible).

It should be possible to write a program that scans the disk, reading
each sector and attempting to decompress it.  If it decompresses into
exactly 1K (size of an uncompressed objset_phys_t), then we can look
at all the fields to see if they look plausible.  Among all candidates
we find, the one whose embedded meta-dnode has the highest birth time
in its dn_blkptr is the one we want.

I need to get some sleep now, but I'll code this up in a couple of
days and we can take it from there.  If this is time-sensitive,
let me know and I'll see if I can find someone else to drive it.
[ I've got a bunch of commitments tomorrow, plus I'm supposed to
be on vacation... typical...  ;-)  ]

Jeff

On Tue, Apr 29, 2008 at 12:15:21AM -0700, Benjamin Brumaire wrote:
 Jeff thank you very much for taking time to look at this.
 
 My entire pool consisted of a single mirror of two slices on different disks 
 A and B. I attach a third slice on disk C and wait for resilver and then 
 detach it. Now disks A and B burned and I have only disk C at hand.
 
 bbr
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-04-29 Thread Benjamin Brumaire
If I understand you correctly, the steps to follow are:

 read each sector   (dd bs=512 count=1 skip=n is enough?)
 decompress it   (any tool implementing the lzjb algorithm?)
 size == 1024?
 structure might be an objset_phys_t?
 take the one with the most recent birth time as the root block
 reconstruct the uberblock

Unfortunately I can't help with a C program, but I will be happy to support 
you in any other way.
Don't consider it time-sensitive; the data is very important, but I can 
continue my business without it.

Again thanks you very much for your help. I really appreciate.

bbr
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] recovering data from a dettach mirrored vdev

2008-04-28 Thread Benjamin Brumaire
Hi,

my system (Solaris b77) was physically destroyed and I lost data saved in a 
zpool mirror. The only thing left is a detached vdev from the pool. I'm aware 
that the uberblock is gone and that I can't import the pool. But I still hope 
there is a way or a tool (like tct, http://www.porcupine.org/forensics/) I can 
use to recover at least part of the data.

thanks in advance for any hints.

bbr
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss