Re: [zfs-discuss] ZFS forensics

2011-11-23 Thread Gary Driggs
On Nov 23, 2011, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D. wrote:

> did you see this link

Thank you for this. Some of the other refs it lists will come in handy as well.

kind regards,
Gary
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS forensics

2011-11-23 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.

did you see this link
http://www.solarisinternals.com/wiki/index.php/ZFS_forensics_scrollback_script
may be out of date already
regards


On 11/23/2011 11:14 AM, Gary Driggs wrote:

Is zdb still the only way to dive in to the file system? I've seen the 
extensive work by Max Bruning on this but wonder if there are any tools that 
make this easier...?

-Gary


--
Hung-Sheng Tsao Ph D.
Founder & Principal
HopBit GridComputing LLC
cell: 9734950840
http://laotsao.wordpress.com/
http://laotsao.blogspot.com/



[zfs-discuss] ZFS forensics

2011-11-23 Thread Gary Driggs
Is zdb still the only way to dive in to the file system? I've seen the 
extensive work by Max Bruning on this but wonder if there are any tools that 
make this easier...?

-Gary


Re: [zfs-discuss] ZFS forensics/revert/restore shellscript and how-to.

2010-06-24 Thread Eric Jones
Where is the link to the script, and does it work with RAIDZ arrays?  Thanks so 
much.
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] ZFS forensics/revert/restore shellscript and how-to.

2010-04-19 Thread fred pam
Hi Max,

Thanks, that's what I was looking for. 

So, after reading it I come to the conclusion that it's actually the fact I've 
lost my MOS that makes it 'impossible' to retrieve the data.

My understanding of it all (growing yet still meager ;-): 
Uberblocks do not point to different MOSes but refer to a transaction history 
within the MOS; within an uberblock it is in fact not the block pointer (as it 
only points to the MOS) but the TXG that determines what the system 'sees' 
as data.
The fact that I may or may not have older uberblocks is then irrelevant, 
right?

This seems, from a forensics perspective, quite a quick and powerful way of 
destroying data (especially when also encrypted). Mind you, I do not 
necessarily think this is a bad thing, I just like to be sure I understand the 
consequences...

Grtz, Fred


Re: [zfs-discuss] ZFS forensics/revert/restore shellscript and how-to.

2010-04-16 Thread m...@bruningsystems.com

Hi Fred,

Have you read the ZFS On Disk Format Specification paper
at: 
http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/ondiskformat0822.pdf?




fred pam wrote:

Hi Richard, thanks for your time, I really appreciate it, but I'm still unclear 
on how this works.

So uberblocks point to the MOS. Why do you then require multiple uberblocks? Or are there actually multiple MOS'es? 
Or is there one MOS and multiple delta's to it (and its predecessors) and do the uberblocks then point to the latest delta?

In the latter case I can understand why Nullifying the latest uberblocks reverts to a previous 
situation, otherwise I don't see the difference between "Nullifying the first uberblocks" 
and "Nullifying the last uberblocks".
  
One reason for multiple uberblocks is that uberblocks, like everything 
else, are copy-on-write. The reason you have 4 copies (2 labels at the front 
and 2 labels at the end of every disk) is redundancy. No, there are not 
multiple MOSes in one pool (though there may be multiple copies of the MOS 
via "ditto" blocks). The current (or "active") uberblock is the one with the 
highest transaction id and a valid checksum. Transaction ids are basically 
monotonically increasing, so nullifying the last uberblock can revert you to 
a previous state.
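
Max's selection rule is easy to sketch in code. The snippet below is purely illustrative: `candidates` is a hypothetical list of (txg, checksum_ok) pairs, not real on-disk data or anything from zfs_revert.py.

```python
def active_uberblock(candidates):
    """Pick the 'active' uberblock per the rule above: among the
    copies whose checksum validates, take the highest TXG."""
    valid = [(txg, ok) for txg, ok in candidates if ok]
    return max(valid, key=lambda c: c[0]) if valid else None
```

For example, with copies at TXGs 411586 (valid) and 411590 (bad checksum), the active uberblock is 411586 — which is exactly why zeroing the newest copies rolls the pool back to an earlier state.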

max

Thanks, Fred
  




Re: [zfs-discuss] ZFS forensics/revert/restore shellscript and how-to.

2010-04-16 Thread fred pam
Hi Richard, thanks for your time, I really appreciate it, but I'm still unclear 
on how this works.

So uberblocks point to the MOS. Why do you then require multiple uberblocks? Or 
are there actually multiple MOS'es? 
Or is there one MOS and multiple delta's to it (and its predecessors) and do 
the uberblocks then point to the latest delta?
In the latter case I can understand why Nullifying the latest uberblocks 
reverts to a previous situation, otherwise I don't see the difference between 
"Nullifying the first uberblocks" and "Nullifying the last uberblocks".

Thanks, Fred


Re: [zfs-discuss] ZFS forensics/revert/restore shellscript and how-to.

2010-04-15 Thread Richard Elling
On Apr 15, 2010, at 12:39 PM, fred pam wrote:

> Hi Richard,
> 
> Hm, I guess I misunderstand the function of uberblocks. I thought uberblocks 
> contained pointers (to...?) which the system then uses to retrieve the files.

uberblocks are the trunk of the tree.

> If I'm incorrect in thinking that I could use an older uberblock to retrieve 
> the data, what am I missing? 

uberblocks point to the meta-object set (MOS) which describes the configuration
of the pool and ultimately the datasets and files.  What you've done is plant 
another
tree over the previous tree and it is unlikely that the previous tree remains 
intact.

> I've tried to find some basic zpool<->uberblock relation info without much 
> success (eh... well, Wiki wasn't helpful and I try to avoid reading RFC/IEEE 
> documents since I still value my sanity ;-)

This is a design detail. In practical terms, you clobbered the previous ZFS pool
by creating a new one on top of it.
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com 







Re: [zfs-discuss] ZFS forensics/revert/restore shellscript and how-to.

2010-04-15 Thread fred pam
Hi Richard,

Hm, I guess I misunderstand the function of uberblocks. I thought uberblocks 
contained pointers (to...?) which the system then uses to retrieve the files.
If I'm incorrect in thinking that I could use an older uberblock to retrieve 
the data, what am I missing? 

I've tried to find some basic zpool<->uberblock relation info without much 
success (eh... well, Wiki wasn't helpful and I try to avoid reading RFC/IEEE 
documents since I still value my sanity ;-)

Grtz, Fred


Re: [zfs-discuss] ZFS forensics/revert/restore shellscript and how-to.

2010-04-14 Thread Richard Elling
On Apr 14, 2010, at 5:13 AM, fred pam wrote:

> I have a similar problem that differs in a subtle way. I moved a zpool 
> (single disk) from one system to another. Due to my inexperience I did not 
> import the zpool but (doh!) 'zpool create'-ed it (I may also have used a -f 
> somewhere in there...) 

You have destroyed the previous pool. There is a reason the "-f" flag is 
required,
though it is human nature to ignore such reasons.

> Interestingly the script still gives me the old uberblocks but in this case 
> the first couple (lowest TXG's) are actually younger (later timestamp) than 
> the higher TXG ones. Obviously removing the highest TXG's will actually 
> remove the uberblocks I want to keep. 

This is because creation of the new pool did not zero-out the uberblocks.

> Is there a way to copy an uberblock over another one? Or could I perhaps 
> remove the low-TXG uberblocks instead of the high-TXG ones (and would that 
> mean the old pool becomes available again). Or are more things missing than 
> just the uberblocks and should I move to a file-based approach (on ZFS?)

I do not believe you can recover the data on the previous pool without 
considerable
time and effort.
 -- richard



Re: [zfs-discuss] ZFS forensics/revert/restore shellscript and how-to.

2010-04-14 Thread fred pam
I have a similar problem that differs in a subtle way. I moved a zpool (single 
disk) from one system to another. Due to my inexperience I did not import the 
zpool but (doh!) 'zpool create'-ed it (I may also have used a -f somewhere in 
there...) 

Interestingly the script still gives me the old uberblocks but in this case the 
first couple (lowest TXG's) are actually younger (later timestamp) than the 
higher TXG ones. Obviously removing the highest TXG's will actually remove the 
uberblocks I want to keep. 

Is there a way to copy an uberblock over another one? Or could I perhaps remove 
the low-TXG uberblocks instead of the high-TXG ones (and would that mean the 
old pool becomes available again). Or are more things missing than just the 
uberblocks and should I move to a file-based approach (on ZFS?)

Regards, Fred


Re: [zfs-discuss] ZFS forensics/revert/restore shellscript and how-to.

2009-11-16 Thread Martin Vool
I have no idea why this forum just makes files disappear. I will put up a link 
tomorrow; a file was attached before.


Re: [zfs-discuss] ZFS forensics/revert/restore shellscript and how-to.

2009-11-16 Thread Martin Vool
The links work fine if you take the * off the end. Sorry about that.


Re: [zfs-discuss] ZFS forensics/revert/restore shellscript and how-to.

2009-11-16 Thread Martin Vool
I forgot to add the script

zfs_revert.py
Description: Binary data


[zfs-discuss] ZFS forensics/revert/restore shellscript and how-to.

2009-11-16 Thread Martin Vool
I have written a Python script that makes it possible to get back already 
deleted files and pools/partitions. This is highly experimental, but I managed 
to get back a month's work when all the partitions were deleted by accident 
(and of course backups are for the weak ;-) 

I hope someone can pass this information to the ZFS forensics project, or 
wherever it should go.

First the basics and the HOW-TO is after that.

I am not a Solaris or ZFS expert; I am sure there are many things to improve, 
and I hope you can help me out with some of the problems this still has.

[b]Basics:[/b]
Basically, the script finds all the uberblocks, reads their metadata and 
orders them by time, then lets you destroy all the uberblocks that were 
created after the event you want to roll back past. Then you destroy the ZFS 
cache and make the machine boot up again.
This will only work if the disks are not very full and there was not much 
activity after the bad event. I managed to get back files from a ZFS partition 
after it was deleted (along with several others) and new ones created.
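
For the curious, the core of what such a script does can be sketched as follows. This is a simplified illustration, not the attached zfs_revert.py itself; the field layout (magic, version, TXG, GUID sum, timestamp as consecutive 64-bit words, with magic 0x00bab10c) follows the ZFS On-Disk Format specification, and both byte orders are tried since the magic is written in the host's endianness.

```python
import struct

UB_MAGIC = 0x00bab10c  # ZFS uberblock magic ("oo-ba-bloc")

def scan_uberblocks(path, block_size=512):
    """Scan a disk image block by block for candidate uberblocks.

    Returns dicts with the block offset, TXG, and timestamp of each
    hit, sorted by TXG. A real scan would also verify checksums and
    restrict itself to the four label regions; this sketch does not.
    """
    found = []
    with open(path, 'rb') as f:
        offset = 0
        while True:
            block = f.read(block_size)
            if len(block) < 40:  # need 5 x uint64
                break
            # Try little- and big-endian layouts of the first 5 words.
            for fmt in ('<5Q', '>5Q'):
                magic, version, txg, guid_sum, ts = struct.unpack_from(fmt, block)
                if magic == UB_MAGIC:
                    found.append({'offset': offset, 'txg': txg, 'timestamp': ts})
                    break
            offset += block_size
    return sorted(found, key=lambda u: u['txg'])
```

Ordering the hits by TXG gives exactly the kind of table shown in the how-to below.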


I got so far by the help of these materials, the ones with * are the key parts:
*http://mbruning.blogspot.com/2008/08/recovering-removed-file-on-zfs-disk.html*
http://blogs.sun.com/blogfinger/entry/zfs_and_the_uberblock
*http://www.opensolaris.org/jive/thread.jspa?threadID=85794&tstart=0*
http://opensolaris.org/os/project/forensics/ZFS-Forensics/
http://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/ch04s06.html
http://www.lildude.co.uk/zfs-cheatsheet/

[b]How-to[/b]
This is the scenario i had...

First check the pool status:
$zpool status zones 

From there you will get the disk name, e.g. c2t60060E800457AB0057AB0146d0

Now look up the history of the pool so you can find the timeline and some 
uberblocks (their TXGs) to roll back to:
zpool history -il zones
Save this output for later use.

You will definitely want to back up the disk before you continue from this point, 
e.g. ssh r...@host "dd if=/dev/dsk/c..." | dd of=Desktop/zones.dd

Now take the script I have attached, zfs_revert.py.
It has two options:
-bs is the block size, by default 512 (other sizes never tested)
-tb is the total number of blocks (mandatory; maybe someone could automate this)

To find the block size on Solaris you can use
prtvtoc /dev/dsk/c2t60060E800457AB0057AB0146d0 | grep sectors
and look at the "sectors" row.
If you have a file/loop device, total blocks = size in bytes / block size.
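
For the file/loop-device case, the -tb value is just that division; a tiny helper (hypothetical, not part of the attached script):

```python
import os

def total_blocks(path, block_size=512):
    # Total block count for the -tb option: size in bytes
    # divided by block size (integer division).
    return os.path.getsize(path) // block_size
```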

Now run the script for example:
./zfs_revert.py -bs=512 -tb=41944319 
/dev/dsk/c2t60060E800457AB0057AB0146d0

This uses dd, od and grep (GNU) to find the required information. The script 
should work on Linux and on Solaris.

It should give you a listing of the uberblocks it found (I tested it with a 
20 GB pool; it did not take very long, since the uberblocks are only at the 
beginning and end of the disk).

Something like this, but probably much more:
TXG, timestamp, Unix time, addresses (there are 4 copies of each uberblock)
411579  05 Oct 2009 14:39:51  1254742791  [630, 1142, 41926774, 41927286]
411580  05 Oct 2009 14:40:21  1254742821  [632, 1144, 41926776, 41927288]
411586  05 Oct 2009 14:43:21  1254743001  [644, 1156, 41926788, 41927300]
411590  05 Oct 2009 14:45:21  1254743121  [652, 1164, 41926796, 41927308]

Now comes the FUN part: take a wild guess which block might be the one. It took 
me about 10 tries to get it right, and I have no idea what the "good" blocks 
are or how to check for them. You will see later what I mean by that.

Enter the last TXG you want to KEEP.

Now the script writes zeroes to all of the uberblocks after the TXG you entered.
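
That zeroing step might look roughly like this. It is a hypothetical sketch mirroring the listing format above (each entry carries a TXG and the block addresses of its four copies), not the script's actual code, and it is destructive: only ever run something like this against a dd image, never the live disk.

```python
def zero_uberblocks(path, uberblocks, keep_txg, block_size=512):
    """Overwrite with zeros every uberblock copy whose TXG is
    newer than keep_txg, forcing ZFS to fall back to an older
    active uberblock on next import."""
    zeros = b'\x00' * block_size
    with open(path, 'r+b') as f:
        for ub in uberblocks:
            if ub['txg'] > keep_txg:
                for addr in ub['addresses']:
                    f.seek(addr * block_size)
                    f.write(zeros)
```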

Now clear the ZFS cache and reboot (does anyone have a better solution?):
rm -rf /etc/zfs/zpool.cache && reboot

After the box comes up you have to hurry; you don't have much time, if any at 
all, since ZFS will realize in a minute or two that something is fishy.

First try to import the pool if it is not imported yet.
zpool import -f zones

Now see whether it imports or fails. There is a good chance you will hit 
"corrupt data" and be unable to import, but as I said earlier it took me about 
10 tries to get it right. I did not have to restore the whole image every 
time; I just took baby steps, deleting a few more blocks each time, until I 
found something stable (not quite stable: it will still crash after a few 
minutes, but that is enough time to get back config files or some code).


Problems and unknown factors:
1) After the machine boots up, you have limited time before ZFS realizes it 
has been corrupted (checksums? I tried to turn them off, but as soon as I 
turned checksumming off it crashed, and even when I could turn it off the data 
might be corrupted).
2) If you copy files and one of them is corrupted, the whole thing 
halts/crashes and you have to start over with the zfs_revert.py script and 
reboot again.
3) It might be that reverting to a TXG where the pool was exported then there 
is a better