Hi all,

The broken disk is currently being cloned with the recoverdisk utility.
It is taking a long time because the disk has a lot of bad sectors, but
it seems to have passed most of them already. I will update this thread
when it is finished.
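
In case it helps anyone else, this is roughly the invocation I am using
(device names are from my setup; the worklist path is just an example).
recoverdisk retries the bad spots with progressively smaller reads and
can save its position so the run survives interruptions:

    # clone the failing ada3 onto the fresh ada4, saving progress
    recoverdisk -w /root/ada3.work /dev/ada3 /dev/ada4

    # if interrupted, resume from the saved worklist
    recoverdisk -r /root/ada3.work -w /root/ada3.work /dev/ada3 /dev/ada4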

Thank you,
Evgeny.
On Jun 5, 2016 9:04 AM, "Arne Jansen" <li...@die-jansens.de> wrote:

> On 05/06/16 17:32, Pawel Jakub Dawidek wrote:
>
>> Unfortunately, tools like photorec won't work (I assume the tool scans
>> the disk looking for JPEG headers, etc.), as this was a RAIDZ1 pool,
>> so none of the disks contains recognizable data on its own.
>>
>
> Are you sure? Isn't it only the parity information that is unusable?
> With three disks and single parity, 2/3 of the data should be there in
> plain. It would still be good to revive the broken disk, though.
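>
> A quick way to check (the device name is just an example): scan the raw
> data partition of one surviving member for JPEG magic bytes; any hits
> would suggest the file contents really are sitting there in plain:
>
>     # read the first ~1 GB and look for the JPEG start-of-image marker
>     dd if=/dev/ada1p2 bs=1m count=1024 | hexdump -C | grep 'ff d8 ff'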
>
>
>> Two questions come to my mind:
>> 1. Is it possible to recover the data now?
>> 2. Is it possible to recover the data in theory?
>>
>> I'm afraid the answer to the first question is 'no': we are not aware
>> of any existing tools that could help you with the process, and to do
>> it now you would need to find a ZFS expert willing to spend a serious
>> amount of time, which I'm sure would cost a lot.
>>
>> But if I understand your description correctly, the data is still on the
>> disks, so in theory it should be possible to recover it. Maybe not now,
>> but in 5 years? 10 years? Family memories are probably not something you
>> need immediately, and in time someone may develop a tool to help
>> recover data from pools like yours. Even if no one does, maybe you
>> will become rich enough (or maybe you already are, no idea) to sponsor
>> the development of such a tool.
>>
>
> Writing such a tool would be fun, but it would probably take weeks to
> months. The question is whether there are enough broken pools out there
> to make the time investment worthwhile.
>
> -Arne
>
>
>> My advice? Hold on to your data and don't lose hope!
>>
>> On Sun, Jun 05, 2016 at 05:02:39PM +0200, Arne Jansen wrote:
>>
>>> Hello Evgeny,
>>>
>>> without trying to understand exactly what you've done to the pool:
>>> if you think the data should, in theory, still be there, you can
>>> try tools like photorec. There's a chance it can recover most of
>>> your data, though probably without filenames. Tons of work awaits
>>> you there...
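>>>
>>> If you try it, a minimal sketch (the output directory is only an
>>> example; photorec is interactive and will prompt you for the disk
>>> and partition/filesystem choices):
>>>
>>>     # carve recognizable files out of a cloned image into /recup
>>>     photorec /log /d /recup/ /backup/ada1.img
>>>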
>>> On the other hand, truly recovering the data might require an expert
>>> dedicating hours and hours to the task. I don't know whether anyone
>>> has written a general-purpose recovery tool for messed-up ZFS pools.
>>> It's a good thing you've created bit-copies of the two drives. What
>>> about the third? Is it truly gone? Maybe it is possible to scrape
>>> some data off it, too; there are companies that specialize in this.
>>>
>>> I'd give the first option (photorec) a try first...
>>>
>>> -Arne
>>>
>>> On 03/06/16 19:56, esamorokov wrote:
>>>
>>>> Hello All,
>>>>
>>>>     My name is Evgeny and I have 3 x 3TB drives in RAIDZ1, where one
>>>> drive is gone and I accidentally screwed up the other two. The data
>>>>     should be fine; I just need to revert the uberblock to a point in
>>>>     time before I started making changes (see the sketch below).
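>>>>
>>>>     From what I have learned so far, something along these lines might
>>>> do it (only a sketch, not verified; the pool and device names are from
>>>> my setup, and <txg> stands for the transaction group to rewind to):
>>>>
>>>>     # list the uberblocks in a member disk's labels to find an older txg
>>>>     zdb -ul /dev/ada1p2
>>>>     # then attempt a read-only rewind import to that txg
>>>>     zpool import -o readonly=on -f -F -T <txg> zh_vol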
>>>>
>>>>     I AM KINDLY ASKING FOR HELP! The pool held all of our family
>>>> memories from many years. Thanks in advance!
>>>>
>>>>     I am not a FreeBSD guru; I have been using ZFS for only a couple
>>>> of years, but I know Linux and do some programming/scripting.
>>>>     Since the incident I have started learning ZFS in depth, but I
>>>> definitely need help at this point.
>>>>     Please don't ask me why I did not have backups; I was building a
>>>> backup server in my garage when it happened.
>>>>
>>>> History:
>>>>     The FreeNAS web GUI reported a failed drive.
>>>>     I shut down the computer and replaced the drive, but did not
>>>> notice that I had accidentally disconnected the power of another drive.
>>>>     I powered the server on and expanded the pool while only one
>>>> drive of the pool was active.
>>>>     Then I began to really learn ZFS and started messing with the bits.
>>>>     At some point I created bit-to-bit backup images of the two
>>>> remaining drives from the pool (using R-Studio); see the note below.
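>>>>
>>>>     For reference, an equivalent imaging step directly on FreeBSD
>>>> would look roughly like this (the target paths are just examples;
>>>> conv=noerror,sync keeps going past read errors and pads unreadable
>>>> blocks with zeros so offsets stay aligned):
>>>>
>>>>     dd if=/dev/ada1 of=/backup/ada1.img bs=1m conv=noerror,sync
>>>>     dd if=/dev/ada2 of=/backup/ada2.img bs=1m conv=noerror,sync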
>>>>
>>>>
>>>> Specs (ORIGINAL):
>>>>     OS: FreeBSD 9.2-RELEASE (FREENAS.amd64) #0 r+2315ea3: Fri Dec 20
>>>> 12:48:50 PST 2013
>>>>     RAID:   [root@juicy] ~# camcontrol devlist
>>>>     <ST3000DM001-1CH166 CC29>          at scbus1 target 0 lun 0
>>>> (pass1,ada1)
>>>>     <ST3000DM001-1CH166 CC29>          at scbus2 target 0 lun 0
>>>> (ada2,pass2)
>>>>     <ST3000DM001-9YN166 CC4H>          at scbus3 target 0 lun 0
>>>> (pass3,ada3)
>>>>     [root@juicy] ~# zdb
>>>> zh_vol:
>>>>     version: 5000
>>>>     name: 'zh_vol'
>>>>     state: 0
>>>>     txg: 14106447
>>>>     pool_guid: 2918670121059000644
>>>>     hostid: 1802987710
>>>>     hostname: ''
>>>>     vdev_children: 1
>>>>     vdev_tree:
>>>>         type: 'root'
>>>>         id: 0
>>>>         guid: 2918670121059000644
>>>>         create_txg: 4
>>>>         children[0]:
>>>>             type: 'raidz'
>>>>             id: 0
>>>>             guid: 14123440993587991088
>>>>             nparity: 1
>>>>             metaslab_array: 34
>>>>             metaslab_shift: 36
>>>>             ashift: 12
>>>>             asize: 8995321675776
>>>>             is_log: 0
>>>>             create_txg: 4
>>>>             children[0]:
>>>>                 type: 'disk'
>>>>                 id: 0
>>>>                 guid: 17624020450804741401
>>>>                 path: '/dev/gptid/6e5cea27-7f52-11e3-9cd8-d43d7ed5b587'
>>>>                 whole_disk: 1
>>>>                 DTL: 137
>>>>                 create_txg: 4
>>>>             children[1]:
>>>>                 type: 'disk'
>>>>                 id: 1
>>>>                 guid: 3253299067537287428
>>>>                 path: '/dev/gptid/2b70d9c0-8e40-11e3-aa1c-d43d7ed5b587'
>>>>                 whole_disk: 1
>>>>                 DTL: 133
>>>>                 create_txg: 4
>>>>             children[2]:
>>>>                 type: 'disk'
>>>>                 id: 2
>>>>                 guid: 17999524418015963258
>>>>                 path: '/dev/gptid/1e898758-9488-11e3-a86e-d43d7ed5b587'
>>>>                 whole_disk: 1
>>>>                 DTL: 134
>>>>                 create_txg: 4
>>>>
>>>> State summary of the drives/pool:
>>>>
>>>>                   State #1  State #2  State #3      State #4                            State #5
>>>> ada1 (ada1p2)     OK        OK        DISCONNECTED  CONNECTED (old pool)                CLONED to a new drive, prev. state
>>>> ada2 (ada2p2)     OK        OK        OK            OK (new pool)                       CLONED to a new drive, prev. state
>>>> ada3 (ada3p2)     OK        FAILED    REPLACED      CONNECTED FAILED DRIVE (old pool)   CLONED to ada4, prev. state
>>>> zh_vol (pool)     OK        DEGRADED  DESTROYED     RECREATED                           Previous state
>>>>
>>>>
>>>>
>>>> Specs (CURRENT):
>>>>
>>>>     [root@juicy] ~# camcontrol devlist
>>>>     <Patriot Pyro SE 332ABBF0>         at scbus0 target 0 lun 0
>>>> (ada0,pass0)
>>>>     <ST3000DM001-1ER166 CC25>          at scbus1 target 0 lun 0
>>>> (ada1,pass1)
>>>>     <ST3000DM001-1ER166 CC25>          at scbus2 target 0 lun 0
>>>> (ada2,pass2)
>>>>     <ST3000DM001-9YN166 CC4H>          at scbus3 target 0 lun 0
>>>> (ada3,pass3)
>>>>     <ST3000DM001-1ER166 CC26>          at scbus5 target 0 lun 0
>>>> (ada4,pass4)
>>>>     <Marvell 91xx Config 1.01>         at scbus11 target 0 lun 0 (pass5)
>>>>
>>>>
>>>>     [root@juicy] ~# zdb
>>>>     zh_vol:
>>>>         version: 5000
>>>>         name: 'zh_vol'
>>>>         state: 0
>>>>         txg: 1491
>>>>         pool_guid: 10149654347507244742
>>>>         hostid: 1802987710
>>>>         hostname: 'juicy.zhelana.local'
>>>>         vdev_children: 2
>>>>         vdev_tree:
>>>>             type: 'root'
>>>>             id: 0
>>>>             guid: 10149654347507244742
>>>>             create_txg: 4
>>>>             children[0]:
>>>>                 type: 'disk'
>>>>                 id: 0
>>>>                 guid: 5892508334691495384
>>>>                 path: '/dev/ada0s2'
>>>>                 whole_disk: 1
>>>>                 metaslab_array: 33
>>>>                 metaslab_shift: 23
>>>>                 ashift: 12
>>>>                 asize: 983564288
>>>>                 is_log: 0
>>>>                 create_txg: 4
>>>>             children[1]:
>>>>                 type: 'disk'
>>>>                 id: 1
>>>>                 guid: 296669430778697937
>>>>                 path: '/dev/ada2p2'
>>>>                 whole_disk: 1
>>>>                 metaslab_array: 37
>>>>                 metaslab_shift: 34
>>>>                 ashift: 12
>>>>                 asize: 2997366816768
>>>>                 is_log: 0
>>>>                 create_txg: 1489
>>>>
>>>> Thanks in advance!
>>>> Evgeny.
>>>>
>>>>
>>>
>>>
>>
>>
> 
> 


