Ullo...

I tried a couple of incantations using --assemble before I did the create and got similar results, hence my boggle. :-)
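(The usual incantations being something like

mdadm --assemble /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3
mdadm --assemble --force /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3

- flags illustrative, from the man page rather than my shell history.)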

I do have space to image the drives off (they're 3 x 750 GB partitions) but couldn't think of any tests/tools I could run against the images that would give me any different results from the physical drives...
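Although, thinking about it, images would at least let me loop-mount copies and retry different geometries without touching the physical drives again. Something like this, I guess (paths are just examples):

dd if=/dev/sda3 of=/img/sda3.img bs=1M conv=noerror,sync
losetup -f --show /img/sda3.img    # prints the loop device it picked, e.g. /dev/loop0
# ...same for sdb3 and sdc3, then build a test array from the loop devices:
mdadm --create /dev/md2 --assume-clean --level=5 --raid-devices=4 \
      /dev/loop0 /dev/loop1 /dev/loop2 missing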

Just tried creating the array with only the three drives; same result. Lots of small text files full of 'w'.

There are no backups at all....

I'm off at home with a head cold, so might surf a bit more aimlessly and see if there's any inspiration from Mr Google. :-)

Cheers, Chris H.

On 14/02/2013 3:00 p.m., C. Falconer wrote:
Chris Hellyar wrote, On 14/02/13 14:03:
Hi folks..

Got a bit of a conundrum...

A friend has (had) a NAS box that failed a drive in a RAID 5 array during expansion of the array from 3 to 4 drives. The drive that failed was the 'new' one...

I've got the three drives here, and managed to get mdadm to create the array using:

mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 missing

That's the problem - you've created a new array with --create rather than assembling the old one with --assemble.

    Assemble
        Assemble the components of a previously created array into an
        active array. Components can be explicitly given or can be
        searched for. mdadm checks that the components do form a bona
        fide array, and can, on request, fiddle superblock information
        so as to assemble a faulty array.

    Create
        Create a new array with per-device metadata (superblocks).
        Appropriate metadata is written to each device, and then the
        array comprising those devices is activated. A 'resync' process
        is started to make sure that the array is consistent (e.g. both
        sides of a mirror contain the same data) but the content of the
        device is left otherwise untouched.

So the data is still physically on the disk, but it's lost all trace of the original metadata. I'd say it's probably all gone now.

You *might* have some luck with

mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3

which will write metadata for a three-disk RAID 5, provided the drives aren't too far gone. It depends on how much data had moved to the new disk, how far through the conversion it got, and what the conversion actually does to the data.
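If that doesn't bite, the usual brute-force approach is to sweep the plausible geometries and check each attempt read-only - ideally against images, not the real drives. A sketch (the chunk sizes and drive orders are guesses, not known QNAP values):

mdadm --stop /dev/md1
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=3 \
      --chunk=64 /dev/sda3 /dev/sdb3 /dev/sdc3
fsck.ext4 -n /dev/md1   # -n: check only, never writes
mdadm --stop /dev/md1   # then repeat with --chunk=128, other drive orders,
                        # and the 4-devices-plus-missing layout

With --assume-clean no resync is started, so each attempt only rewrites the superblocks.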

Have you got enough spare disks to image the whole lot off with dd?


If there are any backups at all it might be easier to simply restore.

Photorec is fantastic, but it won't understand how the data was broken up across the original RAID, let alone the new one.
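To illustrate with guessed numbers (64K chunks and the default left-symmetric layout - both assumptions): on the original 4-drive array a 1 MB file gets sliced into sixteen 64K chunks dealt out round-robin, with parity rotating, roughly:

stripe 0:  disk0: D0   disk1: D1   disk2: D2   disk3: P
stripe 1:  disk0: D4   disk1: D5   disk2: P    disk3: D3
...

so any one disk only ever holds 64K contiguous runs of the file, which is why photorec coughs up fragments.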




fdisk says there is no valid partition table on the resulting device /dev/md1, and after many hours of trying photorec only ever recovered some small fragments of mp3 files, some text files, and some small images...

The drives have not been written to since the failure, apart from me re-writing the superblocks per the above, but it was partway through expanding the array, so who knows what state the data was in.
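For reference, the superblocks as they stand now can be inspected with:

mdadm --examine /dev/sda3    # per-device superblock: level, chunk size, device role
mdadm --detail /dev/md1      # the assembled array's view of the same

though of course they only show what my --create wrote, not the original QNAP geometry.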

I'm assuming that the small / fragmented files found by photorec are a result of re-creating the array with incorrect geometry with respect to chunk size etc.

Anyone got any hot tips for this one? I did a bit of googling and forum reading and didn't really get all that far. The array came out of a QNAP TS-419P, which I understand uses an ext3/4 filesystem (depending on firmware version).
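For what it's worth, the quick 'is there a filesystem here at all' checks I know of (both read-only):

file -s /dev/md1        # looks for a known filesystem signature
dumpe2fs -h /dev/md1    # dumps the ext2/3/4 superblock header, if one exists

Nothing QNAP-specific there, just generic ext sanity checks.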

The other partitions of the drives, when assembled as arrays, have the Linux OS and system files for the QNAP intact; it's just the large storage array that's bust.

The owner is pretty much resigned to losing the data (no backups, duh) but it'd be nice to pull a rabbit outa the hat.

Cheers, Chris H.
