On 9/15/06, Reza Naima <[EMAIL PROTECTED]> wrote:
I've picked up two 500G disks, and am in the process of dd'ing the
contents of the raid partitions over.  The 2nd failed disk came up just
fine, and has been copying the data over without fault.  I expect it to
finish, but thought I would send this email now.  I will include some
data that I captured before the system failed.

Note that if there are bad blocks, dd might not be reliable;
"ddrescue" will do a better job, filling any unreadable sectors with
zeroes (so some silent data corruption may still result in those spots).
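
If you go that route, a minimal sketch, assuming GNU ddrescue and
hypothetical device names (/dev/hdX = failing source, /dev/hdY = new
disk -- substitute your own):

# one pass, logging progress so an interrupted run can be resumed;
# -f is required because the output is a block device
ddrescue -f /dev/hdX /dev/hdY rescue.log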

The usual approach for your problem is to recreate the raid in
degraded mode (with the replaced drive's slot listed as "missing"),
something like

mdadm --create /dev/md0 --level=5 --chunk=256 --layout=left-symmetric \
  --raid-devices=4 /dev/hda3 missing /dev/hdf1 /dev/hdg1

after which, see if the filesystem mounts and checks out OK; if so,
add the replacement drive back into the array.
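
For example, assuming ext3 and that the new disk's raid partition is
/dev/hde1 (both assumptions -- substitute the real names):

fsck -n /dev/md0                 # read-only check, changes nothing
mount -o ro /dev/md0 /mnt        # mount read-only and inspect the data
umount /mnt
mdadm /dev/md0 --add /dev/hde1   # kick off the rebuild onto the new disk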

It's recommended to scrub the raid device regularly (e.g. from a cron
script) to detect sleeping (latent) bad blocks early, as sketched below.
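
For example, on a 2.6 kernel recent enough to expose the md
sync_action interface (assuming the array is md0):

# read every sector of every member disk; read errors are rewritten
# from parity, mismatches are counted in .../md/mismatch_cnt
echo check > /sys/block/md0/md/sync_action

Running that weekly from cron is a common choice.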