Off topic, but hopefully someone in the audience can help me...

I've got a set of three 80G WD drives that are in a sw RAID5 array.  After
a hard reboot a week or so ago I found that things "backed up".  A bit
of investigation revealed that during the re-sync of the parity of the
raid array, it stalled at around 33%.  Further investigation revealed
that the problem was with one of the drives.

I took the drive out of the array with raidhotremove and ran
badblocks -sv /dev/hdf1
on it.  It "backed up" again around 26M blocks into the drive's 78M
blocks ("backed up" meaning not a full freeze: I still have control,
but the load starts climbing, programs won't die, halt/reboot don't
work, and a hard boot is needed).
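For reference, the sequence above looked roughly like this (raidtools-era commands; the md0/hdf1 device names are from my setup, adjust to taste). Not runnable against anything but the real hardware, obviously:

```shell
# Drop the suspect disk out of the RAID5 array (raidtools syntax;
# with mdadm this would be "mdadm /dev/md0 --fail/--remove ...").
raidhotremove /dev/md0 /dev/hdf1

# Read-only surface scan of the freed partition.
# -s shows progress, -v is verbose, -o logs any bad block numbers
# it finds to a file for later use with e2fsck/mke2fs -l.
badblocks -sv -o hdf1.bad /dev/hdf1
```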

Badblocks never reported any errors, but that seems to be because it
can't get past something on the drive, or hits some condition it can't
handle.  Software raid behaves the same way.

I repartitioned, re-ran mke2fs on it, and ran badblocks again: same
thing.  I figure I can take it back to the store, or RMA it to Western
Digital, so I'm trying to "prove" it's broken with something
Windows/DOS based.
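One more aggressive variant I could try on the Linux side (a sketch, and destructive -- it wipes the partition): badblocks in write mode does a full write/read/verify pass with four test patterns, which might trip the fault where the read-only scan just stalls:

```shell
# DESTRUCTIVE: -w writes four test patterns over the whole partition
# and verifies each read-back; -s/-v for progress and verbosity.
badblocks -wsv /dev/hdf1

# If that completes, rebuild the filesystem; -c has mke2fs run its
# own read-only badblocks pass and record any bad blocks it finds.
mke2fs -c /dev/hdf1
```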

I just finished putting it in my Windows machine and formatting it
with NTFS, then writing about 50G of data to it with no problems.  I
then downloaded WD's Data Lifeguard Diagnostics tool, and the drive
passed both the quick and extended tests with no errors.  I'm in the
process of writing zeros to the drive (a low-level format, I guess).

Can anyone offer any advice?  How to better diagnose this, Windows
tools that do bad-block scanning, other drive diagnostic tools for
Windows (I've played with the SMART tools, but they all report that
everything is dandy), or anything like that?
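For what it's worth, the SMART poking I did was along these lines (smartctl from smartmontools; the exact flags are from my memory of the man page, so treat this as a sketch):

```shell
# Dump all SMART attributes plus the drive's internal error log;
# a climbing Reallocated_Sector_Ct or Pending_Sector count would
# be the smoking gun I'm looking for.
smartctl -a /dev/hdf

# Kick off the drive's own extended (long) offline self-test...
smartctl -t long /dev/hdf

# ...wait the number of minutes it reports, then read the results:
smartctl -l selftest /dev/hdf
```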

TIA

-- 
Alan <[EMAIL PROTECTED]> - http://arcterex.net
---------------------------------------------------------------------
"The only thing that experience teaches us is that experience teaches
us nothing."            -- Andre Maurois (Emile Herzog)

--
[EMAIL PROTECTED] mailing list
