Hi,
I think I've hit a reproducible bug in the raid10 driver; I tried it on two
different machines, with kernels 2.6.20 and 2.6.18. This script simulates
the problem:
==
#!/bin/bash
modprobe loop
for ID in 1 2 3 ; do
    echo -n "Creating loopback device $ID... "
    # dd invocation truncated in the digest; assuming it creates a
    # small zero-filled backing file for each loop device:
    dd if=/dev/zero of=/tmp/raid$ID.img bs=1M count=64 2>/dev/null
    losetup /dev/loop$ID /tmp/raid$ID.img
    echo done
done
==
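The rest of the script is cut off in the digest; presumably it goes on to
build a RAID10 array over the loop devices and exercise recovery. A minimal
sketch of that step, assuming mdadm and the device names above (my guesses,
not the original script):
==
# Hypothetical continuation: assemble the three loop devices into a
# RAID10 array and watch the initial resync in /proc/mdstat.
mdadm --create /dev/md0 --level=10 --raid-devices=3 \
      /dev/loop1 /dev/loop2 /dev/loop3
cat /proc/mdstat
==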
Eyal Lebedinsky wrote:
Disks are sealed, and a desiccant is present in each to keep humidity
down. If you ever open a disk drive (e.g. for the magnets, or the
mirror-quality platters, or for fun) you can see the desiccant sachet.
Actually, they aren't sealed 100%.
On WD drives at least,
After I sent the message I received the six patches from Neil Brown. I
applied the first one ("Fix Raid10 recovery problem") and it seems to take
care of the issue I described, probably thanks to the rounding fixes.
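For reference, applying such a patch to a kernel tree and reloading the
module would look roughly like this (the paths and patch file name are
illustrative, not from the original thread):
==
# Hypothetical example: apply the first of the patches to a 2.6.20
# source tree, rebuild the md modules, and reload raid10.
cd /usr/src/linux-2.6.20
patch -p1 < ~/patches/01-fix-raid10-recovery-problem.patch
make modules && make modules_install
rmmod raid10 && modprobe raid10
==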
Thanks
I sent this message directly to Neil Brown first, but then I read
his homepage and found out I should have sent it here, so here goes.
I've got five 250GB drives, four of which I used to create a RAID5 md
device. After all that was done (striping and all that), I added the last
drive. The problem is that
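The message is cut off at this point in the digest. The setup described so
far would look roughly like this with mdadm (the device names and partition
layout are my assumptions, not the poster's):
==
# Hypothetical reconstruction of the steps described above: create a
# four-drive RAID5 array, wait for the initial sync to finish, then add
# the fifth drive (it joins as a spare unless the array is also grown).
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
cat /proc/mdstat          # watch the initial resync complete
mdadm --add /dev/md0 /dev/sde1
==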
Another paper on hard drive failures.
http://www.usenix.org/events/fast07/tech/schroeder/schroeder_html/index.html
Regards,
Richard