Neil Brown wrote:
This isn't quite right.
Thanks :)
Firstly, it is mdadm which decided to make one drive a 'spare' for
raid5, not the kernel.
Secondly, it only applies to raid5, not raid6 or raid1 or raid10.
For raid6, the initial resync (just like the resync after an unclean
shutdown)
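Neil's point about mdadm (not the kernel) designating a spare can be seen at creation time. A minimal sketch, assuming a 3-drive RAID5 with illustrative device names — this is an admin-command fragment, not something to paste verbatim:

```shell
# Illustrative only: /dev/md0, /dev/sd[abc]1 are assumed names.
# mdadm builds the initial RAID5 parity by treating the last member
# as a spare and "recovering" onto it, rather than resyncing in place.
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/sda1 /dev/sdb1 /dev/sdc1
cat /proc/mdstat   # shows one member in recovery, as if rebuilding a spare
```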
[EMAIL PROTECTED] wrote:
On Thu, 21 Jun 2007, David Chinner wrote:
On Thu, Jun 21, 2007 at 12:56:44PM +1000, Neil Brown wrote:
I have that - apparently naive - idea that drives use strong checksum,
and will never return bad data, only good data or an error. If this
isn't right, then it
On 21 Jun 2007, Neil Brown stated:
I have that - apparently naive - idea that drives use strong checksum,
and will never return bad data, only good data or an error. If this
isn't right, then it would really help to understand what the cause of
other failures are before working out how to
Michael wrote:
Thank you;
Not that I want to, but where did you find a SATA PCI card that fits 15 drives?

Areca have a few - a range of PCI-X cards that do up to 24 SATA drives
(ARC-1170) and PCI-e up to 24 drives (ARC-1280).
Regards,
Richard
I didn't get a comment on my suggestion for a quick and dirty fix for
--assume-clean issues...
Bill Davidsen wrote:
Neil Brown wrote:
On Thursday June 14, [EMAIL PROTECTED] wrote:
it's now churning away 'rebuilding' the brand new array.
a few questions/thoughts.
why does it need to do a
On Thu, 21 Jun 2007, Raz wrote:
What is your raid configuration ?
Please note that stripe_cache_size acts as a bottleneck in some
cases.
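The stripe cache Raz mentions is tunable at runtime through sysfs. A hedged sketch, assuming the array is /dev/md0 and a 4 KiB page size (substitute your own device and value):

```shell
# Sketch, not a prescription: /dev/md0 and the value 4096 are assumptions.
# stripe_cache_size counts pages cached per member device.
SYSFS=/sys/block/md0/md/stripe_cache_size
# Raise the cache only if the knob exists and is writable.
[ -w "$SYSFS" ] && echo 4096 > "$SYSFS"
# Approximate memory cost: entries * 4 KiB page * number of members.
members=3
entries=4096
echo "stripe cache uses about $(( entries * 4 * members / 1024 )) MiB"
```

Raising it trades memory for sequential-write throughput on parity RAID; the memory line above makes that cost visible before you commit.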
Well, it's 3x SATA drives in raid5. 320G drives each, and I'm using a
314G partition from each disk (the rest of the space is quiescent).
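For what it's worth, the usable capacity of that layout follows from the RAID5 arithmetic: one member's worth of space holds the rotating parity, so n members give (n - 1) times the member size. A quick sketch with the numbers above:

```shell
# RAID5 usable capacity: (n - 1) * member size; one member's worth of
# space is consumed by rotating parity.  Numbers from the setup above.
drives=3
member_gb=314
echo "usable: $(( (drives - 1) * member_gb )) GB"   # usable: 628 GB
```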