On Wednesday 10 February 2010, Iain Buchanan wrote:
> On Wed, 2010-02-10 at 07:31 +0100, Volker Armin Hemmann wrote:
> > On Wednesday 10 February 2010, Iain Buchanan wrote:
> > > so long as you didn't have any non-detectable disk errors before
> > > removing the disk, or any drive failure while one of the drives was
> > > removed.  And the deterioration in performance while each disk was
> > > removed in turn might take more time than it's worth.  Of course RAID 1
> > > wouldn't suffer from this (with >2 disks)...
> > 
> > RAID6. Two disks can go down.
> 
> not that I know enough about RAID to comment on this page, but you might
> find it interesting:
> http://www.baarf.com/
> specifically:
> http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt

and that is very wrong:

 but if
the drive is going these will not last very long and will run out and SCSI
does NOT report correctable errors back to the OS!  Therefore you will not
know the drive is becoming unstable until it is too late and there are no
more replacement sectors and the drive begins to return garbage.  [Note
that the recently popular IDE/ATA drives do not (TMK) include bad sector
remapping in their hardware so garbage is returned that much sooner.]

So if the author is wrong about that, what does that say about the rest of his 
text?

And why do you think RAID6 was created?

With RAID6, one disk can fail and another can return garbage, and the array is 
still able to recover.
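To see why two failures are survivable, here is a toy byte-wise sketch of the 
P+Q dual-parity scheme RAID6 uses (GF(2^8) with the 0x11d polynomial and 
generator 2, as in Linux md). It is an illustration of the math only, not a 
real implementation; disk contents are reduced to one byte each:

```python
# Toy sketch: RAID6 P (XOR) and Q (Reed-Solomon) parity over GF(2^8),
# and recovery of TWO lost data disks from the survivors plus P and Q.

POLY = 0x11d  # x^8 + x^4 + x^3 + x^2 + 1, the polynomial Linux md uses

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= POLY
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    # brute force is fine for a demo over 256 elements
    return next(x for x in range(256) if gf_mul(a, x) == 1)

def pq(disks):
    """Compute the P (plain XOR) and Q (weighted XOR) parity bytes."""
    p = q = 0
    for i, d in enumerate(disks):
        p ^= d
        q ^= gf_mul(gf_pow(2, i), d)
    return p, q

def rebuild(disks, x, y, p, q):
    """Recover data disks x and y (x != y) after a double failure."""
    pxy = qxy = 0
    for i, d in enumerate(disks):
        if i in (x, y):
            continue
        pxy ^= d
        qxy ^= gf_mul(gf_pow(2, i), d)
    s1 = p ^ pxy   # = Dx ^ Dy
    s2 = q ^ qxy   # = g^x*Dx ^ g^y*Dy
    dx = gf_mul(s2 ^ gf_mul(gf_pow(2, y), s1),
                gf_inv(gf_pow(2, x) ^ gf_pow(2, y)))
    return dx, dx ^ s1

data = [0x11, 0x22, 0x33, 0x44, 0x55]  # one byte per data disk
p, q = pq(data)
print(rebuild(data, 1, 3, p, q))       # recovers disks 1 and 3 (0x22, 0x44)
```

Because Q weights each disk by a distinct power of the generator, the two 
syndrome equations are independent and any two erasures can be solved for.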

Another reason to use RAID6 is the unrecoverable read error rate. One bit per 
10^16 read sounds good - until you are fiddling with terabyte disks.


> Conclusion?  For safety and performance favor RAID10 first, RAID3 second,
> RAID4 third, and RAID5 last!

And that is just mega stupid. You can google it, or just go straight to 
Wikipedia, if you don't know why.
