On 2014-12-31 12:27, ashf...@whisperpc.com wrote:
No, some rather simple math will tell you that a 4-disk BTRFS filesystem in raid10 mode has exactly a 50% chance of surviving a dual-disk failure, and that as the number of disks goes up, the chance of survival asymptotically approaches 100% (but never reaches it). This is the case for _every_ RAID-10 implementation that I have ever seen, including hardware RAID controllers; the only real difference is in the stripe length (usually 512 bytes * half the number of disks for hardware RAID, 4k * half the number of disks for software RAID, and the filesystem block size (default is 16k in current versions) * half the number of disks for BTRFS).

Phillip

I had a similar question a year or two ago (specifically about raid10), so I both experimented and read the code myself to find out. I was disappointed to find that it won't do raid10 on 3 disks, since the chunk metadata describes raid10 as a stripe layered on top of a mirror. Jose's point was also a good one, though: one chunk may decide to mirror disks A and B, so it could recover from a failure of A and C, but a different chunk could choose to mirror on disks A and C, and that chunk would be lost if A and C fail. It would probably be nice if the chunk allocator tried to be more deterministic about that.

I see this as a CRITICAL design flaw. The reason for calling it CRITICAL is that System Administrators have been trained for >20 years that RAID-10 can usually handle a dual-disk failure, but the BTRFS implementation has effectively ZERO chance of doing so.
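The probability argument in this thread can be sketched numerically. The code below is a toy model written for this discussion, not BTRFS's actual allocator: `fixed_pair_survival` assumes a classic RAID-10 with fixed mirror pairs (disks 0+1, 2+3, ...) and a uniformly random two-disk failure, while `per_chunk_random_survival` is a Monte-Carlo sketch of the behaviour Phillip and Jose describe, where every chunk independently picks its own random pairing. Both function names and the failure model are assumptions for illustration only. (Note that under this fixed-pairing model the survival probability works out to (n-2)/(n-1), i.e. 2/3 for four disks; the exact figure depends on which failure model one assumes.)

```python
import itertools
import random

def fixed_pair_survival(n):
    """Exact survival probability of a classic RAID-10 with fixed
    mirror pairs (0,1), (2,3), ... under a uniformly random
    two-disk failure: fatal iff both failures land in one pair."""
    combos = list(itertools.combinations(range(n), 2))
    fatal = sum(1 for a, b in combos if a // 2 == b // 2)
    return 1 - fatal / len(combos)  # works out to (n-2)/(n-1)

def per_chunk_random_survival(n, chunks, trials=2000, seed=1):
    """Monte-Carlo estimate for a hypothetical allocator that re-pairs
    the disks randomly for every chunk: the filesystem survives only
    if no chunk placed both of its copies on the two failed disks."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        failed = set(rng.sample(range(n), 2))
        ok = True
        for _ in range(chunks):
            order = rng.sample(range(n), n)               # this chunk's pairing
            pairs = [set(order[i:i + 2]) for i in range(0, n, 2)]
            if failed in pairs:                           # both copies lost
                ok = False
                break
        if ok:
            survived += 1
    return survived / trials

print(fixed_pair_survival(4))                     # 2/3 under this model
print(fixed_pair_survival(12))                    # 10/11, approaching 1.0
print(per_chunk_random_survival(4, chunks=1000))  # effectively zero
```

The contrast illustrates both halves of the argument: with fixed pairs the survival odds genuinely climb toward 100% as disks are added, while per-chunk random pairing drives the odds of surviving an arbitrary dual-disk failure toward zero once the filesystem holds many chunks, since each chunk is an independent chance to have mirrored exactly the two failed disks.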