On 14.08.2016 19:20, Chris Murphy writes:
...
> This volume now has about a dozen chunks created by kernel code, and
> the stripe X to devid Y mapping is identical. Using dd and hexdump,
> I'm finding that stripe 0 and 1 are mirrored pairs; they contain
> identical information. And stripe 2 and 3 are mirrored pairs. And the
> raid0 striping happens across 01 and 23 such that odd-numbered 64KiB
> (default) stripe elements go on 01, and even-numbered stripe elements
> go on 23. If the stripe to devid pairing were always consistent, I
> could lose more than one device and still have a viable volume, just
> like a conventional raid10. Of course you can't lose both of any
> mirrored pair, but you could lose one of every mirrored pair. That's
> why raid10 is considered scalable.
>
> But apparently the pairing is different between mkfs and kernel code.
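The mapping Chris describes can be sketched in a few lines. This is a toy model, not btrfs source: it assumes a 4-device raid10 chunk with the default 64 KiB stripe element, the pairing observed above (devids 0+1 mirror each other, devids 2+3 mirror each other), and stripe elements alternating between the two pairs. The function name and layout are illustrative only.

```python
# Toy model (not btrfs code) of the raid10 layout described above:
# devs 0+1 are one mirror pair, devs 2+3 the other, and 64 KiB stripe
# elements alternate between the pairs.
STRIPE_LEN = 64 * 1024  # default btrfs stripe element size

def raid10_map(logical_offset, num_devices=4):
    """Return (mirrored devids, physical offset) for a logical offset."""
    element = logical_offset // STRIPE_LEN    # which stripe element
    num_pairs = num_devices // 2              # number of mirrored pairs
    pair = element % num_pairs                # which pair holds this element
    pair_element = element // num_pairs       # element index within that pair
    offset_in_element = logical_offset % STRIPE_LEN
    mirrors = (2 * pair, 2 * pair + 1)        # e.g. (0, 1) or (2, 3)
    return mirrors, pair_element * STRIPE_LEN + offset_in_element
```

Under this model, byte 0 lives on the (0, 1) pair and byte 64 KiB on the (2, 3) pair, which matches the alternation seen with dd and hexdump. The point of the thread is that nothing pins this pairing down: mkfs and the kernel may assign the pairs differently.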
My understanding is that the chunk allocation code uses devices in the order they were discovered, which implies the order can change between reboots, or even while the system is running if devices are added or removed. I also think the code may skip devices under some conditions (the most obvious being not enough free space, which can happen if you mix allocation profiles). So the only thing that is guaranteed right now is that every stripe element will be on a different device.
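To make the consequence concrete, here is a toy sketch (deliberately not the real btrfs allocator) of why the pairing can differ between runs: if devices are taken in discovery order and devices without enough free space are skipped, the stripe-to-devid mapping of a new chunk depends on both the order and the per-device free space at that moment. The function and the 1 GiB requirement are assumptions for illustration.

```python
# Toy sketch, NOT the real btrfs allocator: illustrates how discovery
# order plus free-space skipping changes which devices get paired.
STRIPE_NEED = 1 << 30  # assume each stripe element needs 1 GiB free

def allocate_raid10_chunk(devices):
    """devices: list of (devid, free_bytes) in *discovery* order.
    Returns the four devids used: first two form one mirror pair,
    the next two form the other."""
    eligible = [devid for devid, free in devices if free >= STRIPE_NEED]
    if len(eligible) < 4:
        raise RuntimeError("raid10 needs at least 4 devices with space")
    return eligible[:4]

# Same four devices, different discovery order => different pairing:
run1 = allocate_raid10_chunk([(1, 2**31), (2, 2**31), (3, 2**31), (4, 2**31)])
run2 = allocate_raid10_chunk([(3, 2**31), (1, 2**31), (4, 2**31), (2, 2**31)])
# run1 pairs (1,2) and (3,4); run2 pairs (3,1) and (4,2).
```

So two chunks of the same volume, allocated at different times, can legitimately pair the devices differently, which is why losing "one of every mirrored pair" is not a safe assumption the way it is with a fixed-layout raid10.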