On Sun, Dec 27, 2015 at 5:39 PM, Christoph Anton Mitterer <cales...@scientia.net> wrote:
> On Sun, 2015-12-27 at 11:29 -0700, Chris Murphy wrote:
>> then the scrub request is effectively a
>> scrub for a volume with a missing drive which you probably wouldn't
>> ever do, you'd first replace the missing device.
> While that's probably the normal work flow,... it should still work the
> other way round... and if not, I'd consider that a bug.
I think it's more complicated than that. I don't see a good use case for scrubbing a degraded array: first make the array healthy, then scrub. I haven't tested this with mdadm or lvm raid, so I don't know how they behave. But even if either of them tolerates it, it's a legitimate design decision for the Btrfs developers to refuse to support scrubbing a degraded array. The same goes for balancing, for that matter.

The problem here is that Btrfs itself may not even know the array state is degraded. There's a degraded mount option, but since there's no device faulty state yet, I don't see how it can know to go degraded, at which point it could legitimately refuse to scrub. Of course, the fs shouldn't get worse, and there shouldn't be a crash, even if degraded scrub isn't supported.

-- 
Chris Murphy
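The "make the array healthy first, then scrub" workflow described above could be sketched roughly as follows. The device paths, mount point, and devid are placeholder assumptions, not taken from the thread; the `run` helper and `DRY_RUN` flag are illustrative scaffolding so the commands are only printed, not executed against real block devices.

```shell
# Hypothetical recovery workflow: replace the missing device *before*
# scrubbing, rather than scrubbing a degraded array.
# With DRY_RUN=1 the helper only prints each command.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

MNT=/mnt/data        # assumed mount point
NEW_DEV=/dev/sdc     # assumed replacement device
MISSING_ID=2         # devid of the missing device, per 'btrfs filesystem show'

# 1. Mount degraded (read-write) so the replace can proceed.
run mount -o degraded /dev/sdb "$MNT"
# 2. Replace the missing device, addressing it by devid.
run btrfs replace start -B "$MISSING_ID" "$NEW_DEV" "$MNT"
# 3. Only once the array is healthy again, scrub it.
run btrfs scrub start -B "$MNT"
```

Note that step 1 is exactly the point the mail raises: the `degraded` option must be given explicitly, because without a device-faulty state Btrfs cannot detect the condition and switch to degraded on its own.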