On Sun, Jan 29, 2017 at 3:16 PM, Adam Borowski <[email protected]> wrote:
> On Sun, Jan 29, 2017 at 08:12:56AM -0500, Subscription Account wrote:
>> I had to remove one disk from a raid1 array, and after a reboot I was
>> able to mount the filesystem in degraded mode. I then powered off the
>> system, added a new disk, and now when I try to mount the btrfs
>> filesystem in degraded mode it will no longer mount read-write. I can
>> mount read-only, though:
>>
>> [ 2506.816795] BTRFS: missing devices(1) exceeds the limit(0),
>> writeable mount is not allowed
>>
>> In read-only mode I am not able to add a new device or run a replace
>> :(. Please help.
>
> A known problem; you can mount rw degraded only once. If you don't fix
> the degradation somehow (by adding a device or converting down), you
> can't mount rw again.

Uh oh! I wish I had known that I only had one shot :(. Just to be
clear: once the first rw degraded mount succeeds and a replace (or a
device add followed by "delete missing") has been started, will the
filesystem keep mounting rw across reboots regardless of the rebuild
status, with the rebuild/balance simply continuing? My concern is that
some block groups could still have only a single copy if the balance
has not completed. If I understand the recovery correctly, it would go
something like the sketch below.
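(Writing it out for my own notes; /mnt, /dev/sda, /dev/sdc and the
devid 2 below are placeholders for my actual setup, and I have not
tested any of this:)

    # the one-shot degraded rw mount
    mount -o degraded /dev/sda /mnt

    # option 1: add the new disk, then drop the missing one
    # ("delete missing" rebalances the missing disk's chunks)
    btrfs device add /dev/sdc /mnt
    btrfs device delete missing /mnt

    # option 2: replace the missing device directly
    # (take the missing devid from "btrfs filesystem show")
    btrfs replace start 2 /dev/sdc /mnt

    # option 3: convert the profiles down instead
    # (exact target profiles depend on the setup)
    btrfs balance start -dconvert=single -mconvert=dup /mnt

    # afterwards, check that no chunks are left with the
    # "single" profile before calling it recovered
    btrfs filesystem df /mnt
    btrfs balance status /mnt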
> If you know how to build a kernel, here's a crude patch.

I am feeling a little lucky, because I still have the other disk, and
I am assuming that if I remove the current disk and put the old one
back in, I would be able to mount rw degraded once again? Also, since
I have already run btrfs check --repair (and the like) a couple of
times, can I trust the current disk at all? I suppose a scrub or an
offline check would tell me; something like the sketch below. If I get
into trouble again, I will definitely use the patch to recompile the
kernel; thanks a lot for it.
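(Again just for my notes, with placeholder device and mount point
names:)

    # read-only consistency check of the suspect disk
    # (run with the filesystem unmounted)
    btrfs check --readonly /dev/sdb

    # or, once the filesystem mounts rw again, scrub it and
    # watch the per-device error stats
    btrfs scrub start -d /mnt
    btrfs scrub status /mnt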
Thanks again,
--
Raj