Warning: I am a Btrfs user with no knowledge of the inner workings of btrfs. If I am in the wrong mailing list, please redirect me and accept my apologies.

At home, being short on disks and free SATA ports, I created a raid1 btrfs filesystem by converting an existing single btrfs instance into a degraded raid1, then adding the other drive. The exact commands I used have been lost.
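From memory, it was probably something close to the standard sequence (add the second device, then rebalance to the raid1 profile); the device name below is a placeholder, not my actual drive:

```shell
# Add the second drive to the existing single-profile filesystem.
# /dev/sdX is an example device name.
btrfs device add /dev/sdX /mnt/brtfs-raid1-b

# Convert both data and metadata chunks to the raid1 profile.
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/brtfs-raid1-b
```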

This worked well until one of my drives died. Total death; the OS does not detect it anymore. I bought another drive, but alas, I cannot add it:

# btrfs replace start -B 2 /dev/sdd /mnt/brtfs-raid1-b
ERROR: ioctl(DEV_REPLACE_START) failed on "/mnt/brtfs-raid1-b": Read-only file system

Here is the command I used to mount it:

mount -t btrfs -o ro,degraded,recovery,nosuid,nodev,nofail,x-gvfs-show /dev/disk/by-uuid/975bdbb3-9a9c-4a72-ad67-6cda545fda5e /mnt/brtfs-raid1-b

If I remove 'ro' from the options, the filesystem fails to mount with the following error:

BTRFS: missing devices(1) exceeds the limit(0), writeable mount is not allowed

So I am stuck. I can only mount the filesystem read-only, which prevents me from adding a disk.

It seems related to this bug:

I am using Ubuntu 16.04 LTS with kernel 4.4.0-59-generic. Is there any hope of adding a disk? If not, can I recreate a raid1 from the single remaining disk and then add another, without ever suffering from the same problem again? I did not lose any data, but I am facing serious downtime because of this. I wish that when a drive fails, the btrfs filesystem would still mount rw and leave the OS running, while warning the user about the failing disk and easily allowing the addition of a new drive to reintroduce redundancy. However, this scenario seems impossible with the current state of affairs. Am I right?

Best regards and thank you for your contribution to the open source movement,
Hans Deragon
