On 02/05/2014 03:15 PM, Roman Mamedov wrote:
Hello,

On a freshly-created RAID1 filesystem of two 1TB disks:

# df -h /mnt/p2/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       1.8T  1.1M  1.8T   1% /mnt/p2

I cannot write 2TB of user data to that RAID1, so this estimate is clearly
misleading. I got tired of looking at the bogus disk free space on all my
RAID1 btrfs systems, so today I decided to do something about this:

--- fs/btrfs/super.c.orig       2014-02-06 01:28:36.636164982 +0600
+++ fs/btrfs/super.c    2014-02-06 01:28:58.304164370 +0600
@@ -1481,6 +1481,11 @@
 	}
 	kfree(devices_info);
+
+	if (type & BTRFS_BLOCK_GROUP_RAID1) {
+		do_div(avail_space, min_stripes);
+	}
+
 	*free_bytes = avail_space;
 	return 0;
 }

This needs to be more flexible, and it also introduces a new problem: you now show the actual usable amount of space, _but_ you are also showing twice the amount of used space. I'm ok with going in this direction, but we need to convert everybody over so it works for raid10 as well, and the used values need to be adjusted too. Thanks,
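Josef's two points can be sketched together: pick the divisor from the profile (so raid10 is covered too) and scale used space by the same factor. This is a hypothetical userspace sketch, not kernel code; the flag values follow the kernel headers, but the helper names are made up.

```c
#include <assert.h>
#include <stdint.h>

/* Flag values as in the kernel's btrfs block-group types (assumption). */
#define BTRFS_BLOCK_GROUP_RAID1  (1ULL << 4)
#define BTRFS_BLOCK_GROUP_DUP    (1ULL << 5)
#define BTRFS_BLOCK_GROUP_RAID10 (1ULL << 6)

/* Copies stored per logical byte for a given profile. */
static uint64_t ncopies(uint64_t type)
{
    if (type & (BTRFS_BLOCK_GROUP_RAID1 |
                BTRFS_BLOCK_GROUP_DUP |
                BTRFS_BLOCK_GROUP_RAID10))
        return 2;
    return 1;  /* single, raid0 */
}

/* Scale both values so statfs reports logical bytes, not raw bytes;
 * this avoids showing halved free space next to doubled used space. */
static void scale_statfs(uint64_t type, uint64_t *avail, uint64_t *used)
{
    uint64_t n = ncopies(type);
    *avail /= n;
    *used  /= n;
}
```

The key design point is that avail and used must be divided by the same per-profile factor, otherwise df's Size/Used/Avail columns become internally inconsistent.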

Josef
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
