This is a great tool Hans! This kind of overview should be a part of
btrfs-progs.
Mine currently looks like this; I have a few more days of
rebalancing to go :)
flags           num_stripes    physical    virtual
-----           -----------    --------    -------
DATA|RAID5                3     5.29TiB    3.53TiB
DATA|RAID5                4   980.00GiB  735.00GiB
SYSTEM|RAID1              2   128.00MiB   64.00MiB
METADATA|RAID1            2   314.00GiB  157.00GiB
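As a side note, the physical/virtual ratio in each row follows directly from the profile and stripe count: RAID1 and DUP store two copies (physical = 2 × virtual), while RAID5 over n stripes spends one stripe on parity (physical = virtual × n/(n-1)). A quick sketch in plain Python arithmetic (a hypothetical helper, not part of python-btrfs) that reproduces the rows above:

```python
# Hypothetical helper, not part of python-btrfs: physical bytes
# consumed for a given amount of virtual (usable) space.
def physical_for_virtual(virtual, profile, num_stripes):
    if profile in ('RAID1', 'DUP'):
        return virtual * 2  # two copies of everything
    if profile == 'RAID5':
        # one stripe out of num_stripes holds parity
        return virtual * num_stripes / (num_stripes - 1)
    return virtual  # single profile

GiB = 1024 ** 3

# DATA|RAID5 over 4 stripes: 735 GiB virtual -> 980 GiB physical
print(physical_for_virtual(735 * GiB, 'RAID5', 4) / GiB)   # 980.0
# SYSTEM|RAID1: 64 MiB virtual -> 128 MiB physical
```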
Btw, I checked the other utils in your python-btrfs and it seems that
they are, sadly, not installed by a simple pip install, which would be
great. Maybe it just needs a few lines in setup.py (I'm not too familiar
with Python packaging)?
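If I understand setuptools correctly, something like an entry_points declaration might be enough; the script and module names below are just guesses on my part, not the actual python-btrfs layout:

```python
# Hypothetical sketch of a setup.py addition -- the module path and
# command name are assumptions, not python-btrfs's real structure.
from setuptools import setup

setup(
    name='btrfs',
    # ...existing arguments...
    entry_points={
        'console_scripts': [
            # installs a 'btrfs-usage-report' command that calls
            # a main() function in the named module
            'btrfs-usage-report=btrfs_usage_report:main',
        ],
    },
)
```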
On 16. 03. 19 20:51, Hans van Kranenburg wrote:
On 3/16/19 5:34 PM, Hans van Kranenburg wrote:
On 3/16/19 7:07 AM, Andrei Borzenkov wrote:
[...]
This thread actually made me wonder - is there any guarantee (or even
tentative promise) about RAID stripe width from btrfs at all? Is it
possible that RAID5 degrades to mirror by itself due to unfortunate
space distribution?
For RAID5, the minimum is two disks. So yes, if you add two disks and
don't forcibly rewrite all your data, it will happily start adding
two-disk RAID5 block groups once the other disks are full.
Attached is an example that shows a list of used physical and virtual
space, ordered by chunk type (== block group flags) and num_stripes (how
many disks, or rather dev extents, are used). The btrfs-usage-report
does not show this level of detail. (It might be interesting to add, but
then I would put it into the btrfs.fs_usage code...)
For RAID56 with a big mess of block groups of different "horizontal
size", this will be more interesting than what it shows here as a test:

# ./chunks_stripes_report.py /
flags           num_stripes    physical    virtual
-----           -----------    --------    -------
DATA                      1   759.00GiB  759.00GiB
SYSTEM|DUP                2    64.00MiB   32.00MiB
METADATA|DUP              2     7.00GiB    3.50GiB
Hans