On Wed, Mar 13, 2019 at 3:58 PM Jakub Husák <ja...@husak.pro> wrote:
>
> Hi,
>
> I added another disk to my 3-disk raid5 and ran a balance command.
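
For reference, the usual sequence for that would be something along
these lines (the device name and mountpoint here are placeholders, and
the balance flags you actually used may well have differed):

  # add the new disk to the existing filesystem
  btrfs device add /dev/sdX /mnt

  # rewrite existing chunks so new allocations can stripe across all the disks
  btrfs balance start /mnt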

What exact commands did you use for the two operations?

> After a few hours I looked at the output of `fi usage` and saw that no
> data was being stored on the new disk. I got the same result even when
> balancing my raid5 data or metadata.
>
> Next I tried to convert my raid5 metadata to raid1 (a good idea anyway)
> and the new disk started to fill immediately (even though it received
> the whole amount of metadata, with the replicas spread among the other
> drives, instead of being really "balanced". I know why this happened, I
> don't like it but I can live with it, let's not go off topic here :)).

They could be related problems. Unclear. I suggest grabbing
btrfs-debugfs from upstream btrfs-progs and running
`sudo btrfs-debugfs -b /mntpoint/` so we can see what the block group
distribution looks like.

https://github.com/kdave/btrfs-progs/blob/master/btrfs-debugfs

> I'm now running `fi balance -dusage=10` (and raising the usage limit).
> I can see that the unallocated space is growing as it frees the
> little-used chunks, but still no data is being stored on the new disk.
>
> Is it some bug?

It's possible, but there's not enough information yet. The balance code
is complicated.

> If so, shouldn't it be really balancing (spreading) the data among all
> the drives to use all the IOPS capacity, even when the raid5 redundancy
> constraint is currently satisfied?

I'd expect it to copy extents from the old 3-stripe block groups into
new 4-stripe block groups. However, there have been some improvements
related to block group management and enospc avoidance where existing
block groups get filled first, before new block groups are created, and
I wonder if that's what's going on here, but it's speculation.

What do you get for

btrfs insp dump-t -t 5 /dev/   ## device, not mountpoint; works while the fs is mounted, but ideally it's not in use
btrfs insp dump-s -f /dev/    ## same

Also, there are no significant changes in raid56.c between 4.19.16 and
5.0.2, but there have been some volumes.c changes:

https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/diff/fs/btrfs/volumes.c?id=v5.0.2&id2=v4.19.16

Anyway, I would stop making changes for now and make sure your backups
are up to date as a top priority. Then it's safer to poke this with a
stick and see what's going on and how to get it to cooperate.

--
Chris Murphy