Cloud Admin posted on Thu, 10 Aug 2017 22:00:08 +0200 as excerpted:
> I had a disc failure and must replace it. I followed the description on
> ces and started the replacement.
> Setup is a two disc RAID1!
> After it was done, I called 'btrfs fi us /mn/btrfsroot' and I got the
> output below. What is wrong?
> Is it a rebalancing issue? I thought the replace command started it
> Data,single: Size:1.00GiB, Used:0.00B
> /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8 1.00
It's not entirely clear what you're referring to with the "what's wrong"
question, but I'll assume it's all those single and dup chunks that
remain in the usage output.
Unlike device delete, which does an implicit rebalance, replace simply
swaps one device for another in terms of content; it doesn't rebalance
anything on the remaining devices. That tends to make it much faster,
with less risk of something else going bad in the process (like another
device failing), but because it copies content as close to verbatim as
possible, existing chunks remain as they were, unlike with device
delete's implicit balance or an explicit balance.
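To make the contrast concrete, here's a dry-run sketch of the two
approaches (the device names are hypothetical placeholders; the echo
lines just print the commands rather than running them):

```shell
# Hypothetical placeholders; substitute your real devices and mountpoint.
OLD=/dev/sdb1
NEW=/dev/sdc1
MNT=/mn/btrfsroot

# "replace" copies OLD's content onto NEW as close to verbatim as it can;
# chunk layout on the other devices is left untouched:
echo "btrfs replace start $OLD $NEW $MNT"

# "device delete" instead migrates OLD's chunks onto the remaining
# devices via an implicit balance (slower, more I/O churn):
echo "btrfs device remove $OLD $MNT"
```

Progress of a running replace can be checked with
'btrfs replace status <mountpoint>'.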
Which means any existing single and dup chunks didn't get removed, as I'm
guessing you expected.
But most or all of them show 0 used, so an explicit rebalance to
eliminate them should be quite short, as it'll just delete the
references to them.
Try this (path from your post above):
btrfs balance start -dusage=0 -musage=0 /mn/btrfsroot
That should eliminate the 0-usage chunks, making the usage output easier
to follow even if you do need to post an update because my guess about
what you meant by "what's wrong" was incorrect. And as I said, it should
be much faster (almost instantaneous on ssd, probably not /quite/ that
fast on spinning rust) than rebalancing chunks that weren't empty,
too. =:^)
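If any chunks survive the usage=0 pass, a common follow-up (sketched
here as a dry run; the echo lines only print the commands) is to raise
the usage threshold gradually, so balance only rewrites mostly-empty
chunks rather than everything:

```shell
# Mountpoint from the post above.
MNT=/mn/btrfsroot

# First pass: empty chunks only, near-instant (just drops references):
echo "btrfs balance start -dusage=0 -musage=0 $MNT"

# Optional follow-up passes: raise the threshold step by step so only
# chunks under N% full get rewritten and compacted:
for pct in 10 25 50; do
  echo "btrfs balance start -dusage=$pct -musage=$pct $MNT"
done
```

Each higher threshold rewrites more data, so stop as soon as
'btrfs fi usage' looks sane again.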
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman