Hugo Mills posted on Wed, 17 Jun 2015 13:27:36 +0000 as excerpted:

>> Yes, on this 80% full 6x4TB RAID10 -dusage=15 took 2 seconds and
>> relocated "0 out of 3026 chunks".
>> 
>> Out of curiosity, I had to use -dusage=90 to have it relocate only 1
>> chunk and it took less than 30 seconds.
>> 
>> So I put a -dusage=25 in the weekly cron just before the scrub.
> 
> In most cases, all you need to do is clean up one data chunk to
> give the metadata enough space to work in.  Instead of manually
> iterating through several values of usage= until you get a useful
> response, you can use limit=<n> to stop after <n> successful block
> group relocations.
Thanks, Hugo.  It wasn't previously clear to me what the practical use
of the (relatively new) limit= filter was.  Very useful explanation. =:^)
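
So for a weekly cron job like the one above, combining the usage and
limit filters should avoid the guesswork entirely.  Something along
these lines, where /mnt/pool is just a placeholder for wherever the
filesystem is actually mounted, and the 25 is illustrative rather than
a recommended value:

    # relocate at most one data chunk that is under 25% used
    btrfs balance start -dusage=25,limit=1 /mnt/pool

    # likewise for metadata chunks, if desired
    btrfs balance start -musage=25,limit=1 /mnt/pool

With limit=1 the balance stops after the first successful block group
relocation, so there is no need to probe different usage= values by
hand to find one that touches only a chunk or two.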
-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman