Indeed! That is a VERY good question, and one I have not seen a good answer to so far. A traditional defragger in the Windows realm works globally rather than directory by directory, file by file: it also "restacks" the file and directory layout so that free space is consolidated, leaving something close to the maximum possible amount of contiguous free space somewhere within the filesystem when it finishes. As you correctly point out, no one has yet described how that can be accomplished with btrfs. I am not even sure that rebuilding the partition from scratch would do the job. My workaround so far is simply to always keep a ridiculous amount of free space available, but that cannot go on forever (and is very inefficient). So it would be nice to know what the solution to this conundrum is. Thanks for pointing this out.
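For what it's worth, the closest existing knob I am aware of is btrfs balance, which rewrites block groups and can repack partially-used ones into fewer, fuller ones. Whether that actually yields Windows-style contiguous free space is exactly the open question here, so treat this as a sketch, not a confirmed answer. The helper below only *prints* the command (a convention of mine for this post, since actually running it needs root and a mounted btrfs filesystem):

```shell
#!/bin/bash
# Sketch: use balance usage filters to rewrite only block groups that are
# <= N% used, packing their extents into fewer block groups.  balance_cmd
# is my own dry-run helper, not a btrfs tool; it just prints the command.
balance_cmd() {
    local mnt="$1" usage="${2:-50}"
    # -dusage=N / -musage=N: restrict the balance to data/metadata block
    # groups at or below N% utilization, so mostly-full ones are left alone.
    printf 'btrfs balance start -dusage=%s -musage=%s %s\n' \
        "$usage" "$usage" "$mnt"
}

# Example: show what would be run against /mnt/data (path is made up).
balance_cmd /mnt/data 50
```

Again, I cannot confirm this gives the contiguous free space a whole-disk defragger would; it only relocates block groups, and extent placement within them is up to the allocator.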

On 07/18/2013 12:12 AM, Adam Ryczkowski wrote:
On 07/18/2013 02:17 AM, George Mitchell wrote:
find /home -type d -mtime -3 -o -type f -mtime -3 | egrep -v "Cache|cache" | \
  while read -r file; do /usr/sbin/btrfs filesystem defrag -f -v "${file}"; done
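A slightly more robust variant of that loop, for anyone copying it: `-print0` with `read -d ''` survives file names containing spaces or newlines, which a plain `while read file` over newline-separated output would mangle. The `DEFRAG_CMD` override is my own addition (not part of btrfs) so the pipeline can be dry-run without root or a btrfs mount:

```shell
#!/bin/bash
# Sketch of a safer version of the quoted one-liner (assumes bash and GNU find).
defrag_recent() {
    local target="$1"
    # DEFRAG_CMD is a hypothetical override for testing; the default matches
    # the command from the original post.
    local defrag="${DEFRAG_CMD:-/usr/sbin/btrfs filesystem defragment -f -v}"
    # \( -type d -o -type f \) -mtime -3 is equivalent to the original
    # "-type d -mtime -3 -o -type f -mtime -3" grouping.
    find "$target" \( -type d -o -type f \) -mtime -3 -print0 |
    while IFS= read -r -d '' path; do
        case "$path" in
            *Cache*|*cache*) continue ;;  # same exclusion as egrep -v "Cache|cache"
        esac
        $defrag "$path"
    done
}
```

Usage would be `defrag_recent /home`, or `DEFRAG_CMD=echo defrag_recent /home` to preview which paths would be touched.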
Thank you for your answer.

This still defragments file by file. But what about some consolidation
of free space? As I understand it, if there is no contiguous run of free
space large enough for a file's extents, that file cannot be defragmented
no matter how many times I run the above command; am I right?

Adam Ryczkowski





--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
