On 2017-11-15 16:31, Duncan wrote:
> Austin S. Hemmelgarn posted on Wed, 15 Nov 2017 07:57:06 -0500 as
> excerpted:

>> The 'compress' and 'compress-force' mount options only impact newly
>> written data.  The compression used is stored with the metadata for the
>> extents themselves, so any existing data on the volume will be read just
>> fine with whatever compression method it was written with, while new
>> data will be written with the specified compression method.
>>
>> If you want to convert existing files, you can use the '-c' option to
>> the defrag command to do so.
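Concretely, the two knobs look something like this (device paths and mount
points here are made up; the commands need root on a btrfs filesystem):

```shell
# New writes only: mount (or remount) with a compression option.
mount -o remount,compress=zstd /mnt/data        # skips data that compresses poorly
mount -o remount,compress-force=zstd /mnt/data  # compresses everything regardless

# Existing files: rewrite them through defrag with -c.
btrfs filesystem defragment -r -czstd /mnt/data
```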

> ... Being aware of course that using defrag to recompress files like that
> will break 100% of the existing reflinks, effectively (near) doubling
> data usage if the files are snapshotted, since the snapshot will now
> share 0% of its extents with the newly compressed files.
Good point, I forgot to mention that.
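For anyone who wants to watch it happen, something along these lines should
show it (hypothetical paths; needs root and a btrfs filesystem, so treat it
as a sketch rather than a recipe):

```shell
# Snapshot first, then recompress the live subvolume.
btrfs subvolume snapshot -r /mnt/data /mnt/data-snap
btrfs filesystem du -s /mnt/data    # note the shared figure
btrfs filesystem defragment -r -czstd /mnt/data
btrfs filesystem du -s /mnt/data    # shared drops toward zero, exclusive grows
```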

> (The actual effect shouldn't be quite that bad, as some files are likely
> to be uncompressed due to not compressing well, and I'm not sure if
> defrag -c rewrites them or not.  Further, if there are multiple snapshots,
> data usage should only double with respect to the latest one; the data
> delta between it and previous snapshots won't be doubled as well.)
I'm pretty sure 'defrag -c' behaves like 'compress-force' rather than
'compress' (that is, it rewrites even files that didn't compress well),
but I may be wrong.
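To put some made-up numbers on it: take a 10 GiB subvolume fully shared with
one snapshot, and suppose the data compresses to 6 GiB.  The snapshot pins
the old uncompressed extents, so usage lands well short of a full doubling:

```shell
# Back-of-envelope accounting, illustrative figures only (GiB).
live=10; snap=10; shared=10
before=$(( live + snap - shared ))   # shared extents counted once
# After defrag -czstd the live copy is rewritten into new (compressed)
# extents; the snapshot still holds the old ones in full.
new_live=6
after=$(( new_live + snap ))
echo "before=${before}GiB after=${after}GiB"
```

So in this sketch on-disk usage goes from 10 GiB to 16 GiB, not to 20 GiB.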

> While this makes sense if you think about it, it may not occur to some
> people until they've actually tried it, and see their data usage go way
> up instead of going down as they intuitively expected.  There have been
> posts to the list...
>
> Of course if the data isn't snapshotted this doesn't apply and defrag -c
> to zstd should be fine. =:^)

