My fstrim tests threw a bit of a wrench into my typical LVM setup: I
often use quite large thin pool chunks to get better performance.

As most of my filesystems use btrfs, I can run a btrfs balance.
Internally, btrfs manages data in 1GB chunks.  A balance can reduce
the number of in-use chunks by moving data out of sparsely used
chunks and into others, so that a subsequent fstrim can discard more
chunks.
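
For reference, this is the sort of sequence I mean (mount point
hypothetical):

  # Rewrite only data chunks that are less than 50% used,
  # consolidating their contents into fewer chunks
  btrfs balance start -dusage=50 /mnt/data
  # Then discard the now-empty chunks back to the thin pool
  fstrim -v /mnt/data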

With NTFS volumes, there's a tradeoff between smaller thin pool
chunks, which give a higher discard rate (64k chunk = 91.5%
discarded), and medium chunks, which give better performance (1MB
chunk = 67.5% discarded).  I'll need to do some benchmarking to see
how much going even larger actually helps, but I've been using the
maximum thin pool chunk size (128MB).  I haven't copied the Windows 7
NTFS volume onto a 128MB-chunk pool and checked its discard rate, but
I imagine nearly nothing would be discarded in that case, at least
with my particular test volume.
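
(The chunk size is fixed when the pool is created; the volume group,
pool name, and sizes below are just illustrative.)

  # Small chunks: higher discard rate
  lvcreate --type thin-pool -L 100G --chunksize 64k -n pool0 vg0
  # Large chunks: better performance, worse discard rate
  lvcreate --type thin-pool -L 100G --chunksize 128M -n pool0 vg0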

It looks like ntfsresize might be helpful here.  If I were to shrink
the filesystem to its minimum size, then enlarge it back to its
original size, I think I should be able to use a larger chunk size
and still periodically free unused space back to the thin pool;
something like the sequence below.
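
(Device name and target size hypothetical; the volume must be
unmounted, and -n can be used for a dry run first.)

  # Report the minimum shrunken filesystem size (read-only)
  ntfsresize --info /dev/vg0/win7
  # Shrink toward that minimum, relocating data to the front
  ntfsresize --size 20G /dev/vg0/win7
  # With no size given, ntfsresize enlarges the filesystem
  # to fill the underlying device again
  ntfsresize /dev/vg0/win7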

I'd love to see an option added to ntfsresize which performs only the
data-relocation pass needed to meet the shrunken size, and skips the
part where it actually shrinks the filesystem.  I'm hoping it would
be trivial to implement.  I took a brief look at the code, and I'm
thinking that, given this option, it could run through
relocate_inodes(), then skip truncate_badclust_file(),
truncate_bitmap_file(), and maybe delayed_updates() and
update_bootsector().

My original thought was a "--compact" option, but since "-c" exists,
perhaps "-r, --reallocate-only" would be better.

