Hi,

> First of all, I believe, and will continue to believe, that
> defragmentation should be part of GC activity. From my point of view,
> users usually choose NILFS2 because they use flash storage (SSDs and
> so on), so GC is a consequence of the log-structured nature of NILFS2.
> But flash aging needs to be considered anyway, because the activity of
> the GC and other auxiliary subsystems has to take NAND flash
> wear-leveling into account. If the activity of auxiliary subsystems is
> significant, the NAND flash will fail early, without any clear reason
> from the user's viewpoint.

I have no idea about the actual implementation or the code, so my
comment just represents the POV of an fs user.
I've tried NILFS2 and Btrfs (both CoW) on traditional mechanical hard
disks to get cheap, efficient snapshotting.
However, due to CoW, files with heavy random-write activity (like
Firefox's internal SQLite databases) fragmented so badly that those
filesystems were practically unusable on HDDs.
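
Just as an illustration of what I mean: on filesystems that support the
FIEMAP ioctl, filefrag from e2fsprogs shows how many extents such a
file has been torn into (the profile path below is just an example):

  # Count the extents of Firefox's history database; hundreds or
  # thousands of extents on an HDD mean heavy fragmentation.
  filefrag -v ~/.mozilla/firefox/<profile>/places.sqlite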

Btrfs actually offers both: manual defragmentation as well as an
autodefrag mount option - which is useful even on SSDs once the
average contiguous segment size drops as low as 4 KB.
While autodefrag would be great for nilfs2, a manual tool at least
wouldn't hurt ;)
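
For reference, this is roughly how it looks on Btrfs (the device and
mount point are placeholders; see btrfs-filesystem(8) for the exact
options your version supports):

  # Mount with automatic background defragmentation
  mount -o autodefrag /dev/sdb1 /mnt

  # Manually defragment a subtree, recursively
  btrfs filesystem defragment -r /mnt/home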

Regards