Hello there,
I apologize in advance if this e-mail is redundant. I'm not a developer, but
log-based filesystems intrigue me. I have seen some benchmarks (I think in a
linux-mag.com article) which didn't reflect the performance I had been
expecting.

My basic understanding of how NILFS operates is that it appends data to be
committed to disk at the head of the log, theoretically eliminating the need
for random writes. From this I extrapolate that writes should be incredibly
fast, since even budget SSDs are capable of burst-write speeds of 200 MB/s.
What real-life factor, exactly, holds back NILFS's speed?
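To make my mental model concrete: here is a toy sketch (my own invention, not NILFS code) of what I imagine an append-only log looks like, where every update, even an overwrite, becomes a sequential append and the old copy is left behind as garbage for the cleaner:

```python
# Toy model of an append-only log (NOT how NILFS is actually
# implemented -- just the picture in my head).

class ToyLog:
    def __init__(self):
        self.segments = []   # the on-disk log, newest entry last
        self.index = {}      # file name -> position of latest version

    def write(self, name, data):
        # Append-only: the old version stays in place and becomes
        # garbage for the cleaner; the index points at the newest copy.
        self.segments.append((name, data))
        self.index[name] = len(self.segments) - 1

    def read(self, name):
        return self.segments[self.index[name]][1]

log = ToyLog()
log.write("a.txt", b"v1")
log.write("a.txt", b"v2")   # an overwrite is just a new appended entry
print(log.read("a.txt"))    # latest version wins
print(len(log.segments))    # both versions still occupy log space
```

If this picture is roughly right, then all writes hit the disk sequentially, which is why I would expect near-burst throughput.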

What happens with huge files? Is it possible to commit the modifications
applied to a file as a separate differential file, leaving the original file
intact? What about defragmentation? Does the garbage collection system move
rarely modified files out of the way?

I presume read performance is not the primary consideration in the
development of NILFS, since reads are cached and modern SSDs are incredibly
fast at random reads anyway. Since the burst read performance of both HDDs
and SSDs is blazing fast, does it make sense to burst-read an entire cluster
of adjacent small files and pick out the desired files in system memory? I
wouldn't mind if the unnecessary files were left in RAM either, since modern
computers have vast unused RAM space.

Thanks in advance.
_______________________________________________
users mailing list
[email protected]
https://www.nilfs.org/mailman/listinfo/users
