Hi:

I've been working on f2fs compression for a while, and I'm confused by its
performance. After some research, I found several problems that may need
discussion.
I used AndroBench to test performance on a mobile device; after enabling
compression, the benchmark scores dropped a lot.
Specifically:
1. 32M sequential read dropped to 50% of the original. The test case opens
the file with O_RDONLY|O_DIRECT and sets POSIX_FADV_RANDOM; the major reason
is that readahead is disabled. So far I haven't found any patch that improves this.
2. 4K random read dropped to 40% of the original. After merging `f2fs:
compress: add compress_inode to cache compressed blocks`, random read
performance improved significantly, up to 90% of the original, maybe more.
3. 32M sequential overwrite dropped to 10% of the original. After merging
`f2fs: compress: remove unneeded read when rewrite whole cluster`, it is
up to 30% of the original.
4. 4K random write dropped to 1% of the original, yes, only 1% of the
original. I found that opening the file with O_WRONLY|O_DSYNC|O_DIRECT is an
important reason: every time we sync a compressed inode, we need to do a
checkpoint. After I removed the checkpoint on compressed inodes, it is up to
10% of the original. I think the other major reason is that we need to read
the whole cluster and rewrite it, but I haven't thought of any method to
improve this.

I'd like to know whether there are any ideas that could help improve this.
I'd also like to know whether we have a performance goal for compression: is
it possible to achieve the original performance?

Thanks.



_______________________________________________
Linux-f2fs-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel
