I did more tests on random overwrite: I set the write size equal to the cluster size, so every write covers exactly one cluster and never crosses a cluster boundary, and I merged this patch: https://lore.kernel.org/linux-f2fs-devel/[email protected]/T/#t . With that, performance reaches about 45% of the original. This suggests there are reasons for the slowdown other than the need to read the whole cluster and rewrite it.
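
For reference, the overwrite loop in this test looks roughly like the sketch below. It is only a sketch, not the real benchmark code: the file path, the 16 KiB cluster size and the iteration count are assumptions here, while the open flags match the test described later in this thread.

/* Sketch of a cluster-aligned random overwrite loop. Assumes 4 KiB pages
 * and log_cluster_size=2, i.e. a 16 KiB compress cluster. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define CLUSTER_SIZE	(16 * 1024)		/* assumed cluster size */
#define FILE_SIZE	(32 * 1024 * 1024)	/* 32 MiB test file */

int main(void)
{
	int fd = open("/data/testfile", O_WRONLY | O_DSYNC | O_DIRECT);
	void *buf;
	long i;

	if (fd < 0)
		return 1;
	/* O_DIRECT requires an aligned buffer */
	if (posix_memalign(&buf, 4096, CLUSTER_SIZE))
		return 1;
	memset(buf, 0xa5, CLUSTER_SIZE);

	for (i = 0; i < 2048; i++) {
		/* pick a random cluster; the offset is always a multiple of
		 * CLUSTER_SIZE, so a write never crosses a cluster boundary */
		off_t off = (random() % (FILE_SIZE / CLUSTER_SIZE)) * CLUSTER_SIZE;

		if (pwrite(fd, buf, CLUSTER_SIZE, off) != CLUSTER_SIZE)
			return 1;
	}
	free(buf);
	close(fd);
	return 0;
}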
        

On 2021/7/2 14:24, Jaegeuk Kim wrote:
On 07/02, Fengnan Chang wrote:
Yes, I had enabled compress_cache and extent_cache. compress_cache can indeed
improve random read performance, but it couldn't improve the other test cases.
And extent_cache was enabled by default in my test.

Sorry, for compressed files we enable extent_cache on read-only partitions only.

static inline bool f2fs_may_extent_tree(struct inode *inode)
{
        struct f2fs_sb_info *sbi = F2FS_I_SB(inode);

        if (!test_opt(sbi, EXTENT_CACHE) ||
                        is_inode_flag_set(inode, FI_NO_EXTENT) ||
                        (is_inode_flag_set(inode, FI_COMPRESSED_FILE) &&
                         !f2fs_sb_has_readonly(sbi)))
                return false;
...




Fix the description of the previous email:
  4. 4K random overwrite has dropped to 1% of the original, yes, only 1%. I
found that opening the file with O_WRONLY|O_DSYNC|O_DIRECT is an important
reason: every sync of a compressed inode needs a checkpoint. After I removed
the checkpoint on compressed inodes, it went up to 10% of the original. If I
also set the write size equal to the cluster size, it goes up to 35% of the
original. And I think the major reason for this is that we need to read the
whole cluster and rewrite it; I've been trying to get around this restriction
recently, but haven't made any progress yet.
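
To make the read-and-rewrite cost concrete, here is a rough user-space model of what a 4K overwrite inside a compressed cluster has to do. This is only a sketch with an assumed 16 KiB cluster size; the real f2fs write path additionally decompresses and recompresses the data inside the kernel.

#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define CLUSTER_SIZE	(16 * 1024)	/* assumed compress cluster size */

/* Model of the write amplification: a 4 KiB overwrite cannot touch just
 * 4 KiB, the whole cluster has to be rebuilt and written back. */
static int overwrite_4k_model(int fd, off_t pos, const char *new4k)
{
	char cluster[CLUSTER_SIZE];
	off_t cstart = pos - pos % CLUSTER_SIZE;

	/* 1. read back the whole cluster, even though only 4 KiB changes */
	if (pread(fd, cluster, CLUSTER_SIZE, cstart) != CLUSTER_SIZE)
		return -1;
	/* 2. patch in the new 4 KiB */
	memcpy(cluster + (pos - cstart), new4k, 4096);
	/* 3. write the whole cluster back; with O_DSYNC the caller also
	 *    waits for it (and, for compressed inodes, a checkpoint) */
	if (pwrite(fd, cluster, CLUSTER_SIZE, cstart) != CLUSTER_SIZE)
		return -1;
	return 0;
}

When a write covers the whole cluster, step 1 can be skipped, which is why aligning the write size to the cluster size helps in the numbers above.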

Thanks.

On 2021/7/2 1:06, Jaegeuk Kim wrote:
On 06/04, [email protected] wrote:
Hi:

I've been working on f2fs compression for a while, and I'm confused about its
performance. After some research, I found some problems that may need discussion.
I used AndroBench to test performance on a mobile device; after enabling
compression, the benchmark scores dropped a lot.
Specifically:
1. 32M sequential read has dropped to 50% of the original. The test case opens
the file with O_RDONLY|O_DIRECT and sets POSIX_FADV_RANDOM; the major reason
is that readahead is disabled (see the sketch after this list). For now, I
haven't found any patch that improves this.
2. 4K random read has dropped to 40% of the original. After merging `f2fs:
compress: add compress_inode to cache compressed blocks`, there is a
significant improvement in random read performance, up to 90% of the original,
maybe more.
3. 32M sequential overwrite has dropped to 10% of the original. After merging
`f2fs: compress: remove unneeded read when rewrite whole cluster`, it goes up
to 30% of the original.
4. 4K random read has dropped to 1% of the original, yes, only 1%. I found that
opening the file with O_WRONLY|O_DSYNC|O_DIRECT is an important reason: every
sync of a compressed inode needs a checkpoint. After I removed the checkpoint
on compressed inodes, it went up to 10% of the original. And I think the major
reason for this is that we need to read the whole cluster and rewrite it, but
I haven't thought of any way to improve this.
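
For reference, the sequential read test in point 1 opens the file roughly like the sketch below; the POSIX_FADV_RANDOM hint is what disables readahead. The file path and the per-read size are assumptions, the flags are the ones mentioned above.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

#define BUF_SIZE	(32 * 1024)		/* assumed per-read size */
#define FILE_SIZE	(32 * 1024 * 1024)	/* 32 MiB sequential read */

int main(void)
{
	int fd = open("/data/testfile", O_RDONLY | O_DIRECT);
	void *buf;
	off_t off;

	if (fd < 0)
		return 1;
	/* this hint disables readahead for the whole file */
	posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM);

	if (posix_memalign(&buf, 4096, BUF_SIZE))
		return 1;
	/* read the file sequentially, BUF_SIZE at a time */
	for (off = 0; off < FILE_SIZE; off += BUF_SIZE)
		if (pread(fd, buf, BUF_SIZE, off) < 0)
			return 1;
	free(buf);
	close(fd);
	return 0;
}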

I want to know whether there is any idea that can help improve this.
I also want to know whether we have a performance goal for compression: is it
possible to reach the original performance?

Could you please check compress_cache and extent_cache, which can improve read
performance? Both were done quite recently.


Thanks.




