Hi Xiang,

On 2020/12/4 15:43, Gao Xiang wrote:
Hi Chao,

On Fri, Dec 04, 2020 at 03:09:20PM +0800, Chao Yu wrote:
On 2020/12/4 8:31, Gao Xiang wrote:
could make more sense), could you share some CR numbers for these
algorithms on typical datasets (enwik9, silesia.tar or others) with a
16k cluster size?

Just from a quick test with enwik9 on a VM:

Original blocks:        244382

                        lz4                     lz4hc-9
compressed blocks       170647                  163270
compress ratio          69.8%                   66.8%
speed                   16.4207 s, 60.9 MB/s    26.7299 s, 37.4 MB/s

compress ratio = after / before
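
To spell out the ratio above with the raw block counts:

  lz4:      170647 / 244382 ~= 69.8%
  lz4hc-9:  163270 / 244382 ~= 66.8%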

Thanks for the confirmation. It'd be better to add this to the commit
message when adding a new algorithm, to show the benefits.

Sure, will add this.
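
For the record, reproducing this with a 16KiB cluster looks roughly like
the lines below (only a sketch: the device path is an example, and the
lz4hc level selection uses whatever syntax this patch introduces; 16KiB
comes from 4KiB << compress_log_size with the default compress_log_size=2):

  mount -t f2fs -o compress_algorithm=lz4,compress_log_size=2 /dev/vdb /mnt/f2fs
  touch /mnt/f2fs/enwik9
  chattr +c /mnt/f2fs/enwik9      # mark the (still empty) file as compressed
  cat enwik9 > /mnt/f2fs/enwik9
  sync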


About the speed, I think it's also limited by the storage device and
other conditions (I mean the CPU load during writeback might differ
between lz4 and lz4hc-9 due to many other bounds, e.g. UFS 3.0 seq
write is somewhat faster than a VM's backing store. lz4 may have
higher bandwidth on high-end

Yeah, I guess my VM is limited by its storage bandwidth, and its
backend could be a low-end rotating disk...

devices, since it seems somewhat I/O-bound here... I guess, but I'm
not sure, since pure in-memory lz4 is fast according to lzbench / the
lz4 homepage.)

Anyway, it's up to the f2fs folks whether it's useful :) (the CR
numbers are what I expected, though... I'm a bit afraid of the CPU
runtime load.)

I just had a glance at the CPU usage numbers (my VM has 16 cores):
lz4hc takes 11% in the first half and drops to 6% in the second half;
lz4 takes 6% for the whole process.

But that's not accurate...
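
If a more solid number is needed, I guess something like system-wide
perf over the writeback window could help (just a rough idea, since
the compression work runs in kworker context rather than in the
writer's process):

  perf stat -a -e task-clock sleep 30   # overall CPU time while writeback runs
  perf top -g                           # see how much goes to the LZ4*/LZ4HC* symbols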

Thanks,

Thanks for your time!

Thanks,
Gao Xiang


Thanks,





