On (11/27/18 16:19), Dave Rodgman wrote:
> > Right. The number is data-dependent. Not all swapped-out pages can be
> > compressed; compressed pages whose size ends up >= zs_huge_class_size()
> > are considered incompressible and stored as is.
> > 
> > I'd say that on my setups around 50-60% of pages are incompressible.
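
For reference, a minimal sketch of that decision (not the actual zram
code; compress_page() and huge_class_size are illustrative stand-ins for
the backend's compress op and the value from zs_huge_class_size()):

#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Stand-in for the compression backend; returns the compressed length. */
size_t compress_page(const unsigned char *src, size_t len,
                     unsigned char *dst);

static size_t store_page(const unsigned char *src, unsigned char *dst,
                         size_t huge_class_size)
{
        size_t comp_len = compress_page(src, PAGE_SIZE, dst);

        /*
         * Reached the huge-class threshold: count the page as
         * incompressible and store it as is.
         */
        if (comp_len >= huge_class_size) {
                memcpy(dst, src, PAGE_SIZE);
                comp_len = PAGE_SIZE;
        }
        return comp_len;
}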
> 
> So, just to give a bit more detail: the test setup was a Samsung
> Chromebook Pro, cycling through 80 tabs in Chrome. With lzo-rle, only
> 5% of pages increased in size, and 90% of pages compressed to 75% of
> their original size or better; the mean compression ratio was 41%.
> Importantly for lzo-rle, there are a lot of low-entropy pages where it
> can do well: in total, about 20% of the data is zeros forming part of
> a run of 4 or more bytes.
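
(A rough way to reproduce that last figure, with illustrative names:
count every zero byte that belongs to a run of at least min_run zeros,
then divide by the total length.)

#include <stddef.h>

static size_t zero_run_bytes(const unsigned char *buf, size_t len,
                             size_t min_run)
{
        size_t total = 0, run = 0;

        /* The loop runs one step past the end to flush the final run. */
        for (size_t i = 0; i <= len; i++) {
                if (i < len && buf[i] == 0) {
                        run++;
                } else {
                        if (run >= min_run)
                                total += run;
                        run = 0;
                }
        }
        return total;
}

E.g. zero_run_bytes(data, len, 4) * 100 / len gives the percentage.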
> 
> As a quick summary of the impact of these patches on bigger chunks of
> data, I've compared the performance of four different variants of lzo
> on two large (~40 MB) files. The numbers show round-trip throughput
> in MB/s:
> 
> Variant         | Low-entropy | High-entropy
> Current lzo     |  242        | 157
> Arm opts        |  290        | 159
> RLE             |  876        | 151
> Arm opts + RLE  | 1150        | 181
> 
> So both the Arm optimisations (the 8/16-byte copy & CTZ patches) and
> the RLE implementation make a significant contribution to the overall
> performance uplift.
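
As an aside, a sketch of the CTZ trick (illustrative only, not the
actual lzo1x patch; little-endian is assumed): scan a word at a time
and let the count-trailing-zeros builtin locate the first non-zero
byte inside the word.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

static size_t zero_run_len(const unsigned char *p, size_t max)
{
        size_t n = 0;

        while (n + sizeof(uint64_t) <= max) {
                uint64_t v;

                memcpy(&v, p + n, sizeof(v));
                if (v != 0)
                        /*
                         * Trailing zero bits / 8 = index of the first
                         * non-zero byte (little-endian).
                         */
                        return n + (__builtin_ctzll(v) >> 3);
                n += sizeof(v);
        }
        /* Tail: fall back to a byte-at-a-time scan. */
        while (n < max && p[n] == 0)
                n++;
        return n;
}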

Cool!

        -ss
