On Wed, Jun 10, 2020 at 1:38 AM Michael MacIsaac <mike99...@gmail.com>
wrote:

> Hello list,
>
> I heard about the new DFLTCC instruction on the z15, aka on board
> compression.  I tried a quick experiment to see the difference from a z14.
> Disclaimer: I am not a performance expert.
>
> Here are three commands to create, compress and decompress a 1G file on a
> z14:
>
> # grep Type: /proc/sysinfo
> Type:                 3906
>
> # time dd if=/dev/zero of=1G.file bs=1G count=1
>

Hi Michael,
for compression (and encryption), blocks containing only zeros are a
notorious special case.
The result can be very different from what you'd otherwise see.
I'd recommend using data of the type you actually care about (text,
structured documents, images, ...) and scaling it to the size you want to
test with.
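
As a rough sketch, you could build a test file that mixes compressible and
incompressible data instead of all-zeros (file names and sizes below are
just examples - substitute your real data set):

```shell
# Build an 8 MiB test file with mixed compressibility:
head -c 4M /dev/urandom > sample.bin           # incompressible half
base64 sample.bin | head -c 4M > sample.txt    # compressible half
cat sample.bin sample.txt > testdata.bin
ls -l testdata.bin
```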

From far away your numbers suggest it might actually be the I/O.
For example, your decompress real time is 24.8s vs 6.2s, which is roughly
4x.
But if you only consider the CPU time (user + sys, which is where your
instruction would show up), it is 4.1+1.8=5.9s vs 3.9+1.3=5.2s, only
about 1.15x.
You'd want to eliminate disk I/O from your equation if you want to
compare CPUs, so at least put the files in memory, e.g. on a tmpfs.
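
For example (sizes scaled down here, and using /dev/shm, which is
typically already a tmpfs on Linux, so no extra mount is needed):

```shell
# Keep both input and output on tmpfs so the timing is CPU-bound,
# not disk-bound (file name and size are just examples):
dd if=/dev/urandom of=/dev/shm/test.file bs=1M count=16 2>/dev/null
time gzip -c  /dev/shm/test.file    > /dev/shm/test.file.gz
time gzip -dc /dev/shm/test.file.gz > /dev/null
```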

If checking the DFLTCC instruction itself is the goal, you'd want to
isolate the actual core compression function and its instructions.
This might be going deeper than you intended, but the underlying element
is zlib.
So holding a defined pattern (or multiple variants thereof) in memory,
throwing it at the zlib API, and measuring just that time might be closer
to what you want.
The zlib source has test/minigzip.c, which might be a good starting point
if you want to code something for it.

And finally, deactivate anything running in the background and do all of
this in a loop to check for run-to-run deviations.
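
A minimal sketch of such a loop (again with example names and a
scaled-down size); a large spread between runs means something else is
interfering:

```shell
# Run the same compression several times and eyeball the spread:
dd if=/dev/urandom of=/dev/shm/bench.file bs=1M count=8 2>/dev/null
for i in 1 2 3 4 5; do
    time gzip -c /dev/shm/bench.file > /dev/shm/bench.file.gz
done
```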

/me stops and realizes you triggered the old perf engineer in me  :-)

> 1+0 records in
> 1+0 records out
> 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 21.93 s, 49.0 MB/s
>
> real    0m22.047s
> user    0m0.001s
> sys     0m3.669s
>
> # time cat 1G.file | gzip -c > 1G.compressed.file
>
> real    0m7.603s
> user    0m5.362s
> sys     0m0.789s
>
> # time cat 1G.compressed.file | gzip -d > 1G.file
>
> real    0m24.833s
> user    0m4.103s
> sys     0m1.845s
>
> Here's the same commands on z15:
>
> # grep Type: /proc/sysinfo
> Type:                 8561
>
> # time dd if=/dev/zero of=1G.file bs=1G count=1
> 1+0 records in
> 1+0 records out
> 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.59126 s, 675 MB/s
>
> real    0m1.621s
> user    0m0.000s
> sys     0m1.216s
>
> # time cat 1G.file | gzip -c > 1G.compressed.file
>
> real    0m5.722s
> user    0m4.946s
> sys     0m0.510s
> # time cat 1G.compressed.file | gzip -d > 1G.file
>
> real    0m6.150s
> user    0m3.922s
> sys     0m1.290s
>
> Wow more than 10x faster on dd - was not expecting that as I didn't think
> it uses compression. But the compress with gzip -c, was only 25% faster on
> the z15 while the decompress was about 4x.
>
> Are these results expected?
>
> Thanks.
>
>
> --
>      -Mike MacIsaac
>
> ----------------------------------------------------------------------
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www2.marist.edu/htbin/wlvindex?LINUX-390
>


--
Christian Ehrhardt
Staff Engineer, Ubuntu Server
Canonical Ltd
