Reviewed by: Dan Kimmel <dan.kim...@delphix.com>
Reviewed by: John Kennedy <john.kenn...@delphix.com>

Work by Stephen Blinick.

Nightly performance runs typically consist of two levels of concurrency,
and both are fairly high.

Since the IO runs are to a ZFS filesystem, within a zpool, which is
based on some variable number of vdevs, the amount of IO driven to each
device is variable. Additionally, different device types (HDD vs SSD,
etc.) can generally handle a different amount of concurrent IO before
saturating.

Nevertheless, in practice, it appears that most tests are well past the
concurrency saturation point, and therefore both levels perform with the
same throughput: the maximum of the device. Because the queue depth at
the device(s) is so high, however, the latency is much higher than the
best possible at that throughput, and it increases linearly with the
increase in concurrency.

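That linear relationship can be sketched with Little's Law (average
latency = outstanding IOs / throughput) once the device is saturated. A
minimal back-of-the-envelope sketch, assuming an illustrative 10,000
IOPS throughput ceiling that is not a measured value from these tests:

```shell
#!/bin/sh
# Illustrative only: once throughput is pinned at the device maximum,
# average latency grows linearly with queue depth (Little's Law).
iops=10000              # assumed device throughput ceiling, in IOPS
for qd in 1 8 64 256; do
    # Average latency in microseconds at this queue depth.
    lat_us=$((qd * 1000000 / iops))
    echo "queue depth $qd -> ~${lat_us} us average latency"
done
# queue depth 1 -> ~100 us average latency
# queue depth 256 -> ~25600 us average latency
```

Throughput is identical at every depth shown; only the time each IO
spends waiting in the queue changes.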
This means that changes in code that affect latency during normal
operation (before saturation) may not be apparent when a large
component of the measured latency comes from IO sitting in a queue
waiting to be serviced. Therefore, lowering the concurrency settings is
recommended.

-- Commit Summary --

  * 9076 ZFS Performance test concurrency should be lowered for better latency

-- File Changes --

    M usr/src/test/zfs-tests/include/commands.cfg (1)
    M usr/src/test/zfs-tests/tests/perf/regression/random_reads.ksh (13)
    M usr/src/test/zfs-tests/tests/perf/regression/random_readwrite.ksh (13)
    M usr/src/test/zfs-tests/tests/perf/regression/random_writes.ksh (13)
    M usr/src/test/zfs-tests/tests/perf/regression/sequential_reads.ksh (13)
    M usr/src/test/zfs-tests/tests/perf/regression/sequential_writes.ksh (13)
