Reviewed by: Paul Dagnelie <p...@delphix.com>
Reviewed by: Matthew Ahrens <mahr...@delphix.com>
Reviewed by: George Wilson <george.wil...@delphix.com>

This patch hijacks the existing 'zinject -D MSECS -d GUID' command,
slightly modifying its interface and behavior to allow a user to
set per-disk target latencies for IO requests. This is useful when
running performance tests, or making performance-critical changes,
where the normal variance of hard drives needs to be ruled out. Using
this facility, the user can configure a disk to complete each IO request
in a fixed amount of time while also configuring the number of
'lanes' of the device (i.e., the number of concurrent IO requests possible).
For example, the following will configure the disk to complete each IO
request in 10ms while only allowing a single IO request to be processed
at a time:

        $ sudo zinject -D 10:1 -d c2t1d0 tank
        Added handler 22 with the following properties:
          pool: tank
          vdev: 6c6fab9bf550c844
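
With a single lane at 10ms per IO, requests are effectively serialized,
so the device is capped at roughly 100 IOPS (1000 ms / 10 ms) no matter
how fast the underlying hardware is.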

When using a pool made up of file vdevs (perhaps backed by ramfs), the
same command applies:

        $ sudo zinject -D 10:1 -d /tmp/file-vdev-1 files-tank
        Added handler 1 with the following properties:
          pool: files-tank
          vdev: eed48fc47ba4edfd

To allow multiple IO requests to be processed concurrently, simply
use a value greater than 1 for the number of lanes. For example, the
following will configure the device to have a 10ms latency for each IO
but allow for up to 1024 IO requests to be processed concurrently:

        $ sudo zinject -D 10:1024 -d c2t1d0 tank
        Added handler 33 with the following properties:
          pool: tank
          vdev: 6c6fab9bf550c844
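
The semantics can be pictured as a small lane table: each lane records
when it next becomes idle, an incoming IO is queued on the lane that
frees up soonest, and its completion is reported at MAX(now, lane idle
time) plus the configured delay. Below is a minimal userland sketch of
that model (all names here are hypothetical; this is not the actual
zio_inject.c code):

        /*
         * Hypothetical userland model of the lane behavior; the real
         * logic lives in zio_inject.c and differs in detail.
         */
        #include <stdio.h>
        #include <stdint.h>

        #define NLANES   4    /* as if injected with 'zinject -D 10:4' */
        #define DELAY_MS 10   /* per-IO target latency */

        /* Time (ms since start) at which each lane next becomes idle. */
        static uint64_t lane_idle[NLANES];

        /*
         * Assign an IO issued at 'now' to the lane that frees up
         * soonest and return its target completion time:
         * MAX(now, lane idle time) + DELAY_MS.
         */
        static uint64_t
        io_completion_time(uint64_t now)
        {
                int best = 0;

                for (int i = 1; i < NLANES; i++) {
                        if (lane_idle[i] < lane_idle[best])
                                best = i;
                }
                if (lane_idle[best] < now)
                        lane_idle[best] = now;
                lane_idle[best] += DELAY_MS;
                return (lane_idle[best]);
        }

        int
        main(void)
        {
                /* Six IOs, all issued at t=0, on the 4-lane device. */
                for (int i = 0; i < 6; i++) {
                        printf("io %d completes at t=%llu ms\n", i,
                            (unsigned long long)io_completion_time(0));
                }
                return (0);
        }

Issued at t=0 against a 10ms, 4-lane device, the first four IOs complete
at t=10ms and the last two at t=20ms: the lane count bounds concurrency.
In general, N lanes at L ms per IO cap the device at roughly
N * 1000 / L IOPS, so the '10:1024' handler above allows up to about
102,400 IOPS.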

The command was also updated to print more information about each
registered delay when run with no arguments:

        $ sudo zinject
         ID  POOL             DELAY (ms)       LANES            GUID
        ---  ---------------  ---------------  ---------------  ----------------
          2  tank             10               1                5e9b9483b76405bd
          3  tank             11               1                5e9b9483b76405bd
          4  tank             12               1                5e9b9483b76405bd
          5  tank             13               3                5e9b9483b76405bd
          6  tank             17               7                5e9b9483b76405bd
          7  tank             21               11               5e9b9483b76405bd
          8  tank             21               113              5e9b9483b76405bd

Just as before, the delay can be removed using 'zinject -c ID', or even
'zinject -c all'.
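For example, to clear the first handler from the listing above, or
every handler at once:

        $ sudo zinject -c 2
        $ sudo zinject -c all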

You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/39

-- Commit Summary --

  * DLPX-34122 Provide mechanism to artificially limit disk performance

-- File Changes --

    M usr/src/cmd/zinject/zinject.c (111)
    M usr/src/uts/common/fs/zfs/sys/zfs_ioctl.h (1)
    M usr/src/uts/common/fs/zfs/sys/zio.h (5)
    M usr/src/uts/common/fs/zfs/vdev_disk.c (6)
    M usr/src/uts/common/fs/zfs/vdev_file.c (6)
    M usr/src/uts/common/fs/zfs/vdev_queue.c (3)
    M usr/src/uts/common/fs/zfs/zio.c (52)
    M usr/src/uts/common/fs/zfs/zio_inject.c (257)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/39.patch
https://github.com/openzfs/openzfs/pull/39.diff
