On Wed, Oct 27, 2021 at 06:28:51AM +0200, Markus Armbruster wrote:
> Stefano Garzarella <[email protected]> writes:
>
>> Commit d7ddd0a161 ("linux-aio: limit the batch size using
>> `aio-max-batch` parameter") added a way to limit the batch size of
>> the Linux AIO backend for the entire AIO context.
>>
>> The same AIO context can be shared by multiple devices, so
>> latency-sensitive devices may want to limit the batch size even
>> more to avoid increasing latency.
>>
>> For this reason we add the `aio-max-batch` option to the file
>> backend, which will be used by the next commits to limit the size of
>> batches including requests generated by this device.
>>
>> Suggested-by: Kevin Wolf <[email protected]>
>> Reviewed-by: Kevin Wolf <[email protected]>
>> Signed-off-by: Stefano Garzarella <[email protected]>
>> ---
>>
>> Notes:
>>     v2:
>>     - @aio-max-batch documentation rewrite [Stefan, Kevin]
>>
>>  qapi/block-core.json | 7 +++++++
>>  block/file-posix.c   | 9 +++++++++
>>  2 files changed, 16 insertions(+)
>>
>> diff --git a/qapi/block-core.json b/qapi/block-core.json
>> index 6d3217abb6..fef76b0ea2 100644
>> --- a/qapi/block-core.json
>> +++ b/qapi/block-core.json
>> @@ -2896,6 +2896,12 @@
>>  #                 for this device (default: none, forward the commands via SG_IO;
>>  #                 since 2.11)
>>  # @aio: AIO backend (default: threads) (since: 2.8)
>> +# @aio-max-batch: maximum number of requests to batch together into a single
>> +#                 submission in the AIO backend. The smallest value between
>> +#                 this and the aio-max-batch value of the IOThread object is
>> +#                 chosen.
>> +#                 0 means that the AIO backend will handle it automatically.
>> +#                 (default: 0, since 6.2)
>
> "(default 0) (since 6.2)" seems to be more common.
Indeed I wasn't sure, so I followed @drop-cache, the most recently added option in @BlockdevOptionsFile.
I'll fix in v3 :-)

Thanks,
Stefano
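
For anyone who wants to try the option once this series lands, a rough QMP sketch (node-name, filename, and the value 1 are made up for illustration; with this patch alone the value is only parsed, the per-device batching limit takes effect with the follow-up commits):

    { "execute": "blockdev-add",
      "arguments": {
          "driver": "file",
          "node-name": "disk0",
          "filename": "/var/lib/images/test.img",
          "aio": "native",
          "aio-max-batch": 1 } }

Per the documented semantics, the effective limit is the smaller of this value and the IOThread's aio-max-batch, so 1 effectively disables batching for this node, while 0 leaves the decision to the AIO backend.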
