Branch: refs/heads/master
  Home:   https://github.com/qemu/qemu
  Commit: 000a41b69c3c34ae132ebc737dbe56f08fab50a9
      
https://github.com/qemu/qemu/commit/000a41b69c3c34ae132ebc737dbe56f08fab50a9
  Author: Kevin Wolf <kw...@redhat.com>
  Date:   2025-03-11 (Tue, 11 Mar 2025)

  Changed paths:
    M block/block-backend.c
    M include/system/block-backend-global-state.h

  Log Message:
  -----------
  block: Remove unused blk_op_is_blocked()

Commit fc4e394b28 removed the last caller of blk_op_is_blocked(). Remove
the now unused function.

Signed-off-by: Kevin Wolf <kw...@redhat.com>
Message-ID: <20250206165331.379033-1-kw...@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <phi...@linaro.org>
Reviewed-by: Stefan Hajnoczi <stefa...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>


  Commit: b75c5f9879166b86ed7c48b772fdcd0693e8a9a3
      
https://github.com/qemu/qemu/commit/b75c5f9879166b86ed7c48b772fdcd0693e8a9a3
  Author: Kevin Wolf <kw...@redhat.com>
  Date:   2025-03-11 (Tue, 11 Mar 2025)

  Changed paths:
    M block/snapshot.c

  Log Message:
  -----------
  block: Zero block driver state before reopening

Block drivers assume in their .bdrv_open() implementation that their
state in bs->opaque has been zeroed; it is initially allocated with
g_malloc0() in bdrv_open_driver().

bdrv_snapshot_goto() needs to make sure that it is zeroed again before
calling drv->bdrv_open(), so that block drivers don't use stale values.

One symptom of this bug is VMDK running into a double free when the user
tries to apply an internal snapshot like 'qemu-img snapshot -a test
test.vmdk'. This should be a graceful error because VMDK doesn't support
internal snapshots.

==25507== Invalid free() / delete / delete[] / realloc()
==25507==    at 0x484B347: realloc (vg_replace_malloc.c:1801)
==25507==    by 0x54B592A: g_realloc (gmem.c:171)
==25507==    by 0x1B221D: vmdk_add_extent (../block/vmdk.c:570)
==25507==    by 0x1B1084: vmdk_open_sparse (../block/vmdk.c:1059)
==25507==    by 0x1AF3D8: vmdk_open (../block/vmdk.c:1371)
==25507==    by 0x1A2AE0: bdrv_snapshot_goto (../block/snapshot.c:299)
==25507==    by 0x205C77: img_snapshot (../qemu-img.c:3500)
==25507==    by 0x58FA087: (below main) (libc_start_call_main.h:58)
==25507==  Address 0x832f3e0 is 0 bytes inside a block of size 272 free'd
==25507==    at 0x4846B83: free (vg_replace_malloc.c:989)
==25507==    by 0x54AEAC4: g_free (gmem.c:208)
==25507==    by 0x1AF629: vmdk_close (../block/vmdk.c:2889)
==25507==    by 0x1A2A9C: bdrv_snapshot_goto (../block/snapshot.c:290)
==25507==    by 0x205C77: img_snapshot (../qemu-img.c:3500)
==25507==    by 0x58FA087: (below main) (libc_start_call_main.h:58)

This error was discovered by fuzzing qemu-img.

Cc: qemu-sta...@nongnu.org
Closes: https://gitlab.com/qemu-project/qemu/-/issues/2853
Closes: https://gitlab.com/qemu-project/qemu/-/issues/2851
Reported-by: Denis Rastyogin <ger...@altlinux.org>
Signed-off-by: Kevin Wolf <kw...@redhat.com>
Message-ID: <20250310104858.28221-1-kw...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>
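
To illustrate the bug class fixed above, here is a minimal Python
analogy (not QEMU code): a driver whose open() assumes freshly zeroed
state, and a reopen path that must reset that state first.

class Driver:
    # Analogy for bs->opaque: state that open() assumes starts out
    # zeroed, as g_malloc0() guarantees in bdrv_open_driver().
    def __init__(self):
        self.extents = []

    def open(self):
        # open() extends the list, trusting that it starts empty
        self.extents.append("extent0")

    def close(self):
        self.extents.clear()

def snapshot_goto(drv):
    drv.close()
    drv.__init__()   # the fix: re-zero the state before reopening
    drv.open()       # open() now sees pristine state, not stale values

drv = Driver()
drv.open()
snapshot_goto(drv)
assert drv.extents == ["extent0"]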


  Commit: 48170c2d865a5937092b1384421b01cd38113042
      
https://github.com/qemu/qemu/commit/48170c2d865a5937092b1384421b01cd38113042
  Author: Greg Kurz <gr...@kaod.org>
  Date:   2025-03-12 (Wed, 12 Mar 2025)

  Changed paths:
    M docs/devel/build-system.rst
    M docs/devel/kconfig.rst

  Log Message:
  -----------
  docs: Rename default-configs to configs

This was missed at the time.

Fixes: 812b31d3f91 ("configs: rename default-configs to configs and reorganise")
Signed-off-by: Greg Kurz <gr...@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <phi...@linaro.org>
Message-ID: <20250306174113.427116-1-gr...@kaod.org>
Signed-off-by: Thomas Huth <th...@redhat.com>


  Commit: 533b33d04bff604863deff3f7d41396d110b57e6
      
https://github.com/qemu/qemu/commit/533b33d04bff604863deff3f7d41396d110b57e6
  Author: Cédric Le Goater <c...@redhat.com>
  Date:   2025-03-12 (Wed, 12 Mar 2025)

  Changed paths:
    M tests/functional/test_ppc64_e500.py

  Log Message:
  -----------
  tests/functional: Require 'user' netdev for ppc64 e500 test

When commit 72cdd672e18c extended the ppc64 e500 test to add network
support, it forgot to require the 'user' netdev backend. Fix that.

Fixes: 72cdd672e18c ("tests/functional: Replace the ppc64 e500 advent calendar test")
Signed-off-by: Cédric Le Goater <c...@redhat.com>
Reviewed-by: Thomas Huth <th...@redhat.com>
Acked-by: Bernhard Beschow <shen...@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <phi...@linaro.org>
Message-ID: <20250308071328.193694-1-...@redhat.com>
Signed-off-by: Thomas Huth <th...@redhat.com>
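
For reference, the fix boils down to declaring the dependency at the
top of the test, along these lines (a sketch; the class and helper
names are assumed from other functional tests, not taken from this
patch):

from qemu_test import LinuxKernelTest

class E500Test(LinuxKernelTest):
    def test_ppc64_e500(self):
        # Skip instead of failing when QEMU was built without the
        # 'user' (slirp) netdev backend.
        self.require_netdev('user')
        ...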


  Commit: 8c63f9aa3f885f06edac3d1e4f21bde939f8c517
      
https://github.com/qemu/qemu/commit/8c63f9aa3f885f06edac3d1e4f21bde939f8c517
  Author: Peter Maydell <peter.mayd...@linaro.org>
  Date:   2025-03-12 (Wed, 12 Mar 2025)

  Changed paths:
    M tests/functional/meson.build

  Log Message:
  -----------
  tests/functional: Bump up arm_replay timeout

On my machine the arm_replay test takes over 2 minutes to run
in a config with Rust enabled and debug enabled:

$ time (cd build/rust ; PYTHONPATH=../../python:../../tests/functional \
    QEMU_TEST_QEMU_BINARY=./qemu-system-arm ./pyvenv/bin/python3 \
    ../../tests/functional/test_arm_replay.py)
TAP version 13
ok 1 test_arm_replay.ArmReplay.test_cubieboard
ok 2 test_arm_replay.ArmReplay.test_vexpressa9
ok 3 test_arm_replay.ArmReplay.test_virt
1..3

real    2m16.564s
user    2m13.461s
sys     0m3.523s

Bump up the timeout to 4 minutes.

Signed-off-by: Peter Maydell <peter.mayd...@linaro.org>
Reviewed-by: Thomas Huth <th...@redhat.com>
Message-ID: <20250310102830.3752440-1-peter.mayd...@linaro.org>
Signed-off-by: Thomas Huth <th...@redhat.com>


  Commit: 15ef93dd48c2635d0e9f9e9a4f6dd92a40e23bff
      
https://github.com/qemu/qemu/commit/15ef93dd48c2635d0e9f9e9a4f6dd92a40e23bff
  Author: Thomas Huth <th...@redhat.com>
  Date:   2025-03-12 (Wed, 12 Mar 2025)

  Changed paths:
    M docs/system/arm/bananapi_m2u.rst
    M docs/system/arm/orangepi.rst
    M docs/system/devices/igb.rst

  Log Message:
  -----------
  docs/system: Fix the information on how to run certain functional tests

The tests have been converted to the functional framework, so
we should not talk about Avocado here anymore.

Fixes: f7d6b772200 ("tests/functional: Convert BananaPi tests to the functional framework")
Fixes: 380f7268b7b ("tests/functional: Convert the OrangePi tests to the functional framework")
Fixes: 4c0a2df81c9 ("tests/functional: Convert some tests that download files via fetch_asset()")
Message-ID: <20250311160847.388670-1-th...@redhat.com>
Signed-off-by: Thomas Huth <th...@redhat.com>


  Commit: a5e8299d1a119b9d757ae28a57612f633894d2f6
      
https://github.com/qemu/qemu/commit/a5e8299d1a119b9d757ae28a57612f633894d2f6
  Author: Nicholas Piggin <npig...@gmail.com>
  Date:   2025-03-12 (Wed, 12 Mar 2025)

  Changed paths:
    M tests/functional/qemu_test/asset.py

  Log Message:
  -----------
  tests/functional/asset: Fail asset fetch when retries are exceeded

Currently the fetch code does not fail gracefully when the retry limit
is exceeded; it just falls through the loop with no file, which ends up
hitting other errors.

Add a check for a non-existent file, which indicates that the retry
limit was exceeded.

Reviewed-by: Daniel P. Berrangé <berra...@redhat.com>
Signed-off-by: Nicholas Piggin <npig...@gmail.com>
Message-ID: <20250312130002.945508-2-npig...@gmail.com>
Signed-off-by: Thomas Huth <th...@redhat.com>
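
The shape of the fix looks roughly like this (an illustrative sketch
with made-up names, not the actual asset.py code):

import os
import time

def fetch_with_retries(download, dst_path, retries=3):
    for _ in range(retries):
        try:
            download(dst_path)
            break
        except OSError:
            time.sleep(1)
    # The fix: don't fall through the loop with no file; fail
    # explicitly once the retry limit is exceeded.
    if not os.path.exists(dst_path):
        raise RuntimeError("retry limit exceeded, no file at %s" % dst_path)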


  Commit: 7524e1b33679dc1356e8bb4efdd18e83fc50f5cc
      
https://github.com/qemu/qemu/commit/7524e1b33679dc1356e8bb4efdd18e83fc50f5cc
  Author: Nicholas Piggin <npig...@gmail.com>
  Date:   2025-03-12 (Wed, 12 Mar 2025)

  Changed paths:
    M tests/functional/qemu_test/asset.py

  Log Message:
  -----------
  tests/functional/asset: Verify downloaded size

If the server provides a Content-Length header, use that to verify the
size of the downloaded file. This catches cases where the connection
terminates early, and gives the opportunity to retry. Without this, the
checksum will likely mismatch and fail without retry.

Reviewed-by: Daniel P. Berrangé <berra...@redhat.com>
Signed-off-by: Nicholas Piggin <npig...@gmail.com>
Message-ID: <20250312130002.945508-3-npig...@gmail.com>
Signed-off-by: Thomas Huth <th...@redhat.com>
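
A minimal sketch of the idea using plain urllib (function names are
illustrative, not the actual asset.py code):

import urllib.request

def download_checked(url, dst_path):
    # Verify the byte count against Content-Length (when provided) so
    # a truncated transfer fails early and can be retried, instead of
    # surfacing later as a checksum mismatch.
    with urllib.request.urlopen(url) as resp:
        expected = resp.headers.get('Content-Length')
        written = 0
        with open(dst_path, 'wb') as f:
            while True:
                chunk = resp.read(1 << 16)
                if not chunk:
                    break
                f.write(chunk)
                written += len(chunk)
    if expected is not None and written != int(expected):
        raise OSError("short download: got %d of %s bytes" % (written, expected))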


  Commit: 28adad0a4d9f1f64f9f04748c6348a64ba7ad990
      
https://github.com/qemu/qemu/commit/28adad0a4d9f1f64f9f04748c6348a64ba7ad990
  Author: Nicholas Piggin <npig...@gmail.com>
  Date:   2025-03-12 (Wed, 12 Mar 2025)

  Changed paths:
    M tests/functional/qemu_test/asset.py

  Log Message:
  -----------
  tests/functional/asset: Add AssetError exception class

Assets are uniquely identified by a human-readable-ish URL, so add an
AssetError exception class that prints the URL along with the error
message.

A 'transient' property captures whether the client may retry later, as
opposed to a serious and likely permanent error. This is used to retain
the existing behaviour of treating HTTP errors other than 404 as
'transient' and not causing the precache step to fail. Additionally,
partial downloads and stale asset caches that fail to resolve after the
retry limit are now treated as transient and do not cause the precache
step to fail.

For background: The NetBSD archive is, at the time of writing, failing
with a short transfer. Retrying the fetch at that position (as wget
does) results in a "503 backend unavailable" error. We would like to get that
error code directly, but I have not found a way to do that with urllib,
so treating the short-copy as a transient failure covers that case (and
seems like a reasonable way to handle it in general).

Reviewed-by: Thomas Huth <th...@redhat.com>
Reviewed-by: Daniel P. Berrangé <berra...@redhat.com>
Signed-off-by: Nicholas Piggin <npig...@gmail.com>
Message-ID: <20250312130002.945508-4-npig...@gmail.com>
Signed-off-by: Thomas Huth <th...@redhat.com>
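
In outline, the class described above could look like this (an
illustrative sketch, not the actual asset.py code):

class AssetError(Exception):
    def __init__(self, url, msg, transient=False):
        self.url = url
        # 'transient' marks failures (HTTP errors other than 404, short
        # downloads, stale caches) that must not fail the precache step.
        self.transient = transient
        super().__init__("%s: %s" % (url, msg))

# A precache loop can then swallow transient failures:
try:
    raise AssetError("https://example.org/disk.img",
                     "503 backend unavailable", transient=True)
except AssetError as e:
    if not e.transient:
        raise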


  Commit: b3c03666fb10b4900e5bbff0a2b403731730e637
      
https://github.com/qemu/qemu/commit/b3c03666fb10b4900e5bbff0a2b403731730e637
  Author: Alex Bennée <alex.ben...@linaro.org>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M tests/functional/test_aarch64_virt_gpu.py

  Log Message:
  -----------
  tests/functional: skip vulkan test if missing vulkaninfo

I could have sworn I had this in a previous iteration of the patches,
but I guess it got lost in a rebase. As we are going to call vulkaninfo
to probe for "bad" drivers, we need to skip the test if the binary
isn't available.

Fixes: 9f7e493d11 (tests/functional: skip vulkan tests with nVidia)
Signed-off-by: Alex Bennée <alex.ben...@linaro.org>
Message-ID: <20250312190314.1632357-1-alex.ben...@linaro.org>
Signed-off-by: Thomas Huth <th...@redhat.com>
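
The check amounts to probing for the binary before the test body runs,
along these lines (a sketch; the real test may use a framework helper
instead):

import shutil
from unittest import SkipTest

def require_vulkaninfo():
    # Skip the test rather than fail it when the probe tool is absent.
    if shutil.which('vulkaninfo') is None:
        raise SkipTest('vulkaninfo not available, cannot probe drivers')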


  Commit: 984a32f17e8dab0dc3d2328c46cb3e0c0a472a73
      
https://github.com/qemu/qemu/commit/984a32f17e8dab0dc3d2328c46cb3e0c0a472a73
  Author: Kevin Wolf <kw...@redhat.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M block/file-posix.c
    M block/io_uring.c
    M block/linux-aio.c
    M include/block/raw-aio.h
    M meson.build

  Log Message:
  -----------
  file-posix: Support FUA writes

Until now, FUA was always emulated with a separate flush after the write
for file-posix. The overhead of processing a second request can reduce
performance significantly for a guest disk that has disabled the write
cache, especially if the host disk is already write through, too, and
the flush isn't actually doing anything.

Advertise support for REQ_FUA in write requests and implement it for
Linux AIO and io_uring using the RWF_DSYNC flag for write requests. The
thread pool still performs a separate fdatasync() call. This can be
improved later by using the pwritev2() syscall if available.

As an example, this is how fio numbers can be improved in some scenarios
with this patch (all using virtio-blk with cache=directsync on an nvme
block device for the VM, fio with ioengine=libaio,direct=1,sync=1):

                              | old           | with FUA support
------------------------------+---------------+-------------------
bs=4k, iodepth=1, numjobs=1   |  45.6k iops   |  56.1k iops
bs=4k, iodepth=1, numjobs=16  | 183.3k iops   | 236.0k iops
bs=4k, iodepth=16, numjobs=1  | 258.4k iops   | 311.1k iops

However, not all scenarios are clear wins. On another, slower disk I
saw little to no improvement. In fact, in two corner-case scenarios I
even observed a regression, which I nevertheless consider acceptable:

1. On slow host disks in a write through cache mode, when the guest is
   using virtio-blk in a separate iothread so that polling can be
   enabled, and each completion is quickly followed up with a new
   request (so that polling gets it), it can happen that enabling FUA
   makes things slower - the additional very fast no-op flush we used to
   have gave the adaptive polling algorithm a success so that it kept
   polling. Without it, we only have the slow write request, which
   disables polling. This is a problem in the polling algorithm that
   will be fixed later in this series.

2. With a high queue depth, it can be beneficial to have flush requests
   for another reason: The optimisation in bdrv_co_flush() that flushes
   only once per write generation acts as a synchronisation mechanism
   that lets all requests complete at the same time. This can result in
   better batching and if the disk is very fast (I only saw this with a
   null_blk backend), this can make up for the overhead of the flush and
   improve throughput. In theory, we could optionally introduce a
   similar artificial latency in the normal completion path to achieve
   the same kind of completion batching. This is not implemented in this
   series.

Compatibility is not a concern for the kernel side of io_uring, it has
supported RWF_DSYNC from the start. However, io_uring_prep_writev2() is
not available before liburing 2.2.

Linux AIO started supporting it in Linux 4.13 and libaio 0.3.111. The
kernel is not a problem for any supported build platform, so it's not
necessary to add runtime checks. However, openSUSE is still stuck with
an older libaio version that would break the build.

We must detect the presence of the writev2 functions in the user space
libraries at build time to avoid build failures.

Signed-off-by: Kevin Wolf <kw...@redhat.com>
Message-ID: <20250307221634.71951-2-kw...@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefa...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>
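
The difference between emulated and native FUA can be sketched with
Python's bindings for the same Linux primitives (a sketch of the I/O
pattern, not QEMU code; os.RWF_DSYNC needs Linux 4.7+ and Python 3.7+):

import os

def write_fua_emulated(fd, data, offset):
    # Old behaviour: FUA emulated as a write plus a separate flush,
    # i.e. two requests per guest write.
    os.pwritev(fd, [data], offset)
    os.fdatasync(fd)

def write_fua_native(fd, data, offset):
    # New behaviour for Linux AIO and io_uring: a single request
    # carrying RWF_DSYNC, the equivalent of pwritev2() with RWF_DSYNC.
    os.pwritev(fd, [data], offset, os.RWF_DSYNC)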


  Commit: 2f3b6e61f692bade441230dd25c1c0f101bd2eef
      
https://github.com/qemu/qemu/commit/2f3b6e61f692bade441230dd25c1c0f101bd2eef
  Author: Kevin Wolf <kw...@redhat.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M block/io.c

  Log Message:
  -----------
  block/io: Ignore FUA with cache.no-flush=on

For block drivers that don't advertise FUA support, we already call
bdrv_co_flush(), which considers BDRV_O_NO_FLUSH. However, drivers that
do support FUA still see the FUA flag with BDRV_O_NO_FLUSH and get the
associated performance penalty that cache.no-flush=on was supposed to
avoid.

Clear FUA for write requests if BDRV_O_NO_FLUSH is set.

Signed-off-by: Kevin Wolf <kw...@redhat.com>
Message-ID: <20250307221634.71951-3-kw...@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefa...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>
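
The fix is essentially one line of flag masking; as a sketch (the
constants here are illustrative, not QEMU's actual values):

BDRV_O_NO_FLUSH = 0x1   # illustrative values only
BDRV_REQ_FUA    = 0x2

def write_flags(open_flags, req_flags):
    # With cache.no-flush=on, strip FUA before the driver sees it,
    # mirroring how bdrv_co_flush() already turns flushes into no-ops.
    if open_flags & BDRV_O_NO_FLUSH:
        req_flags &= ~BDRV_REQ_FUA
    return req_flags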


  Commit: 518db1013cb0384dc19134585f227dbb7bf65e39
      
https://github.com/qemu/qemu/commit/518db1013cb0384dc19134585f227dbb7bf65e39
  Author: Kevin Wolf <kw...@redhat.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M include/block/aio.h
    M util/aio-posix.c
    M util/async.c

  Log Message:
  -----------
  aio: Create AioPolledEvent

As a preparation for having multiple adaptive polling states per
AioContext, move the 'ns' field into a separate struct.

Signed-off-by: Kevin Wolf <kw...@redhat.com>
Message-ID: <20250307221634.71951-4-kw...@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefa...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>


  Commit: cf2e226fc654072acc185c5d7fb1ff77774f4563
      
https://github.com/qemu/qemu/commit/cf2e226fc654072acc185c5d7fb1ff77774f4563
  Author: Kevin Wolf <kw...@redhat.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M util/aio-posix.c

  Log Message:
  -----------
  aio-posix: Factor out adjust_polling_time()

Signed-off-by: Kevin Wolf <kw...@redhat.com>
Message-ID: <20250307221634.71951-5-kw...@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefa...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>


  Commit: ee416407b3c0f45253779e98404acb41231a9279
      
https://github.com/qemu/qemu/commit/ee416407b3c0f45253779e98404acb41231a9279
  Author: Kevin Wolf <kw...@redhat.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M include/block/aio.h
    M util/aio-posix.c
    M util/aio-posix.h
    M util/async.c

  Log Message:
  -----------
  aio-posix: Separate AioPolledEvent per AioHandler

Adaptive polling has a big problem: It doesn't consider that an event
loop can wait for many different events that may have very different
typical latencies.

For example, think of a guest that tends to send a new I/O request soon
after the previous I/O request completes, but the storage on the host is
rather slow. In this case, getting the new request from guest quickly
means that polling is enabled, but the next thing is performing the I/O
request on the backend, which is slow and disables polling again for the
next guest request. This means that in such a scenario, polling could
help for every other event, but is only ever enabled when it can't
succeed.

In order to fix this, keep a separate AioPolledEvent for each
AioHandler. We will then know that the backend file descriptor always
has a high latency and isn't worth polling for, but we also know that
the guest is always fast and we should poll for it. This solves at least
half of the problem, we can now keep polling for those cases where it
makes sense and get the improved performance from it.

Since the event loop doesn't know which event will be next, we still do
some unnecessary polling while we're waiting for the slow disk. I made
some attempts to be more clever than just randomly growing and shrinking
the polling time, and even to let callers be explicit about when they
expect a new event, but so far this hasn't resulted in improved
performance or even caused performance regressions. For now, let's just
fix the part that is easy enough to fix, we can revisit the rest later.

Signed-off-by: Kevin Wolf <kw...@redhat.com>
Message-ID: <20250307221634.71951-6-kw...@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefa...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>
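
The core idea can be sketched as follows (illustrative Python, not
QEMU's aio-posix code): each handler keeps its own polling window that
grows on successful polls and shrinks on misses, so a fast event source
keeps polling enabled even while a slow one never earns it.

class PolledEvent:
    def __init__(self, max_ns=32000):
        self.ns = 0
        self.max_ns = max_ns

    def adjust(self, poll_succeeded):
        if poll_succeeded:
            self.ns = min(max(self.ns * 2, 4000), self.max_ns)  # grow
        else:
            self.ns //= 2                                       # shrink

# One state per handler instead of one per AioContext: the disk fd's
# constant misses no longer reset the virtqueue's polling time.
poll_state = {'virtqueue_kick': PolledEvent(), 'disk_fd': PolledEvent()}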


  Commit: f76d3bee754a2f8d73373d5959dc983169a93eee
      
https://github.com/qemu/qemu/commit/f76d3bee754a2f8d73373d5959dc983169a93eee
  Author: Kevin Wolf <kw...@redhat.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M util/aio-posix.c

  Log Message:
  -----------
  aio-posix: Adjust polling time also for new handlers

aio_dispatch_handler() adds handlers to ctx->poll_aio_handlers if
polling should be enabled. If we call adjust_polling_time() for all
polling handlers before this, new polling handlers are still left at
poll->ns = 0 and polling is only actually enabled after the next event.
Move the adjust_polling_time() call after aio_dispatch_handler().

This fixes test-nested-aio-poll, which expects that polling becomes
effective the first time around.

Signed-off-by: Kevin Wolf <kw...@redhat.com>
Message-ID: <20250311141912.135657-1-kw...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>


  Commit: 71e1369bad01d441113ede02334175647275652d
      
https://github.com/qemu/qemu/commit/71e1369bad01d441113ede02334175647275652d
  Author: Thomas Huth <th...@redhat.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M tests/qemu-iotests/tests/qsd-migrate

  Log Message:
  -----------
  iotests: Limit qsd-migrate to working formats

qsd-migrate currently only works with raw, qcow2 and qed. Other
formats fail, e.g. because they don't support migration. Thus, let's
limit this test to the three usable formats for now.

Suggested-by: Kevin Wolf <kw...@redhat.com>
Signed-off-by: Thomas Huth <th...@redhat.com>
Message-ID: <20250224214058.205889-1-th...@redhat.com>
Reviewed-by: Kevin Wolf <kw...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>


  Commit: b2e3659d0d769c84f5b15239a93a722c8012bffa
      
https://github.com/qemu/qemu/commit/b2e3659d0d769c84f5b15239a93a722c8012bffa
  Author: Stefan Hajnoczi <stefa...@redhat.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M hw/scsi/scsi-disk.c

  Log Message:
  -----------
  scsi-disk: drop unused SCSIDiskState->bh field

Commit 71544d30a6f8 ("scsi: push request restart to SCSIDevice") removed
the only user of SCSIDiskState->bh.

Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <phi...@linaro.org>
Reviewed-by: Kevin Wolf <kw...@redhat.com>
Message-ID: <20250311132616.1049687-2-stefa...@redhat.com>
Tested-by: Peter Krempa <pkre...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>


  Commit: a89c3c9b2cc4107658c7260ecf329d869888fd51
      
https://github.com/qemu/qemu/commit/a89c3c9b2cc4107658c7260ecf329d869888fd51
  Author: Stefan Hajnoczi <stefa...@redhat.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M hw/ide/core.c
    M hw/ide/macio.c
    M hw/scsi/scsi-disk.c
    M include/system/dma.h
    M system/dma-helpers.c

  Log Message:
  -----------
  dma: use current AioContext for dma_blk_io()

In the past a single AioContext was used for block I/O and it was
fetched using blk_get_aio_context(). Nowadays the block layer supports
running I/O from any AioContext and multiple AioContexts at the same
time. Remove the dma_blk_io() AioContext argument and use the current
AioContext instead.

This makes calling the function easier and enables multiple IOThreads to
use dma_blk_io() concurrently for the same block device.

Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
Reviewed-by: Kevin Wolf <kw...@redhat.com>
Message-ID: <20250311132616.1049687-3-stefa...@redhat.com>
Tested-by: Peter Krempa <pkre...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>


  Commit: 7eecba37788f48d34c015954f1207cc7b52728f5
      
https://github.com/qemu/qemu/commit/7eecba37788f48d34c015954f1207cc7b52728f5
  Author: Stefan Hajnoczi <stefa...@redhat.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M hw/scsi/scsi-bus.c
    M hw/scsi/scsi-disk.c
    M include/hw/scsi/scsi.h

  Log Message:
  -----------
  scsi: track per-SCSIRequest AioContext

Until now, a SCSIDevice's I/O requests have run in a single AioContext.
In order to support multiple IOThreads it will be necessary to move to
the concept of a per-SCSIRequest AioContext.

Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
Reviewed-by: Kevin Wolf <kw...@redhat.com>
Message-ID: <20250311132616.1049687-4-stefa...@redhat.com>
Tested-by: Peter Krempa <pkre...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>


  Commit: 1cf18cc9bf5e9f88ad92f89886652e0361e2f41f
      
https://github.com/qemu/qemu/commit/1cf18cc9bf5e9f88ad92f89886652e0361e2f41f
  Author: Stefan Hajnoczi <stefa...@redhat.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M hw/scsi/scsi-bus.c
    M include/hw/scsi/scsi.h

  Log Message:
  -----------
  scsi: introduce requests_lock

SCSIDevice keeps track of in-flight requests for device reset and Task
Management Functions (TMFs). The request list requires protection so
that multi-threaded SCSI emulation can be implemented in commits that
follow.

Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
Reviewed-by: Kevin Wolf <kw...@redhat.com>
Message-ID: <20250311132616.1049687-5-stefa...@redhat.com>
Tested-by: Peter Krempa <pkre...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>
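
As a rough analogy in Python (threading.Lock standing in for the
QemuMutex protecting the list):

import threading

class SCSIDeviceLike:
    def __init__(self):
        self.requests_lock = threading.Lock()
        self.requests = []           # in-flight requests

    def req_enqueue(self, req):
        with self.requests_lock:
            self.requests.append(req)

    def device_reset(self):
        # Snapshot the list under the lock, then cancel outside it so
        # completion callbacks can re-take the lock safely.
        with self.requests_lock:
            in_flight, self.requests = self.requests, []
        for req in in_flight:
            req.cancel()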


  Commit: b348ca2e043c0f7c9ecc1bbbd7dd87db47887e9f
      
https://github.com/qemu/qemu/commit/b348ca2e043c0f7c9ecc1bbbd7dd87db47887e9f
  Author: Stefan Hajnoczi <stefa...@redhat.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M hw/scsi/virtio-scsi.c
    M include/hw/virtio/virtio-scsi.h

  Log Message:
  -----------
  virtio-scsi: introduce event and ctrl virtqueue locks

Virtqueues are not thread-safe. Until now this was not a major issue
since all virtqueue processing happened in the same thread. The ctrl
queue's Task Management Function (TMF) requests sometimes need the main
loop, so a BH was used to schedule the virtqueue completion back in the
thread that has virtqueue access.

When IOThread Virtqueue Mapping is introduced in later commits, event
and ctrl virtqueue accesses from other threads will become necessary.
Introduce an optional per-virtqueue lock so the event and ctrl
virtqueues can be protected in the commits that follow.

The addition of the ctrl virtqueue lock makes
virtio_scsi_complete_req_from_main_loop() and its BH unnecessary.
Instead, take the ctrl virtqueue lock from the main loop thread.

The cmd virtqueue does not have a lock because the entirety of SCSI
command processing happens in one thread. Only one thread accesses the
cmd virtqueue and a lock is unnecessary.

Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
Reviewed-by: Kevin Wolf <kw...@redhat.com>
Message-ID: <20250311132616.1049687-6-stefa...@redhat.com>
Tested-by: Peter Krempa <pkre...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>


  Commit: 7d8ab5b2f77d84a21dbeb5b254e26320e1943af4
      
https://github.com/qemu/qemu/commit/7d8ab5b2f77d84a21dbeb5b254e26320e1943af4
  Author: Stefan Hajnoczi <stefa...@redhat.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M hw/scsi/virtio-scsi.c
    M include/hw/virtio/virtio-scsi.h

  Log Message:
  -----------
  virtio-scsi: protect events_dropped field

The block layer can invoke the resize callback from any AioContext that
is processing requests. The virtqueue is already protected but the
events_dropped field also needs to be protected against races. Cover it
using the event virtqueue lock because it is closely associated with
accesses to the virtqueue.

Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
Reviewed-by: Kevin Wolf <kw...@redhat.com>
Message-ID: <20250311132616.1049687-7-stefa...@redhat.com>
Tested-by: Peter Krempa <pkre...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>


  Commit: da6eebb33b08131d2dc7c2594f0998012fe69e2f
      
https://github.com/qemu/qemu/commit/da6eebb33b08131d2dc7c2594f0998012fe69e2f
  Author: Stefan Hajnoczi <stefa...@redhat.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M hw/scsi/virtio-scsi.c

  Log Message:
  -----------
  virtio-scsi: perform TMFs in appropriate AioContexts

With IOThread Virtqueue Mapping there will be multiple AioContexts
processing SCSI requests. scsi_req_cancel() and other SCSI request
operations must be performed from the AioContext where the request is
running.

Introduce a virtio_scsi_defer_tmf_to_aio_context() function and the
necessary VirtIOSCSIReq->remaining refcount infrastructure to move the
TMF code into the AioContext where the request is running.

For the time being there is still just one AioContext: the main loop or
the IOThread. When the iothread-vq-mapping parameter is added in a later
patch this will be changed to per-virtqueue AioContexts.

Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
Reviewed-by: Kevin Wolf <kw...@redhat.com>
Message-ID: <20250311132616.1049687-8-stefa...@redhat.com>
Tested-by: Peter Krempa <pkre...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>
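
The 'remaining' refcount pattern, sketched in Python (an analogy, not
the virtio-scsi code): the TMF fans out to every AioContext that may
hold matching requests and completes only when the last one reports
back.

import threading

class TMF:
    def __init__(self, num_contexts, complete_cb):
        self.lock = threading.Lock()
        self.remaining = num_contexts
        self.complete_cb = complete_cb

    def context_done(self):
        # Called from each AioContext once its cancellations finish.
        with self.lock:
            self.remaining -= 1
            last = self.remaining == 0
        if last:
            self.complete_cb()   # report the TMF result exactly once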


  Commit: 366b5811d6170f4ed74329a60dc77b1633e13798
      
https://github.com/qemu/qemu/commit/366b5811d6170f4ed74329a60dc77b1633e13798
  Author: Stefan Hajnoczi <stefa...@redhat.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M hw/block/virtio-blk.c

  Log Message:
  -----------
  virtio-blk: extract cleanup_iothread_vq_mapping() function

This is the cleanup function that must be called after
apply_iothread_vq_mapping() succeeds. virtio-scsi will need this
function too, so extract it.

Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
Reviewed-by: Kevin Wolf <kw...@redhat.com>
Message-ID: <20250311132616.1049687-9-stefa...@redhat.com>
Tested-by: Peter Krempa <pkre...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>


  Commit: 2fa67a7b1d7957dc0cc482136ba58c460463ecb6
      
https://github.com/qemu/qemu/commit/2fa67a7b1d7957dc0cc482136ba58c460463ecb6
  Author: Stefan Hajnoczi <stefa...@redhat.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M hw/block/virtio-blk.c

  Log Message:
  -----------
  virtio-blk: tidy up iothread_vq_mapping functions

Use noun_verb() function naming instead of verb_noun() because the
former is the most common naming style for APIs. The next commit will
move these functions into a header file so that virtio-scsi can call
them.

Shorten iothread_vq_mapping_apply()'s iothread_vq_mapping_list argument
to just "list" like in the other functions.

Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
Reviewed-by: Kevin Wolf <kw...@redhat.com>
Message-ID: <20250311132616.1049687-10-stefa...@redhat.com>
Tested-by: Peter Krempa <pkre...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>


  Commit: b50629c335804e193b51936867d6cb7ea3735d72
      
https://github.com/qemu/qemu/commit/b50629c335804e193b51936867d6cb7ea3735d72
  Author: Stefan Hajnoczi <stefa...@redhat.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M hw/block/virtio-blk.c
    A hw/virtio/iothread-vq-mapping.c
    M hw/virtio/meson.build
    A include/hw/virtio/iothread-vq-mapping.h

  Log Message:
  -----------
  virtio: extract iothread-vq-mapping.h API

The code that builds an array of AioContext pointers indexed by the
virtqueue is not specific to virtio-blk. virtio-scsi will need to do the
same thing, so extract the functions.

Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
Reviewed-by: Kevin Wolf <kw...@redhat.com>
Message-ID: <20250311132616.1049687-11-stefa...@redhat.com>
Tested-by: Peter Krempa <pkre...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>
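
The job of the extracted helpers can be sketched like this
(illustrative Python; the round-robin fallback follows the documented
virtio-blk behaviour when no explicit 'vqs' lists are given):

def build_vq_context_map(num_vqs, mapping):
    # mapping: a list of {'iothread': name[, 'vqs': [indices]]} entries,
    # mirroring the iothread-vq-mapping parameter.
    contexts = [None] * num_vqs
    if any('vqs' in m for m in mapping):
        for m in mapping:
            for vq in m['vqs']:
                contexts[vq] = m['iothread']
    else:
        # No explicit assignment: spread virtqueues round-robin.
        for vq in range(num_vqs):
            contexts[vq] = mapping[vq % len(mapping)]['iothread']
    return contexts

assert build_vq_context_map(4, [{'iothread': 'a'}, {'iothread': 'b'}]) \
       == ['a', 'b', 'a', 'b']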


  Commit: 2e8e18c2e46307a355e547129b5a7a7000a0cf0d
      
https://github.com/qemu/qemu/commit/2e8e18c2e46307a355e547129b5a7a7000a0cf0d
  Author: Stefan Hajnoczi <stefa...@redhat.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M hw/scsi/virtio-scsi-dataplane.c
    M hw/scsi/virtio-scsi.c
    M include/hw/virtio/virtio-scsi.h
    M tests/qemu-iotests/051.pc.out

  Log Message:
  -----------
  virtio-scsi: add iothread-vq-mapping parameter

Allow virtio-scsi virtqueues to be assigned to different IOThreads. This
makes it possible to take advantage of host multi-queue block layer
scalability by assigning virtqueues that have affinity with vCPUs to
different IOThreads that have affinity with host CPUs. The same feature
was introduced for virtio-blk in the past:
https://developers.redhat.com/articles/2024/09/05/scaling-virtio-blk-disk-io-iothread-virtqueue-mapping

Here are fio randread 4k iodepth=64 results from a 4 vCPU guest with an
Intel P4800X SSD:
iothreads IOPS
------------------------------
1         189576
2         312698
4         346744

Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
Message-ID: <20250311132616.1049687-12-stefa...@redhat.com>
Tested-by: Peter Krempa <pkre...@redhat.com>
[kwolf: Updated 051 output, virtio-scsi can now use any iothread]
Reviewed-by: Kevin Wolf <kw...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>


  Commit: bcede51d2d1ae03f99ccb2569e52b5062033d40d
      
https://github.com/qemu/qemu/commit/bcede51d2d1ae03f99ccb2569e52b5062033d40d
  Author: Stefan Hajnoczi <stefa...@redhat.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M hw/scsi/virtio-scsi-dataplane.c
    M hw/scsi/virtio-scsi.c
    M include/hw/virtio/virtio-scsi.h

  Log Message:
  -----------
  virtio-scsi: handle ctrl virtqueue in main loop

Previously the ctrl virtqueue was handled in the AioContext where SCSI
requests are processed. When IOThread Virtqueue Mapping was added,
things became more complicated because SCSI requests could run in other
AioContexts.

Simplify by handling the ctrl virtqueue in the main loop, where reset
operations can be performed. Note that BHs are still used for canceling
SCSI requests in their AioContexts, but at least the main loop activity
doesn't need BHs anymore.

Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
Message-ID: <20250311132616.1049687-13-stefa...@redhat.com>
Tested-by: Peter Krempa <pkre...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>


  Commit: 40aa38a651a8d4ca99c70e591176a97abcae5295
      
https://github.com/qemu/qemu/commit/40aa38a651a8d4ca99c70e591176a97abcae5295
  Author: Stefan Hajnoczi <stefa...@redhat.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    M hw/scsi/virtio-scsi-dataplane.c

  Log Message:
  -----------
  virtio-scsi: only expose cmd vqs via iothread-vq-mapping

Peter Krempa and Kevin Wolf observed that iothread-vq-mapping is
confusing to use because the control and event virtqueues have a fixed
location before the command virtqueues but need to be treated
differently.

Only expose the command virtqueues via iothread-vq-mapping so that the
command-line parameter is intuitive: it controls where SCSI requests are
processed.

The control virtqueue needs to be hardcoded to the main loop thread for
technical reasons anyway. Kevin also pointed out that it's better to
place the event virtqueue in the main loop thread since its no_poll
behavior would prevent polling if assigned to an IOThread.

This change is its own commit to avoid squashing the previous commit.

Suggested-by: Kevin Wolf <kw...@redhat.com>
Suggested-by: Peter Krempa <pkre...@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
Message-ID: <20250311132616.1049687-14-stefa...@redhat.com>
Tested-by: Peter Krempa <pkre...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>


  Commit: df957115c46845e2c0ccc29ac0a75eb9700a9a0d
      
https://github.com/qemu/qemu/commit/df957115c46845e2c0ccc29ac0a75eb9700a9a0d
  Author: Alberto Garcia <be...@igalia.com>
  Date:   2025-03-13 (Thu, 13 Mar 2025)

  Changed paths:
    A scripts/qcow2-to-stdout.py

  Log Message:
  -----------
  scripts/qcow2-to-stdout.py: Add script to write qcow2 images to stdout

This tool converts a disk image to qcow2, writing the result directly
to stdout. This can be used for example to send the generated file
over the network.

This is equivalent to using qemu-img to convert a file to qcow2 and
then writing the result to stdout, with the difference that this tool
does not need to create this temporary qcow2 file and therefore does
not need any additional disk space.

Implementing this directly in qemu-img is not really an option because
it expects the output file to be seekable and it is also meant to be a
generic tool that supports all combinations of file formats and image
options. Instead, this tool can only produce qcow2 files with the
basic options, without compression, encryption or other features.

The input file is read twice. The first pass determines which clusters
contain non-zero data, and that information is used to create the qcow2
header, refcount table and blocks, and L1 and L2 tables. Once all that
metadata is created, the second pass writes the guest data.

By default qcow2-to-stdout.py expects the input to be a raw file, but
if qemu-storage-daemon is available then it can also be used to read
images in other formats. Alternatively the user can also run qemu-nbd
or qemu-storage-daemon manually instead.

Signed-off-by: Alberto Garcia <be...@igalia.com>
Signed-off-by: Madeeha Javed <ja...@igalia.com>
Message-ID: <20240730141552.60404-1-be...@igalia.com>
Reviewed-by: Kevin Wolf <kw...@redhat.com>
Signed-off-by: Kevin Wolf <kw...@redhat.com>
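
The two-pass structure can be sketched as follows (illustrative Python;
the real qcow2 metadata layout is omitted):

CLUSTER_SIZE = 64 * 1024

def scan_nonzero_clusters(path):
    # Pass 1: record which clusters contain non-zero data, so the
    # header, refcount structures and L1/L2 tables can be sized and
    # emitted before any guest data.
    allocated = []
    with open(path, 'rb') as f:
        idx = 0
        while True:
            cluster = f.read(CLUSTER_SIZE)
            if not cluster:
                break
            if any(cluster):
                allocated.append(idx)
            idx += 1
    return allocated

def stream_guest_data(path, out, allocated):
    # Pass 2: after the metadata (omitted here), stream the data
    # clusters in order, so the output never needs to seek.
    with open(path, 'rb') as f:
        for idx in allocated:
            f.seek(idx * CLUSTER_SIZE)
            out.write(f.read(CLUSTER_SIZE))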


  Commit: 28ea66f6f9856c398afa75f2cabb1f21c8b04208
      
https://github.com/qemu/qemu/commit/28ea66f6f9856c398afa75f2cabb1f21c8b04208
  Author: Stefan Hajnoczi <stefa...@redhat.com>
  Date:   2025-03-14 (Fri, 14 Mar 2025)

  Changed paths:
    M docs/devel/build-system.rst
    M docs/devel/kconfig.rst
    M docs/system/arm/bananapi_m2u.rst
    M docs/system/arm/orangepi.rst
    M docs/system/devices/igb.rst
    M tests/functional/meson.build
    M tests/functional/qemu_test/asset.py
    M tests/functional/test_aarch64_virt_gpu.py
    M tests/functional/test_ppc64_e500.py

  Log Message:
  -----------
  Merge tag 'pull-request-2025-03-13' of https://gitlab.com/thuth/qemu into staging

* Various fixes for functional tests
* Fix the name of the "configs" directory in the documentation

# -----BEGIN PGP SIGNATURE-----
#
# iQJFBAABCAAvFiEEJ7iIR+7gJQEY8+q5LtnXdP5wLbUFAmfSjagRHHRodXRoQHJl
# ZGhhdC5jb20ACgkQLtnXdP5wLbWBmA//RhAHuF/fTmQagBsZPETXjU1g8ifw9aqm
# WPZcQEXyQFlqYYQZmtV7dk3aTGEw4kBDmm+SKTSQz1yUcBGptMl8xuWaxgdpcOw0
# Bqt+lYNgwGL9/OocCdNolU3+aVbETljr5l+rzbnwsTVIqGk63Qhmtwdupb8h1nfY
# 4vCXU+sY3BkvBF8HbV6Wb1aPtqC+iH/Ln8+yoKkC8UePD623dK58SsOVrhUQDfFr
# U/HUy4BZlHFCfGGmDVGBjHdEbOzQkLQ9N3ilsNSWcF87RPkWPft+qLs4RjDFW+oT
# oksXEFHcr8XQO03fwHBNTyv+NUfnrvDY8V+gl6C9ItQr58SZzse57caZKWrYppZ3
# l5iHoaLMV3juZFDNXNHkWHuveXi05+0V0UbZihzBeC4+zjNRyh3e1GuDoh5VoG8o
# XIb55RxU8eBG2/ulHZ71eAYrGpxO+tDdsdnak1coPFsU8HrC9QzRfywiAZe1Wwmx
# 5t5AHbZ7RdnxgStU1lWTUT2IDVSini4DKevt/FzhKkv1aD8NbhI/ooGDC0zbS6SU
# XK6PP2G5a5OnjQ904oRCQbnhrxFa5qNfryylvvreT2bVgX0BiE4pJ9JXdgQOMYlP
# kZERZZQcv3y6VVavAT67yeNKQpyb4HSHdTDQ2irgXP1UwHRpwLpKdqB1UhzNJ8m8
# k0faA8RXir4=
# =VtGZ
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 13 Mar 2025 15:47:52 HKT
# gpg:                using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg:                issuer "th...@redhat.com"
# gpg: Good signature from "Thomas Huth <th.h...@gmx.de>" [full]
# gpg:                 aka "Thomas Huth <th...@redhat.com>" [full]
# gpg:                 aka "Thomas Huth <h...@tuxfamily.org>" [full]
# gpg:                 aka "Thomas Huth <th.h...@posteo.de>" [unknown]
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3  EAB9 2ED9 D774 FE70 2DB5

* tag 'pull-request-2025-03-13' of https://gitlab.com/thuth/qemu:
  tests/functional: skip vulkan test if missing vulkaninfo
  tests/functional/asset: Add AssetError exception class
  tests/functional/asset: Verify downloaded size
  tests/functional/asset: Fail asset fetch when retries are exceeded
  docs/system: Fix the information on how to run certain functional tests
  tests/functional: Bump up arm_replay timeout
  tests/functional: Require 'user' netdev for ppc64 e500 test
  docs: Rename default-configs to configs

Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>


  Commit: 0462a32b4f63b2448b4a196381138afd50719dc4
      
https://github.com/qemu/qemu/commit/0462a32b4f63b2448b4a196381138afd50719dc4
  Author: Stefan Hajnoczi <stefa...@redhat.com>
  Date:   2025-03-14 (Fri, 14 Mar 2025)

  Changed paths:
    M block/block-backend.c
    M block/file-posix.c
    M block/io.c
    M block/io_uring.c
    M block/linux-aio.c
    M block/snapshot.c
    M hw/block/virtio-blk.c
    M hw/ide/core.c
    M hw/ide/macio.c
    M hw/scsi/scsi-bus.c
    M hw/scsi/scsi-disk.c
    M hw/scsi/virtio-scsi-dataplane.c
    M hw/scsi/virtio-scsi.c
    A hw/virtio/iothread-vq-mapping.c
    M hw/virtio/meson.build
    M include/block/aio.h
    M include/block/raw-aio.h
    M include/hw/scsi/scsi.h
    A include/hw/virtio/iothread-vq-mapping.h
    M include/hw/virtio/virtio-scsi.h
    M include/system/block-backend-global-state.h
    M include/system/dma.h
    M meson.build
    A scripts/qcow2-to-stdout.py
    M system/dma-helpers.c
    M tests/qemu-iotests/051.pc.out
    M tests/qemu-iotests/tests/qsd-migrate
    M util/aio-posix.c
    M util/aio-posix.h
    M util/async.c

  Log Message:
  -----------
  Merge tag 'for-upstream' of https://repo.or.cz/qemu/kevin into staging

Block layer patches

- virtio-scsi: add iothread-vq-mapping parameter
- Improve writethrough performance
- Fix missing zero init in bdrv_snapshot_goto()
- Added scripts/qcow2-to-stdout.py
- Code cleanup and iotests fixes

# -----BEGIN PGP SIGNATURE-----
#
# iQJFBAABCAAvFiEE3D3rFZqa+V09dFb+fwmycsiPL9YFAmfTDysRHGt3b2xmQHJl
# ZGhhdC5jb20ACgkQfwmycsiPL9Yz6A//asOl37zjbtf9pYjY/gliH859TQOppPGD
# LB9IIr+nTDME0wfUkCOlag+CeEYZwkeo2PF+XeopsyzlJeBOk4tL7AkY57XYe3lZ
# M5hlnNrn6l3gb6iioMg60pEKSMrpKprB16vT3nAtyN6aEXsm9TvtPkWPFTCFGVeK
# W74VCr7wuXbfdEJcOGd8WhB9ZHIgwoWYnoL41tvCoefW2yNaMA6X0TLn98toXzOi
# il50ZnnchTQngns5R+n+1R1Ma995t393D+CArQcYVRzxKGOs5p0y4otz4gCkMhdp
# GVL09R7Ge4TteSJ2myxlN/EjYOxmdoMrVDajr4xPdHBw12MKzgk8i82h4/Es/Q5o
# 3Npgx74+jDyqlICb/czTVM5KJINpyO80vO3N3WpYUOQGyTCcYgv7pIpy8pB2o6Te
# RPlv0W9bHVSSgThFFLQ0Ud8WRGJe1K/ar8bdmiWN08Wez1avENWaYmsv5zGnFL24
# vD6cNXMR4mF7mzyeWda/5hGKv75djVgX+ZfzvWNT3qgizD56JBOA3RdCRwBZJOJb
# TvJkfi5RGyaji9BfKVCYBL3/iDELJEVDW8jxvIIUrS0aPcTHpAQ5gTO7VAokreqZ
# 5Smll11eeoEgPPvNLw8ikmOGTWOMkJGrmExP2K1ApANq3kSbBSU4jroEr0BG9PZT
# 6Y0hUdtFSdU=
# =w2Ri
# -----END PGP SIGNATURE-----
# gpg: Signature made Fri 14 Mar 2025 01:00:27 HKT
# gpg:                using RSA key DC3DEB159A9AF95D3D7456FE7F09B272C88F2FD6
# gpg:                issuer "kw...@redhat.com"
# gpg: Good signature from "Kevin Wolf <kw...@redhat.com>" [full]
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74  56FE 7F09 B272 C88F 2FD6

* tag 'for-upstream' of https://repo.or.cz/qemu/kevin: (23 commits)
  scripts/qcow2-to-stdout.py: Add script to write qcow2 images to stdout
  virtio-scsi: only expose cmd vqs via iothread-vq-mapping
  virtio-scsi: handle ctrl virtqueue in main loop
  virtio-scsi: add iothread-vq-mapping parameter
  virtio: extract iothread-vq-mapping.h API
  virtio-blk: tidy up iothread_vq_mapping functions
  virtio-blk: extract cleanup_iothread_vq_mapping() function
  virtio-scsi: perform TMFs in appropriate AioContexts
  virtio-scsi: protect events_dropped field
  virtio-scsi: introduce event and ctrl virtqueue locks
  scsi: introduce requests_lock
  scsi: track per-SCSIRequest AioContext
  dma: use current AioContext for dma_blk_io()
  scsi-disk: drop unused SCSIDiskState->bh field
  iotests: Limit qsd-migrate to working formats
  aio-posix: Adjust polling time also for new handlers
  aio-posix: Separate AioPolledEvent per AioHandler
  aio-posix: Factor out adjust_polling_time()
  aio: Create AioPolledEvent
  block/io: Ignore FUA with cache.no-flush=on
  ...

Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>


Compare: https://github.com/qemu/qemu/compare/4c33c097f3a8...0462a32b4f63
