Hi Jonathan,

To the best of my knowledge, all patches in the series have been thoroughly
reviewed. I got distracted with other things, though, and have been dragging
my feet on follow-through. Sorry about that.

I've taken another look and found that the series no longer applies cleanly
to master due to a minor merge conflict, so I've just sent a rebased version
to address that.

Stefan, would you be OK to pick this up for merging at your earliest
convenience?

Thanks,
Mattias

On Fri, Feb 9, 2024 at 6:39 PM Jonathan Cameron <jonathan.came...@huawei.com>
wrote:

> On Wed,  1 Nov 2023 06:16:06 -0700
> Mattias Nissler <mniss...@rivosinc.com> wrote:
>
> > This series adds basic support for message-based DMA in qemu's vfio-user
> > server. This is useful for cases where the client does not provide file
> > descriptors for accessing system memory via memory mappings. My motivating
> > use case is to hook up device models as PCIe endpoints to a hardware
> > design. This works by bridging the PCIe transaction layer to vfio-user,
> > and the endpoint does not access memory directly, but sends memory request
> > TLPs to the hardware design in order to perform DMA.
> >
> > Note that more work is needed to make message-based DMA work well: qemu
> > currently breaks down DMA accesses into chunks of at most 8 bytes, each of
> > which is handled in a separate vfio-user DMA request message. This is
> > quite terrible for large DMA accesses, such as when nvme reads and writes
> > page-sized blocks. Thus, I would like to improve qemu to be able to
> > perform larger accesses, at least for indirect memory regions. I have
> > something working locally, but since this will likely result in more
> > involved surgery and discussion, I am leaving this to be addressed in a
> > separate patch.
> >
> Hi Mattias,
>
> I was wondering what the status of this patch set is. It seems no
> outstanding issues have been raised?
>
> I'd run into a similar problem with multiple DMA mappings using the bounce
> buffer, when virtio-blk-pci accesses the emulated CXL memory.
>
> In that particular case virtio-blk is using the "memory" address space,
> but otherwise your first 2 patches work for me as well, so I'd definitely
> like to see those get merged!
>
> Thanks,
>
> Jonathan
>
> > Changes from v1:
> >
> > * Address Stefan's review comments. In particular, enforce an allocation
> >   limit and don't drop the map client callbacks given that map requests
> >   can fail when hitting size limits.
> >
> > * libvfio-user version bump now included in the series.
> >
> > * Tested as well on big-endian s390x. This uncovered another byte order
> >   issue in vfio-user server code that I've included a fix for.
> >
> > Changes from v2:
> >
> > * Add a preparatory patch to make bounce buffering an
> >   AddressSpace-specific concept.
> >
> > * The total buffer size limit parameter is now per AddressSpace and can
> >   be configured for PCIDevice via a property.
> >
> > * Store a magic value in the first bytes of the bounce buffer struct as
> >   a best-effort measure to detect invalid pointers in
> >   address_space_unmap.
> >
> > Changes from v3:
> >
> > * libvfio-user now supports twin-socket mode, which uses separate
> >   sockets for client->server and server->client commands, respectively.
> >   This addresses the concurrent command bug triggered by server->client
> >   DMA access commands. See
> >   https://github.com/nutanix/libvfio-user/issues/279 for details.
> >
> > * Add missing teardown code in do_address_space_destroy.
> >
> > * Fix bounce buffer size bookkeeping race condition.
> >
> > * Generate unmap notification callbacks unconditionally.
> >
> > * Some cosmetic fixes.
> >
> > Changes from v4:
> >
> > * Fix accidentally dropped memory_region_unref; control flow restored
> >   to match previous code to simplify review.
> >
> > * Some cosmetic fixes.
> >
> > Changes from v5:
> >
> > * Unregister indirect memory region in libvfio-user dma_unregister
> >   callback.
> >
> > I believe all patches in the series have been reviewed appropriately, so
> > my hope is that this will be the final iteration. Stefan, Peter, Jag,
> > thanks for your feedback; let me know if there's anything else needed
> > from my side before this can get merged.
> >
> > Mattias Nissler (5):
> >   softmmu: Per-AddressSpace bounce buffering
> >   softmmu: Support concurrent bounce buffers
> >   Update subprojects/libvfio-user
> >   vfio-user: Message-based DMA support
> >   vfio-user: Fix config space access byte order
> >
> >  hw/pci/pci.c                  |   8 ++
> >  hw/remote/trace-events        |   2 +
> >  hw/remote/vfio-user-obj.c     | 104 +++++++++++++++++++++----
> >  include/exec/cpu-common.h     |   2 -
> >  include/exec/memory.h         |  41 +++++++++-
> >  include/hw/pci/pci_device.h   |   3 +
> >  subprojects/libvfio-user.wrap |   2 +-
> >  system/dma-helpers.c          |   4 +-
> >  system/memory.c               |   8 ++
> >  system/physmem.c              | 141 ++++++++++++++++++----------------
> >  10 files changed, 226 insertions(+), 89 deletions(-)
> >
>
>
