Hi Frank,

On Fri, Nov 8, 2019 at 12:10 AM Frank Yang <l...@google.com> wrote:
>
> So I'm not really sure why people are having issues sharing buffers that live
> on the GPU. Doesn't that show up as some integer ID on the host, and some
> $GuestFramework (dmabuf, gralloc) ID on the guest, and it all works out due
> to maintaining the correspondence in your particular stack of virtual
> devices? For example, if you want to do video decode in hardware on an
> Android guest, there should be a gralloc buffer whose handle contains enough
> information to reconstruct the GPU buffer ID on the host, because gralloc is
> how processes communicate gpu buffer ids to each other on Android.
I don't think we really have any issues with that. :) We just need a
standard for: a) assignment of buffer IDs that the guest can refer to,
b) making all virtual devices understand the IDs from a) when such are
passed to them by the guest.

> BTW, if we have a new device just for this, this should also be more flexible
> than being udmabuf on the host. There are other OSes than Linux. Keep in
> mind, also, that across different drivers even on Linux, e.g., NVIDIA
> proprietary, dmabuf might not always be available.
>
> As for host CPU memory that is allocated in various ways, I think Android
> Emulator has built a very flexible/general solution, esp if we need to share
> a host CPU buffer allocated via something that's not completely under our
> control, such as Vulkan. We reserve a PCI BAR for that and map memory
> directly from the host Vk driver into there, via the address space device. It's:
>
> https://android.googlesource.com/platform/external/qemu/+/refs/heads/emu-master-dev/hw/pci/goldfish_address_space.c
> https://android.googlesource.com/platform/external/qemu/+/refs/heads/emu-master-dev/android/android-emu/android/emulation/address_space_device.cpp#205

I recall that we already agreed on exposing host memory to the guests
using PCI BARs. There should be work-in-progress patches for virtio-gpu
to use that instead of shadow buffers and transfers.

> Number of copies is also completely under the user's control, unlike ivshmem.
> It also is not tied to any particular device such as gpu or codec. Since the
> memory is owned by the host and directly mapped to the guest PCI without any
> abstraction, it's contiguous, it doesn't carve out guest RAM, doesn't waste
> CMA, etc.

That's one of the reasons we use host-based allocations in VMs running
on Chrome OS. That said, I think everyone here agrees that it's a good
optimization that should be specified and implemented.

P.S. The common mailing list netiquette recommends bottom posting and
plain text emails.
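[Editorial sketch: points a) and b) above, a standard way to assign buffer IDs and pass them between virtual devices, could in principle look like the following. All names and the packing scheme here are illustrative assumptions, not from any virtio spec or patch in this thread.]

```c
#include <stdint.h>

/* Hypothetical cross-device buffer identifier: the exporting virtio
 * device assigns buffer_id, and device_id names the exporter, so any
 * other virtual device handed the pair can resolve the same host
 * resource.  Purely a sketch of the a)/b) split discussed above. */
struct virtio_shared_buffer_id {
    uint32_t device_id;  /* virtio device that exported the buffer */
    uint32_t buffer_id;  /* ID unique within that exporting device */
};

/* Pack both fields into one 64-bit handle the guest can pass around. */
static inline uint64_t shared_buffer_handle(struct virtio_shared_buffer_id id)
{
    return ((uint64_t)id.device_id << 32) | id.buffer_id;
}

/* Recover the exporter/buffer pair from a packed handle. */
static inline struct virtio_shared_buffer_id
shared_buffer_unpack(uint64_t handle)
{
    struct virtio_shared_buffer_id id = {
        .device_id = (uint32_t)(handle >> 32),
        .buffer_id = (uint32_t)(handle & 0xffffffffu),
    };
    return id;
}
```

A real spec would also have to define how IDs are allocated and revoked; the packing above only illustrates that a single guest-visible integer is enough to carry the correspondence.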
Best regards,
Tomasz

> On Thu, Nov 7, 2019 at 4:13 AM Stefan Hajnoczi <stefa...@gmail.com> wrote:
>>
>> On Wed, Nov 6, 2019 at 1:50 PM Gerd Hoffmann <kra...@redhat.com> wrote:
>> > > In the graphics buffer sharing use case, how does the other side
>> > > determine how to interpret this data?
>> >
>> > The idea is to have free form properties (name=value, with value being
>> > a string) for that kind of metadata.
>> >
>> > > Shouldn't there be a VIRTIO
>> > > device spec for the messaging so compatible implementations can be
>> > > written by others?
>> >
>> > Adding a list of common properties to the spec certainly makes sense,
>> > so everybody uses the same names. Adding struct-ed properties for
>> > common use cases might be useful too.
>>
>> Why not define VIRTIO devices for wayland and friends?
>>
>> This new device exposes buffer sharing plus properties - effectively a
>> new device model nested inside VIRTIO. The VIRTIO device model has
>> the necessary primitives to solve the buffer sharing problem so I'm
>> struggling to see the purpose of this new device.
>>
>> Custom/niche applications that do not wish to standardize their device
>> type can maintain out-of-tree VIRTIO devices. Both kernel and
>> userspace drivers can be written for the device and there is already
>> VIRTIO driver code that can be reused. They have access to the full
>> VIRTIO device model, including feature negotiation and configuration
>> space.
>>
>> Stefan
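[Editorial sketch: the free-form name=value string properties Gerd describes in the quoted thread could be carried as a packed list of NUL-terminated "name=value" entries. The layout, the property names, and the helper below are assumptions for illustration only, not part of any virtio spec.]

```c
#include <stddef.h>
#include <string.h>

/* Look up the value for a property name in a packed blob of
 * NUL-terminated "name=value" strings (hypothetical wire format).
 * Returns a pointer to the value inside the blob, or NULL. */
static const char *prop_lookup(const char *blob, size_t len, const char *name)
{
    size_t name_len = strlen(name);
    const char *p = blob, *end = blob + len;

    while (p < end) {
        size_t entry_len = strnlen(p, (size_t)(end - p));
        if (entry_len > name_len && p[name_len] == '=' &&
            memcmp(p, name, name_len) == 0)
            return p + name_len + 1;  /* skip "name=" to the value */
        p += entry_len + 1;           /* skip entry and its NUL */
    }
    return NULL;
}
```

With such a scheme, standardizing only the common property names (as suggested in the quoted thread, e.g. dimensions or pixel format of a graphics buffer) would keep implementations interoperable while leaving the format extensible.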