Re: [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
Abandoning this series, as a new version was submitted for review: "[RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver"

On Tue, Dec 19, 2017 at 11:29:17AM -0800, Kim, Dongwon wrote:
> Upload of initial version of hyper_DMABUF driver enabling
> DMA_BUF exchange between two different VMs in virtualized
> platform based on hypervisor such as KVM or XEN.
>
> Hyper_DMABUF drv's primary role is to import a DMA_BUF
> from the originator, then re-export it to another Linux VM
> so that it can be mapped and accessed by it.
>
> The functionality of this driver highly depends on
> the hypervisor's native page sharing mechanism and inter-VM
> communication support.
>
> This driver has two layers: one is the main hyper_DMABUF
> framework for scatter-gather list management that handles
> the actual import and export of DMA_BUFs. The lower layer handles
> the actual memory sharing and communication between two VMs,
> which is a hypervisor-specific interface.
>
> This driver is initially designed to enable DMA_BUF
> sharing across VMs in a Xen environment, so it currently works
> with Xen only.
>
> This also adds kernel configuration for the hyper_DMABUF drv
> under Device Drivers->Xen driver support->hyper_dmabuf
> options.
>
> To give some brief information about each source file,
>
> hyper_dmabuf/hyper_dmabuf_conf.h
> : configuration info
>
> hyper_dmabuf/hyper_dmabuf_drv.c
> : driver interface and initialization
>
> hyper_dmabuf/hyper_dmabuf_imp.c
> : scatter-gather list generation and management. DMA_BUF
>   ops for DMA_BUF reconstructed from hyper_DMABUF
>
> hyper_dmabuf/hyper_dmabuf_ioctl.c
> : IOCTL calls for export/import and comm channel creation,
>   unexport.
>
> hyper_dmabuf/hyper_dmabuf_list.c
> : Database (linked-list) for exported and imported
>   hyper_DMABUF
>
> hyper_dmabuf/hyper_dmabuf_msg.c
> : creation and management of messages between exporter and
>   importer
>
> hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
> : comm ch management and ISRs for incoming messages.
>
> hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
> : Database (linked-list) for keeping information about
>   existing comm channels among VMs
>
> Signed-off-by: Dongwon Kim
> Signed-off-by: Mateusz Polrola
> ---
>  drivers/xen/Kconfig                                |   2 +
>  drivers/xen/Makefile                               |   1 +
>  drivers/xen/hyper_dmabuf/Kconfig                   |  14 +
>  drivers/xen/hyper_dmabuf/Makefile                  |  34 +
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h       |   2 +
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  54 ++
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        | 101 +++
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        | 852 +
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h        |  31 +
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 462 +++
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       | 119 +++
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h       |  40 +
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        | 212 +
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |  45 ++
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h      |  16 +
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     |  70 ++
>  .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 328 
>  .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |  62 ++
>  .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  | 106 +++
>  .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h  |  35 +
>  20 files changed, 2586 insertions(+)
>  create mode 100644 drivers/xen/hyper_dmabuf/Kconfig
>  create mode 100644 drivers/xen/hyper_dmabuf/Makefile
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
>  create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
>  create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
>  create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
>  create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
>
> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
> index d8dd546..b59b0e3 100644
> --- a/drivers/xen/Kconfig
> +++ b/drivers/xen/Kconfig
> @@ -321,4 +321,6 @@ config XEN_SYMS
> config
Re: [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
On Wed, Dec 20, 2017 at 10:59:57AM +0100, Daniel Vetter wrote:
> On Tue, Dec 19, 2017 at 03:27:31PM -0800, Dongwon Kim wrote:
> > I forgot to include this brief information about this patch series.
> >
> > This patch series contains the implementation of a new device driver,
> > hyper_dmabuf, which provides a method for DMA-BUF sharing across
> > different OSes running on the same virtual OS platform powered by
> > a hypervisor.
> >
> > Detailed information about this driver is described in a high-level doc
> > added by the second patch of the series.
> >
> > [RFC PATCH 02/60] hyper_dmabuf: added a doc for hyper_dmabuf sharing
> >
> > I am attaching 'Overview' section here as a summary.
> >
> > --
> > Section 1. Overview
> > --
> >
> > Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
> > Machines (VMs), which expands DMA-BUF sharing capability to the VM environment
> > where multiple different OS instances need to share same physical data without
> > data-copy across VMs.
> >
> > To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF drv on the
> > exporting VM (so called, “exporter”) imports a local DMA_BUF from the original
> > producer of the buffer, then re-exports it with a unique ID, hyper_dmabuf_id
> > for the buffer to the importing VM (so called, “importer”).
> >
> > Another instance of the Hyper_DMABUF driver on importer registers
> > a hyper_dmabuf_id together with reference information for the shared physical
> > pages associated with the DMA_BUF to its database when the export happens.
> >
> > The actual mapping of the DMA_BUF on the importer’s side is done by
> > the Hyper_DMABUF driver when user space issues the IOCTL command to access
> > the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing and
> > exporting driver as is, that is, no special configuration is required.
> > Consequently, only a single module per VM is needed to enable cross-VM DMA_BUF
> > exchange.
>
> So I know that most dma-buf implementations (especially lots of importers
> in drivers/gpu) break this, but fundamentally only the original exporter
> is allowed to know about the underlying pages. There's various scenarios
> where a dma-buf isn't backed by anything like a struct page.
>
> So your first step of noodling the underlying struct page out from the
> dma-buf is kinda breaking the abstraction, and I think it's not a good
> idea to have that. Especially not for sharing across VMs.
>
> I think a better design would be if hyper-dmabuf would be the dma-buf
> exporter in both of the VMs, and you'd import it everywhere you want to in
> some gpu/video/whatever driver in the VMs. That way hyper-dmabuf is always
> in control of the pages, and a lot of the troubling forwarding you
> currently need to do disappears.

It could be another way to implement dma-buf sharing; however, it would break
the flexibility and transparency that this driver has now. With the suggested
method, two different types of dma-buf would exist in the general usage model:
one is a local dma-buf, a traditional dma-buf that can be shared only within
the same OS instance, and the other is a cross-VM sharable dma-buf created by
the hyper_dmabuf driver.

The problem with this approach is that an application needs to know in advance
whether the contents will be shared across VMs before deciding what type of
dma-buf it needs to create. Otherwise, the application would always have to
use hyper_dmabuf as the exporter for all contents that could possibly be
shared in the future, and I think this would require a significant amount of
application changes and also add an unnecessary dependency on the hyper_dmabuf
driver.

> 2nd thing: This seems very much related to what's happening around gvt and
> allowing at least the host (in a kvm based VM environment) to be able to
> access some of the dma-buf (or well, framebuffers in general) that the
> client is using. Adding some mailing lists for that.

I think you are talking about exposing a framebuffer to another domain via GTT
memory sharing. And yes, one of the primary use cases for hyper_dmabuf is to
share a framebuffer or other graphic object across VMs, but it is designed to
do it in a more general way using the existing dma-buf framework. Also, we
wanted to make this feature available for virtually any sharable contents that
can currently be shared via dma-buf locally.

> -Daniel
>
> > --
> >
> > There is a git repository at github.com where this series of patches are all
> > integrated in Linux kernel tree based on the commit:
> >
> > commit ae64f9bd1d3621b5e60d7363bc20afb46aede215
> > Author: Linus Torvalds
> > Date:   Sun Dec 3 11:01:47 2017 -0500
> >
Re: [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
On 26 December 2017 at 19:19, Matt Roper wrote:
> On Wed, Dec 20, 2017 at 10:59:57AM +0100, Daniel Vetter wrote:
>> On Tue, Dec 19, 2017 at 03:27:31PM -0800, Dongwon Kim wrote:
>> > I forgot to include this brief information about this patch series.
>> >
>> > This patch series contains the implementation of a new device driver,
>> > hyper_dmabuf, which provides a method for DMA-BUF sharing across
>> > different OSes running on the same virtual OS platform powered by
>> > a hypervisor.
>> >
>> > Detailed information about this driver is described in a high-level doc
>> > added by the second patch of the series.
>> >
>> > [RFC PATCH 02/60] hyper_dmabuf: added a doc for hyper_dmabuf sharing
>> >
>> > I am attaching 'Overview' section here as a summary.
>> >
>> > --
>> > Section 1. Overview
>> > --
>> >
>> > Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
>> > Machines (VMs), which expands DMA-BUF sharing capability to the VM environment
>> > where multiple different OS instances need to share same physical data without
>> > data-copy across VMs.
>> >
>> > To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF drv on the
>> > exporting VM (so called, “exporter”) imports a local DMA_BUF from the original
>> > producer of the buffer, then re-exports it with a unique ID, hyper_dmabuf_id
>> > for the buffer to the importing VM (so called, “importer”).
>> >
>> > Another instance of the Hyper_DMABUF driver on importer registers
>> > a hyper_dmabuf_id together with reference information for the shared physical
>> > pages associated with the DMA_BUF to its database when the export happens.
>> >
>> > The actual mapping of the DMA_BUF on the importer’s side is done by
>> > the Hyper_DMABUF driver when user space issues the IOCTL command to access
>> > the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing and
>> > exporting driver as is, that is, no special configuration is required.
>> > Consequently, only a single module per VM is needed to enable cross-VM DMA_BUF
>> > exchange.
>>
>> So I know that most dma-buf implementations (especially lots of importers
>> in drivers/gpu) break this, but fundamentally only the original exporter
>> is allowed to know about the underlying pages. There's various scenarios
>> where a dma-buf isn't backed by anything like a struct page.
>>
>> So your first step of noodling the underlying struct page out from the
>> dma-buf is kinda breaking the abstraction, and I think it's not a good
>> idea to have that. Especially not for sharing across VMs.
>>
>> I think a better design would be if hyper-dmabuf would be the dma-buf
>> exporter in both of the VMs, and you'd import it everywhere you want to in
>> some gpu/video/whatever driver in the VMs. That way hyper-dmabuf is always
>> in control of the pages, and a lot of the troubling forwarding you
>> currently need to do disappears.
>
> I think one of the main driving use cases here is for a "local" graphics
> compositor inside the VM to accept client buffers from unmodified
> applications and then pass those buffers along to a "global" compositor
> running in the service domain. This would allow the global compositor
> to composite applications running in different virtual machines (and
> possibly running under different operating systems).
>
> If we require that hyper-dmabuf always be the exporter, that complicates
> things a little bit since a buffer allocated via regular interfaces (GEM
> ioctls or whatever) wouldn't be directly transferrable to the global
> compositor. For graphics use cases like this, we could probably hide a
> lot of the details by modifying/replacing the EGL implementation that
> handles the details of buffer allocation. However if we have
> applications that are themselves just passing along externally-allocated
> buffers (e.g., images from a camera device), we'd probably need to
> modify those applications and/or the drivers they get their content
> from.

There are also non-GPU-rendering clients that pass SHM buffers to the
compositor. For now, a Wayland proxy in the guest is copying the
client-provided buffers to virtio-gpu resources at the appropriate times,
which also need to be copied once more to host memory. Would be great to
reduce the number of copies that this implies.

For more on this effort: https://patchwork.kernel.org/patch/10134603/

Regards,

Tomeu

>
> Matt
>
>> 2nd thing: This seems very much related to what's happening around gvt and
>> allowing at least the host (in a kvm based VM environment) to be able to
>> access some of the dma-buf (or well, framebuffers in general) that the
>> client is using. Adding some mailing lists for that.
>> -Daniel
>>
>> >
>> >
Re: [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
On Wed, Dec 20, 2017 at 10:59:57AM +0100, Daniel Vetter wrote:
> On Tue, Dec 19, 2017 at 03:27:31PM -0800, Dongwon Kim wrote:
> > I forgot to include this brief information about this patch series.
> >
> > This patch series contains the implementation of a new device driver,
> > hyper_dmabuf, which provides a method for DMA-BUF sharing across
> > different OSes running on the same virtual OS platform powered by
> > a hypervisor.
> >
> > Detailed information about this driver is described in a high-level doc
> > added by the second patch of the series.
> >
> > [RFC PATCH 02/60] hyper_dmabuf: added a doc for hyper_dmabuf sharing
> >
> > I am attaching 'Overview' section here as a summary.
> >
> > --
> > Section 1. Overview
> > --
> >
> > Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
> > Machines (VMs), which expands DMA-BUF sharing capability to the VM environment
> > where multiple different OS instances need to share same physical data without
> > data-copy across VMs.
> >
> > To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF drv on the
> > exporting VM (so called, “exporter”) imports a local DMA_BUF from the original
> > producer of the buffer, then re-exports it with a unique ID, hyper_dmabuf_id
> > for the buffer to the importing VM (so called, “importer”).
> >
> > Another instance of the Hyper_DMABUF driver on importer registers
> > a hyper_dmabuf_id together with reference information for the shared physical
> > pages associated with the DMA_BUF to its database when the export happens.
> >
> > The actual mapping of the DMA_BUF on the importer’s side is done by
> > the Hyper_DMABUF driver when user space issues the IOCTL command to access
> > the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing and
> > exporting driver as is, that is, no special configuration is required.
> > Consequently, only a single module per VM is needed to enable cross-VM DMA_BUF
> > exchange.
>
> So I know that most dma-buf implementations (especially lots of importers
> in drivers/gpu) break this, but fundamentally only the original exporter
> is allowed to know about the underlying pages. There's various scenarios
> where a dma-buf isn't backed by anything like a struct page.
>
> So your first step of noodling the underlying struct page out from the
> dma-buf is kinda breaking the abstraction, and I think it's not a good
> idea to have that. Especially not for sharing across VMs.
>
> I think a better design would be if hyper-dmabuf would be the dma-buf
> exporter in both of the VMs, and you'd import it everywhere you want to in
> some gpu/video/whatever driver in the VMs. That way hyper-dmabuf is always
> in control of the pages, and a lot of the troubling forwarding you
> currently need to do disappears.

I think one of the main driving use cases here is for a "local" graphics
compositor inside the VM to accept client buffers from unmodified
applications and then pass those buffers along to a "global" compositor
running in the service domain. This would allow the global compositor
to composite applications running in different virtual machines (and
possibly running under different operating systems).

If we require that hyper-dmabuf always be the exporter, that complicates
things a little bit, since a buffer allocated via regular interfaces (GEM
ioctls or whatever) wouldn't be directly transferrable to the global
compositor. For graphics use cases like this, we could probably hide a
lot of the details by modifying/replacing the EGL implementation that
handles the details of buffer allocation. However, if we have
applications that are themselves just passing along externally-allocated
buffers (e.g., images from a camera device), we'd probably need to
modify those applications and/or the drivers they get their content
from.

Matt

> 2nd thing: This seems very much related to what's happening around gvt and
> allowing at least the host (in a kvm based VM environment) to be able to
> access some of the dma-buf (or well, framebuffers in general) that the
> client is using. Adding some mailing lists for that.
> -Daniel
>
> > --
> >
> > There is a git repository at github.com where this series of patches are all
> > integrated in Linux kernel tree based on the commit:
> >
> > commit ae64f9bd1d3621b5e60d7363bc20afb46aede215
> > Author: Linus Torvalds
> > Date:   Sun Dec 3 11:01:47 2017 -0500
> >
> > Linux 4.15-rc2
> >
> > https://github.com/downor/linux_hyper_dmabuf.git hyper_dmabuf_integration_v3
> >
> > _______________________________________________
> > dri-devel mailing list
> > dri-devel@lists.freedesktop.org
> >
Re: [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
On Tue, Dec 19, 2017 at 03:27:31PM -0800, Dongwon Kim wrote:
> I forgot to include this brief information about this patch series.
>
> This patch series contains the implementation of a new device driver,
> hyper_dmabuf, which provides a method for DMA-BUF sharing across
> different OSes running on the same virtual OS platform powered by
> a hypervisor.
>
> Detailed information about this driver is described in a high-level doc
> added by the second patch of the series.
>
> [RFC PATCH 02/60] hyper_dmabuf: added a doc for hyper_dmabuf sharing
>
> I am attaching 'Overview' section here as a summary.
>
> --
> Section 1. Overview
> --
>
> Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
> Machines (VMs), which expands DMA-BUF sharing capability to the VM environment
> where multiple different OS instances need to share same physical data without
> data-copy across VMs.
>
> To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF drv on the
> exporting VM (so called, “exporter”) imports a local DMA_BUF from the original
> producer of the buffer, then re-exports it with a unique ID, hyper_dmabuf_id
> for the buffer to the importing VM (so called, “importer”).
>
> Another instance of the Hyper_DMABUF driver on importer registers
> a hyper_dmabuf_id together with reference information for the shared physical
> pages associated with the DMA_BUF to its database when the export happens.
>
> The actual mapping of the DMA_BUF on the importer’s side is done by
> the Hyper_DMABUF driver when user space issues the IOCTL command to access
> the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing and
> exporting driver as is, that is, no special configuration is required.
> Consequently, only a single module per VM is needed to enable cross-VM DMA_BUF
> exchange.

So I know that most dma-buf implementations (especially lots of importers
in drivers/gpu) break this, but fundamentally only the original exporter
is allowed to know about the underlying pages. There's various scenarios
where a dma-buf isn't backed by anything like a struct page.

So your first step of noodling the underlying struct page out from the
dma-buf is kinda breaking the abstraction, and I think it's not a good
idea to have that. Especially not for sharing across VMs.

I think a better design would be if hyper-dmabuf would be the dma-buf
exporter in both of the VMs, and you'd import it everywhere you want to in
some gpu/video/whatever driver in the VMs. That way hyper-dmabuf is always
in control of the pages, and a lot of the troubling forwarding you
currently need to do disappears.

2nd thing: This seems very much related to what's happening around gvt and
allowing at least the host (in a kvm based VM environment) to be able to
access some of the dma-buf (or well, framebuffers in general) that the
client is using. Adding some mailing lists for that.
-Daniel

> --
>
> There is a git repository at github.com where this series of patches are all
> integrated in Linux kernel tree based on the commit:
>
> commit ae64f9bd1d3621b5e60d7363bc20afb46aede215
> Author: Linus Torvalds
> Date:   Sun Dec 3 11:01:47 2017 -0500
>
> Linux 4.15-rc2
>
> https://github.com/downor/linux_hyper_dmabuf.git hyper_dmabuf_integration_v3

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
Re: [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
I forgot to include this brief information about this patch series.

This patch series contains the implementation of a new device driver,
hyper_dmabuf, which provides a method for DMA-BUF sharing across
different OSes running on the same virtual OS platform powered by
a hypervisor.

Detailed information about this driver is described in a high-level doc
added by the second patch of the series.

[RFC PATCH 02/60] hyper_dmabuf: added a doc for hyper_dmabuf sharing

I am attaching 'Overview' section here as a summary.

--
Section 1. Overview
--

Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
Machines (VMs), which expands DMA-BUF sharing capability to the VM environment
where multiple different OS instances need to share same physical data without
data-copy across VMs.

To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF drv on the
exporting VM (so called, “exporter”) imports a local DMA_BUF from the original
producer of the buffer, then re-exports it with a unique ID, hyper_dmabuf_id
for the buffer to the importing VM (so called, “importer”).

Another instance of the Hyper_DMABUF driver on importer registers
a hyper_dmabuf_id together with reference information for the shared physical
pages associated with the DMA_BUF to its database when the export happens.

The actual mapping of the DMA_BUF on the importer’s side is done by
the Hyper_DMABUF driver when user space issues the IOCTL command to access
the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing and
exporting driver as is, that is, no special configuration is required.
Consequently, only a single module per VM is needed to enable cross-VM DMA_BUF
exchange.
--

There is a git repository at github.com where this series of patches are all
integrated in Linux kernel tree based on the commit:

commit ae64f9bd1d3621b5e60d7363bc20afb46aede215
Author: Linus Torvalds
Date:   Sun Dec 3 11:01:47 2017 -0500

    Linux 4.15-rc2

https://github.com/downor/linux_hyper_dmabuf.git hyper_dmabuf_integration_v3