/virtio-v1.0-cs03-virtio-input.pdf
https://www.kraxel.org/virtio/virtio-v1.0-cs03-virtio-input.html#x1-287
Signed-off-by: Gerd Hoffmann <kra...@redhat.com>
---
content.tex | 2 +
virtio-input.tex | 124 +++
2 files changed, 126 insertions(+)
Hi,
> > > Rendered versions are available here:
> > > https://www.kraxel.org/virtio/virtio-v1.0-cs03-virtio-gpu.pdf
> > >
> > > https://www.kraxel.org/virtio/virtio-v1.0-cs03-virtio-gpu.html#x1-287
> > I guess a non-fenced command only completes when the operation has
> > finished,
Hi,
> The downside is that this is hard to specify formally, the same way we
> do other devices. Say we do that and point to a fixed version of the
> relevant Linux headers and the headers evolve and add or change
> something. Then we would either go change the spec to be in sync with
> the
Hi,
> > > > >
> > > > > https://www.kraxel.org/virtio/virtio-v1.0-cs03-virtio-gpu.html#x1-287
> Is there a chance you could rebase and post?
> This is used widely, I think we should have it in 1.1
> if at all possible.
Support for 2d mode (3d/virgl mode is not covered by this patch) has
been added to the linux kernel version 4.2 and to qemu version 2.4.
Signed-off-by: Gerd Hoffmann
---
content.tex| 2 +
virtio-gpu.tex | 481 +
2 files changed, 483
Support for 2d mode (3d/virgl mode is not covered by this patch) has
been added to the linux kernel version 4.2 and to qemu version 2.4.
Cc: Laszlo Ersek
Signed-off-by: Gerd Hoffmann
---
content.tex| 2 +
virtio-gpu.tex | 481 +
2
Use the dma mapping api and properly add iommu mappings for
objects, unless virtio is in iommu quirk mode.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_drv.h | 1 +
drivers/gpu/drm/virtio/virtgpu_vq.c | 46 +---
2 files changed, 38 insertions
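For context, the heart of that change is a pattern like the following
sketch (reconstructed from memory, not the verbatim patch; use_dma_api
and obj->mapped are names used in this series, and
virtio_has_iommu_quirk() was later renamed virtio_has_dma_quirk()):

    bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);
    int nents;

    if (use_dma_api) {
            /* Map the object's pages through the DMA API so the
             * device gets IOMMU-translated addresses. */
            obj->mapped = dma_map_sg(vgdev->vdev->dev.parent,
                                     obj->pages->sgl, obj->pages->nents,
                                     DMA_TO_DEVICE);
            nents = obj->mapped;
    } else {
            /* Quirk mode: the device expects guest physical
             * addresses, so no mapping is needed. */
            nents = obj->pages->nents;
    }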
The new function balances virtio_gpu_object_attach().
Also make virtio_gpu_cmd_resource_inval_backing() static and switch
call sites to the new virtio_gpu_object_detach() function.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_drv.h | 4 ++--
drivers/gpu/drm/virtio
Gerd Hoffmann (2):
drm/virtio: add virtio_gpu_object_detach() function
drm/virtio: add iommu support.
drivers/gpu/drm/virtio/virtgpu_drv.h | 5 ++--
drivers/gpu/drm/virtio/virtgpu_fb.c | 2 +-
drivers/gpu/drm/virtio/virtgpu_ttm.c | 3 +--
drivers/gpu/drm/virtio/virtgpu_vq.c | 52
.
Signed-off-by: Gerd Hoffmann
Reviewed-by: Dave Airlie
---
include/uapi/linux/virtio_gpu.h | 16
1 file changed, 16 insertions(+)
diff --git a/include/uapi/linux/virtio_gpu.h b/include/uapi/linux/virtio_gpu.h
index f43c3c6171..68198691a6 100644
--- a/include/uapi/linux/virtio_gpu.h
update:
https://git.kraxel.org/cgit/virtio-spec/log/?h=virtio-gpu
cheers,
Gerd
Gerd Hoffmann (1):
virtio-gpu: add VIRTIO_GPU_F_EDID feature
include/uapi/linux/virtio_gpu.h | 16
1 file changed, 16 insertions(+)
--
2.9.3
.
Signed-off-by: Gerd Hoffmann
---
virtio-gpu.tex | 29 +
1 file changed, 29 insertions(+)
diff --git a/virtio-gpu.tex b/virtio-gpu.tex
index 5d4709a..fbb7936 100644
--- a/virtio-gpu.tex
+++ b/virtio-gpu.tex
@@ -34,6 +34,7 @@ control queue.
\begin{description}
\item
Support has been added to the linux kernel version 4.1
and to qemu version 2.4.
Signed-off-by: Ladi Prosek
Signed-off-by: Gerd Hoffmann
---
v2->v3:
* add missing abs field to virtio_input_absinfo.
v1->v2 (by Ladi):
* added VIRTIO_INPUT_CFG_ID_DEVIDS / virtio_input_devids
* added nor
.
Signed-off-by: Gerd Hoffmann
---
virtio-gpu.tex | 31 +++
1 file changed, 31 insertions(+)
diff --git a/virtio-gpu.tex b/virtio-gpu.tex
index 5d4709a..0cf2209 100644
--- a/virtio-gpu.tex
+++ b/virtio-gpu.tex
@@ -34,6 +34,7 @@ control queue.
\begin{description}
\item
On Tue, Oct 23, 2018 at 12:08:22PM -0400, Michael S. Tsirkin wrote:
> On Tue, Oct 23, 2018 at 09:20:49AM -0400, Michael S. Tsirkin wrote:
> > On Tue, Oct 23, 2018 at 03:04:51PM +0200, Gerd Hoffmann wrote:
> > > Support has been added to the linux kernel version 4.1
On Tue, Oct 23, 2018 at 03:06:24PM +0200, Gerd Hoffmann wrote:
> The feature allows the guest to request an EDID blob (describing monitor
> capabilities) for a given scanout (aka virtual monitor connector).
>
> It brings a new command message, which has just a scanout field (beside
On Wed, Oct 24, 2018 at 02:15:51PM +0200, Gerd Hoffmann wrote:
> The feature allows the guest to request an EDID blob (describing monitor
> capabilities) for a given scanout (aka virtual monitor connector).
>
> It brings a new command message, which has just a scanout field (beside
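For reference, the command/response pair this feature introduces is
tiny; the structures below reproduce the layout from
include/uapi/linux/virtio_gpu.h as merged (double-check against the
current header):

    struct virtio_gpu_cmd_get_edid {
            struct virtio_gpu_ctrl_hdr hdr;
            __le32 scanout;   /* scanout (virtual connector) to query */
            __le32 padding;
    };

    struct virtio_gpu_resp_edid {
            struct virtio_gpu_ctrl_hdr hdr;
            __le32 size;      /* number of valid bytes in edid[] */
            __le32 padding;
            __u8 edid[1024];  /* the EDID blob itself */
    };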
Qemu branch for testing is here:
https://git.kraxel.org/cgit/qemu/log/?h=sirius/edid
cheers,
Gerd
Gerd Hoffmann (2):
virtio-gpu: add VIRTIO_GPU_F_EDID feature
drm/virtio: add edid support
drivers/gpu/drm/virtio/virtgpu_drv.h | 3 ++
include/uapi/linux/virtio_gpu.h | 18
.
Signed-off-by: Gerd Hoffmann
Reviewed-by: Dave Airlie
---
include/uapi/linux/virtio_gpu.h | 18 ++
1 file changed, 18 insertions(+)
diff --git a/include/uapi/linux/virtio_gpu.h b/include/uapi/linux/virtio_gpu.h
index f43c3c6171..8e88eba1fa 100644
--- a/include/uapi/linux
linux guest driver implementation of the VIRTIO_GPU_F_EDID feature.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_drv.h | 3 ++
drivers/gpu/drm/virtio/virtgpu_display.c | 12 ++
drivers/gpu/drm/virtio/virtgpu_drv.c | 1 +
drivers/gpu/drm/virtio/virtgpu_kms.c
Hi,
> > Just pass it down, the call sites all know it (see patch just sent).
>
> Tested that patch you sent. Together with this patch it also resolves
> the virtio gpu graphical display issue for SEV guest.
Cool. Can you ack or review the patch so I can commit it?
> Is there a way to
> >> Once display manager is kicked off for example (sudo systemctl start
> >> lightdm.service) and
> >> resource id 3 gets created from user space down, it still gives a blank
> >> black screen.
> >
> > Hmm. I'd suspect there is simply a code path missing. Can you send the
> > patch you have?
Hi,
> void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
> uint32_t resource_id, uint64_t offset,
> ...
> struct virtio_gpu_fbdev *vgfbdev = vgdev->vgfbdev;
> struct virtio_gpu_framebuffer *fb = &vgfbdev->vgfb;
> struct
On Tue, Oct 30, 2018 at 10:38:04AM +0100, Daniel Vetter wrote:
> On Tue, Oct 30, 2018 at 07:32:06AM +0100, Gerd Hoffmann wrote:
> > linux guest driver implementation of the VIRTIO_GPU_F_EDID feature.
> >
> > Signed-off-by: Gerd Hoffmann
>
> Like with bochs, I t
linux guest driver implementation of the VIRTIO_GPU_F_EDID feature.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_drv.h | 3 ++
drivers/gpu/drm/virtio/virtgpu_display.c | 6
drivers/gpu/drm/virtio/virtgpu_drv.c | 1 +
drivers/gpu/drm/virtio/virtgpu_kms.c
Gerd Hoffmann (3):
drm: virtual connectors can have edid too
virtio-gpu: add VIRTIO_GPU_F_EDID feature
drm/virtio: add edid support
drivers/gpu/drm/virtio/virtgpu_drv.h | 3 ++
include/uapi/linux/virtio_gpu.h | 17 +
drivers/gpu/drm/drm_connector.c | 3
.
Signed-off-by: Gerd Hoffmann
---
include/uapi/linux/virtio_gpu.h | 17 +
1 file changed, 17 insertions(+)
diff --git a/include/uapi/linux/virtio_gpu.h b/include/uapi/linux/virtio_gpu.h
index f43c3c6171..7267c3d2d6 100644
--- a/include/uapi/linux/virtio_gpu.h
+++ b/include/uapi/linux
.
Signed-off-by: Gerd Hoffmann
---
virtio-gpu.tex | 30 ++
1 file changed, 30 insertions(+)
diff --git a/virtio-gpu.tex b/virtio-gpu.tex
index 5d4709ad30..c64ecfe859 100644
--- a/virtio-gpu.tex
+++ b/virtio-gpu.tex
@@ -34,6 +34,7 @@ control queue.
\begin{description
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/drm_connector.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
index 6011d769d5..95cbbf7ee5 100644
--- a/drivers/gpu/drm/drm_connector.c
+++ b/drivers/gpu/drm
Hi,
> I attempted to fix it in the ttm layer and here is the discussion
> https://lore.kernel.org/lkml/b44280d7-eb13-0996-71f5-3fbdeb466...@amd.com/
>
> The ttm maintainer Christian is suggesting to map and set ttm->pages as
> decrypted
> right after ttm->pages are allocated.
>
> Just
Hi,
> buffer. I tried to put a dma_sync_sg_for_device() on virtio_gpu_object
> obj->pages->sgl
> before VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D is sent. This fixes the kernel
> console path.
That should be the right place.
> Once display manager is kicked off for example (sudo systemctl start
>
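A minimal sketch of the fix being discussed, assuming the object and
device names from the thread:

    /* Flush CPU writes to the backing pages out to the device
     * before asking the host to read them. */
    dma_sync_sg_for_device(vgdev->vdev->dev.parent,
                           obj->pages->sgl, obj->pages->nents,
                           DMA_TO_DEVICE);
    /* ... then queue VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D as before ... */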
Hi,
> slots by doing so. So for Vulkan, we rely on having one large host visible
> region on the host that is a single region of host shared memory. That is
> then sub-allocated for the guest. So there is no Vulkan host pointer that
Yes, sub-allocating will be needed for reasonable
On Tue, Feb 12, 2019 at 08:17:13AM -0800, Frank Yang wrote:
> In implementing a complete graphics system, shared memory objects may
> transcend device boundaries.
>
> Consider for example, Android and gralloc. The media codec versus the GPU
> rendering are commonly considered as separate devices,
Hi,
> > Non-linux systems obviously need something else for the job. The
> > guest/host implementation details don't affect the virtio-gpu specs
> > though.
>
> While we're talking about this: what is your plan for virtio-gpu
> implementations for non-Linux guests/hosts?
IIRC someone is
> > Hmm. I'm wondering whenever you just need a different virtio transport.
> > Very similar to virtio-pci, but instead of using all guest ram (modulo
> > iommu quirks) as address space use a pci memory bar as address space.
> > virtio rings would live there, all buffers would live there,
Hi,
> > For communication between guest processes within the same VM I don't
> > really see a need to involve the hypervisor ...
> >
> Right, once the host memory is set up we can rely on purely guest-side stuff
> to map sub-regions of it.
Or just use guest ram ...
> > > Yes, also, other devices
Hi,
> > Note: I think using a pci memory bar (aka host memory mapped into the
> > guest) as backing storage for dma-bufs isn't going to work.
>
> (Not knowing dma-bufs) but could you explain why?
Many places in the linux kernel assume dma-bufs are built out of normal
ram pages (i.e.
On Fri, Feb 22, 2019 at 01:15:27AM -0500, Michael S. Tsirkin wrote:
> On Thu, Feb 21, 2019 at 10:59:02AM +0100, Gerd Hoffmann wrote:
> > Hi,
> >
> > > > Note: I think using a pci memory bar (aka host memory mapped into the
> > > > guest) as backing s
On Tue, Feb 05, 2019 at 01:06:42PM -0800, Roman Kiryanov wrote:
> Hi Dave,
>
> > In virtio-fs we have two separate stages:
> > a) A shared arena is setup (and that's what the spec Stefan pointed to is
> > about) -
> > it's statically allocated at device creation and corresponds to a chunk
Hi,
> > That might simplify pass-through of v4l2 host devices, but isn't
> > necessarily the best choice long-term.
> Right, but if we're not emulating at the v4l2 api level, then it starts
> looking a lot
> like the proposed virtio-hostmem; there's a common pattern of
> direct access to host
Hi,
> However, dma-buf seems to require either a Linux kernel or a Linux host.
Sure. They allow passing buffers from one linux driver to another
without copying the data.
> Dma-bufs aren't also 1:1 with Vulkan host visible memory pointers,
> or v4l2 codec buffers, or ffmpeg codec buffers,
> > > In general though, this means that the ideal usage of host pointers would
> > > be to set a few regions up front for certain purposes, then share that
> > out
> > > amongst other device contexts. This also facilitates sharing the memory
> > > between guest processes, which is useful for
esource.com/platform/external/qemu/+/emu-master-dev/hw/pci/goldfish_address_space.c
> >
> > during upstreaming the driver it was suggested that developing a
> > virtio spec could be a better approach than inventing our specific
> > driver and device.
> >
> > Cou
Hi,
> > If you want to allow your guest to use all three sound cards, then you
> > probably want to create three sound cards in the guest too, each with
> > different capabilities and linked to one of the host devices.
>
> I don't really agree here, I don't see why I have to tie a virtual soundcard
>
On Fri, May 10, 2019 at 02:15:00PM +, Anton Yakovlev wrote:
> Hi Gerd! My name is Anton and I'm the original author of this draft. Thanks
> for comments!
>
> >> Configuration space provides a maximum amount of available virtual queues.
> >> The
> >> nqueues value MUST be at least one.
> >>
On Fri, May 10, 2019 at 11:45:24AM +0200, Mikhail Golubev wrote:
> Hi all!
>
> Sorry for breaking in the middle of the virtio audio driver and device
> development discussion. But we have been developing a virtio sound card prototype
> for
> a while and we would like to share our specification draft
Hi,
> It's possible to query the virtio-audio device about the host capabilities
> at this time but I'm not sure how to implement this in every audio backend
> of qemu (alsa, pulse, oss, coreaudio, dsound, ...)
It is probably a good idea to coordinate this with Kővágó Zoltán (Cc'ed)
who has
devices, so it's probably
> something generic. The device seems to initialize fine, but I have not
> tried to actually use it (I simply keep a virtio-gpu device in my QEMU
> command line for sanity checking.)
>
> As said, I bisected this down to the initial commit
>
> commit
Hi,
> Our prototype implementation uses [4], which allows the virtio-vdec
> device to use buffers allocated by virtio-gpu device.
> [4] https://lkml.org/lkml/2019/9/12/157
Well. I think before even discussing the protocol details we need a
reasonable plan for buffer handling. I think using
Hi,
> In the graphics buffer sharing use case, how does the other side
> determine how to interpret this data?
The idea is to have free-form properties (name=value, with value being
a string) for that kind of metadata.
> Shouldn't there be a VIRTIO
> device spec for the messaging so
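One possible wire encoding for such name=value properties, purely as an
illustration (nothing like this was ever specified):

    /* Hypothetical: a length-prefixed name/value property entry,
     * using kernel-style little-endian types. */
    struct buffer_property {
            __le32 name_len;    /* bytes of name following this header */
            __le32 value_len;   /* bytes of value following the name */
            /* u8 name[name_len]; u8 value[value_len]; */
    };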
On Wed, Nov 06, 2019 at 05:36:22PM +0900, David Stevens wrote:
> > (1) The virtio device
> > =
> >
> > Has a single virtio queue, so the guest can send commands to register
> > and unregister buffers. Buffers are allocated in guest ram. Each buffer
> > has a list of memory
> > (1) The virtio device
> > =
> >
> > Has a single virtio queue, so the guest can send commands to register
> > and unregister buffers. Buffers are allocated in guest ram. Each
> > buffer
> > has a list of memory ranges for the data. Each buffer also has some
> >
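Sketched as a struct, a register command for such a device might look
like this (hypothetical structure and names, inferred from the
description above):

    /* Hypothetical register-buffer command for the proposed
     * buffer-sharing device. */
    struct virtio_bufshare_register {
            __le64 buffer_id;      /* guest-chosen handle for the buffer */
            __le32 nr_ranges;      /* number of entries in ranges[] */
            __le32 padding;
            struct {
                    __le64 addr;   /* guest physical address */
                    __le64 len;    /* length of the range in bytes */
            } ranges[];
    };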
Hi,
> > Reason is: Meanwhile I'm wondering whenever "just use virtio-gpu
> > resources" is really a good answer for all the different use cases
> > we have collected over time. Maybe it is better to have a dedicated
> > buffer sharing virtio device? Here is the rough idea:
>
> My concern is
On Tue, Nov 05, 2019 at 08:19:19PM +0100, Dmitry Sepp wrote:
> [Resend after fixing an issue with the virtio-dev mailing list]
>
> This patch proposes a virtio specification for a new virtio video
> device.
Hmm, quickly looking over this, it looks similar to the vdec draft
posted a few weeks
On Thu, Nov 07, 2019 at 02:09:34PM +0100, Dmitry Sepp wrote:
> Hello Gerd,
>
> Thank you for your feedback.
>
> There is no relationship between those. As I mentioned earlier, we have also
> been working on a virtio video device at the same time. And there is no
> relationship between the two
Hi,
> > Adding a list of common properties to the spec certainly makes sense,
> > so everybody uses the same names. Adding struct-ed properties for
> > common use cases might be useful too.
>
> Why not define VIRTIO devices for wayland and friends?
There is an out-of-tree implementation of
On Thu, Nov 07, 2019 at 11:16:18AM +, Dr. David Alan Gilbert wrote:
> * Gerd Hoffmann (kra...@redhat.com) wrote:
> > Hi,
> >
> > > > This is not about host memory, buffers are in guest ram, everything else
> > > > would make sharing those b
Hi,
> > This is not about host memory, buffers are in guest ram, everything else
> > would make sharing those buffers between drivers inside the guest (as
> > dma-buf) quite difficult.
>
> Given it's just guest memory, can the guest just have a virt queue on
> which it places pointers to the
Hi folks,
The issue of sharing buffers between guests and hosts keeps popping
up again and again in different contexts. Most recently here:
https://www.mail-archive.com/qemu-devel@nongnu.org/msg656685.html
So, I'm grabbing the recipient list of the virtio-vdec thread and some
more people I
> On 2019-11-06 23:41, Gerd Hoffmann wrote:
> > On Wed, Nov 06, 2019 at 05:36:22PM +0900, David Stevens wrote:
> > > > (1) The virtio device
> > > > =
> > > >
> > > > Has a single virtio queue, so the guest can send comm
> > 1. Both the device and the driver submit requests to each other. For each
> > request the response is sent as a separate request.
>
> To be more precise, in vdec there are no responses. The guest sends
> commands to the host using one virtqueue. The host signals
> asynchronous events, which
Hi,
> > In general if I have guest core affinity then hypervisor context
> > switching for that guest is in low 10s of uS (depends on hypervisor
> > of course and HW). This is good enough for low latency audio.
> That's why we are really curious how other people target such issues
> with audio
Hi,
> > > Do we already have a mechanism
> > > to import memory to virtio-gpu via importing a bo handle from somewhere
> > > else?
> >
> > Inside the guest? Import is not working yet, that needs another update
> > (blob resources) due to current virtio-gpu resources requiring a format
> > at
Hi,
> 1. Focus on only decoder/encoder functionalities first.
>
> As Tomasz said earlier in this thread, it'd be too complicated to support
> camera
> usage at the same time. So, I'd suggest making it just a generic mem-to-mem
> video processing device protocol for now.
> If we finally
On Mon, Dec 02, 2019 at 09:24:31AM -0800, Frank Yang wrote:
> Interesting; what's the use case for this?
Mostly reduce data copying.
> Do we already have a mechanism
> to import memory to virtio-gpu via importing a bo handle from somewhere
> else?
Inside the guest? Import is not working yet,
On Thu, Nov 28, 2019 at 12:42:43PM +0100, Gerd Hoffmann wrote:
> Add some notes about fetching the EDID information.
https://github.com/oasis-tcs/virtio-spec/issues/64
cheers,
Gerd
On Mon, Dec 02, 2019 at 02:06:53PM +0100, Anton Yakovlev wrote:
> Hi,
>
> Sorry for the late reply, I was not available for a week.
>
>
> On 25.11.2019 10:31, Gerd Hoffmann wrote:
> >Hi,
> >
> > > > Instead of PCM_OUTPUT and PCM_INPUT I would
Hi,
> > > For example, in case
> > > of GUEST_MEM the request could be followed by a buffer sg-list.
> >
> > I'm not convinced guest_mem is that useful. host_mem allows to give the
> > guest access to the buffers used by the hosts sound hardware, which is
> > probably what you need if the MSG
Hi,
> > For (1) you'll simply do a QUEUE_BUFFER. The command carries references
> > to the buffer pages. No resource management needed.
> >
> > For (2) you'll have RESOURCE_CREATE + RESOURCE_DESTROY + QUEUE_RESOURCE,
> > where RESOURCE_CREATE passes the scatter list of buffer pages to the
> >
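Spelled out as command codes, the two modes would need something like
the following (hypothetical names following the mail; the released
virtio-sound spec took a different shape):

    enum {
            /* mode 1: buffer pages are carried in the command itself */
            VIRTIO_SND_CMD_QUEUE_BUFFER = 1,
            /* mode 2: register a page list once, then reference it */
            VIRTIO_SND_CMD_RESOURCE_CREATE,
            VIRTIO_SND_CMD_RESOURCE_DESTROY,
            VIRTIO_SND_CMD_QUEUE_RESOURCE,
    };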
> > > 1) Driver sends RESOURCE_CREATE_2D shared request
> > > 2) Driver sends ATTACH_BACKING request
> > > 3) Device creates a shared resource
> > > 4) Driver sends SET_SCANOUT request
> > > 5) Device sends shared resource to display
> > > 6) Driver sends DETACH_BACKING request
> >
> > Hmm, I
Hi,
> > They are never taken away from guest ram. The guest is free to do
> > whatever it wants with the pages. It's the job of the guest driver to
> > make sure the pages are not released until the host stopped using them.
> > So the host must drop all references before completing the
> >
Hi,
> None of the proposals directly address the use case of sharing host
> allocated buffers between devices, but I think they can be extended to
> support it. Host buffers can be identified by the following tuple:
> (transport type enum, transport specific device address, shmid,
> offset). I
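Written out as a struct, the identifying tuple described above would
look roughly like this (illustrative only, not a spec definition):

    struct host_buffer_id {
            __le32 transport_type;  /* e.g. pci vs. mmio */
            __le64 device_address;  /* transport-specific device address */
            __le32 shmid;           /* shared memory region id */
            __le64 offset;          /* offset into that region */
    };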
Hi,
> On the host side, the encode and decode APIs are different as well, so
> having separate implementation decoder and encoder, possibly just
> sharing some helper code, would make much more sense.
When going down that route I'd suggest using two device IDs (even when
specifying both
Hi,
> +\subsection{Feature bits}\label{sec:Device Types / Sound Device / Feature
> bits}
> +
> +None currently defined.
Flags for hostmem & guestmem here?
This could be an information the driver might want to know before
initializing the virtqueues.
> +\item START
> +\item PAUSE
> +\item
On Thu, Dec 12, 2019 at 09:26:32PM +0900, David Stevens wrote:
> > > > Second I think it is a bad idea
> > > > from the security point of view. When explicitly exporting buffers it
> > > > is easy to restrict access to the actual exports.
> > >
> > > Restricting access to actual exports could
Hi,
> > First the addressing is non-trivial, especially with the "transport
> > specific device address" in the tuple.
>
> There is complexity here, but I think it would also be present in the
> buffer sharing device case. With a buffer sharing device, the same
> identifying information would
Hi,
> > > > > +\subsection{Device ID}\label{sec:Device Types / Video Device /
> > > > > Device ID}
> > > > > +
> > > > > +TBD.
> > > >
> > > > I'm wondering how and when we can determine and reserve this ID?
> > >
> > > Grab the next free, update the spec accordingly, submit the one-line
> > >
Hi,
> > I don't think there is a use case which guestmem can handle but hostmem
> > can not.
>
> There may be a special security requirement like "a guest must not have any
> access to a host memory" (especially in case of type 1 hv).
Hmm, ok. Point.
> Also, sharing memory implies sharing
Hi,
> If the following scenario happens:
>
> 1) Driver sends RESOURCE_CREATE_2D shared request
> 2) Driver sends ATTACH_BACKING request
> 3) Device creates a shared resource
> 4) Driver sends SET_SCANOUT request
> 5) Device sends shared resource to display
> 6) Driver sends DETACH_BACKING
Hi,
> > Hmm, modern GPUs support both encoding and decoding ...
>
> Many SoC architectures have completely separate IP blocks for encoding
> and decoding. Similarly, in GPUs those are usually completely separate
> parts of the pipeline.
In the OS there is one driver per GPU handling both ...
Hi,
> > Of course only virtio drivers would try step (2), other drivers (when
> > sharing buffers between intel gvt device and virtio-gpu for example)
> > would go straight to (3).
>
> For virtio-gpu as it is today, it's not clear to me that they're
> equivalent. As I read it, the virtio-gpu
On Mon, Oct 14, 2019 at 03:05:03PM +0200, Dmitry Morozov wrote:
>
> On Montag, 14. Oktober 2019 14:34:43 CEST Gerd Hoffmann wrote:
> > Hi,
> >
> > > My take on this (for a decoder) would be to allocate memory for output
> > > buffers from a secure
Hi,
> That said, Chrome OS would use a similar model, except that we don't
> use ION. We would likely use minigbm backed by virtio-gpu to allocate
> appropriate secure buffers for us and then import them to the V4L2
> driver.
What exactly is a "secure buffer"? I guess a gem object where read
Hi,
> That could be still a guest physical address. Like on a bare metal
> system with TrustZone, there could be physical memory that is not
> accessible to the CPU.
Hmm. Yes, maybe. We could use the dma address of the (first page of
the) guest buffer. In case of a secure buffer the guest
Hi,
> > Also note that the guest manages the address space, so the host can't
> > simply allocate guest page addresses.
>
> Is this really true? I'm not an expert in this area, but on a bare
> metal system it's the hardware or firmware that sets up the various
> physical address allocations on
> > I doubt you can handle pci memory bars like regular ram when it comes to
> > dma and iommu support. There is a reason we have p2pdma in the first
> > place ...
>
> The thing is that such bars would be actually backed by regular host
> RAM. Do we really need the complexity of real PCI bar
Hi,
> > > +\begin{description}
> > > +\item[0] controlq
> > > +\item[1] pcmq
> > > +\end{description}
> > > +
> > > +The controlq virtqueue always exists, the pcmq virtqueue only exists if
> > > +the VIRTIO_SND_F_PCM_OUTPUT and/or VIRTIO_SND_F_PCM_INPUT feature is
> > > negotiated.
> >
> >
> > Then, the I/O queue will already multiplex three things: read
> > requests, write
> > requests and notifications. The question is how rational that is.
>
> If there is no multiplexing, then we probably need 4 queues per virtual
> PCM:
>
> 1) Playback data
> 2) Playback notifications
> 3)
Hi,
> > > > 3. No support for getting plane requirements from the device (sg vs
> > > > contig,
> > > > size, stride alignment, plane count).
> > >
> > > There is actually a bigger difference that results in that. Vdec
> > > assumes host-allocated buffers coming from a different device, e.g.
>
Hi,
> > > DSP FW has no access to userspace so we would need some additional
> > > API
> > > on top of DMA_BUF_SET_NAME etc to get physical hardware pages ?
> >
> > The dma-buf api currently can share guest memory sg-lists.
>
> Ok, IIUC buffers can either be shared using the GPU proposed APIs
Hi,
> > Instead of PCM_OUTPUT and PCM_INPUT I would put the number of input and
> > output streams into the device config space (and explicitly allow count
> > being zero).
>
> There are several options for the number of streams:
>
> 1. The device can provide one input and/or output stream.
>
> > I'm not convinced this is useful for audio ...
> >
> > I basically see two modes of operation which are useful:
> >
> > (1) send audio data via virtqueue.
> > (2) map host audio buffers into the guest address space.
> >
> > The audio driver api (i.e. alsa) typically allows to mmap() the
> Feature bits
>
>
> VIRTIO_SND_F_PCM_OUTPUT
> VIRTIO_SND_F_PCM_INPUT
>
> and new ones would be
>
> VIRTIO_SND_F_PCM_EVT_PERIOD - support elapsed period notifications
> VIRTIO_SND_F_PCM_EVT_XRUN - support underrun/overrun notifications
> VIRTIO_SND_F_PCM_MSG - support message-based
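As header constants that would read as follows (names taken from the
mail; bit positions are placeholders, and this never became part of the
released spec in this form):

    enum {
            VIRTIO_SND_F_PCM_OUTPUT = 0,
            VIRTIO_SND_F_PCM_INPUT,
            VIRTIO_SND_F_PCM_EVT_PERIOD, /* elapsed period notifications */
            VIRTIO_SND_F_PCM_EVT_XRUN,   /* underrun/overrun notifications */
            VIRTIO_SND_F_PCM_MSG,        /* message-based transport */
    };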
On Thu, Nov 28, 2019 at 07:04:23PM +0100, Matti Moell wrote:
> Hi Gerd,
>
> I really like this!
>
> On 28.11.19 14:19, Gerd Hoffmann wrote:
> > @@ -186,6 +211,7 @@ \subsubsection{Device Operation: Request
> > header}\label{sec:Device Types / GPU De
> >
Add 3d commands to the command enumeration.
Add a section with a very short overview.
Signed-off-by: Gerd Hoffmann
---
virtio-gpu.tex | 35 +++
1 file changed, 35 insertions(+)
diff --git a/virtio-gpu.tex b/virtio-gpu.tex
index eced3095a494..631aaf24ea18 100644
/qemu/log/?h=sirius/virtio-gpu-feature-shared
linux patches:
https://git.kraxel.org/cgit/linux/log/?h=drm-virtio-feature-shared
Signed-off-by: Gerd Hoffmann
---
virtio-gpu.tex | 27 +++
1 file changed, 27 insertions(+)
diff --git a/virtio-gpu.tex b/virtio-gpu.tex
index
Hi,
> Is the intent that any virtio device has to support all the same formats?
No.
> So it looks to me like all the formats should be part of the spec, and some
> device capabilities should indicate which ones the device accepts.
There is a per-stream bitmask in device config indicating
Add some notes about fetching the EDID information.
Signed-off-by: Gerd Hoffmann
---
virtio-gpu.tex | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/virtio-gpu.tex b/virtio-gpu.tex
index af4ca610d235..15dbf9f2ec82 100644
--- a/virtio-gpu.tex
+++ b/virtio-gpu.tex
> > Well. I think before even discussing the protocol details we need a
> > reasonable plan for buffer handling. I think using virtio-gpu buffers
> > should be an optional optimization and not a requirement. Also the
> > motivation for that should be clear (Let the host decoder write directly
>
> > > +/* supported PCM stream features */
> > > +enum {
> > > +VIRTIO_SND_PCM_F_HOST_MEM = 0,
> > > +VIRTIO_SND_PCM_F_GUEST_MEM,
> >
> > Is this useful as stream property? I would expect when supported by a
> > device it would work for all streams.
>
> Since we allowed different