> +\begin{lstlisting}
> +struct virtio_gpu_status_freezing {
> + struct virtio_gpu_ctrl_hdr hdr;
> + __u32 freezing;
> +};
> +\end{lstlisting}
> +This is added for the S3 function in a guest with virtio-gpu. When the guest
> +does S3, it notifies QEMU whether virtio-gpu is in freezing status or not in
>
Hi,
> +\item[VIRTIO_GPU_CMD_STATUS_FREEZING]
> +Notify freezing status through controlq.
> +Request data is \field{struct virtio_gpu_status_freezing}.
> +Response type is VIRTIO_GPU_RESP_OK_NODATA.
> +
> +The guest notifies QEMU whether virtio-gpu is in freezing status or not in
> \field{freezing}.
>
> Gurchetan Singh (10):
> virtio-gpu api: multiple context types with explicit initialization
> drm/virtgpu api: create context init feature
> drm/virtio: implement context init: track valid capabilities in a mask
> drm/virtio: implement context init: track {ring_idx, emit_fence_info}
>
Hi,
> > I guess you need to also update virtio_gpu_fence_event_process()
> > then? It currently has the strict ordering logic baked in ...
>
> The update to virtio_gpu_fence_event_process was done as a preparation a
> few months back:
>
>
On Wed, Sep 08, 2021 at 06:37:13PM -0700, Gurchetan Singh wrote:
> The plumbing is all here to do this. Since we always use the
> default fence context when allocating a fence, this makes no
> functional difference.
>
> We can't process just the largest fence id anymore, since it's
On Thu, Feb 25, 2021 at 01:51:16PM +0530, Veera wrote:
> Hi,
>
> Are the virtio-snd patches merged and upstreamed to Qemu.org?
There are no qemu patches right now (there are other hypervisors besides
qemu/kvm ...).
Shreyansh Chouhan (added to Cc) is working on a qemu host
implementation.
> When
On Fri, Jan 08, 2021 at 02:56:31PM +0100, Guennadi Liakhovetski wrote:
> Hi Anton,
>
> I see the standard has been merged into the VirtIO spec - congratulations! I
> also saw, that there was a GSOC project for VirtIO sound support in QEMU.
No one looked at qemu support in gsoc 2020.
A few days
On Tue, Nov 17, 2020 at 02:55:14PM +0800, Jie Deng wrote:
>
> On 2020/11/16 16:16, Paolo Bonzini wrote:
> > On 16/11/20 03:12, Jie Deng wrote:
> > > For example, the frontend may kick the sequence "write read read
> > > ..." to the backend at a time.
> > >
> > > The segments can be aggregated
Hi,
> Many power platforms are OF based, thus without ACPI or DT support.
pseries has lots of stuff below /proc/device-tree. Dunno whether that
is the same kind of device tree we have on arm ...
take care,
Gerd
On Thu, Sep 24, 2020 at 12:02:55PM +0200, Joerg Roedel wrote:
> On Thu, Sep 24, 2020 at 05:38:13AM -0400, Michael S. Tsirkin wrote:
> > On Thu, Sep 24, 2020 at 11:21:29AM +0200, Joerg Roedel wrote:
> > > On Thu, Sep 24, 2020 at 05:00:35AM -0400, Michael S. Tsirkin wrote:
> > > > OK so this looks
Hi,
> --- a/include/uapi/drm/virtgpu_drm.h
kernel <-> userspace API.
> --- a/include/uapi/linux/virtio_gpu.h
host <-> guest API.
Please create separate patches for these.
thanks,
Gerd
-
To unsubscribe, e-mail:
Hi,
> +enum virtio_gpu_shm_id {
> + VIRTIO_GPU_SHM_ID_UNDEFINED = 0,
> + VIRTIO_GPU_SHM_ID_HOST_VISIBLE = 1
> +};
I think this is also not in the virtio spec update.
On Wed, Sep 02, 2020 at 05:00:25PM -0700, Gurchetan Singh wrote:
> On Wed, Sep 2, 2020 at 3:15 PM Vivek Goyal wrote:
>
> > Hi Gurchetan,
> >
> > Now Miklos has queued these three virtio patches for shared memory
> > region in his tree as part of virtiofs dax patch series.
> >
> > I am hoping
Hi,
> @@ -100,7 +102,7 @@ struct drm_virtgpu_resource_info {
> __u32 bo_handle;
> __u32 res_handle;
> __u32 size;
> - __u32 stride;
> + __u32 blob_mem;
> };
Huh? This is not in the virtio spec update proposed.
> struct drm_virtgpu_3d_box {
> @@ -117,6 +119,8 @@
Hi,
> > Guest UI too (you can make the same argument the other way around) if we
> > want support guest->host cut+paste.
>
> OK, I was less worried about the host stealing from the guest, since it
> normally already can.
Yes, typically that would be less concerning. I wouldn't completely ignore
Hi,
> Right; what I was interested in was whether there would be a way to
> plumb copy/paste through for a VNC or local Gtk display; the guest view
> should be independent of the transport protocol.
Well. Full-blown cut+paste is quite complex. spice goes all-in and
supports cut+paste
On Wed, Aug 19, 2020 at 12:10:11PM +0900, David Stevens wrote:
> Reported-by: kernel test robot
> Signed-off-by: David Stevens
Pushed to drm-misc-next
thanks for the quick fix,
Gerd
On Tue, Aug 18, 2020 at 10:37:41AM +0900, David Stevens wrote:
> This patchset implements the current proposal for virtio cross-device
> resource sharing [1]. It will be used to import virtio resources into
> the virtio-video driver currently under discussion [2]. The patch
> under consideration
On Mon, Aug 17, 2020 at 12:50:08PM +0200, Gerd Hoffmann wrote:
> On Tue, Jun 23, 2020 at 10:31:28AM +0900, David Stevens wrote:
> > Unless there are any remaining objections to these patches, what are
> > the next steps towards getting these merged? Sorry, I'm not familiar
> &
On Tue, Jun 23, 2020 at 10:31:28AM +0900, David Stevens wrote:
> Unless there are any remaining objections to these patches, what are
> the next steps towards getting these merged? Sorry, I'm not familiar
> with the workflow for contributing patches to Linux.
Sorry, just have been busy and not
On Mon, Aug 10, 2020 at 05:41:13PM +0100, Dr. David Alan Gilbert wrote:
> Hi,
> Is there anywhere that virtio has where host/guest copy/paste would
> fit?
Well, spice supports it, with spice client and spice guest agent talking
to each other using a virtio-serial channel (guest <->
On Tue, May 26, 2020 at 07:58:08PM +0900, David Stevens wrote:
> This patchset implements the current proposal for virtio cross-device
> resource sharing [1]. It will be used to import virtio resources into
> the virtio-video driver currently under discussion [2]. The patch
> under consideration
Hi,
> - for the runtime upcasting the usual approach is to check the ->ops
> pointer. Which means that would need to be the same for all virtio
> dma_bufs, which might get a bit awkward. But I'd really prefer we not
> add allocator specific stuff like this to dma-buf.
This is exactly the
> >
> > This patch proposes virtio specification for a new virtio sound device,
> > that may be useful in case when having audio is required but a device
> > passthrough or emulation is not an option.
> >
> > Signed-off-by: Anton Yak
Hi,
> This patch proposes virtio specification for a new virtio sound device,
> that may be useful in case when having audio is required but a device
> passthrough or emulation is not an option.
>
> Signed-off-by: Anton Yakovlev
> ---
>
> v7->v8 changes:
> 1. Add a universal control request
On Wed, Mar 11, 2020 at 08:20:00PM +0900, David Stevens wrote:
> This patchset implements the current proposal for virtio cross-device
> resource sharing [1], with minor changes based on recent comments. It
> is expected that this will be used to import virtio resources into the
> virtio-video
On Mon, Mar 16, 2020 at 09:30:59PM +0100, Anton Yakovlev wrote:
> Hi,
>
> On 16.03.2020 11:19, Gerd Hoffmann wrote:
> >Hi,
> >
> > > 2. Rework jack configuration structure, now it's more aligned with the
> > > HDA spec.
> >
> > Yep, t
Hi,
> 2. Rework jack configuration structure, now it's more aligned with the HDA
> spec.
Yep, that looks good.
> +\item[\field{hda_fn_nid}] indicates a functional node identifier
> +(see \hyperref[intro:HDA]{HDA}, section 7.1.2).
How is the nid used?
In general some of the defconf and caps
On Tue, Mar 03, 2020 at 11:42:22AM +0900, David Stevens wrote:
> > cmd_p->hdr.ctx_id =
> >
> > Before this completion of this hypercall, this resource can be
> > considered context local, while afterward it can be considered
> > "exported".
>
> Maybe I'm misunderstanding render contexts, but
Hi,
> + if (vgdev->has_resource_assign_uuid) {
> + spin_lock(&vgdev->resource_export_lock);
> + if (bo->uuid_state == UUID_NOT_INITIALIZED) {
> + bo->uuid_state = UUID_INITIALIZING;
> + needs_init = true;
> + }
> +
On Mon, Mar 02, 2020 at 09:15:21PM +0900, David Stevens wrote:
> This change adds a new dma-buf operation that allows dma-bufs to be used
> by virtio drivers to share exported objects. The new operation allows
> the importing driver to query the exporting driver for the UUID which
> identifies the
Hi,
> > With a feature flag both driver and device can choose whether they want to
> > support v1 or v2 or both. With a version config field this is more
> > limited, the device can't decide to support both. So the bonus points
> > for a smooth transition go to the feature flags not the version
On Tue, Mar 03, 2020 at 01:27:38PM +0100, Anton Yakovlev wrote:
> On 03.03.2020 13:10, Gerd Hoffmann wrote:
> > On Tue, Mar 03, 2020 at 12:28:32PM +0100, Anton Yakovlev wrote:
> > > Hi,
> > >
> > > On 03.03.2020 12:20, Gerd Hoffmann wrote:
> > &
On Tue, Mar 03, 2020 at 12:28:32PM +0100, Anton Yakovlev wrote:
> Hi,
>
> On 03.03.2020 12:20, Gerd Hoffmann wrote:
> >Hi,
> >
> > > 4. Introduce the polling mode feature for a message-based transport.
> >
> > BTW: is that driver -> device
Hi,
> 4. Introduce the polling mode feature for a message-based transport.
BTW: is that driver -> device or device -> driver or both?
In case both: should we have two separate feature flags maybe?
Otherwise this looks good to me.
cheers,
Gerd
On Thu, Feb 27, 2020 at 03:08:59PM +0100, Anton Yakovlev wrote:
> Hello all,
>
> We have completed the implementation of the PoC specification v5. Below are
> our comments on what might or should be improved in the specification.
>
> Initialization:
>
> 1. I think we should add the direction of
On Fri, Feb 28, 2020 at 07:11:40PM +0900, David Stevens wrote:
> > But there also is "unix socket", or maybe a somewhat broader "stream",
> > which would be another feature flag I guess because virtio-ipc would
> > just tunnel the stream without the help from other devices.
>
> Can you elaborate
Hi,
> > Yes, sure, we need to exactly specify the different kinds of file
> > handles / resources. I think it makes sense to have a virtio feature
> > flag for each of them, so guest+host can easily negotiate what they are
> > able to handle and what not.
>
> I was expecting that to be a
Hi,
> > > Can you provide more detail about the envisioned scope of this
> > > framework?
> >
> > The scope is "generic message+FD passing" interface, which is pretty
> > much what virtio-wl provides.
>
> I think that scope is too broad. A socket is a 'generic message+FD'
> interface. Unless
Hi,
> > So one possible approach for the host side implementation would be to
> > write the virtio protocol parser as support library, then implement the
> > host applications (i.e. the host wayland proxy) as vhost-user process
> > using that library.
> >
> > That would not work well with the
Hi,
> Dmitry's virtio-video driver
> https://patchwork.linuxtv.org/patch/61717/.
> Once it becomes fully functional, I'll post a list of possible
> improvements of protocol.
Cool. Actually implementing things can find design problems
in the protocol you didn't notice earlier.
> > >
On Wed, Feb 26, 2020 at 12:56:58PM +0900, David Stevens wrote:
> On Tue, Feb 25, 2020 at 3:10 PM Gerd Hoffmann wrote:
> >
> > How about dma_buf_{get,set}_uuid, simliar to dma_buf_set_name?
>
> While I'm not opposed to such an API, I'm also hesitant to make
> ch
Hi,
> So, I'm about to start working on virtio-pipe (I realize the name is
> not that great since pipes are normally unidirectional, but I'm sure
> we'll have plenty of time to bikeshed on that particular aspect once
> the other bits are sorted out :)).
virtio-ipc?
> This device would be a
Hi,
> +/*
> + * Followed by either
> + * - struct virtio_video_mem_entry entries[]
> + * for VIRTIO_VIDEO_MEM_TYPE_GUEST_PAGES
> + * - struct virtio_video_object_entry entries[]
> + * for VIRTIO_VIDEO_MEM_TYPE_VIRTIO_OBJECT
Wouldn't that be a
On Thu, Feb 06, 2020 at 07:20:57PM +0900, Keiichi Watanabe wrote:
> From: Dmitry Sepp
>
> The virtio video encoder device and decoder device provide functionalities to
> encode and decode video stream respectively.
> Though video encoder and decoder are provided as different devices, they use a
Hi,
> +struct dma_buf *virtgpu_gem_prime_export(struct drm_gem_object *obj,
> + int flags)
> +{
[ ... ]
> +}
> +
> +struct drm_gem_object *virtgpu_gem_prime_import(struct drm_device *dev,
> + struct dma_buf *buf)
>
On Wed, Feb 19, 2020 at 05:06:36PM +0900, David Stevens wrote:
> This change adds a new flavor of dma-bufs that can be used by virtio
> drivers to share exported objects. A virtio dma-buf can be queried by
> virtio drivers to obtain the UUID which identifies the underlying
> exported object.
That
Hi,
> But let's say we go for a dedicated virtio-device to preserve this
> granularity. Should we aim at providing a generic virtio-msg device or
> should we keep this so-called wayland-specific virtio device (I'd like
> to remind you that it's actually protocol-agnostic)?
I think it totally
Hi,
> > Because wayland benefits from allocation and sharing of
> > virtio-gpu buffers, a virtio-gpu combo device simplifies access to
> > those buffers, whereas the separate virtio devices as implemented in
> > crosvm requires bridging of resource handles (in guest kernel) and FDs
> > (in
Hi,
> As pointed in my reply to David's email, I'm a bit worried by the
> security implications of this approach. As long as the dmabuf -> UUID
> conversion stays in kernel space we should be safe, but if we start
> allowing a guest proxy (running in userland) to send raw UUIDs on a
> VSOCK
Hi,
> #1 might require extra care if we want to make it safe, as pointed
> out by Stefan here [4] (but I wonder if the problem is not the same
> for a virtio-wayland based solution). Of course you also need a bit of
> infrastructure to register FD <-> VFD mappings (VFD being a virtual
> file
> > > Can't this problem be solved by adding "offset" field in
> > > virtio_video_mem_entry?
> > >
> > > struct virtio_video_mem_entry {
> > > le64 addr;
> > > le32 length;
> > > le32 offset;
> > > u8 padding[4];
> > > };
> > >
> > > Here, "addr" must be the same in every mem_entry for
Hi,
> > Hmm, using (ii) the API, then check whether your three plane buffers
> > happen to have the correct layout for (1) hardware looks somewhat
> > backwards to me.
>
> Can't this problem be solved by adding "offset" field in
> virtio_video_mem_entry?
>
> struct virtio_video_mem_entry {
Hi,
> > If you have (1) hardware you simply can't import buffers with arbitrary
> > plane offsets, so I'd expect software would prefer the single buffer
> > layout (i) over (ii), even when using another driver + dmabuf
> > export/import, to be able to support as much hardware as possible.
> >
On Thu, Jan 09, 2020 at 03:07:43PM +0100, Anton Yakovlev wrote:
> This patch proposes virtio specification for a new virtio sound device,
> that may be useful in case when having audio is required but a device
> passthrough or emulation is not an option.
Looks good to me. Where do we stand in
Hi,
> > Well, no. Tomasz Figa had splitted the devices into three groups:
> >
> > (1) requires single buffer.
> > (2) allows any layout (including the one (1) devices want).
> > (3) requires per-plane buffers.
> >
> > Category (3) devices are apparently rare and old. Both category
On Mon, Jan 13, 2020 at 11:41:45AM +0100, Dmitry Sepp wrote:
> Hi Gerd,
>
> Thanks for reviewing!
>
> On Monday, 13 January 2020 10:56:36 CET Gerd Hoffmann wrote:
> > Hi,
> >
> > > This also means that we cannot have unspec for planes layout. Devi
Hi,
> This also means that we cannot have unspec for planes layout. Device either
> expects planes in separate buffers or in one buffer with some offsets; there
> cannot be mixed cases.
Hmm. Is it useful to support both? Or maybe support the "one buffer +
offsets" case only? Splitting one
Hi,
> > Repeating "device-readable" or "device-writable" for each struct field
> > looks a bit odd because this applies to the whole struct. Not so much
> > for these structs with a single field only, but there are structs with
> > more fields further down the spec ...
>
> Well, I'm not sure
Hi,
> Regarding re-using, the driver can simply re-queue buffers returned by
> the device. If the device needs a buffer as reference frame, it must
> not return the buffer.
Ok, that'll work.
> I'll describe this rule in the next version of the patch.
Good. You should also add a note about
Hi,
> At that point, I think it's just a matter of aesthetics. I lean
> slightly towards returning the uuid from the host, since that rules
> out any implementation with the aforementioned race.
Ok, design the API in a way that you can't get it wrong. Makes sense.
I'd still name it
Hi,
> that isn't just a leaf node of the spec. I think it's better to define
> 'resource' as a top level concept for virtio devices, even if the specifics
> of what a 'resource' is are defined by individual device types.
Your patch doesn't define what a resource is though. It only refers to
> +\begin{lstlisting}
> +struct virtio_gpu_export_resource {
> +struct virtio_gpu_ctrl_hdr hdr;
> +le32 resource_id;
> +le32 padding;
> +};
> +
> +struct virtio_gpu_resp_export_resource {
> +struct virtio_gpu_ctrl_hdr hdr;
> +le64 uuid_low;
> +le64
On Wed, Jan 08, 2020 at 06:01:58PM +0900, David Stevens wrote:
> Define a mechanism for sharing resources between different virtio
> devices.
>
> Signed-off-by: David Stevens
> ---
> content.tex | 18 ++
> 1 file changed, 18 insertions(+)
>
> diff --git a/content.tex
On Tue, Jan 07, 2020 at 01:50:49PM +0100, Anton Yakovlev wrote:
> This patch proposes virtio specification for a new virtio sound device,
> that may be useful in case when having audio is required but a device
> passthrough or emulation is not an option.
Looks pretty good overall. Some small
Hi,
> How should one deal with multiplanar formats? Do we create one resource per
> plane? Otherwise we need a way to send mem entries for each plane in one
> request.
DRM uses arrays of handles and offsets (see struct drm_framebuffer). A
handle references a gem object (roughly the same as
Hi,
> > We also see the need to add a max_streams value to this structure so as to
> > explicitly provide a limit on the number of streams the guest can create.
>
> What would be the advantage over just trying to create one and
> failing? The maximum number would be only meaningful for the
Hi,
> > Period notification would be implicit (playback buffer completion) or
> > explicit event queue message?
>
> Good question. I think, for message-base transport they would be implicit,
> since we will require to enqueue period_size length buffers. The only
> exception here will be the
Hi,
> > qemu has rather small backend buffers, to keep latencies low.
> > So, yes it would copy data to backend buffers. No, it would most likely
> > not copy over everything immediately. It will most likely leave buffers
> > in the virtqueue, reading the data piecewise in the audio
Hi,
> > However that still doesn't let the driver know which buffers will be
> > dequeued when. A simple example of this scenario is when the guest is
> > done displaying a frame and requeues the buffer back to the decoder.
> > Then the decoder will not choose it for decoding next frames into
Hi,
> > Not clearly defined in the spec: When is the decoder supposed to send
> > the response for a queue request? When it finished decoding (i.e. frame
> > is ready for playback), or when it doesn't need the buffer any more for
> > decoding (i.e. buffer can be re-queued or pages can be
Hi,
> > I also can't see why the flag is needed in the first place. The driver
> > should know which buffers are still queued and be able to figure out
> > whether the drain is complete or not without depending on that flag.
> > So I'd suggest to simply drop it.
> This flag is used not for drain
On Wed, Dec 18, 2019 at 11:08:37PM +0900, Tomasz Figa wrote:
> On Wed, Dec 18, 2019 at 10:40 PM Gerd Hoffmann wrote:
> >
> > Hi,
> >
> > > +The device MUST mark the last buffer with the
> > > +VIRTIO_VIDEO_BUFFER_F_EOS flag to denote
Hi,
> > The driver can still split the data from the application into a set of
> > smaller virtio buffers. And this is how I would write a driver. Create
> > a bunch of buffers, period_bytes each, enough to cover buffer_bytes.
> > Then go submit them, re-use them round-robin (each time the
Hi,
> +The device MUST mark the last buffer with the
> +VIRTIO_VIDEO_BUFFER_F_EOS flag to denote completion of the drain
> +sequence.
No, that would build a race condition into the protocol. The device
could complete the last buffer after the driver has sent the drain
command but before the
> > > +/* supported PCM stream features */
> > > +enum {
> > > +VIRTIO_SND_PCM_F_HOST_MEM = 0,
> > > +VIRTIO_SND_PCM_F_GUEST_MEM,
> >
> > Is this useful as stream property? I would expect when supported by a
> > device it would work for all streams.
>
> Since we allowed different
On Tue, Dec 17, 2019 at 05:13:59PM +0100, Dmitry Sepp wrote:
> Hi,
>
> On Tuesday, 17 December 2019 15:09:16 CET Keiichi Watanabe wrote:
> > Hi,
> >
> > Thanks Tomasz and Gerd for the suggestions and information.
> >
> > On Tue, Dec 17, 2019 at 10
Hi,
> +\subsection{Feature bits}\label{sec:Device Types / Sound Device / Feature bits}
> +
> +None currently defined.
Flags for hostmem & guestmem here?
This could be an information the driver might want to know before
initializing the virtqueues.
> +\item START
> +\item PAUSE
> +\item
Hi,
> On the host side, the encode and decode APIs are different as well, so
> having separate implementation decoder and encoder, possibly just
> sharing some helper code, would make much more sense.
When going down that route I'd suggest using two device ids (even when
specifying both
Hi,
> > Of course only virtio drivers would try step (2), other drivers (when
> > sharing buffers between intel gvt device and virtio-gpu for example)
> > would go straight to (3).
>
> For virtio-gpu as it is today, it's not clear to me that they're
> equivalent. As I read it, the virtio-gpu
Hi,
> > Hmm, modern GPUs support both encoding and decoding ...
>
> Many SoC architectures have completely separate IP blocks for encoding
> and decoding. Similarly, in GPUs those are usually completely separate
> parts of the pipeline.
In the OS there is one driver per GPU handling both ...
On Thu, Dec 12, 2019 at 09:26:32PM +0900, David Stevens wrote:
> > > > Second I think it is a bad idea
> > > > from the security point of view. When explicitly exporting buffers it
> > > > is easy to restrict access to the actual exports.
> > >
> > > Restricting access to actual exports could
Hi,
> > First the addressing is non-trivial, especially with the "transport
> > specific device address" in the tuple.
>
> There is complexity here, but I think it would also be present in the
> buffer sharing device case. With a buffer sharing device, the same
> identifying information would
Hi,
> None of the proposals directly address the use case of sharing host
> allocated buffers between devices, but I think they can be extended to
> support it. Host buffers can be identified by the following tuple:
> (transport type enum, transport specific device address, shmid,
> offset). I
Hi,
> > They are never taken away from guest ram. The guest is free to do
> > whatever it wants with the pages. It's the job of the guest driver to
> > make sure the pages are not released until the host stopped using them.
> > So the host must drop all references before completing the
> >
> > > 1) Driver sends RESOURCE_CREATE_2D shared request
> > > 2) Driver sends ATTACH_BACKING request
> > > 3) Device creates a shared resource
> > > 4) Driver sends SET_SCANOUT request
> > > 5) Device sends shared resource to display
> > > 6) Driver sends DETACH_BACKING request
> >
> > Hmm, I
Hi,
> > For (1) you'll simply do a QUEUE_BUFFER. The command carries references
> > to the buffer pages. No resource management needed.
> >
> > For (2) you'll have RESOURCE_CREATE + RESOURCE_DESTROY + QUEUE_RESOURCE,
> > where RESOURCE_CREATE passes the scatter list of buffer pages to the
> >
Hi,
> > I don't think there is a use case which guestmem can handle but hostmem
> > cannot.
>
> There may be a special security requirement like "a guest must not have any
> access to a host memory" (especially in case of type 1 hv).
Hmm, ok. Point.
> Also, sharing memory implies sharing
Hi,
> > > > > +\subsection{Device ID}\label{sec:Device Types / Video Device /
> > > > > Device ID}
> > > > > +
> > > > > +TBD.
> > > >
> > > > I'm wondering how and when we can determine and reserve this ID?
> > >
> > > Grab the next free, update the spec accordingly, submit the one-line
> > >
Hi,
> If the following scenario happens:
>
> 1) Driver sends RESOURCE_CREATE_2D shared request
> 2) Driver sends ATTACH_BACKING request
> 3) Device creates a shared resource
> 4) Driver sends SET_SCANOUT request
> 5) Device sends shared resource to display
> 6) Driver sends DETACH_BACKING
Hi,
> > > For example, in case
> > > of GUEST_MEM the request could be followed by a buffer sg-list.
> >
> > I'm not convinced guest_mem is that useful. host_mem allows to give the
> > guest access to the buffers used by the hosts sound hardware, which is
> > probably what you need if the MSG
Hi,
> 1. Focus on only decoder/encoder functionalities first.
>
> As Tomasz said earlier in this thread, it'd be too complicated to support
> camera
> usage at the same time. So, I'd suggest to make it just a generic mem-to-mem
> video processing device protocol for now.
> If we finally
Hi,
> > > Do we already have a mechanism
> > > to import memory to virtio-gpu via importing a bo handle from somewhere
> > > else?
> >
> > Inside the guest? Import is not working yet, that needs another update
> > (blob resources) due to current virtio-gpu resources requiring a format
> > at
On Thu, Nov 28, 2019 at 12:42:43PM +0100, Gerd Hoffmann wrote:
> Add some notes about fetching the EDID information.
https://github.com/oasis-tcs/virtio-spec/issues/64
cheers,
Gerd
On Mon, Dec 02, 2019 at 09:24:31AM -0800, Frank Yang wrote:
> Interesting; what's the use case for this?
Mostly reduce data copying.
> Do we already have a mechanism
> to import memory to virtio-gpu via importing a bo handle from somewhere
> else?
Inside the guest? Import is not working yet,
On Mon, Dec 02, 2019 at 02:06:53PM +0100, Anton Yakovlev wrote:
> Hi,
>
> Sorry for the late reply, I was not available for a week.
>
>
> On 25.11.2019 10:31, Gerd Hoffmann wrote:
> >Hi,
> >
> > > > Instead of PCM_OUTPUT and PCM_INPUT I would
Add 3d commands to the command enumeration.
Add a section with a very short overview.
Signed-off-by: Gerd Hoffmann
---
virtio-gpu.tex | 35 +++
1 file changed, 35 insertions(+)
diff --git a/virtio-gpu.tex b/virtio-gpu.tex
index eced3095a494..631aaf24ea18 100644
On Thu, Nov 28, 2019 at 07:04:23PM +0100, Matti Moell wrote:
> Hi Gerd,
>
> I really like this!
>
> On 28.11.19 14:19, Gerd Hoffmann wrote:
> > @@ -186,6 +211,7 @@ \subsubsection{Device Operation: Request
> > header}\label{sec:Device Types / GPU De
> >
/qemu/log/?h=sirius/virtio-gpu-feature-shared
linux patches:
https://git.kraxel.org/cgit/linux/log/?h=drm-virtio-feature-shared
Signed-off-by: Gerd Hoffmann
---
virtio-gpu.tex | 27 +++
1 file changed, 27 insertions(+)
diff --git a/virtio-gpu.tex b/virtio-gpu.tex
index
Add some notes about fetching the EDID information.
Signed-off-by: Gerd Hoffmann
---
virtio-gpu.tex | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/virtio-gpu.tex b/virtio-gpu.tex
index af4ca610d235..15dbf9f2ec82 100644
--- a/virtio-gpu.tex
+++ b/virtio-gpu.tex