Hi,
> > ... into 5.6-rc3 fixes the corruption for me.
>
> I tried those 2 patches on top of kernel 5.6-rc3 and they indeed fix the
> problem.
>
> I applied them on top of 5.5.6 and they also fix the problem!
> Could we get those 2 patches applied to stable 5.5, please?
Series just re-posted.
virtio-gpu uses cached mappings; set
drm_gem_shmem_object.map_cached accordingly.
Cc: sta...@vger.kernel.org
Fixes: c66df701e783 ("drm/virtio: switch from ttm to gem shmem helpers")
Reported-by: Gurchetan Singh
Reported-by: Guillaume Gardet
Signed-off-by: Gerd Hoffmann
---
drive
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/udl/udl_gem.c | 62 ++-
1 file changed, 3 insertions(+), 59 deletions(-)
diff --git a/drivers/gpu/drm/udl/udl_gem.c b/drivers/gpu/drm/udl/udl_gem.c
index b6e26f98aa0a..7e3a88b25b6b 100644
--- a/drivers/gpu/drm/udl/udl_gem.c
Add a map_cached bool to drm_gem_shmem_object, to request cached mappings
on a per-object basis. Check the flag before adding writecombine to the
pgprot bits.
Cc: sta...@vger.kernel.org
Signed-off-by: Gerd Hoffmann
---
include/drm/drm_gem_shmem_helper.h | 5 +
drivers/gpu/drm
v5: rebase, add tags, add cc stable for 1+2, no code changes.
v4: back to v2-ish design, but simplified a bit.
v3: switch to drm_gem_object_funcs callback.
v2: make shmem helper caching configurable.
Gerd Hoffmann (3):
drm/shmem: add support for per object caching flags.
drm/virtio: fix mmap
> > No.
> >
> > First, what is wrong with vkms?
> The vkms driver is 1.2k+ lines in total.
Which is small for a drm driver.
> I don't think it works on its own
> to provide things like mmap on a prime fd? (I tried it months
> ago).
Seems vkms only supports prime import, not prime export.
Maybe
Hi,
> > Perhaps try the entire series?
> >
> > https://patchwork.kernel.org/cover/11300619/
Latest version is at:
https://git.kraxel.org/cgit/linux/log/?h=drm-virtio-no-wc
> Applied entire series on top of 5.5.6, but still the same problem.
Can you double-check? Cherry-picking the shmem
On Mon, Feb 24, 2020 at 03:01:54PM -0800, Lepton Wu wrote:
> Hi,
>
> I'd like to get comments on this before I polish it. This is a
> simple way to get something similar to vkms, but it heavily reuses
> the code provided by virtio-gpu. Please feel free to give me any
> feedback or comments.
No.
Hi,
> +struct dma_buf *virtgpu_gem_prime_export(struct drm_gem_object *obj,
> + int flags)
> +{
[ ... ]
> +}
> +
> +struct drm_gem_object *virtgpu_gem_prime_import(struct drm_device *dev,
> + struct dma_buf *buf)
>
On Wed, Feb 19, 2020 at 05:06:36PM +0900, David Stevens wrote:
> This change adds a new flavor of dma-bufs that can be used by virtio
> drivers to share exported objects. A virtio dma-buf can be queried by
> virtio drivers to obtain the UUID which identifies the underlying
> exported object.
That
On Thu, Feb 20, 2020 at 02:53:19PM -0800, John Bates wrote:
> The previous code was not thread safe and caused
> undefined behavior from spurious duplicate resource IDs.
> In this patch, an atomic_t is used instead. We no longer
> see any duplicate IDs in tests with this change.
>
> Fixes: 16065fc
On Fri, Feb 21, 2020 at 04:54:02PM -0800, Gurchetan Singh wrote:
> On Fri, Feb 21, 2020 at 3:06 PM Chia-I Wu wrote:
> >
> > On Wed, Feb 19, 2020 at 2:34 PM Gurchetan Singh
> > wrote:
> > >
> > > For old userspace, initialization will still be implicit.
> > >
> > > For backwards compatibility, enq
Hi,
> > The plan is for virtio-gpu device to reserve a huge memory region in
> > the guest. Memslots may be added dynamically or statically to back
> > the region.
>
> so the region is marked as E820_RESERVED to prevent guest kernel
> from using it for other purpose and then virtio-gpu device
Daniel Vetter
> Cc: Dave Airlie
> Cc: Gerd Hoffmann
> Cc: Daniel Vetter
> Cc: "Noralf Trønnes"
> Cc: Emil Velikov
> Cc: Thomas Zimmermann
> Cc: virtualizat...@lists.linux-foundation.org
Acked-by: Gerd Hoffmann
On Wed, Feb 19, 2020 at 11:21:00AM +0100, Daniel Vetter wrote:
> We can even delete the drm_driver.release hook now!
>
> Signed-off-by: Daniel Vetter
> Cc: Dave Airlie
> Cc: Gerd Hoffmann
> Cc: Daniel Vetter
> Cc: "Noralf Trønnes"
> Cc: Sam Ravnb
>
> Signed-off-by: Daniel Vetter
> Cc: Gerd Hoffmann
> Cc: virtualizat...@lists.linux-foundation.org
Acked-by: Gerd Hoffmann
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
On Wed, Feb 19, 2020 at 11:20:58AM +0100, Daniel Vetter wrote:
> Small mistake that crept into
>
> commit 81da8c3b8d3df6f05b11300b7d17ccd1f3017fab
> Author: Gerd Hoffmann
> Date: Tue Feb 11 14:52:18 2020 +0100
>
> drm/bochs: add drm_driver.releas
m_device and the drmm_ stuff.
Acked-by: Gerd Hoffmann
On Tue, Feb 18, 2020 at 09:48:15AM +0100, Thomas Zimmermann wrote:
> The qxl driver uses an empty implementation for its encoder. Replace
> the code with the generic simple encoder.
>
> v2:
> * rebase onto new simple-encoder interface
>
> Signed-off-by: Thomas Zimmer
Hi,
> 2 unfortunately I can't say the same for bochs but it works with this patch
> series so I think bochs is fine as well.
bochs needs the offset only to scan out framebuffers, which in turn
requires framebuffers to be pinned in vram. So all green here.
cheers,
Gerd
On Mon, Feb 17, 2020 at 04:04:25PM +0100, Nirmoy Das wrote:
> Calculate GPU offset within vram-helper without depending on
> bo->offset
>
> Signed-off-by: Nirmoy Das
Acked-by: Gerd Hoffmann
On Mon, Feb 17, 2020 at 04:04:26PM +0100, Nirmoy Das wrote:
> Switch over to GEM VRAM's implementation to retrieve bo->offset
>
> Signed-off-by: Nirmoy Das
Acked-by: Gerd Hoffmann
On Mon, Feb 17, 2020 at 11:18:40AM +0100, Nirmoy Das wrote:
> Calculate GPU offset within bochs driver itself without depending on
> bo->offset
>
> Signed-off-by: Nirmoy Das
> ---
> drivers/gpu/drm/bochs/bochs_kms.c | 3 ++-
> drivers/gpu/drm/drm_gem_vram_helper.c | 2 +-
> 2 files changed,
that I'll happily
Acked-by: Gerd Hoffmann
cheers,
Gerd
On Sat, Feb 15, 2020 at 07:09:11PM +0100, Emmanuel Vadot wrote:
> From: Emmanuel Vadot
>
> Contributors for this file are :
> Gerd Hoffmann
> Maxime Ripard
> Noralf Trønnes
>
> Signed-off-by: Emmanuel Vadot
> ---
> drivers/gpu/drm/drm_format_helper.c | 2 +-
qemu 5.0 introduces a new qxl hardware revision 5. Unlike revision 4
(and below) the device doesn't switch back into vga compatibility mode
when someone touches the vga ports. So we don't have to reserve the
vga ports any more to avoid that happening.
Signed-off-by: Gerd Hoffmann
--
Move virtio_gpu_notify() to higher-level functions for
virtio_gpu_cmd_create_resource(), virtio_gpu_cmd_resource_create_3d()
and virtio_gpu_cmd_resource_attach_backing().
virtio_gpu_object_create() will batch commands and notify only once when
creating a resource.
Signed-off-by: Gerd Hoffmann
Before waiting for virtqueue entries to become available, call
virtio_gpu_notify() to make sure the host has seen everything
we've submitted.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_vq.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gp
batching to separate patches.
v2:
- rebase to latest drm-misc-next.
- use "if (!atomic_read())".
- add review & test tags.
Signed-off-by: Gerd Hoffmann
Reviewed-by: Gurchetan Singh
Tested-by: Gurchetan Singh
Reviewed-by: Chia-I Wu
---
drivers/gpu/drm/virtio/virtgpu_drv.h | 6
Move all remaining virtio_gpu_notify() calls from virtio_gpu_cmd_*
to the callers, for consistency reasons.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_gem.c| 2 ++
drivers/gpu/drm/virtio/virtgpu_ioctl.c | 3 +++
drivers/gpu/drm/virtio/virtgpu_kms.c| 3 +++
drivers
Move virtio_gpu_notify() to higher-level functions for
virtio_gpu_cmd_get_display_info() and virtio_gpu_cmd_get_edids().
virtio_gpu_config_changed_work_func() and virtio_gpu_init() will
batch commands and notify only once per update.
Signed-off-by: Gerd Hoffmann
Reviewed-by: Chia-I Wu
Signed-off-by: Gerd Hoffmann
Reviewed-by: Chia-I Wu
---
drivers/gpu/drm/virtio/virtgpu_display.c | 2 ++
drivers/gpu/drm/virtio/virtgpu_ioctl.c | 1 +
drivers/gpu/drm/virtio/virtgpu_plane.c | 3 +++
drivers/gpu/drm/virtio/virtgpu_vq.c | 4
4 files changed, 6 insertions(+), 4 deletions
v4:
- add patches #2 + #6.
v3:
- split into multiple patches.
Gerd Hoffmann (6):
drm/virtio: rework notification for better batching
drm/virtio: notify before waiting
drm/virtio: batch plane updates (pageflip)
drm/virtio: batch resource creation
drm/virtio: batch display query
drm
> - if (bo->mem.mm_node)
> - bo->offset = (bo->mem.start << PAGE_SHIFT) +
> - bdev->man[bo->mem.mem_type].gpu_offset;
> - else
> - bo->offset = 0;
> -
>
>
> My assumption is
>
> (bo->tbo.offset - slot->gpu_offset + offset) == (bo->tbo.me
The >= compare op must happen in CPU byte order; doing it in
little endian fails on big-endian machines like s390.
Reported-by: Sebastian Mitterle
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_vq.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/driv
> @@ -311,10 +311,8 @@ qxl_bo_physical_address(struct qxl_device *qdev, struct
> qxl_bo *bo,
> (bo->tbo.mem.mem_type == TTM_PL_VRAM)
> ? &qdev->main_slot : &qdev->surfaces_slot;
>
> - WARN_ON_ONCE((bo->tbo.offset & slot->gpu_offset) != slot->gpu_offset);
> -
> -
Move virtio_gpu_notify() to higher-level functions for
virtio_gpu_cmd_create_resource(), virtio_gpu_cmd_resource_create_3d()
and virtio_gpu_cmd_resource_attach_backing().
virtio_gpu_object_create() will batch commands and notify only once when
creating a resource.
Signed-off-by: Gerd Hoffmann
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_display.c | 2 ++
drivers/gpu/drm/virtio/virtgpu_ioctl.c | 1 +
drivers/gpu/drm/virtio/virtgpu_plane.c | 3 +++
drivers/gpu/drm/virtio/virtgpu_vq.c | 4
4 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu
Move virtio_gpu_notify() to higher-level functions for
virtio_gpu_cmd_get_display_info() and virtio_gpu_cmd_get_edids().
virtio_gpu_config_changed_work_func() and virtio_gpu_init() will
batch commands and notify only once per update.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio
v4:
- split into multiple patches.
Gerd Hoffmann (4):
drm/virtio: rework notification for better batching
drm/virtio: batch plane updates (pageflip)
drm/virtio: batch resource creation
drm/virtio: batch display query.
drivers/gpu/drm/virtio/virtgpu_drv.h | 6 ++--
drivers/gpu/drm
batching to separate patches.
v2:
- rebase to latest drm-misc-next.
- use "if (!atomic_read())".
- add review & test tags.
Signed-off-by: Gerd Hoffmann
Reviewed-by: Gurchetan Singh
Tested-by: Gurchetan Singh
---
drivers/gpu/drm/virtio/virtgpu_drv.h | 6 ++--
drivers/
On Wed, Feb 12, 2020 at 01:33:44PM -0600, Gustavo A. R. Silva wrote:
> The current codebase makes use of the zero-length array language
> extension to the C90 standard, but the preferred mechanism to declare
> variable-length types such as these ones is a flexible array member[1][2],
> introduced i
for
userspace ioctls.
v2:
- rebase to latest drm-misc-next.
- use "if (!atomic_read())".
- add review & test tags.
Signed-off-by: Gerd Hoffmann
Reviewed-by: Gurchetan Singh
Tested-by: Gurchetan Singh
---
drivers/gpu/drm/virtio/virtgpu_drv.h | 6 ++---
drivers/
Hi,
> --- a/drivers/gpu/drm/virtio/virtgpu_kms.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
> @@ -270,7 +270,9 @@ int virtio_gpu_driver_open(struct drm_device *dev, struct
> drm_file *file)
> return id;
> }
>
> +
> vfpriv->ctx_id = id;
checkpatch warning here:
-:
On Tue, Feb 11, 2020 at 03:27:11PM +0100, Daniel Vetter wrote:
> On Tue, Feb 11, 2020 at 02:58:04PM +0100, Gerd Hoffmann wrote:
> > Split virtio_gpu_deinit(), move the drm shutdown and release to
> > virtio_gpu_release(). Drop vqs_ready variable, instead use
> > drm_dev_{
On Tue, Feb 11, 2020 at 03:19:56PM +0100, Daniel Vetter wrote:
> On Tue, Feb 11, 2020 at 02:52:18PM +0100, Gerd Hoffmann wrote:
> > Call bochs_unload via drm_driver.release to make sure we release stuff
> > when it is safe to do so. Use drm_dev_{enter,exit,unplug} to avoid
> &
Split virtio_gpu_deinit(), move the drm shutdown and release to
virtio_gpu_release(). Drop vqs_ready variable, instead use
drm_dev_{enter,exit,unplug} to avoid touching hardware after
device removal. Tidy up here and there.
v4: add changelog.
v3: use drm_dev_*().
Signed-off-by: Gerd Hoffmann
().
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/cirrus/cirrus.c | 43 -
1 file changed, 37 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/cirrus/cirrus.c b/drivers/gpu/drm/cirrus/cirrus.c
index a91fb0d7282c..d2ff63ce8eaf 100644
--- a/drivers/gpu/drm
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/bochs/bochs_drv.c | 6 +++---
drivers/gpu/drm/bochs/bochs_hw.c | 24 +++-
2 files changed, 26 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/bochs/bochs_drv.c
b/drivers/gpu/drm/bochs/bochs_drv.c
index 10460878414e
Gerd Hoffmann (2):
drm/virtio: fix virtio_gpu_execbuffer_ioctl locking
drm/virtio: fix virtio_gpu_cursor_plane_update().
drivers/gpu/drm/virtio/virtgpu_ioctl.c | 20 ++--
drivers/gpu/drm/virtio/virtgpu_plane.c | 1 +
2 files changed, 11 insertions(+), 10 deletions
Lockdep says we can't call vmemdup() while having objects reserved,
because it needs the mmap semaphore. So reorder the calls: reserve
the objects later.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_ioctl.c | 20 ++--
1 file changed, 10 insertions(+
Add missing virtio_gpu_array_lock_resv() call.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_plane.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c
b/drivers/gpu/drm/virtio/virtgpu_plane.c
index ac42c84d2d7f..d1c3f5fbfee4 100644
Hi,
> @@ -541,6 +539,7 @@ void virtio_gpu_cmd_unref_resource(struct
> virtio_gpu_device *vgdev,
> cmd_p->resource_id = cpu_to_le32(resource_id);
>
> virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
> + virtio_gpu_commit_ctrl(vgdev);
> }
Well, I was more thinking about adding that
for
userspace ioctls.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_drv.h | 6 ++---
drivers/gpu/drm/virtio/virtgpu_display.c | 2 ++
drivers/gpu/drm/virtio/virtgpu_ioctl.c | 4 +++
drivers/gpu/drm/virtio/virtgpu_kms.c | 3 +++
drivers/gpu/drm/virtio
Split virtio_gpu_deinit(), move the drm shutdown and release to
virtio_gpu_release(). Drop vqs_ready variable, instead use
drm_dev_{enter,exit,unplug} to avoid touching hardware after
device removal. Tidy up here and there.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_drv.h
Move final cleanups from cirrus_pci_remove() to the new callback.
Add drm_atomic_helper_shutdown() call to cirrus_pci_remove().
Use drm_dev_{enter,exit,unplug} to avoid touching hardware after
device removal.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/cirrus/cirrus.c | 43
Call bochs_unload via drm_driver.release to make sure we release stuff
when it is safe to do so. Use drm_dev_{enter,exit,unplug} to avoid
touching hardware after device removal. Tidy up here and there.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/bochs/bochs_drv.c | 6 +++---
drivers/gpu
Gerd Hoffmann (2):
drm/qxl: reorder calls in qxl_device_fini().
drm/qxl: add drm_driver.release callback.
drivers/gpu/drm/qxl/qxl_drv.c | 26 +++---
drivers/gpu/drm/qxl/qxl_kms.c | 8
2 files changed, 23 insertions(+), 11 deletions(-)
--
2.18.1
Reorder calls in qxl_device_fini(). Cleaning up gem & ttm
might trigger qxl commands, so we should do that before
releasing the command rings.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_kms.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu
Move final cleanups to qxl_drm_release() callback.
Add drm_atomic_helper_shutdown() call to qxl_pci_remove().
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_drv.c | 26 +++---
1 file changed, 19 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/qxl
Split virtio_gpu_deinit(), move the drm shutdown and release to
virtio_gpu_release(). Also free vbufs in case we can't queue them.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_drv.h | 1 +
drivers/gpu/drm/virtio/virtgpu_display.c | 1 -
drivers/gpu/drm/v
Move final cleanups from cirrus_pci_remove() to the new callback.
Add drm_atomic_helper_shutdown() call to cirrus_pci_remove().
Set pointers to NULL after iounmap() and check them before using
them to make sure we don't touch released hardware.
Signed-off-by: Gerd Hoffmann
---
drivers/gp
touched after bochs_pci_remove returns.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/bochs/bochs.h | 1 +
drivers/gpu/drm/bochs/bochs_drv.c | 6 +++---
drivers/gpu/drm/bochs/bochs_hw.c | 14 ++
3 files changed, 18 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm
On Fri, Feb 07, 2020 at 03:16:01PM +0100, Thomas Zimmermann wrote:
> Atomic modesetting doesn't use struct drm_connector_funcs.dpms and
> the set function, drm_helper_connector_dpms(), wouldn't support it
> anyway. So keep the pointer to NULL.
>
> Signed-off-by: Thomas Zi
Check whether mode_config was actually properly
initialized before trying to clean it up.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/bochs/bochs_kms.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm/bochs/bochs_kms.c
b/drivers/gpu/drm/bochs/bochs_kms.c
index
On Fri, Feb 07, 2020 at 01:06:00PM +0100, Thomas Zimmermann wrote:
> Hi
>
> Am 07.02.20 um 12:57 schrieb Gerd Hoffmann:
> > Shutdown of firmware framebuffer has a bunch of problems. Because
> > of this the framebuffer region might still b
> > How about using
> >
> > #define drm_simple_encoder_init(dev, type, name, ...) \
> > drm_encoder_init(dev, drm_simple_encoder_funcs_cleanup, type, name,
> > __VA_ARGS__)
> >
> > instead ?
> I guess you want to save a few lines in the implementation of
> drm_simple_encoder_init() (?)
Move final cleanups from cirrus_pci_remove() to the new callback.
Add drm_atomic_helper_shutdown() call to cirrus_pci_remove().
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/cirrus/cirrus.c | 17 -
1 file changed, 12 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm
Split virtio_gpu_deinit(), move the drm shutdown and release to
virtio_gpu_release(). Also free vbufs in case we can't queue them.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_drv.h | 1 +
drivers/gpu/drm/virtio/virtgpu_drv.c | 4
drivers/gpu/drm/virtio/virtgpu_
Move final cleanups to qxl_drm_release() callback.
Add drm_atomic_helper_shutdown() call to qxl_pci_remove().
Reorder calls in qxl_device_fini(). Cleaning up gem & ttm
might trigger qxl commands, so we should do that before
releasing the command rings.
Signed-off-by: Gerd Hoffmann
---
dri
From: Gurchetan Singh
Move bochs_unload call from bochs_remove() to the new bochs_release()
callback. Also call drm_dev_unregister() first in bochs_remove().
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/bochs/bochs_drv.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions
round this issue.
Reported-by: Marek Marczykowski-Górecki
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/bochs/bochs_hw.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/bochs/bochs_hw.c b/drivers/gpu/drm/bochs/bochs_hw.c
index b615b7dfdd9d..a387efa9e559 1
> +static const struct drm_encoder_funcs drm_simple_encoder_funcs_cleanup = {
> + .destroy = drm_encoder_cleanup,
> +};
> +
> +/**
> + * drm_simple_encoder_init - Init a preallocated encoder
> + * @dev: drm device
> + * @funcs: callbacks for this encoder
> + * @encoder_type: user visible type o
On Thu, Feb 06, 2020 at 11:22:14AM -0800, Chia-I Wu wrote:
> The global disable_notify state does not scale well when we start
> using it in more places and when there are multiple threads. Use
> command-level bools to control whether to notify or not.
Hmm, I don't like passing around the bool ev
Just call virtio_gpu_alloc_cmd_resp with some fixed args
instead of duplicating most of the function body.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_vq.c | 26 +-
1 file changed, 9 insertions(+), 17 deletions(-)
diff --git a/drivers/gpu/drm/virtio
Introduce new virtio_gpu_object_shmem_init() helper function which will
create the virtio_gpu_mem_entry array, containing the backing storage
information for the host. For the most part this just moves code from
virtio_gpu_object_attach().
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm
Stop sending DETACH_BACKING commands; that will happen anyway when
releasing resources via UNREF. Handle guest-side cleanup in
virtio_gpu_cleanup_object(), called when the host finished processing
the UNREF command.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_drv.h
Signed-off-by: Gerd Hoffmann
Gerd Hoffmann (4):
drm/virtio: simplify virtio_gpu_alloc_cmd
drm/virtio: resource teardown tweaks
drm/virtio: move mapping teardown to virtio_gpu_cleanup_object()
drm/virtio: move virtio_gpu_mem_entry initialization to new function
drivers/gpu/drm/virtio
Add new virtio_gpu_cleanup_object() helper function for object cleanup.
Wire up callback function for resource unref, do cleanup from callback
when we know the host stopped using the resource.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_drv.h| 4 +++-
drivers/gpu/drm
If the virtio device supports indirect ring descriptors we need only one
ring entry for the whole command. Take that into account when checking
whether the virtqueue has enough free entries for our command.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_drv.h | 1
Hi,
> > + indirect = virtio_has_feature(vgdev->vdev,
> > VIRTIO_RING_F_INDIRECT_DESC);
> > + vqcnt = indirect ? 1 : elemcnt;
> Is the feature dynamic and require the lock held? If not, the result
> can be cached and the fixup can happen before grabbing the lock
Not dynamic, so yes
On Wed, Feb 05, 2020 at 10:19:44AM -0800, Chia-I Wu wrote:
> This series consists of fixes and cleanups for
> virtio_gpu_queue_fenced_ctrl_buffer, except for the last patch. The fixes are
> for corner cases that were overlooked. The cleanups make the last patch
> easier, but they should be good i
If the virtio device supports indirect ring descriptors we need only one
ring entry for the whole command. Take that into account when checking
whether the virtqueue has enough free entries for our command.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_vq.c | 9 ++---
1
On Wed, Feb 05, 2020 at 10:19:53AM -0800, Chia-I Wu wrote:
> Make sure elemcnt does not exceed the maximum element count in
> virtio_gpu_queue_ctrl_sgs. We should improve our error handling or
> impose a size limit on execbuffer, which are TODOs.
Hmm, virtio supports indirect ring entries, so lar
Hi,
> > virtio_gpu_cmd_resource_attach_backing(vgdev, obj->hw_res_handle,
> > - ents, nents,
> > + obj->ents, obj->nents,
> >fence);
> > + obj->
> > -
> > - drm_gem_shmem_free_object(obj);
> > + if (bo->created) {
> > + virtio_gpu_cmd_unref_resource(vgdev, bo);
> > + /* completion handler calls virtio_gpu_cleanup_object() */
> nitpick: we don't need this comment when virtio_gpu_cmd_unref_cb is
> defin
Introduce new virtio_gpu_object_shmem_init() helper function which will
create the virtio_gpu_mem_entry array, containing the backing storage
information for the host. For the most part this just moves code from
virtio_gpu_object_attach().
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm
Just call virtio_gpu_alloc_cmd_resp with some fixed args
instead of duplicating most of the function body.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_vq.c | 25 -
1 file changed, 8 insertions(+), 17 deletions(-)
diff --git a/drivers/gpu/drm/virtio
Add new virtio_gpu_cleanup_object() helper function for object cleanup.
Wire up callback function for resource unref, do cleanup from callback
when we know the host stopped using the resource.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_drv.h| 3 ++-
drivers/gpu/drm
Stop sending DETACH_BACKING commands; that will happen anyway when
releasing resources via UNREF. Handle guest-side cleanup in
virtio_gpu_cleanup_object(), called when the host finished processing
the UNREF command.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_drv.h
Signed-off-by: Gerd Hoffmann
Gerd Hoffmann (4):
drm/virtio: simplify virtio_gpu_alloc_cmd
drm/virtio: resource teardown tweaks
drm/virtio: move mapping teardown to virtio_gpu_cleanup_object()
drm/virtio: move virtio_gpu_mem_entry initialization to new function
drivers/gpu/drm/virtio
Avoid flooding the log in case we screw up badly.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_vq.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c
b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 5914e79d3429
virtio has its own commit_tail function. Add the
drm_atomic_helper_fake_vblank() call there.
Fixes: 2a735ad3d211 ("drm/virtio: Remove sending of vblank event")
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_display.c | 1 +
1 file changed, 1 insertion(+)
diff --git
.
>
> v2:
> * remove bochs_connector_mode_valid(), which now serves no purpose
>
> Signed-off-by: Thomas Zimmermann
Reviewed-by: Gerd Hoffmann
> ---
> drivers/gpu/drm/bochs/bochs_kms.c | 21 +
> 1 file changed, 1 insertion(+), 20 deletions(-)
>
On Mon, Feb 03, 2020 at 09:35:54AM +0100, Thomas Zimmermann wrote:
> Hi Gerd
>
> Am 03.02.20 um 07:47 schrieb Gerd Hoffmann:
> > On Sat, Feb 01, 2020 at 01:27:42PM +0100, Thomas Zimmermann wrote:
> >> The implementation of struct drm_mode_config_funcs.mode_valid verifie
On Sat, Feb 01, 2020 at 01:27:42PM +0100, Thomas Zimmermann wrote:
> The implementation of struct drm_mode_config_funcs.mode_valid verifies
> that enough video memory is available for a given display mode.
There is bochs_connector_mode_valid() doing the same check,
you can drop it when hooking up
do not support interrupts. Xen comes with its
> own VBLANK logic and disables no_vblank explictly.
Acked-by: Gerd Hoffmann
Hi,
> > open. Which can result in drm driver not being able to grab resources
> > (and fail initialization) because the firmware framebuffer still holds
> > them. Reportedly plymouth can trigger this.
>
> Could you please describe issue some more?
>
> I guess that a problem is happening duri
. Reportedly plymouth can trigger this.
Fix this by trying to wait until all references are gone. Don't wait
forever though given that userspace might keep the file handle open.
Reported-by: Marek Marczykowski-Górecki
Signed-off-by: Gerd Hoffmann
---
drivers/video/fbdev/core/fbmem.c | 7 +++
1
When submitting a fenced command we must lock the object reservations
because virtio_gpu_queue_fenced_ctrl_buffer() unlocks after adding the
fence.
Reported-by: Jann Horn
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_plane.c | 1 +
1 file changed, 1 insertion(+)
diff --git a