qemu 5.0 introduces a new qxl hardware revision 5. Unlike revision 4
(and below), the device doesn't switch back into vga compatibility mode
when someone touches the vga ports, so we no longer have to reserve the
vga ports to prevent that from happening.
Signed-off-by: Gerd Hoffmann
---
Fixes: 3954ff10e06e ("drm/virtio: skip set_scanout if framebuffer didn't
change")
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_drv.h | 1 +
drivers/gpu/drm/virtio/virtgpu_display.c | 1 +
drivers/gpu/drm/virtio/virtgpu_plane.c | 4 +++-
3 files changed, 5 insertions(+), 1 deletion(-)
On Tue, Aug 04, 2020 at 12:56:05PM +1000, Dave Airlie wrote:
> From: Dave Airlie
>
> Signed-off-by: Dave Airlie
Reviewed-by: Gerd Hoffmann
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
On Tue, Aug 04, 2020 at 12:56:13PM +1000, Dave Airlie wrote:
> From: Dave Airlie
>
> Signed-off-by: Dave Airlie
Reviewed-by: Gerd Hoffmann
On Tue, Aug 04, 2020 at 12:55:51PM +1000, Dave Airlie wrote:
> From: Dave Airlie
>
> Signed-off-by: Dave Airlie
Reviewed-by: Gerd Hoffmann
On Tue, Aug 04, 2020 at 12:55:50PM +1000, Dave Airlie wrote:
> From: Dave Airlie
>
> Signed-off-by: Dave Airlie
Reviewed-by: Gerd Hoffmann
On Tue, Aug 04, 2020 at 12:55:45PM +1000, Dave Airlie wrote:
> From: Dave Airlie
>
> This code was assuming there was a drm_mm here; don't do
> that, call the correct API instead.
>
> v2: use the new exported interface.
Reviewed-by: Gerd Hoffmann
ger to the file ]
Reviewed-by: Gerd Hoffmann
On Tue, Aug 04, 2020 at 12:55:37PM +1000, Dave Airlie wrote:
> From: Dave Airlie
>
> Signed-off-by: Dave Airlie
Reviewed-by: Gerd Hoffmann
On Thu, Jul 09, 2020 at 02:33:39PM +0200, Daniel Vetter wrote:
> Exactly matches the one in the helpers.
>
> This avoids me having to roll out dma-fence critical section
> annotations to this copy.
>
> Signed-off-by: Daniel Vetter
> Cc: David Airlie
> Cc: Gerd Hof
> Thanks Gerd - I've just tested the diff below with memcpy_toio() and that
> works too:
>
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 5609e164805f..4d05b0ab1592 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
>
> Yes, that's correct - I can confirm that the simplified diff below works:
>
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 5609e164805f..83af05fac604 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -399,7
Hi,
> > > > + msg->vram.user_ctx = msg->vram.vram_gpa = vram_pp;
> > > > + msg->vram.is_vram_gpa_specified = 1;
> > > > + synthvid_send(hdev, msg);
> > >
> > > That suggests it is possible to define multiple framebuffers in vram,
> > > then pageflip by setting vram.vram_gpa.
Hi,
> +/* Should be done only once during init and resume */
> +static int synthvid_update_vram_location(struct hv_device *hdev,
> + phys_addr_t vram_pp)
> +{
> + struct hyperv_device *hv = hv_get_drvdata(hdev);
> + struct synthvid_msg *msg =
On Fri, Jun 12, 2020 at 11:54:54AM -0700, Gurchetan Singh wrote:
> Plus, I just realized the virtio dma ops and ops used by drm shmem are
> different, so virtio would have to unconditionally have to skip the
> shmem path. Perhaps the best policy is to revert d323bb44e4d2?
Reverting d323bb44e4d2
Fixes: 3954ff10e06e ("drm/virtio: skip set_scanout if framebuffer didn't
change")
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_drv.h | 1 +
drivers/gpu/drm/virtio/virtgpu_display.c | 1 +
drivers/gpu/drm/virtio/virtgpu_plane.c | 4 +++-
3 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/virtio/virtgp
On Fri, Jun 12, 2020 at 11:47:55AM +0200, Thomas Zimmermann wrote:
> Hi
>
> Am 12.06.20 um 03:36 schrieb Gurchetan Singh:
> > This is useful for the next patch. Also, should we only unmap the
> > amount entries that we mapped with the dma-api?
>
> It looks like you're moving virtio code into
On Tue, May 26, 2020 at 07:58:08PM +0900, David Stevens wrote:
> This patchset implements the current proposal for virtio cross-device
> resource sharing [1]. It will be used to import virtio resources into
> the virtio-video driver currently under discussion [2]. The patch
> under consideration
On Thu, May 28, 2020 at 03:57:05PM +0800, Dongyang Zhan wrote:
> Hi,
> My name is Dongyang Zhan, I am a security researcher.
> Currently, I found two possible memory bugs in
> drivers/gpu/drm/virtio/virtgpu_vq.c (Linux 5.6).
> I hope you can help me to confirm them. Thank you.
Sorry. Not
On Mon, May 18, 2020 at 10:50:15AM +0200, Thomas Zimmermann wrote:
> Hi Gerd
>
> Am 18.05.20 um 10:23 schrieb Gerd Hoffmann:
> >>> $ git grep drm_gem_shmem_mmap
> >>>
> >>> We also need correct access from userspace, otherwise the gpu is going to
>
> > $ git grep drm_gem_shmem_mmap
> >
> > We also need correct access from userspace, otherwise the gpu is going to
> > be sad.
>
> I've been thinking about this, and I think it means that we can never
> have cached mappings anywhere. Even if shmem supports it internally for
> most drivers, as
>
> Just drop the suffix. It makes the API cleaner.
Acked-by: Gerd Hoffmann
ection direction)
> {
> - dma_unmap_sg(dev, sg->sgl, sg->nents, direction);
> + dma_unmap_sgtable(dev, sg, direction, 0);
> sg_free_table(sg);
> kfree(sg);
> }
Easy, straightforward conversion.
Acked-by: Gerd Hoffmann
take care,
Gerd
ssible. This, almost always, hides references to the
> nents and orig_nents entries, making the code robust, easier to follow
> and copy/paste safe.
Looks all sane.
Acked-by: Gerd Hoffmann
take care,
Gerd
Hi,
> - for the runtime upcasting the usual approach is to check the ->ops
> pointer. Which means that would need to be the same for all virtio
> dma_bufs, which might get a bit awkward. But I'd really prefer we not
> add allocator specific stuff like this to dma-buf.
This is exactly the
On Wed, Apr 29, 2020 at 07:51:07PM +0200, Sam Ravnborg wrote:
> On Wed, Apr 29, 2020 at 04:32:22PM +0200, Thomas Zimmermann wrote:
> > The HW cursor of Matrox G200 cards only supports a 16-color palette
> > format. Univeral planes require at least ARGB or a similar component-
> > based format.
> It can lead to crashes in qxl driver or trigger memory corruption
> in some kmalloc-192 slab object
>
> Gerd Hoffmann proposes to swap the qxl_release_fence_buffer_objects() +
> qxl_push_{cursor,command}_ring_release() calls to close that race window.
>
> cc: sta...@vger
Hi,
> > The only way I see for this to happen is that the guest is preempted
> > between qxl_push_{cursor,command}_ring_release() and
> > qxl_release_fence_buffer_objects() calls. The host can complete the qxl
> > command then, signal the guest, and the IRQ handler calls
> >
Hi,
> It's not that easy. Current cursors in ast are statically allocated. As
> soon as you add dynamic cursors into the mix, you'd get OOMs.
Well, with the split you can simply allocate dynamic cursors top-down
to keep them out of the way. It's not 100% perfect; the area
where the
On Mon, Apr 27, 2020 at 10:55:27AM +0300, Vasily Averin wrote:
> Signed-off-by: Vasily Averin
> ---
> drivers/gpu/drm/qxl/qxl_image.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/drivers/gpu/drm/qxl/qxl_image.c b/drivers/gpu/drm/qxl/qxl_image.c
> index 43688ecdd8a0..7270da62fc29
On Mon, Apr 27, 2020 at 08:32:51AM +0300, Vasily Averin wrote:
> Cc: sta...@vger.kernel.org
> Fixes: 8002db6336dd ("qxl: convert qxl driver to proper use for reservations")
> Signed-off-by: Vasily Averin
Both patches pushed to drm-misc-fixes.
thanks,
Gerd
On Wed, Apr 08, 2020 at 04:29:38PM -0700, Gurchetan Singh wrote:
> This can happen if userspace doesn't issue any 3D ioctls before
> closing the DRM fd.
>
> Fixes: 72b48ae800da ("drm/virtio: enqueue virtio_gpu_create_context
> after the first 3D ioctl")
> Signed-off-by: Gurchetan Singh
Pushed
Hi,
> At some point one has to choose to switch to top-down, and then back
> again at one of the next BOs. So the current patch effectively splits
> vram into a lower half and an upper half and puts BOs in alternating halves.
Hmm, so maybe just make the split explicit instead of playing tricks
> > I don't think it is that simple.
> >
> > First: How will that interact with cursor bo allocations? IIRC the
> > strategy for them is to allocate top-down, for similar reasons (avoid
> > small cursor bo allocs fragment vram memory).
>
> In ast, 2 cursor BOs are allocated during driver
On Wed, Apr 22, 2020 at 04:40:55PM +0200, Thomas Zimmermann wrote:
> With limited VRAM available, fragmentation can lead to OOM errors.
> Alternating between bottom-up and top-down placement keeps BOs near the
> ends of the VRAM and the available pages consecutively near the middle.
>
> A
Hi,
> I am a newbie, and Andy gave me some directions to submit the patch, e.g.: check
> ioremap leak. At this time, I found that the bochs driver may have similar
> problems, so I submitted this patch. Then Andy said the best is to switch
> this driver to use pcim_*() functions and drop tons of
On Sat, Apr 18, 2020 at 02:39:17PM +0800, Caicai wrote:
> When a qxl resource is released, the list that needs to be released is
> fetched from the linked list ring and cleared. When you empty the list,
> instead of trying to determine whether the ttm buffer object for each
> qxl in the list is
On Wed, Apr 15, 2020 at 09:40:34AM +0200, Daniel Vetter wrote:
> This is leftovers from the old drm_driver->load callback
> upside-down issues. It doesn't do anything for not-hotplugged
> connectors since drm_dev_register takes care of that.
>
> Signed-off-by: Daniel Vetter
&
On Wed, Apr 15, 2020 at 09:40:12AM +0200, Daniel Vetter wrote:
> Because it is.
Indeed.
Acked-by: Gerd Hoffmann
take care,
Gerd
> should not report an error.
This has always been my expectation, as you might have concluded from
the absence of any qemu guest driver patches in this series ;)
The patches look sane too, so for the series:
Acked-by: Gerd Hoffmann
cheers,
Gerd
Hi,
> > > +drivers/gpu/drm/virtio/virtgpu_object.c maintainers
> > > Now we have both mainline and linux-next boot broken (linux-next is
> > > broken for the past 40 days).
> > > No testing of new code happens.
> > >
> > > > virtio_gpu_object_shmem_init
> > > >
On Mon, Apr 06, 2020 at 09:07:44AM +0200, Dmitry Vyukov wrote:
> On Mon, Apr 6, 2020 at 8:46 AM syzbot
> wrote:
> >
> > Hello,
> >
> > syzbot found the following crash on:
> >
> > HEAD commit:ffc1c20c Merge tag 'for-5.7/dm-changes' of git://git.kerne..
> > git tree: upstream
> > console
On Fri, Apr 03, 2020 at 03:58:15PM +0200, Daniel Vetter wrote:
> Upcasting using a container_of macro is more typesafe, faster and
> easier for the compiler to optimize.
>
> Signed-off-by: Daniel Vetter
> Cc: Dave Airlie
> Cc: Gerd Hoffmann
> Cc: virtualizat...@lists.linu
On Fri, Apr 03, 2020 at 03:58:14PM +0200, Daniel Vetter wrote:
> Also need to remove the drm_dev_put from the remove hook.
>
> Signed-off-by: Daniel Vetter
> Cc: Dave Airlie
> Cc: Gerd Hoffmann
> Cc: virtualizat...@lists.linux-foundation.org
> Cc: spice-de...@lists.f
On Wed, Apr 01, 2020 at 03:30:39PM -0700, Gurchetan Singh wrote:
> We want to avoid this path for upcoming blob resources.
> - } else {
> + } else if (params->dumb) {
That should be posted as part of the actual blob resource patch series,
it doesn't make sense at all standalone.
The
On Tue, Mar 31, 2020 at 10:12:38AM +0200, Thomas Zimmermann wrote:
> Most of the documentation was in an otherwise empty file, which was
> probably just left from a previous clean-up effort. So move code and
> documentation into a single file.
>
> Signed-off-by: Thomas Zimmermann
Hi,
> + * TODO: Synchronize with other users of the buffer. Buffers
> + * cannot be pinned to VRAM while they are in use by other
> + * drivers for DMA. We should probably wait for each GEM
> + * object's fence before attempting to pin the buffer.
The
On Wed, Mar 11, 2020 at 08:20:00PM +0900, David Stevens wrote:
> This patchset implements the current proposal for virtio cross-device
> resource sharing [1], with minor changes based on recent comments. It
> is expected that this will be used to import virtio resources into the
> virtio-video
On Fri, Mar 20, 2020 at 03:21:14AM +0100, Emmanuel Vadot wrote:
> Source file was dual licenced but the header was omitted, fix that.
> Contributors for this file are:
> Noralf Trønnes
> Gerd Hoffmann
> Thomas Gleixner
Acked-by: Gerd Hoffmann
> Signed-off-by: Emmanuel Vadot
which has 2 more members.
>
> So fix that by using correct type in virtio_gpu_create_object.
>
> Signed-off-by: Jiri Slaby
> Fixes: f651c8b05542 ("drm/virtio: factor out the sg_table from
> virtio_gpu_object")
> Cc: Gurchetan Singh
> Cc: Gerd Hoffmann
Tha
On Thu, Mar 19, 2020 at 10:32:25AM +0100, Jiri Slaby wrote:
> On 05. 03. 20, 2:32, Gurchetan Singh wrote:
> > A resource will be a shmem based resource or a (planned)
> > vram based resource, so it makes sense to factor out common fields
> > (resource handle, dumb).
> >
> > v2: move mapped field
On Tue, Mar 17, 2020 at 05:49:41PM +0100, Daniel Vetter wrote:
> On Fri, Mar 13, 2020 at 09:41:52AM +0100, Gerd Hoffmann wrote:
> > Shutdown of firmware framebuffer has a bunch of problems. Because
> > of this the framebuffer region might still be rese
Hi,
> > > I am still catching up, but IIUC, indeed I don't think the host needs
> > > to depend on fence_id. We should be able to repurpose fence_id.
> >
> > I'd rather ignore it altogether for FENCE_V2 (or whatever we call the
> > feature flag).
>
> That's fine when one assumes each
Hi,
> >> At virtio level it is pretty simple: The host completes the SUBMIT_3D
> >> virtio command when it finished rendering, period.
> >>
> >>
> >> On the guest side we don't need the fence_id. The completion callback
> >> gets passed the virtio_gpu_vbuffer, so it can figure which command
Hi,
> > + if (pci_request_region(pdev, 0, "bochs-drm") != 0)
> > + DRM_WARN("Cannot request framebuffer, boot fb still active?\n");
> So you could use drm_WARN(), which is what is preferred these days.
Nope, this isn't yet in -fixes.
cheers,
Gerd
this issue.
Reported-by: Marek Marczykowski-Górecki
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/bochs/bochs_hw.c | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/bochs/bochs_hw.c b/drivers/gpu/drm/bochs/bochs_hw.c
index 952199cc0462..dce4672e3fc8
On Wed, Mar 11, 2020 at 04:36:16PM -0700, Gurchetan Singh wrote:
> On Wed, Mar 11, 2020 at 3:36 AM Gerd Hoffmann wrote:
>
> > Hi,
> >
> > > I should've been more clear -- this is an internal cleanup/preparation
> > and
> > > the per-context changes a
Hi,
> I will start with... how many timelines do we want to expose per
> context? In my mind, it goes like
>
> V1: 1 timeline per virtqueue (there is one timeline for ctrlq right now)
> V2: 1 timeline per context (VK and GL on different timelines)
> V3: N timelines per context (each VkQueue
Hi,
> Can virtqueues be added dynamically?
No.
> Or can we have
> fixed but enough (e.g., 64) virtqueues?
Well, I wouldn't bet that any specific number is enough. When gtk
starts rendering using vulkan by default, the number of contexts on a
standard gnome desktop will be pretty high, I guess
Hi,
> I should've been more clear -- this is an internal cleanup/preparation and
> the per-context changes are invisible to host userspace.
Ok, it wasn't clear that you don't flip the switch yet. In general the
commit messages could be a bit more verbose ...
I'm wondering though why we need
On Mon, Mar 09, 2020 at 06:08:10PM -0700, Gurchetan Singh wrote:
> We don't want fences from different 3D contexts/processes (GL, VK) to
> be on the same timeline. Sending this out as a RFC to solicit feedback
> on the general approach.
NACK.
virtio fences are global, period. You can't simply
On Wed, Mar 04, 2020 at 05:32:11PM -0800, Gurchetan Singh wrote:
> A resource will be a shmem based resource or a (planned)
> vram based resource, so it makes sense to factor out common fields
> (resource handle, dumb).
>
> v2: move mapped field to shmem object
Pushed to drm-misc-next.
thanks,
On Thu, Mar 05, 2020 at 02:29:08PM +0100, Nirmoy Das wrote:
> Calculate GEM VRAM bo's offset within vram-helper without depending on
> bo->offset.
>
> Signed-off-by: Nirmoy Das
> Reviewed-by: Daniel Vetter
> ---
> drivers/gpu/drm/drm_gem_vram_helper.c | 9 -
> 1 file changed, 8
Hi,
> + drm_gem_shmem_free_object(&bo->base.base);
> }
> +
> virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
use-after-free here.
cheers,
Gerd
On Tue, Mar 03, 2020 at 11:42:22AM +0900, David Stevens wrote:
> > cmd_p->hdr.ctx_id =
> >
> > Before this completion of this hypercall, this resource can be
> > considered context local, while afterward it can be considered
> > "exported".
>
> Maybe I'm misunderstanding render contexts, but
Hi,
> + if (vgdev->has_resource_assign_uuid) {
> + spin_lock(&vgdev->resource_export_lock);
> + if (bo->uuid_state == UUID_NOT_INITIALIZED) {
> + bo->uuid_state = UUID_INITIALIZING;
> + needs_init = true;
> + }
> +
On Mon, Mar 02, 2020 at 09:15:21PM +0900, David Stevens wrote:
> This change adds a new dma-buf operation that allows dma-bufs to be used
> by virtio drivers to share exported objects. The new operation allows
> the importing driver to query the exporting driver for the UUID which
> identifies the
> This function won't be useable for hostmem objects.
> @@ -526,7 +526,8 @@ static void virtio_gpu_cmd_unref_cb(struct
> virtio_gpu_device *vgdev,
> bo = vbuf->resp_cb_data;
> vbuf->resp_cb_data = NULL;
>
> - virtio_gpu_cleanup_object(bo);
> + if (bo &&
Hi,
> struct virtio_gpu_object {
> struct drm_gem_shmem_object base;
> uint32_t hw_res_handle;
> -
> - struct sg_table *pages;
> uint32_t mapped;
> -
> bool dumb;
> bool created;
> };
> #define gem_to_virtio_gpu_obj(gobj) \
> container_of((gobj),
On Fri, Feb 28, 2020 at 01:01:49PM -0800, Chia-I Wu wrote:
> On Wed, Oct 2, 2019 at 5:18 PM Gurchetan Singh
> wrote:
> >
> > On Wed, Oct 2, 2019 at 1:49 AM Gerd Hoffmann wrote:
> > >
> > > On Tue, Oct 01, 2019 at 06:49:35PM -0700, Gurchetan Singh wrot
On Mon, Mar 02, 2020 at 11:26:10PM +0100, Daniel Vetter wrote:
> With the drm_device lifetime fun cleaned up there's nothing in the way
> anymore to use devm_ for everything hw releated. Do it, and in the
> process, throw out the entire onion unwinding.
Acked-by: Gerd
drmm_kzalloc.
>
> This is made possible by a preceeding patch which added a drmm_
> cleanup action to drm_mode_config_init(), hence all we need to do to
> ensure that drm_mode_config_cleanup() is run on final drm_device
> cleanup is check the new error code for _init(
_config_cleanup() is run on final drm_device
> cleanup is check the new error code for _init().
Acked-by: Gerd Hoffmann
On Mon, Mar 02, 2020 at 11:26:07PM +0100, Daniel Vetter wrote:
> Small mistake that crept into
>
> commit 81da8c3b8d3df6f05b11300b7d17ccd1f3017fab
> Author: Gerd Hoffmann
> Date: Tue Feb 11 14:52:18 2020 +0100
>
> drm/bochs: add drm_driver.releas
goto err_pci_release;
> + }
> dev->dev_private = cirrus;
> + drmm_add_final_kfree(dev, cirrus);
That doesn't look like an error path improvement.
With patch #30 applied it'll look a lot better though.
So maybe squash the patches?
In any c
On Mon, Mar 02, 2020 at 11:25:47PM +0100, Daniel Vetter wrote:
> With this we can drop the final kfree from the release function.
Acked-by: Gerd Hoffmann
>
> Signed-off-by: Daniel Vetter
> Cc: Dave Airlie
> Cc: Gerd Hoffmann
> Cc: virtualizat...@lists.linux-foundation.
lloc(). We need to remove the kfree from
> xen_drm_drv_release().
>
> bochs also has a release hook, but leaked the drm_device ever since
>
> commit 0a6659bdc5e8221da99eebb176fd9591435e38de
> Author: Gerd Hoffmann
> Date: Tue Dec 17 18:04:46 2013 +0100
>
> drm/bo
On Mon, Mar 02, 2020 at 02:14:02PM -0800, Alistair Francis wrote:
> On Fri, Feb 28, 2020 at 1:57 AM Gerd Hoffmann wrote:
> >
> > On Thu, Feb 27, 2020 at 01:04:54PM -0800, Alistair Francis wrote:
> > > The QEMU model for the Bochs display has no VGA memory section
e the
call.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c
b/drivers/gpu/drm/drm_gem_shmem_helper.c
index aad9324dcf4f..df31e5782eed 100644
--- a/drivers/gpu/drm/drm_gem_shmem_help
On Fri, Feb 28, 2020 at 10:54:54AM +0100, Thomas Hellström (VMware) wrote:
> On 2/28/20 10:49 AM, Gerd Hoffmann wrote:
> >Hi,
> >
> > > > Not clue about the others (lima, tiny, panfrost, v3d). Maybe they use
> > > > write-combine just becaus
Hi,
> > struct virtgpu_object {
>
> Yeah, using "virtgpu_" rather than "virtio_gpu" makes sense.
It wasn't my intention to suggest a rename. It's just that the kernel
is a bit inconsistent here and I picked the wrong name. Most
places use virtio_gpu but some use virtgpu (file names,
On Thu, Feb 27, 2020 at 01:04:54PM -0800, Alistair Francis wrote:
> The QEMU model for the Bochs display has no VGA memory section at offset
> 0x400 [1]. By writing to this register Linux can create a write to
> unassigned memory which depending on machine and architecture can result
> in a store
Hi,
> > Not clue about the others (lima, tiny, panfrost, v3d). Maybe they use
> > write-combine just because this is what they got by default from
> > drm_gem_mmap_obj(). Maybe they actually need that. Trying to Cc:
> > maintainters (and drop stable@).
> > virtio-gpu needs it, otherwise the
Hi,
> > So I'd like to push patches 1+2 to -fixes and sort everything else later
> > in -next. OK?
>
> OK with me.
Done.
>> [ context: why shmem helpers use pgprot_writecombine + pgprot_decrypted?
>>we get conflicting mappings because of that, linear kernel
>>map vs.
On Wed, Feb 26, 2020 at 04:25:53PM -0800, Gurchetan Singh wrote:
> The main motivation behind this is to have eventually have something like
> this:
patches 1+2 cherry-picked and pushed to -next.
thanks,
Gerd
Hi,
> I think it might be safe for some integrated graphics where the driver
> maintainers can guarantee that it's safe on all particular processors used
> with that driver, but then IMO it should be moved out to those drivers.
>
> Other drivers needing write-combine shouldn't really use
Hi,
> > + if (!shmem->map_cached)
> > + prot = pgprot_writecombine(prot);
> > shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
> > - VM_MAP, pgprot_writecombine(PAGE_KERNEL));
> > +
On Wed, Feb 26, 2020 at 04:25:53PM -0800, Gurchetan Singh wrote:
> The main motivation behind this is to have eventually have something like
> this:
>
> struct virtio_gpu_shmem {
> struct drm_gem_shmem_object base;
> uint32_t hw_res_handle;
> struct sg_table *pages;
> (...)
> };
On Wed, Feb 26, 2020 at 12:56:58PM +0900, David Stevens wrote:
> On Tue, Feb 25, 2020 at 3:10 PM Gerd Hoffmann wrote:
> >
> > How about dma_buf_{get,set}_uuid, simliar to dma_buf_set_name?
>
> While I'm not opposed to such an API, I'm also hesitant to make
> ch
Hi,
> > ... into 5.6-rc3 fixes the corruption for me.
>
> I tried those 2 patches on top of kernel 5.6-rc3 and they indeed fix the
> problem.
>
> I applied them on top of 5.5.6 and they also fix the problem!
> Could we get those 2 patches applied to stable 5.5, please?
Series just
virtio-gpu uses cached mappings, set
drm_gem_shmem_object.map_cached accordingly.
Cc: sta...@vger.kernel.org
Fixes: c66df701e783 ("drm/virtio: switch from ttm to gem shmem helpers")
Reported-by: Gurchetan Singh
Reported-by: Guillaume Gardet
Signed-off-by: Gerd Hoffmann
---
drive
-by: Gerd Hoffmann
---
drivers/gpu/drm/udl/udl_gem.c | 62 ++-
1 file changed, 3 insertions(+), 59 deletions(-)
diff --git a/drivers/gpu/drm/udl/udl_gem.c b/drivers/gpu/drm/udl/udl_gem.c
index b6e26f98aa0a..7e3a88b25b6b 100644
--- a/drivers/gpu/drm/udl/udl_gem.c
+++ b
Add map_cached bool to drm_gem_shmem_object, to request cached mappings
on a per-object base. Check the flag before adding writecombine to
pgprot bits.
Cc: sta...@vger.kernel.org
Signed-off-by: Gerd Hoffmann
---
include/drm/drm_gem_shmem_helper.h | 5 +
drivers/gpu/drm
v5: rebase, add tags, add cc stable for 1+2, no code changes.
v4: back to v2-ish design, but simplified a bit.
v3: switch to drm_gem_object_funcs callback.
v2: make shmem helper caching configurable.
Gerd Hoffmann (3):
drm/shmem: add support for per object caching flags.
drm/virtio: fix mmap
> > No.
> >
> > First, what is wrong with vkms?
> The total lines of vkms driver is 1.2k+.
Which is small for a drm driver.
> I don't think it works on its own
> to provide things like mmap on a prime fd (I tried it months
> ago).
Seems vkms only supports prime import, not prime export.
Hi,
> > Perhaps try the entire series?
> >
> > https://patchwork.kernel.org/cover/11300619/
Latest version is at:
https://git.kraxel.org/cgit/linux/log/?h=drm-virtio-no-wc
> Applied entire series on top of 5.5.6, but still the same problem.
Can you double-check? Cherry-picking the shmem
On Mon, Feb 24, 2020 at 03:01:54PM -0800, Lepton Wu wrote:
> Hi,
>
> I'd like to get comments on this before I polish it. This is a
> simple way to get something similar with vkms but it heavily reuse
> the code provided by virtio-gpu. Please feel free to give me any
> feedbacks or comments.
Hi,
> +struct dma_buf *virtgpu_gem_prime_export(struct drm_gem_object *obj,
> + int flags)
> +{
[ ... ]
> +}
> +
> +struct drm_gem_object *virtgpu_gem_prime_import(struct drm_device *dev,
> + struct dma_buf *buf)
>
On Wed, Feb 19, 2020 at 05:06:36PM +0900, David Stevens wrote:
> This change adds a new flavor of dma-bufs that can be used by virtio
> drivers to share exported objects. A virtio dma-buf can be queried by
> virtio drivers to obtain the UUID which identifies the underlying
> exported object.
That