Reviewed-by: Muchun Song
On the 5 drm patches (I counted 2 ttm and 3 drivers) for merging through
some other tree (since I'm assuming that's how this will land):
Acked-by: Daniel Vetter
> ---
> drivers/gpu/drm/ttm/ttm_pool.c | 15 +++
> include/linux/shrinker
psets);
> @@ -246,10 +249,12 @@ int virtio_gpu_init(struct virtio_device *vdev, struct
> drm_device *dev)
> virtio_gpu_get_capsets(vgdev, num_capsets);
> if (vgdev->has_edid)
> virtio_gpu_cmd_get_edids(vgdev);
> -
On Thu, Feb 16, 2023 at 11:43:38PM +0300, Dmitry Osipenko wrote:
> On 2/16/23 15:15, Daniel Vetter wrote:
> > On Mon, Jan 30, 2023 at 03:02:10PM +0300, Dmitry Osipenko wrote:
> >> On 1/27/23 11:13, Gerd Hoffmann wrote:
> >>> On Thu, Jan 26, 2023 at 01:55:09A
king at the DRM core changes. I expect he'll ack them.
>
> Thank you for reviewing the virtio patches!
I think best-case would be an ack from msm people that this looks good
(even better a conversion for msm to start using this).
Otherwise I think the locking looks reasonable, I
ick to infer format preferences ...
Anyway on the series, since it pushes in a direction I wanted to fix years
ago but gave up because too ambitious :-)
Acked-by: Daniel Vetter
>*/
> - if (!preferred_bpp)
> - preferred_bpp = dev->mode_config.preferred_depth;
>
> @@ -305,6 +307,7 @@ void qxl_device_fini(struct qxl_device *qdev)
> wait_event_timeout(qdev->release_event,
> atomic_read(&qdev->release_count) == 0,
> HZ);
> + free_irq(pdev->irq, ddev);
> flush_work(&
>
> - return dmabuf->ops->mmap(dmabuf, vma);
> + dma_resv_lock(dmabuf->resv, NULL);
> + ret = dmabuf->ops->mmap(dmabuf, vma);
> + dma_resv_unlock(dmabuf->resv);
> +
> + return ret;
On Tue, Sep 06, 2022 at 10:01:47PM +0200, Daniel Vetter wrote:
> On Mon, Aug 15, 2022 at 12:05:19PM +0200, Christian König wrote:
> > On 15.08.22 at 11:54, Dmitry Osipenko wrote:
> > > Higher order pages allocated using alloc_pages() aren't refcounted and
> > >
> {
> > unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
> > struct ttm_pool_dma *dma;
> > + unsigned int i;
> > void *vaddr;
> > #ifdef CONFIG_X86
> > @@ -142,6 +163,8 @@ static void ttm_pool_free_page(struct ttm_pool *pool,
> > enum ttm_caching caching,
> > if (caching != ttm_cached && !PageHighMem(p))
> > set_pages_wb(p, 1 << order);
> > #endif
> > + for (i = 1; i < 1 << order; i++)
> > + page_ref_dec(p + i);
> > if (!pool || !pool->use_dma_alloc) {
> > __free_pages(p, order);
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
"drm: Allow userspace to ask for universal plane list
> (v2)")
> Cc: # v5.4+
> Cc: Maarten Lankhorst
> Cc: Maxime Ripard
> Cc: Thomas Zimmermann
> Cc: David Airlie
> Cc: Daniel Vetter
> Cc: Dave Airlie
> Cc: Gerd Hoffmann
> Cc: Hans de Goede
> Cc: Gurchetan
ow to
> do that, I think we're good.
So can I have an ack from Rob here or are there still questions that this
might go boom?
Dmitry, since you have a bunch of patches merged now I think would also be
good to get commit rights so you can drive this more yourself. I've asked
Daniel Stone to help you out with getting that.
-Daniel
On Wed, 10 Aug 2022 at 08:52, Christian König wrote:
>
> On 09.08.22 at 18:44, Daniel Vetter wrote:
> > On Tue, Jul 05, 2022 at 01:33:51PM +0200, Christian König wrote:
> >> On 01.07.22 at 11:02, Dmitry Osipenko wrote:
> >>> Use ww_acquire_fini() in the er
em_lock_reservations(struct drm_gem_object
> > **objs, int count,
> > goto retry;
> > }
> > - ww_acquire_done(acquire_ctx);
> > + ww_acquire_fini(acquire_ctx);
> > retur
int count,
> goto retry;
> }
>
> - ww_acquire_done(acquire_ctx);
> + ww_acquire_fini(acquire_ctx);
> return ret;
> }
> }
> --
> 2.36.1
>
threads hitting the shrinker in parallel.
> I guess since you are using trylock, it won't *block* the other
> threads hitting shrinker, but they'll just end up looping in
> do_shrink_slab() because they are hitting contention.
>
> I'd have to do some experiments to see how it works out in practice,
> but my gut feel is that it isn't a good idea
Yeah trylock on anything else than the object lock is No Good in the
shrinker. And it really shouldn't be needed, since import/export should
pin stuff as needed. Which should be protected by the dma_resv object
lock. If not, we need to fix that.
Picking a random drm-internal lock like this is definitely no good design.
-Daniel
will fail to find shrinkable memory
way too often. We need to engineer this out somehow.
-Daniel
On Sun, 5 Jun 2022 at 20:32, Rob Clark wrote:
>
> On Sun, Jun 5, 2022 at 9:47 AM Daniel Vetter wrote:
> >
> > On Fri, 27 May 2022 at 01:55, Dmitry Osipenko
> > wrote:
> > >
> > > Introduce a common DRM SHMEM shrinker framework that allows to reduce
aches zero.
> +*/
> + unsigned int pages_pin_count;
> +
> /**
> * @madv: State for madvise
> *
> * 0 is active/inuse.
> + * 1 is not-needed/can-be-purged
> * A negative value is
On Thu, May 12, 2022 at 10:04:53PM +0300, Dmitry Osipenko wrote:
> On 5/12/22 20:04, Daniel Vetter wrote:
> > On Thu, 12 May 2022 at 13:36, Dmitry Osipenko
> > wrote:
> >>
> >> On 5/11/22 22:09, Daniel Vetter wrote:
> >>> On Wed, May 11, 2022 at 07:06
On Thu, 12 May 2022 at 13:36, Dmitry Osipenko
wrote:
>
> On 5/11/22 22:09, Daniel Vetter wrote:
> > On Wed, May 11, 2022 at 07:06:18PM +0300, Dmitry Osipenko wrote:
> >> On 5/11/22 16:09, Daniel Vetter wrote:
> >>>>>>> I'd like to ask you
On Thu, May 12, 2022 at 09:29:35AM +0200, Christian König wrote:
> On 11.05.22 at 21:05, Daniel Vetter wrote:
> > [SNIP]
> > > > > It's unclear to me which driver may ever want to do the mapping under
> > > > > the dma_resv_lock. But if we will ever
On Wed, May 11, 2022 at 07:06:18PM +0300, Dmitry Osipenko wrote:
> On 5/11/22 16:09, Daniel Vetter wrote:
> >>>>> I'd like to ask you to reduce the scope of the patchset and build the
> >>>>> shrinker only for virtio-gpu. I know that I first suggested
On Wed, May 11, 2022 at 06:40:32PM +0300, Dmitry Osipenko wrote:
> On 5/11/22 18:29, Daniel Vetter wrote:
> > On Wed, May 11, 2022 at 06:14:00PM +0300, Dmitry Osipenko wrote:
> >> On 5/11/22 17:24, Christian König wrote:
> >>> On 11.05.22 at 15:00, Daniel Vetter wrote:
On Wed, May 11, 2022 at 06:14:00PM +0300, Dmitry Osipenko wrote:
> On 5/11/22 17:24, Christian König wrote:
> > On 11.05.22 at 15:00, Daniel Vetter wrote:
> >> On Tue, May 10, 2022 at 04:39:53PM +0300, Dmitry Osipenko wrote:
> >>> [SNIP]
> >>> Since vmapp
On Wed, May 11, 2022 at 04:24:28PM +0200, Christian König wrote:
> On 11.05.22 at 15:00, Daniel Vetter wrote:
> > On Tue, May 10, 2022 at 04:39:53PM +0300, Dmitry Osipenko wrote:
> > > [SNIP]
> > > Since vmapping implies implicit pinning, we can't use a separate
On Tue, May 10, 2022 at 04:47:52PM +0300, Dmitry Osipenko wrote:
> On 5/9/22 16:49, Daniel Vetter wrote:
> > On Fri, May 06, 2022 at 03:10:43AM +0300, Dmitry Osipenko wrote:
> >> On 5/5/22 11:34, Thomas Zimmermann wrote:
> >>> Hi
> >>>
> >
On Tue, May 10, 2022 at 04:39:53PM +0300, Dmitry Osipenko wrote:
> On 5/9/22 16:42, Daniel Vetter wrote:
> > On Fri, May 06, 2022 at 01:49:12AM +0300, Dmitry Osipenko wrote:
> >> On 5/5/22 11:12, Daniel Vetter wrote:
> >>> On Wed, May 04, 2022 at 06:56:09PM +0300, Dm
> VirtIO shrinker didn't support memory eviction.
> Memory eviction support requires page fault handler to be aware of the
> evicted pages, what should we do about it? The page fault handling is a
> part of memory management, hence to me drm-shmem is already kinda a MM.
Hm I still
On Fri, May 06, 2022 at 01:49:12AM +0300, Dmitry Osipenko wrote:
> On 5/5/22 11:12, Daniel Vetter wrote:
> > On Wed, May 04, 2022 at 06:56:09PM +0300, Dmitry Osipenko wrote:
> >> On 5/4/22 11:21, Daniel Vetter wrote:
> >> ...
> >>>>> - Maybe also do
> --- a/include/drm/drm_gem.h
> > +++ b/include/drm/drm_gem.h
> > @@ -172,6 +172,41 @@ struct drm_gem_object_funcs {
> > * This is optional but necessary for mmap support.
> > */
> > const struct vm_operations_struct *vm_ops;
> > +
> > + /**
> > + * @purge
On Wed, May 04, 2022 at 06:56:09PM +0300, Dmitry Osipenko wrote:
> On 5/4/22 11:21, Daniel Vetter wrote:
> ...
> >>> - Maybe also do what you suggest and keep a separate lock for this, but
> >>> the fundamental issue is that this doesn't really work - if you s
ktop.org
Reviewed-by: Daniel Vetter
> ---
> drivers/gpu/drm/qxl/qxl_display.c | 8 +++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/qxl/qxl_display.c
> b/drivers/gpu/drm/qxl/qxl_display.c
> index 9a9c29b1d3e1..9a64fa4c7530 100644
On Thu, Apr 28, 2022 at 09:20:15PM +0300, Dmitry Osipenko wrote:
> 27.04.2022 18:03, Daniel Vetter wrote:
> >> ...
> >>>> @@ -172,6 +172,41 @@ struct drm_gem_object_funcs {
> >>>> * This is optional but necessary for mmap support.
On Thu, Apr 28, 2022 at 09:31:00PM +0300, Dmitry Osipenko wrote:
> Hello Daniel,
>
> 27.04.2022 17:50, Daniel Vetter wrote:
> > On Mon, Apr 18, 2022 at 10:18:54PM +0300, Dmitry Osipenko wrote:
> >> Hello,
> >>
> >> On 4/18/22 21:38, Thomas Zimmermann wrote:
e(struct drm_gem_shmem_object *shmem);
>
> -int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
> void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
> int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
>
ve the buffers and make
sure that mappings exists, to the point where the request is submitted
to hw or drm/sched and fences are installed.
But I think a lot of current shmem users just pin as part of execbuf, so
this won't work quite so well right out of the box.
Anyway with that d
> >> and Lima drivers use vmap() and they do it in the slow code paths,
> >> hence there was no practical justification for the usage of separate
> >> lock in the vmap().
> >>
> >> Suggested-by: Daniel Vetter
> >> Signed-off-by: Dmitry Os
On Wed, Apr 13, 2022 at 06:12:59PM +0200, Michel Dänzer wrote:
> From: Michel Dänzer
>
> Instead of relying on it getting pulled in indirectly.
>
> Signed-off-by: Michel Dänzer
Reviewed-by: Daniel Vetter
> ---
> drivers/gpu/drm/tiny/bochs.c | 1 +
> 1 file changed,
that have been freed by purging the GEM
> object.
> + *
> + * This callback is used by the GEM shrinker.
> + */
> + unsigned long (*purge)(struct drm_gem_object *obj);
> };
>
> /**
> diff --git a/include/drm/drm_gem_shmem_helper.h
>
ut that works
perfectly fine with standard drm_poll (and is meant to work perfectly fine
with standard drm_poll).
So if it's buggy on top of questionable I think revert might be the right
choice irrespective of whether there's some fixes in-flight.
So if you end up pushing that rev
c not going to be perfect, but better than nothing.
With that, on the series:
Acked-by: Daniel Vetter
But maybe wait for some more acks/reviews from driver folks.
-Daniel
> */
>
> static const struct drm_gem_object_funcs drm_gem_shmem_funcs = {
> - .free = drm_g
; Signed-off-by: Thomas Zimmermann
Really nice! On the series:
Acked-by: Daniel Vetter
I think I've found one missing static below.
Cheers, Daniel
> ---
> MAINTAINERS | 2 +-
> drivers/gpu/drm/Kconfig | 2 -
> drivers/gpu/drm/Makefile
On Mon, Apr 26, 2021 at 02:18:05PM +0200, Thomas Zimmermann wrote:
> Hi
>
> On 20.04.21 at 10:46, Daniel Vetter wrote:
> > On Mon, Apr 19, 2021 at 10:00:56AM +0200, Geert Uytterhoeven wrote:
> > > Hi Thomas,
> > >
> > > On Fri, Apr 16, 2021 a
On Tue, Apr 20, 2021 at 11:16:09AM +0200, Geert Uytterhoeven wrote:
> Hi Daniel,
>
> On Tue, Apr 20, 2021 at 10:46 AM Daniel Vetter wrote:
> > On Mon, Apr 19, 2021 at 10:00:56AM +0200, Geert Uytterhoeven wrote:
> > > On Fri, Apr 16, 2021 at 11:00 AM Thomas Zimmermann
erface, but really there's not much
userspace for it. In other words, it would work as well as current offb
would, but that's at least that.
-Daniel
_fb_helper {
> */
> bool deferred_setup;
>
> + /**
> +* @no_dpms_blank:
> +*
> +* A flag indicating that the driver doesn't support blanking.
> +* Then fbcon core co
On Thu, Apr 15, 2021 at 09:12:14PM +0200, Thomas Zimmermann wrote:
> Hi
>
> On 15.04.21 at 14:57, Daniel Vetter wrote:
> > On Thu, Apr 15, 2021 at 08:56:20AM +0200, Thomas Zimmermann wrote:
> > > Hi
> > >
> > > On 09.04.21 at 11:22, Daniel Vetter wrote:
On Thu, Apr 15, 2021 at 08:56:20AM +0200, Thomas Zimmermann wrote:
> Hi
>
> On 09.04.21 at 11:22, Daniel Vetter wrote:
> > > Is it that easy? simpledrm's detach function has code to synchronize with
> > > concurrent hotplug removals. If we can use drm_de
On Fri, Apr 09, 2021 at 09:54:03AM +0200, Thomas Zimmermann wrote:
> Hi
>
> On 08.04.21 at 11:48, Daniel Vetter wrote:
> > On Thu, Mar 18, 2021 at 11:29:15AM +0100, Thomas Zimmermann wrote:
> > > Platform devices might operate on firmware framebuffers, such as VESA or
On Fri, Apr 09, 2021 at 09:06:56AM +0200, Thomas Zimmermann wrote:
> Hi
>
> On 08.04.21 at 11:48, Daniel Vetter wrote:
> >
> > Maybe just me, but to avoid overstretching the attention span of doc
> > readers I'd avoid this example here. And maybe make the recomm
On Thu, Apr 08, 2021 at 01:34:03PM +0200, Thomas Zimmermann wrote:
> Hi
>
> On 08.04.21 at 13:16, Daniel Vetter wrote:
> > On Tue, Apr 06, 2021 at 10:29:38AM +0200, Thomas Zimmermann wrote:
> > > The implementation of drm_driver.dumb_map_offset is the same for several
clude/drm/drm_gem_ttm_helper.h | 5 ++-
> include/drm/drm_gem_vram_helper.h | 7 +---
> 12 files changed, 45 insertions(+), 103 deletions(-)
>
> --
> 2.30.2
>
t; >>>> There never is any output to the text-console and fbcon never
> >>>> takes-over, so on
> >>>> many Laptops running say Fedora workstation the fbcon code is actually
> >>>> unused
> >>>> until the user manually switches
> +#include
> +
> +/**
> + * drm_fb_helper_remove_conflicting_framebuffers - remove
> firmware-configured framebuffers
Annoying bikeshed, but I'd give them drm_aperture_ prefixes, for ocd
consistency. Also make them real functions, they're quite big and will
grow more in
drm_aperture private
> * rebase onto existing drm_aperture.h header file
> * use MIT license only for simplicity
> * documentation
>
> Signed-off-by: Thomas Zimmermann
> Tested-by: nerdopolis
Bunch of bikesheds for your considerations below, but overall lgtm.
A
are
passed to the guest and devices on the host side (like displays I guess?
or maybe video encode if this is for cloud gaming?), then using virtio-gpu
in render mode should also allow you to pass the dma_fence back&forth.
Which we'll need too, not just the dma-buf.
So at a first gu
F_FIRST_PAGE_DATA_OFFSET,
> + VIRTIO_VDMABUF_LAST_PAGE_DATA_LENGTH,
> + VIRTIO_VDMABUF_REF_ADDR_UPPER_32BIT,
> + VIRTIO_VDMABUF_REF_ADDR_LOWER_32BIT,
> + VIRTIO_VDMABUF_PRIVATE_DATA_SIZE,
> + VIRTIO_VDMABUF_PRIVATE_DATA_START
> +};
> +
> +/* adding exported/imported
On Wed, Feb 3, 2021 at 3:26 PM Thomas Zimmermann wrote:
>
> Hi
>
> > On 03.02.21 at 15:01, Daniel Vetter wrote:
> > On Wed, Feb 03, 2021 at 02:10:42PM +0100, Thomas Zimmermann wrote:
> >> Several drivers use GEM SHMEM buffer objects as shadow buffers for
> >>
struct drm_plane_state
> *plane_state);
> +void
> +drm_gem_shmem_simple_kms_destroy_shadow_plane_state(struct
> drm_simple_display_pipe *pipe,
> + struct drm_plane_state
> *plane_state);
> +
atomic_dec(&qdev->release_count);
> }
>
> static int qxl_release_bo_alloc(struct qxl_device *qdev,
> @@ -344,6 +345,7 @@ int qxl_alloc_release_reserved(struct qxl_device *qdev,
> unsigned long size,
> *rbo = NULL;
>
On Wed, Jan 27, 2021 at 01:08:05PM +0100, Thomas Zimmermann wrote:
> Hi
>
> On 11.01.21 at 17:50, Daniel Vetter wrote:
> > On Fri, Jan 08, 2021 at 10:43:31AM +0100, Thomas Zimmermann wrote:
> > > Implementations of the vmap/vunmap GEM callbacks may perform pinning
h other pin leaks if you have them. Setting it
to 0 kinda defeats the warning.
-Daniel
>
> Not calling ttm_bo_unpin() makes ttm_bo_release() throw
> a WARN() because of the pin.
>
> Clearing pin_count (which is how ttm fixes things up
> in
u implement here,
to support the full use cases on Android's closed stacks. And it is uapi.
Tech debt isn't measured in lines of code, but in how expensive it's going
to be to fix up the mess in the future. uapi is expensive no matter how
few lines are used to implement it.
So
On Wed, Jan 20, 2021 at 10:51 AM Yiwei Zhang wrote:
>
> On Wed, Jan 20, 2021 at 1:11 AM Daniel Vetter wrote:
> >
> > On Tue, Jan 19, 2021 at 11:08:12AM -0800, Yiwei Zhang wrote:
> > > On Mon, Jan 18, 2021 at 11:03 PM Daniel Vetter wrote:
> > > >
On Tue, Jan 19, 2021 at 11:08:12AM -0800, Yiwei Zhang wrote:
> On Mon, Jan 18, 2021 at 11:03 PM Daniel Vetter wrote:
> >
> > On Tue, Jan 19, 2021 at 12:41 AM Yiwei Zhang wrote:
> > >
> > > On the success of virtio_gpu_object_create, add size of newly allocated
virtio_gpu_cmd_unref_resource(vgdev, bo);
> virtio_gpu_notify(vgdev);
> /* completion handler calls virtio_gpu_cleanup_object() */
> @@ -265,6 +283,7 @@ int virtio_gpu_object_create(struct virtio_gpu_device
> *v
On Tue, Jan 12, 2021 at 02:11:24PM +0100, Thomas Zimmermann wrote:
> Hi
>
> On 11.01.21 at 17:50, Daniel Vetter wrote:
> > On Fri, Jan 08, 2021 at 10:43:31AM +0100, Thomas Zimmermann wrote:
> > > Implementations of the vmap/vunmap GEM callbacks may perform pinning
On Tue, Jan 12, 2021 at 08:54:02AM +0100, Thomas Zimmermann wrote:
> Hi
>
> On 11.01.21 at 18:06, Daniel Vetter wrote:
> > On Fri, Jan 08, 2021 at 10:43:38AM +0100, Thomas Zimmermann wrote:
> > > Cursor updates in vboxvideo require a short-term mapping of
cursor_atomic_update(struct drm_plane
> *plane,
> data_size = width * height * 4 + mask_size;
>
> copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
> - drm_gem_vram_vunmap(gbo, &map);
> + drm_gem_vram_vunmap_local(gbo, &map);
On Mon, Jan 11, 2021 at 06:00:42PM +0100, Daniel Vetter wrote:
> On Fri, Jan 08, 2021 at 10:43:33AM +0100, Thomas Zimmermann wrote:
> > Damage handling in cirrus requires a short-term mapping of the source
> > BO. Use drm_gem_shmem_vmap_local().
> >
> > Signed-off-by:
before vunmap.
-Daniel
> vunmap:
> - drm_gem_shmem_vunmap(fb->obj[0], &map);
> + drm_gem_shmem_vunmap_local(fb->obj[0], &map);
> put_fb:
> drm_framebuffer_put(fb);
> gm12u320->fb_update.fb = NULL;
> --
> 2.29.2
>
Why don't we
vmap/vunmap these in prepare/cleanup_fb? Generally we'd want a long-term
vmap here to make sure this all works nicely.
Since it's nothing new, on this patch:
Reviewed-by: Daniel Vetter
> ---
> drivers/gpu/drm/tiny/cirrus.c | 10 --
> 1 file changed,
On Mon, Jan 11, 2021 at 05:53:41PM +0100, Daniel Vetter wrote:
> On Fri, Jan 08, 2021 at 10:43:32AM +0100, Thomas Zimmermann wrote:
> > Damage handling in mgag200 requires a short-term mapping of the source
> > BO. Use drm_gem_shmem_vmap_local().
> >
> > Signe
On Fri, Jan 08, 2021 at 10:43:32AM +0100, Thomas Zimmermann wrote:
> Damage handling in mgag200 requires a short-term mapping of the source
> BO. Use drm_gem_shmem_vmap_local().
>
> Signed-off-by: Thomas Zimmermann
Reviewed-by: Daniel Vetter
> ---
> drivers/gpu/drm/mgag
and vunmap operations in VRAM helpers are therefore unused
> and can be removed.
>
> Signed-off-by: Thomas Zimmermann
Reviewed-by: Daniel Vetter
> ---
> drivers/gpu/drm/drm_gem_vram_helper.c | 98 ---
> include/drm/drm_gem_vram_helper.h | 2 -
>
hmem_create_with_handle(struct drm_file *file_priv,
>struct drm_device *dev, size_t size,
> diff --git a/include/drm/drm_gem_shmem_helper.h
> b/include/drm/drm_gem_shmem_helper.h
> index 434328d8a0d9..3f59bdf74
On Thu, Jan 7, 2021 at 11:28 AM Thomas Zimmermann wrote:
>
> Hi Daniel
>
> On 11.12.20 at 10:50, Daniel Vetter wrote:
> [...]
> >> +/**
> >> + * drm_gem_shmem_vmap_local - Create a virtual mapping for a shmem GEM
> >> object
> >> + * @shmem:
ct dma_buf *dmabuf, struct dma_buf_map *map);
> +
> + /**
> + * @vunmap_local:
> + *
> + * Removes a virtual mapping that was established by @vmap_local.
> + *
> + * This callback is optional.
> + */
> + void (*vunmap_local)(struct dm
On Fri, Dec 11, 2020 at 11:16:25AM +0100, Thomas Zimmermann wrote:
> Hi
>
> > On 11.12.20 at 11:01, Daniel Vetter wrote:
> > On Wed, Dec 09, 2020 at 03:25:27PM +0100, Thomas Zimmermann wrote:
> > > Fbdev emulation has to lock the BO into place while flushing the shadow
&g
On Fri, Dec 11, 2020 at 11:49:48AM +0100, Thomas Zimmermann wrote:
>
>
> > On 11.12.20 at 11:18, Daniel Vetter wrote:
> > On Wed, Dec 09, 2020 at 03:25:21PM +0100, Thomas Zimmermann wrote:
> > > The HW cursor's BO used to be mapped permanently into the kernel'
> * fix typos in commit description
>
> Signed-off-by: Thomas Zimmermann
> Acked-by: Christian König
Acked-by: Daniel Vetter
Now there's a pretty big issue here though: We can't take dma_resv_lock in
commit_tail, because of possible deadlocks on at least
On Wed, Dec 09, 2020 at 03:25:20PM +0100, Thomas Zimmermann wrote:
> Vmapping the cursor source BO contains an implicit pin operation,
> so there's no need to do this manually.
>
> Signed-off-by: Thomas Zimmermann
Acked-by: Daniel Vetter
> ---
> drivers/gpu/dr
On Fri, Dec 11, 2020 at 10:40:00AM +0100, Daniel Vetter wrote:
> On Wed, Dec 09, 2020 at 03:25:24PM +0100, Thomas Zimmermann wrote:
> > Implementations of the vmap/vunmap GEM callbacks may perform pinning
> > of the BO and may acquire the associated reservation object's lock.
sed anymore, please
delete. That will also make it clearer in the diff what's going on and
that it makes sense to have the client and fb-helper part in one patch.
With that: Reviewed-by: Daniel Vetter
> ---
> drivers/gpu/drm/drm_client.c| 91
eturns the kernel virtual address of the VRAM GEM object's backing
> + * store.
> + *
> + * The vmap_local function maps the buffer of a GEM VRAM object into kernel
> address
> + * space. Call drm_gem_vram_vunmap_local() with the returned address to
> unmap and
> + *
_held(obj->resv);
> +
> + ret = mutex_lock_interruptible(&shmem->vmap_lock);
This bites. You need to check for shmem->import_attach and call
dma_buf_vmap_local directly here before you take any shmem helper locks.
Real fix would be to replace both vmap_lock an
m_object *obj, struct dma_buf_map *map)
> +{
> + struct vc4_bo *bo = to_vc4_bo(obj);
> +
> + if (bo->validated_shader) {
This freaks me out. It should be impossible to export a validated shader
as a dma-buf, and indeed the check exists already.
All the wrapper fu
nclude/drm/drm_gem.h
> index 5e6daa1c982f..1281f26de494 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -151,6 +151,26 @@ struct drm_gem_object_funcs {
>*/
> void (*vunmap)(struct drm_gem_object *obj, struc
ap);
> +
> + /**
> + * @vunmap_local:
> + *
> + * Removes a virtual mapping that wa sestablished by @vmap_local.
^^established
> + *
> + * This callback is optional.
> + */
> + void (*vunmap_
On Thu, Dec 03, 2020 at 03:06:20AM +, Zack Rusin wrote:
>
>
> > On Dec 2, 2020, at 11:03, Daniel Vetter wrote:
> >
> > On Wed, Dec 2, 2020 at 4:37 PM Zack Rusin wrote:
> >>
> >>
> >>
> >>> On Dec 2, 2020, at 09:27, Thomas
t. If you're OK with that, I'd merge the vmwgfx patch through
> > drm-misc-next as well.
>
> Sounds good. I’ll make sure to rebase our latest patch set on top of it when
> it’s in. Thanks!
btw if you want to avoid multi-tree coordination headaches, we can
also m
On Thu, Oct 29, 2020 at 02:33:47PM +0100, Daniel Vetter wrote:
> These are leftovers from 13aff184ed9f ("drm/qxl: remove dead qxl fbdev
> emulation code").
>
> v2: Somehow these structs provided the struct qxl_device pre-decl,
> reorder the header to not anger compi
nightmare. And when an oops happens, this might be the only thing that
manages to get the oops to the user.
Unless someone really starts caring about fbcon acceleration I really
wouldn't bother. Ok maybe it also matters for fbdev, but the problem is
that the page fault intercept
v3:
- Improve commit message (Sam)
Acked-by: Sam Ravnborg
Cc: kernel test robot
Acked-by: Maxime Ripard
Reviewed-by: Alex Deucher
Signed-off-by: Daniel Vetter
Cc: Sam Ravnborg
Cc: Dave Airlie
Cc: Gerd Hoffmann
Cc: virtualization@lists.linux-foundation.org
Cc: Harry Wentland
Cc: Leo Li
Acked-by: Sam Ravnborg
Cc: kernel test robot
Acked-by: Maxime Ripard
Signed-off-by: Daniel Vetter
Cc: Sam Ravnborg
Cc: Dave Airlie
Cc: Gerd Hoffmann
Cc: virtualization@lists.linux-foundation.org
Cc: Harry Wentland
Cc: Leo Li
Cc: Alex Deucher
Cc: Christian König
Cc: Eric Anholt
Cc: Maxime Ri
These are leftovers from 13aff184ed9f ("drm/qxl: remove dead qxl fbdev
emulation code").
v2: Somehow these structs provided the struct qxl_device pre-decl,
reorder the header to not anger compilers.
Acked-by: Gerd Hoffmann
Signed-off-by: Daniel Vetter
Cc: Dave Airlie
Cc: Gerd Ho
These are leftovers from 13aff184ed9f ("drm/qxl: remove dead qxl fbdev
emulation code").
Signed-off-by: Daniel Vetter
Cc: Dave Airlie
Cc: Gerd Hoffmann
Cc: virtualization@lists.linux-foundation.org
Cc: spice-de...@lists.freedesktop.org
---
drivers/gpu/drm/qxl/qxl_drv.h | 14 ---
On Tue, Oct 27, 2020 at 01:17:23PM +0100, Bartosz Golaszewski wrote:
> From: Bartosz Golaszewski
>
> Use the helper that checks for overflows internally instead of manually
> calculating the size of the new array.
>
> Signed-off-by: Bartosz Golaszewski
Acked-by: Daniel Vetter
On Mon, Oct 26, 2020 at 9:43 AM Thomas Zimmermann wrote:
>
> Hi
>
> > On 23.10.20 at 14:28, Daniel Vetter wrote:
> > Only the following drivers aren't converted:
> > - amdgpu, because of the driver_feature mangling due to virt support
> > - nouveau, because
On Sun, Oct 25, 2020 at 11:23 PM Sam Ravnborg wrote:
>
> Hi Daniel.
>
> On Fri, Oct 23, 2020 at 06:04:44PM +0200, Daniel Vetter wrote:
> > Only the following drivers aren't converted:
> > - amdgpu, because of the driver_feature mangling due to virt support