[Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [01/20] drm/i915: Preallocate stashes for vma page-directories

2020-07-05 Thread Patchwork
== Series Details == Series: series starting with [01/20] drm/i915: Preallocate stashes for vma page-directories URL : https://patchwork.freedesktop.org/series/79129/ State : success == Summary == CI Bug Log - changes from CI_DRM_8708 -> Patchwork_18082 ===

[Intel-gfx] ✗ Fi.CI.SPARSE: warning for series starting with [01/20] drm/i915: Preallocate stashes for vma page-directories

2020-07-05 Thread Patchwork
== Series Details == Series: series starting with [01/20] drm/i915: Preallocate stashes for vma page-directories URL : https://patchwork.freedesktop.org/series/79129/ State : warning == Summary == $ dim sparse --fast origin/drm-tip Sparse version: v0.6.0 Fast mode used, each commit won't be c

[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/20] drm/i915: Preallocate stashes for vma page-directories

2020-07-05 Thread Patchwork
== Series Details == Series: series starting with [01/20] drm/i915: Preallocate stashes for vma page-directories URL : https://patchwork.freedesktop.org/series/79129/ State : warning == Summary == $ dim checkpatch origin/drm-tip b5c5d0ce6d3d drm/i915: Preallocate stashes for vma page-director

[Intel-gfx] [PATCH 09/20] drm/i915/gem: Assign context id for async work

2020-07-05 Thread Chris Wilson
Allocate a few dma fence context ids that we can use to associate with async work [for the CPU] launched on behalf of this context. For extra fun, we allow a configurable concurrency width. A current example would be that we spawn an unbound worker for every userptr get_pages. In the future, we wish to
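As a rough illustration of the idea (not the patch itself; the structure, names and width below are invented), dma_fence_context_alloc() can hand back a contiguous block of context ids, one per worker slot, each with its own seqno stream:

#include <linux/dma-fence.h>
#include <linux/atomic.h>

#define ASYNC_WIDTH 4	/* hypothetical concurrency width */

struct gem_ctx_async {	/* illustrative container, not an i915 struct */
	u64 fence_context;		/* first of ASYNC_WIDTH consecutive ids */
	atomic_t seqno[ASYNC_WIDTH];	/* per-slot seqno streams */
};

static void gem_ctx_async_init(struct gem_ctx_async *a)
{
	int i;

	/* reserve ASYNC_WIDTH consecutive dma-fence context ids */
	a->fence_context = dma_fence_context_alloc(ASYNC_WIDTH);
	for (i = 0; i < ASYNC_WIDTH; i++)
		atomic_set(&a->seqno[i], 0);
}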

[Intel-gfx] [PATCH 05/20] drm/i915/gem: Break apart the early i915_vma_pin from execbuf object lookup

2020-07-05 Thread Chris Wilson
As a prelude to the next step where we want to perform all the object allocations together under the same lock, we first must delay the i915_vma_pin() as that implicitly does the allocations for us, one by one. As it only does the allocations one by one, it is not allowed to wait/evict, whereas pul

[Intel-gfx] [PATCH 15/20] drm/i915/gem: Include secure batch in common execbuf pinning

2020-07-05 Thread Chris Wilson
Pull the GGTT binding for the secure batch dispatch into the common vma pinning routine for execbuf, so that there is just a single central place for all i915_vma_pin(). Signed-off-by: Chris Wilson --- .../gpu/drm/i915/gem/i915_gem_execbuffer.c| 88 +++ 1 file changed, 51 ins

[Intel-gfx] [PATCH 14/20] drm/i915/gem: Include cmdparser in common execbuf pinning

2020-07-05 Thread Chris Wilson
Pull the cmdparser allocations in to the reservation phase, and then they are included in the common vma pinning pass. Signed-off-by: Chris Wilson --- .../gpu/drm/i915/gem/i915_gem_execbuffer.c| 348 ++ drivers/gpu/drm/i915/gem/i915_gem_object.h| 10 + drivers/gpu/drm/i9

[Intel-gfx] [PATCH 13/20] drm/i915/gem: Bind the fence async for execbuf

2020-07-05 Thread Chris Wilson
It is illegal to wait on another vma while holding the vm->mutex, as that easily leads to ABBA deadlocks (we wait on a second vma that waits on us to release the vm->mutex). So while the vm->mutex exists, move the waiting outside of the lock into the async binding pipeline. Signed-off-by: Chris
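The shape of the hazard and of the fix, in a minimal sketch (the types and helper below are stand-ins, not the i915 code): perform any fence wait before taking vm->mutex, since the party that signals the fence may itself need vm->mutex.

#include <linux/dma-fence.h>
#include <linux/mutex.h>

/* Sketch only: wait while unlocked, then take vm->mutex for the PTE update. */
static int bind_after_wait(struct mutex *vm_mutex, struct dma_fence *prereq)
{
	long err;

	/* Waiting here with vm_mutex held would risk ABBA: the signaler
	 * of 'prereq' may be blocked waiting for vm_mutex. */
	err = dma_fence_wait(prereq, true);
	if (err)
		return err;

	mutex_lock(vm_mutex);
	/* ... insert PTEs for the vma ... */
	mutex_unlock(vm_mutex);
	return 0;
}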

[Intel-gfx] [PATCH 08/20] drm/i915: Always defer fenced work to the worker

2020-07-05 Thread Chris Wilson
Currently, if an error is raised we always call the cleanup locally [and skip the main work callback]. However, some future users may need to take a mutex to cleanup and so we cannot immediately execute the cleanup as we may still be in interrupt context. With the execute-immediate flag, for most
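A generic version of that pattern, with made-up names (this is not the i915_sw_fence code): when the error path may sleep, hand both the error and the cleanup off to a workqueue rather than running them from a possibly-atomic completion context.

#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/slab.h>

struct fenced_work {		/* illustrative, not an i915 type */
	struct work_struct work;
	int error;
};

static void fenced_work_fn(struct work_struct *wrk)
{
	struct fenced_work *fw = container_of(wrk, struct fenced_work, work);

	if (fw->error) {
		/* cleanup path: process context, taking a mutex is fine */
	} else {
		/* normal work callback */
	}
	kfree(fw);
}

/* Called from the fence callback, possibly in irq context. */
static void fenced_work_queue(struct fenced_work *fw, int error)
{
	fw->error = error;
	INIT_WORK(&fw->work, fenced_work_fn);
	queue_work(system_unbound_wq, &fw->work);
}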

[Intel-gfx] [PATCH 02/20] drm/i915: Switch to object allocations for page directories

2020-07-05 Thread Chris Wilson
The GEM object is grossly overweight for the practicality of tracking large numbers of individual pages, yet it is currently our only abstraction for tracking DMA allocations. Since those allocations need to be reserved upfront before an operation, and since we need to break away from simple system

[Intel-gfx] [PATCH 12/20] drm/i915/gem: Asynchronous GTT unbinding

2020-07-05 Thread Chris Wilson
It is reasonably common for userspace (even modern drivers like iris) to reuse an active address for a new buffer. This would cause the application to stall under its mutex (originally struct_mutex) until the old batches were idle and it could synchronously remove the stale PTE. However, we can que

[Intel-gfx] [PATCH 10/20] drm/i915: Export a preallocate variant of i915_active_acquire()

2020-07-05 Thread Chris Wilson
Sometimes we have to be very careful not to allocate underneath a mutex (or spinlock) and yet still want to track activity. Enter i915_active_acquire_for_context(). This raises the activity counter on i915_active prior to use and ensures that the fence-tree contains a slot for the context. Signed-

[Intel-gfx] [PATCH 17/20] drm/i915: Add an implementation for i915_gem_ww_ctx locking, v2.

2020-07-05 Thread Chris Wilson
From: Maarten Lankhorst i915_gem_ww_ctx is used to lock all gem bo's for pinning and memory eviction. We don't use it yet, but let's start adding the definition first. To use it, we have to pass a non-NULL ww to gem_object_lock, and must not unlock directly; that is done in i915_gem_ww_ctx_fini. Chan
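For reference, the core ww_mutex acquire/backoff dance that i915_gem_ww_ctx wraps looks roughly like the sketch below (two locks only, names invented; the real helper keeps the contended lock held across the retry rather than dropping it as done here):

#include <linux/ww_mutex.h>

/* Caller does ww_acquire_init() on ctx beforehand and, on success,
 * ww_acquire_done(), both unlocks and ww_acquire_fini() afterwards. */
static int lock_pair(struct ww_mutex *a, struct ww_mutex *b,
		     struct ww_acquire_ctx *ctx)
{
	struct ww_mutex *contended;
	int err;

retry:
	err = ww_mutex_lock(a, ctx);
	if (err) {
		contended = a;
		goto backoff;
	}

	err = ww_mutex_lock(b, ctx);
	if (!err)
		return 0;		/* both locks held */

	ww_mutex_unlock(a);
	contended = b;

backoff:
	if (err != -EDEADLK)
		return err;

	/* We lost the wait/wound arbitration to an older context: sleep
	 * until the contended lock is free, release it, and retry. */
	ww_mutex_lock_slow(contended, ctx);
	ww_mutex_unlock(contended);
	goto retry;
}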

[Intel-gfx] [PATCH 04/20] drm/i915/gem: Rename execbuf.bind_link to unbound_link

2020-07-05 Thread Chris Wilson
Rename the current list of unbound objects so that we can keep track of all objects that we need to bind, as well as the list of currently unbound [unprocessed] objects. Signed-off-by: Chris Wilson --- drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 14 +++--- 1 file changed, 7 insertions(+)

[Intel-gfx] [PATCH 19/20] drm/i915/gem: Replace i915_gem_object.mm.mutex with reservation_ww_class

2020-07-05 Thread Chris Wilson
Our goal is to pull all memory reservations (next iteration obj->ops->get_pages()) under a ww_mutex, and to align those reservations with other drivers, i.e. control all such allocations with the reservation_ww_class. Currently, this is under the purview of the obj->mm.mutex, and while obj->mm rema
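At the dma-buf level this means taking the object's dma_resv (a ww_mutex of the reservation_ww_class) around get_pages; a minimal sketch, where the commented-out hook stands in for the driver's backing-store acquisition and is not an existing i915 function:

#include <linux/dma-resv.h>

/* Sketch only: serialise backing-store acquisition on the object's
 * reservation lock instead of a private obj->mm.mutex. */
static int get_pages_under_resv(struct dma_resv *resv,
				struct ww_acquire_ctx *ww)
{
	int err;

	err = dma_resv_lock_interruptible(resv, ww);
	if (err)
		return err;	/* -EDEADLK is handled by the caller's ww ctx */

	/* err = driver_get_pages_locked(obj);  -- hypothetical hook */

	dma_resv_unlock(resv);
	return err;
}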

[Intel-gfx] [PATCH 16/20] drm/i915/gem: Reintroduce multiple passes for reloc processing

2020-07-05 Thread Chris Wilson
The prospect of locking the entire submission sequence under a wide ww_mutex re-imposes some key restrictions, in particular that we must not call copy_(from|to)_user underneath the mutex (as the faulthandlers themselves may need to take the ww_mutex). To satisfy this requirement, we need to split
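The restriction reduces to a simple two-pass shape, sketched here with invented names: pass 1 faults in and copies the user relocations while unlocked, pass 2 applies them from the kernel copy with the ww_mutex held and no user access.

#include <linux/uaccess.h>
#include <linux/mm.h>
#include <linux/ww_mutex.h>

static int process_relocs(void __user *ureloc, size_t size,
			  struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
{
	void *kreloc;
	int err;

	kreloc = kvmalloc(size, GFP_KERNEL);
	if (!kreloc)
		return -ENOMEM;

	/* Pass 1: may fault, so must run before the ww_mutex is taken. */
	if (copy_from_user(kreloc, ureloc, size)) {
		err = -EFAULT;
		goto out;
	}

	/* Pass 2: no copy_(from|to)_user allowed beyond this point. */
	err = ww_mutex_lock_interruptible(lock, ctx);
	if (err)
		goto out;
	/* ... apply relocations from kreloc ... */
	ww_mutex_unlock(lock);

out:
	kvfree(kreloc);
	return err;
}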

[Intel-gfx] [PATCH 01/20] drm/i915: Preallocate stashes for vma page-directories

2020-07-05 Thread Chris Wilson
We need the DMA allocations used for page directories to be performed up front so that we can include those allocations in our memory reservation pass. The downside is that we have to assume the worst case, even before we know the final layout, and always allocate enough page directories fo
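Conceptually the stash is a worst-case pile of pre-allocated pages, filled before vm->mutex is taken and drained, without allocating, while the page directories are inserted. A made-up sketch of that shape (the stash code in the patch differs in detail):

#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm_types.h>

struct pd_stash {		/* illustrative, not the patch's struct */
	struct list_head pages;
};

/* Fill the stash for the worst case before taking vm->mutex. */
static int pd_stash_prealloc(struct pd_stash *stash, unsigned int count)
{
	while (count--) {
		struct page *page = alloc_page(GFP_KERNEL);

		if (!page)
			return -ENOMEM;	/* caller frees what was stashed */
		list_add(&page->lru, &stash->pages);
	}
	return 0;
}

/* Under vm->mutex: take a preallocated page, never allocate. */
static struct page *pd_stash_pop(struct pd_stash *stash)
{
	struct page *page;

	page = list_first_entry_or_null(&stash->pages, struct page, lru);
	if (page)
		list_del(&page->lru);
	return page;
}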

[Intel-gfx] s/obj->mm.lock//

2020-07-05 Thread Chris Wilson
This is the easy part; pulling reservation of multiple objects under an ww acquire context. With one simple rule that eviction is handled by the ww acquire context, we can carefully transition the driver over to using eviction. Instead of feeding the acquire context everywhere, we make the caller g

[Intel-gfx] [PATCH 07/20] drm/i915: Add list_for_each_entry_safe_continue_reverse

2020-07-05 Thread Chris Wilson
One more list iterator variant, for when we want to unwind from inside one list iterator with the intention of restarting from the current entry as the new head of the list. Signed-off-by: Chris Wilson Reviewed-by: Tvrtko Ursulin --- drivers/gpu/drm/i915/i915_utils.h | 6 ++ 1 file changed,
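Following the naming of the existing list.h helpers, such a variant would plausibly look like the following (a sketch combining list_for_each_entry_safe_continue with the _reverse iterators; the exact macro in the patch may differ):

#include <linux/list.h>

/*
 * Resume iterating backwards from just before 'pos', safely against
 * removal of the current entry, until the list head is reached.
 */
#define list_for_each_entry_safe_continue_reverse(pos, n, head, member) \
	for (pos = list_prev_entry(pos, member),			 \
	     n = list_prev_entry(pos, member);				 \
	     &pos->member != (head);					 \
	     pos = n, n = list_prev_entry(n, member))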

[Intel-gfx] [PATCH 06/20] drm/i915/gem: Remove the call for no-evict i915_vma_pin

2020-07-05 Thread Chris Wilson
Remove the stub i915_vma_pin() used for incrementally pinning objects for execbuf (under the severe restriction that they must not wait on a resource as we may have already pinned it) and replace it with an i915_vma_pin_inplace() that is only allowed to reclaim the currently bound location for the vm

[Intel-gfx] [PATCH 11/20] drm/i915/gem: Separate the ww_mutex walker into its own list

2020-07-05 Thread Chris Wilson
In preparation for making eb_vma bigger and heavier to run in parallel, we need to stop applying an in-place swap() to reorder around ww_mutex deadlocks. Keep the array intact and reorder the locks using a dedicated list. Signed-off-by: Chris Wilson --- .../gpu/drm/i915/gem/i915_gem_execbuffer.c

[Intel-gfx] [PATCH 18/20] drm/i915/gem: Pull execbuf dma resv under a single critical section

2020-07-05 Thread Chris Wilson
Acquire all the objects and their backing storage, and page directories, as used by execbuf under a single common ww_mutex. Albeit we have to restart the critical section a few times in order to handle various restrictions (such as avoiding copy_(from|to)_user and mmap_sem). Signed-off-by: Chris W

[Intel-gfx] [PATCH 03/20] drm/i915/gem: Don't drop the timeline lock during execbuf

2020-07-05 Thread Chris Wilson
Our timeline lock is our defence against a concurrent execbuf interrupting our request construction. We need to hold it throughout or, for example, a second thread may interject a relocation request in between our own relocation request and execution in the ring. A second, major benefit is that it a

Re: [Intel-gfx] linux-next: manual merge of the drm-intel tree with the drm-intel-fixes tree

2020-07-05 Thread Stephen Rothwell
Hi all, On Tue, 30 Jun 2020 11:52:02 +1000 Stephen Rothwell wrote: > > Today's linux-next merge of the drm-intel tree got a conflict in: > > drivers/gpu/drm/i915/gvt/handlers.c > > between commit: > > fc1e3aa0337c ("drm/i915/gvt: Fix incorrect check of enabled bits in mask > registers")

Re: [Intel-gfx] [PATCH 17/23] drm/i915/gem: Asynchronous GTT unbinding

2020-07-05 Thread Chris Wilson
Quoting Andi Shyti (2020-07-05 23:01:29) > Hi Chris, > > > +static int gen6_fixup_ggtt(struct i915_vma *vma) > > you create this function here and remove it in patch 21. This > series is a bit confusing, can we have a final version of it? It gets removed because the next patches reorder all the

Re: [Intel-gfx] [PATCH 17/23] drm/i915/gem: Asynchronous GTT unbinding

2020-07-05 Thread Andi Shyti
Hi Chris, > +static int gen6_fixup_ggtt(struct i915_vma *vma) you create this function here and remove it in patch 21. This series is a bit confusing, can we have a final version of it? Andi ___ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org ht

Re: [Intel-gfx] [PATCH v2 5/6] drm/i915/display: use port_info in intel_ddi_init

2020-07-05 Thread kernel test robot
documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/0day-ci/linux/commits/Lucas-De-Marchi/display-ddi-keep-register-indexes-in-a-table/20200625-081557 base: git://anongit.freedesktop.org/drm-intel for-linux-next config: x86_64-randconfig-a002-20200705 (attached as