Re: [Intel-gfx] [PATCH v2] drm/i915: Use the engine name directly in the error_state file

2018-01-15 Thread Tvrtko Ursulin
On 10/01/2018 01:21, Michel Thierry wrote: Instead of using local string names that we will have to keep maintaining, use the engine->name directly. v2: Better invalid engine_id handling; capture_bo will not be able to know the engine_id and ends up with -1 (Michal). Suggested-by: Michal

[Intel-gfx] [PATCH i-g-t v3 3/8] tests/kms_plane_scaling: Convert from simple test to full test

2018-01-15 Thread Maarten Lankhorst
Convert the test to run subtests per pipe, before we start adding more subtests. Signed-off-by: Maarten Lankhorst Reviewed-by: Mika Kahola --- tests/kms_plane_scaling.c | 46 +- 1 file
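
For readers unfamiliar with the per-pipe subtest pattern this conversion moves to, here is a minimal, hypothetical sketch using the generic igt_kms helpers; the subtest name and test body are placeholders, not the actual kms_plane_scaling code:

    #include "igt.h"

    /* Placeholder per-pipe test body (not the real kms_plane_scaling logic). */
    static void run_scaling_test(igt_display_t *display, enum pipe pipe)
    {
            /* e.g. require a plane on this pipe, apply a scaled fb, commit */
            igt_require(display->pipes[pipe].n_planes > 0);
    }

    igt_main
    {
            igt_display_t display;
            enum pipe pipe;
            int fd;

            igt_fixture {
                    fd = drm_open_driver_master(DRIVER_INTEL);
                    igt_display_init(&display, fd);
            }

            /* One subtest per pipe instead of a single monolithic test. */
            for_each_pipe_static(pipe)
                    igt_subtest_f("pipe-%s-scaling", kmstest_pipe_name(pipe))
                            run_scaling_test(&display, pipe);

            igt_fixture
                    igt_display_fini(&display);
    }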

[Intel-gfx] [PATCH i-g-t v3 2/8] tests/kms_plane_scaling: Fix basic scaling test, v3.

2018-01-15 Thread Maarten Lankhorst
From: Mahesh Kumar PIPEC doesn't have a 3rd plane on GEN9, so we skip the 3rd-plane scaling test where the 2nd OVERLAY plane is not available. Restricting downscaling to (9/10)x the original size of the image to avoid the "Max pixel rate limitation" of the hardware. When

[Intel-gfx] [PATCH i-g-t v3 8/8] tests/kms_plane_scaling: test for multi pipe with scaling, v3.

2018-01-15 Thread Maarten Lankhorst
From: Jyoti Yadav Add a subtest to display primary and overlay planes on two connected pipes and run the scaling test on both pipes. Changes since v1: - Commit first before trying any scaling. (Maarten) - Use the same logic as kms_cursor_legacy to find a pipe and output.

[Intel-gfx] [PATCH i-g-t v3 4/8] tests/kms_plane_scaling: Move get_num_scalers to a function.

2018-01-15 Thread Maarten Lankhorst
The number of scalers can depend on the pipe, so require at least 1 scaler before running any subtests. Signed-off-by: Maarten Lankhorst --- tests/kms_plane_scaling.c | 20 ++-- 1 file changed, 14 insertions(+), 6 deletions(-) diff --git

[Intel-gfx] [PATCH i-g-t v3 7/8] tests/kms_plane_scaling: test scaler with clipping clamping, v3.

2018-01-15 Thread Maarten Lankhorst
From: Jyoti Yadav This patch adds a subtest to test the scaler clipping and clamping scenario. Changes since v1: - Modify test to work with the changes to kms_plane_scaling. (Maarten) Changes since v2: - Use get_num_scalers() to skip when needed. Signed-off-by: Jyoti Yadav

[Intel-gfx] [PATCH i-g-t v3 5/8] tests/kms_plane_scaling: Clean up tests to work better with igt_kms, v2.

2018-01-15 Thread Maarten Lankhorst
The test only runs on gen9+, so we can safely replace all calls with COMMIT_ATOMIC. Also perform some cleanups by making fb an array, and cleaning up in prepare_crtc. This way failed subtests won't cause failures in other subtests. Changes since v1: - Rebase on top of num_scalers changes.

[Intel-gfx] [PATCH i-g-t v3 0/8] kms_plane_scaling tests.

2018-01-15 Thread Maarten Lankhorst
This series fixes the current scaler igt test failures and enhances kms_plane_scaling and kms_plane to cover the subtests below: - verify all the supported pixel formats on planes - combination of rotation and scaling - combination of tiling and scaling - multi-plane/multi-pipe scaling I've

[Intel-gfx] [PATCH i-g-t v3 1/8] tests/kms_plane_scaling: Move the actual test to its own function.

2018-01-15 Thread Maarten Lankhorst
We will add more subtests in the future; it's clearer if we split out the actual test to its own function first. Signed-off-by: Maarten Lankhorst Reviewed-by: Mika Kahola --- tests/kms_plane_scaling.c | 226

[Intel-gfx] [PATCH i-g-t v3 6/8] tests/kms_plane_scaling: test scaling with tiling rotation and pixel formats, v2.

2018-01-15 Thread Maarten Lankhorst
From: Jyoti Yadav This patch adds a subtest for testing scaling in combination with rotation and pixel formats. Changes since v1: - Rework test to work with the other changes to kms_plane_scaling. (Maarten) - Remove hardcodes for MIN/MAX_SRC_WIDTH, and use the value

Re: [Intel-gfx] [PATCH v4 1/6] drm/i915: store all subslice masks

2018-01-15 Thread Tvrtko Ursulin
On 15/01/2018 14:41, Lionel Landwerlin wrote: Up to now, the subslice mask was assumed to be uniform across slices. But starting with Cannonlake, slices can be asymmetric (for example slice0 has a different number of subslices than slice1+). This change stores all subslice masks for all slices rather

[Intel-gfx] ✓ Fi.CI.BAT: success for kms_plane_scaling tests.

2018-01-15 Thread Patchwork
== Series Details == Series: kms_plane_scaling tests. URL : https://patchwork.freedesktop.org/series/36485/ State : success == Summary == IGT patchset tested on top of latest successful build 84a308022028a55903a1916fcee516aab768ed48 tests/kms_plane: Run test for all supported pixel formats,

[Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [v2,1/1] meson: Refactor get_option() calls for directories (rev2)

2018-01-15 Thread Patchwork
== Series Details == Series: series starting with [v2,1/1] meson: Refactor get_option() calls for directories (rev2) URL : https://patchwork.freedesktop.org/series/36476/ State : success == Summary == IGT patchset tested on top of latest successful build

[Intel-gfx] [PATCH v4 1/6] drm/i915: store all subslice masks

2018-01-15 Thread Lionel Landwerlin
Up to now, the subslice mask was assumed to be uniform across slices. But starting with Cannonlake, slices can be asymmetric (for example slice0 has a different number of subslices than slice1+). This change stores all subslice masks for all slices rather than having a single mask that applies to all

[Intel-gfx] [PATCH v4 3/6] drm/i915/debugfs: add rcs topology entry

2018-01-15 Thread Lionel Landwerlin
While the end goal is to make this information available to userspace through a new ioctl, there is no reason we can't display it in a human readable fashion through debugfs. slice0: 3 subslice(s) (0x7): subslice0: 8 EUs (0xff) subslice1: 8 EUs (0xff) subslice2: 8 EUs

[Intel-gfx] [PATCH v4 4/6] drm/i915: add rcs topology to error state

2018-01-15 Thread Lionel Landwerlin
This might be useful information for developers looking at an error state. v2: Place topology towards the end of the error state (Chris) v3: Reuse common printing code (Michal) Signed-off-by: Lionel Landwerlin --- drivers/gpu/drm/i915/i915_gpu_error.c | 9

[Intel-gfx] [PATCH v4 6/6] drm/i915: expose rcs topology through query uAPI

2018-01-15 Thread Lionel Landwerlin
With the introduction of asymmetric slices in CNL, we cannot rely on the previous SUBSLICE_MASK getparam to tell userspace what subslices are available. Here we introduce a more detailed way of querying the Gen's GPU topology that doesn't aggregate numbers. This is essential for monitoring parts

[Intel-gfx] [PATCH v4 2/6] drm/i915/debugfs: reuse max slice/subslices already stored in sseu

2018-01-15 Thread Lionel Landwerlin
Now that we have that information in the topology fields, let's just reuse it. v2: Style tweaks (Tvrtko) Signed-off-by: Lionel Landwerlin Reviewed-by: Tvrtko Ursulin --- drivers/gpu/drm/i915/i915_debugfs.c | 27 +++

[Intel-gfx] [PATCH v4 5/6] drm/i915: add query uAPI

2018-01-15 Thread Lionel Landwerlin
There is a range of information readable from hardware registers that we would like to make accessible to userspace. One particular example is the topology of the execution units (how execution units are grouped in subslices and slices, and also which ones have been fused off for die
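
For context on how a query interface like this is consumed, here is a hedged userspace sketch of the usual two-pass pattern (probe the required size, then fetch the data). The struct and ioctl names follow the form this work eventually took upstream (drm_i915_query, DRM_I915_QUERY_TOPOLOGY_INFO); the v4 revision under review may differ in detail:

    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include "i915_drm.h" /* kernel uapi header, via libdrm */

    /* Query the RCS topology blob from an open DRM fd (upstream uAPI names assumed). */
    static void *query_topology(int fd, int *len)
    {
            struct drm_i915_query_item item = {
                    .query_id = DRM_I915_QUERY_TOPOLOGY_INFO,
            };
            struct drm_i915_query q = {
                    .num_items = 1,
                    .items_ptr = (uintptr_t)&item,
            };
            void *data;

            /* First pass: item.length == 0 asks the kernel how much space is needed. */
            if (ioctl(fd, DRM_IOCTL_I915_QUERY, &q) || item.length <= 0)
                    return NULL;

            data = calloc(1, item.length);
            item.data_ptr = (uintptr_t)data;

            /* Second pass: the kernel fills the buffer with the topology info. */
            if (ioctl(fd, DRM_IOCTL_I915_QUERY, &q)) {
                    free(data);
                    return NULL;
            }

            *len = item.length;
            return data;
    }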

[Intel-gfx] [PATCH v4 0/6] drm/i915: expose RCS topology to userspace

2018-01-15 Thread Lionel Landwerlin
Hi all, Here is another iteration of this series. Most changes are pretty cosmetic (slightly different helper in patch 1, factored code in patches 3 & 4, etc.). Thanks for your feedback! Lionel Landwerlin (6): drm/i915: store all subslice masks drm/i915/debugfs: reuse max slice/subslices

[Intel-gfx] [PATCH] drm/i915/selftests: Test i915_sw_fence/dma_fence interop

2018-01-15 Thread Chris Wilson
Check that we can successfully wait upon a dma_fence using the i915_sw_fence, including the optional timeout mechanism. Signed-off-by: Chris Wilson Cc: Tvrtko Ursulin --- drivers/gpu/drm/i915/selftests/i915_sw_fence.c | 139

Re: [Intel-gfx] [PATCH 1/2] drm/i915: Use our singlethreaded wq for freeing objects

2018-01-15 Thread Chris Wilson
Quoting Tvrtko Ursulin (2018-01-15 16:54:57) > > In general it is a bit funky to use call_rcu to schedule a worker - what would > be the difference to just queueing the worker, which would have > synchronize_rcu in it? The complication came from introducing the direct cleanup path to penalise frequent

[Intel-gfx] ✗ Fi.CI.IGT: warning for series starting with [v2,1/1] meson: Refactor get_option() calls for directories (rev2)

2018-01-15 Thread Patchwork
== Series Details == Series: series starting with [v2,1/1] meson: Refactor get_option() calls for directories (rev2) URL : https://patchwork.freedesktop.org/series/36476/ State : warning == Summary == Test kms_frontbuffer_tracking: Subgroup fbc-1p-primscrn-pri-indfb-draw-render:

[Intel-gfx] ✓ Fi.CI.BAT: success for igt/pm_rps: Increase load for waitboosting

2018-01-15 Thread Patchwork
== Series Details == Series: igt/pm_rps: Increase load for waitboosting URL : https://patchwork.freedesktop.org/series/36452/ State : success == Summary == IGT patchset tested on top of latest successful build 84a308022028a55903a1916fcee516aab768ed48 tests/kms_plane: Run test for all

Re: [Intel-gfx] [PATCH 1/2] drm/i915: Use our singlethreaded wq for freeing objects

2018-01-15 Thread Tvrtko Ursulin
On 15/01/2018 12:28, Chris Wilson wrote: As freeing the objects requires serialisation on struct_mutex, we should prefer to use our singlethreaded driver wq that is dedicated to work requiring struct_mutex (hence serialised). The benefit should be less clutter on the system wq, allowing it to

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915: expose RCS topology to userspace

2018-01-15 Thread Patchwork
== Series Details == Series: drm/i915: expose RCS topology to userspace URL : https://patchwork.freedesktop.org/series/36486/ State : success == Summary == Series 36486v1 drm/i915: expose RCS topology to userspace https://patchwork.freedesktop.org/api/1.0/series/36486/revisions/1/mbox/ Test

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/selftests: Test i915_sw_fence/dma_fence interop

2018-01-15 Thread Patchwork
== Series Details == Series: drm/i915/selftests: Test i915_sw_fence/dma_fence interop URL : https://patchwork.freedesktop.org/series/36493/ State : success == Summary == Series 36493v1 drm/i915/selftests: Test i915_sw_fence/dma_fence interop

[Intel-gfx] ✓ Fi.CI.IGT: success for series starting with [1/2] drm/i915: Use our singlethreaded wq for freeing objects

2018-01-15 Thread Patchwork
== Series Details == Series: series starting with [1/2] drm/i915: Use our singlethreaded wq for freeing objects URL : https://patchwork.freedesktop.org/series/36480/ State : success == Summary == Test kms_flip: Subgroup vblank-vs-suspend: skip -> PASS

[Intel-gfx] ✓ Fi.CI.IGT: success for igt/pm_rps: Increase load for waitboosting

2018-01-15 Thread Patchwork
== Series Details == Series: igt/pm_rps: Increase load for waitboosting URL : https://patchwork.freedesktop.org/series/36452/ State : success == Summary == Test kms_setmode: Subgroup basic: fail -> PASS (shard-hsw) fdo#99912 Test drv_suspend:

[Intel-gfx] [PATCH] drm/i915: Rewrite some comments around RCU-deferred object free

2018-01-15 Thread Chris Wilson
Tvrtko noticed that the comments describing the interaction of RCU and the deferred worker for freeing drm_i915_gem_object were a little confusing, so attempt to bring some sense to them. Signed-off-by: Chris Wilson Cc: Tvrtko Ursulin Cc:

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/selftests: Test i915_sw_fence/dma_fence interop (rev2)

2018-01-15 Thread Patchwork
== Series Details == Series: drm/i915/selftests: Test i915_sw_fence/dma_fence interop (rev2) URL : https://patchwork.freedesktop.org/series/36493/ State : success == Summary == Series 36493v2 drm/i915/selftests: Test i915_sw_fence/dma_fence interop

Re: [Intel-gfx] [PATCH v4 3/6] drm/i915/debugfs: add rcs topology entry

2018-01-15 Thread Lionel Landwerlin
On 15/01/18 17:42, Tvrtko Ursulin wrote: On 15/01/2018 14:41, Lionel Landwerlin wrote: While the end goal is to make this information available to userspace through a new ioctl, there is no reason we can't display it in a human readable fashion through debugfs. slice0: 3 subslice(s) (0x7):

[Intel-gfx] ✗ Fi.CI.IGT: failure for drm/i915/selftests: Test i915_sw_fence/dma_fence interop

2018-01-15 Thread Patchwork
== Series Details == Series: drm/i915/selftests: Test i915_sw_fence/dma_fence interop URL : https://patchwork.freedesktop.org/series/36493/ State : failure == Summary == Test drv_selftest: Subgroup mock_fence: pass -> INCOMPLETE (shard-snb) pass

[Intel-gfx] ✗ Fi.CI.IGT: failure for kms_plane_scaling tests.

2018-01-15 Thread Patchwork
== Series Details == Series: kms_plane_scaling tests. URL : https://patchwork.freedesktop.org/series/36485/ State : failure == Summary == Test gem_cpu_reloc: Subgroup full: pass -> DMESG-FAIL (shard-hsw) shard-hsw total:2753 pass:1548 dwarn:1 dfail:1

Re: [Intel-gfx] ✗ Fi.CI.IGT: failure for drm/i915/selftests: Test i915_sw_fence/dma_fence interop

2018-01-15 Thread Chris Wilson
Quoting Patchwork (2018-01-15 19:31:14) > == Series Details == > > Series: drm/i915/selftests: Test i915_sw_fence/dma_fence interop > URL : https://patchwork.freedesktop.org/series/36493/ > State : failure > > == Summary == > > Test drv_selftest: > Subgroup mock_fence: >

[Intel-gfx] [PATCH v2] drm/i915/selftests: Test i915_sw_fence/dma_fence interop

2018-01-15 Thread Chris Wilson
Check that we can successfully wait upon a dma_fence using the i915_sw_fence, including the optional timeout mechanism. v2: Account for the rounding up of the timeout to the next second. Unfortunately, the minimum delay is then 1 second. Signed-off-by: Chris Wilson Cc:

Re: [Intel-gfx] [PATCH 1/2] drm/i915: Use our singlethreaded wq for freeing objects

2018-01-15 Thread Chris Wilson
Quoting Tvrtko Ursulin (2018-01-15 16:54:57) > > On 15/01/2018 12:28, Chris Wilson wrote: > > As freeing the objects requires serialisation on struct_mutex, we should > > prefer to use our singlethreaded driver wq that is dedicated to work > > requiring struct_mutex (hence serialised). The benefit

Re: [Intel-gfx] [PATCH v4 1/6] drm/i915: store all subslice masks

2018-01-15 Thread Lionel Landwerlin
On 15/01/18 17:37, Tvrtko Ursulin wrote: On 15/01/2018 14:41, Lionel Landwerlin wrote: Up to now, the subslice mask was assumed to be uniform across slices. But starting with Cannonlake, slices can be asymmetric (for example slice0 has a different number of subslices than slice1+). This change stores

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915: Rewrite some comments around RCU-deferred object free

2018-01-15 Thread Patchwork
== Series Details == Series: drm/i915: Rewrite some comments around RCU-deferred object free URL : https://patchwork.freedesktop.org/series/36500/ State : success == Summary == Series 36500v1 drm/i915: Rewrite some comments around RCU-deferred object free

[Intel-gfx] [PATCH 05/10] drm/i915: Trim the retired request queue after submitting

2018-01-15 Thread Chris Wilson
If we submit a request and see that the previous request on this timeline was already signaled, we first do not need to add the dependency tracker for that completed request, and secondly we know that there is then a large backlog in retiring requests affecting this timeline. Given that we just

Re: [Intel-gfx] [PATCH v4 3/6] drm/i915/debugfs: add rcs topology entry

2018-01-15 Thread Tvrtko Ursulin
On 15/01/2018 14:41, Lionel Landwerlin wrote: While the end goal is to make this information available to userspace through a new ioctl, there is no reason we can't display it in a human readable fashion through debugfs. slice0: 3 subslice(s) (0x7): subslice0: 8 EUs (0xff)

Re: [Intel-gfx] [PATCH v4 4/6] drm/i915: add rcs topology to error state

2018-01-15 Thread Tvrtko Ursulin
On 15/01/2018 14:41, Lionel Landwerlin wrote: This might be useful information for developers looking at an error state. v2: Place topology towards the end of the error state (Chris) v3: Reuse common printing code (Michal) Signed-off-by: Lionel Landwerlin ---

[Intel-gfx] [PATCH 08/10] drm/i915: Move the irq_counter inside the spinlock

2018-01-15 Thread Chris Wilson
Rather than have multiple locked instructions inside the notify_ring() irq handler, move them inside the spinlock and reduce their intrinsic locking. Signed-off-by: Chris Wilson --- drivers/gpu/drm/i915/i915_gem_request.c | 4 ++-- drivers/gpu/drm/i915/i915_irq.c

[Intel-gfx] [PATCH 09/10] drm/i915: Only signal from interrupt when requested

2018-01-15 Thread Chris Wilson
Avoid calling dma_fence_signal() from inside the interrupt if we haven't enabled signaling on the request. Signed-off-by: Chris Wilson --- drivers/gpu/drm/i915/i915_gem_request.c | 2 +- drivers/gpu/drm/i915/i915_irq.c | 3 ++-

[Intel-gfx] [PATCH 10/10] drm/i915/breadcrumbs: Reduce signaler rbtree to a sorted list

2018-01-15 Thread Chris Wilson
The goal here is to try and reduce the latency of signaling additional requests following the wakeup from interrupt, by reducing the list of to-be-signaled requests from an rbtree to a sorted linked list. The original choice of an rbtree was to facilitate random insertions of requests into the
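
As a rough illustration of the sorted-list idea described above (not the actual breadcrumbs code), a seqno-ordered insert over a plain list_head might look like this, exploiting the fact that most insertions happen near the tail:

    #include <linux/list.h>
    #include <linux/types.h>

    struct signal_node {
            struct list_head link;
            u32 seqno;
    };

    /* Insert @node so the list stays sorted by ascending seqno (illustrative only). */
    static void insert_signal(struct list_head *signals, struct signal_node *node)
    {
            struct signal_node *pos;

            /* Most requests arrive in order, so walk backwards from the tail. */
            list_for_each_entry_reverse(pos, signals, link)
                    if ((s32)(node->seqno - pos->seqno) >= 0)
                            break;

            /* If no earlier seqno was found, this inserts at the head. */
            list_add(&node->link, &pos->link);
    }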

[Intel-gfx] [PATCH 06/10] drm/i915/breadcrumbs: Drop request reference for the signaler thread

2018-01-15 Thread Chris Wilson
If we remember to cancel the signaler on a request when retiring it (after we know that the request has been signaled), we do not need to carry an additional request in the signaler itself. This prevents an issue whereby the signaler threads may be delayed and hold on to thousands of request

[Intel-gfx] [PATCH 01/10] drm/i915: Only attempt to scan the requested number of shrinker slabs

2018-01-15 Thread Chris Wilson
Since commit 4e773c3a8a69 ("drm/i915: Wire up shrinkctl->nr_scanned"), we track the number of objects we scan and do not wish to exceed that as it will overly penalise our own slabs under mempressure. Given that we now know the target number of objects to scan, use that as our guide for deciding
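
The pattern being described is the generic shrinker contract: scan only as many objects as shrinkctl asks for and report progress through sc->nr_scanned. A hedged, illustrative scan callback follows; free_one_object() is a hypothetical helper, not an i915 function:

    #include <linux/shrinker.h>

    static unsigned long free_one_object(void); /* hypothetical: frees one object, returns count */

    /* Illustrative scan callback: honour sc->nr_to_scan instead of scanning everything. */
    static unsigned long example_shrink_scan(struct shrinker *shrinker,
                                             struct shrink_control *sc)
    {
            unsigned long freed = 0;

            while (sc->nr_scanned < sc->nr_to_scan) {
                    unsigned long count = free_one_object();

                    if (!count)
                            break;

                    freed += count;
                    sc->nr_scanned++;
            }

            return freed ?: SHRINK_STOP;
    }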

[Intel-gfx] Prevent trivial oom from gem_exec_nop/sequential

2018-01-15 Thread Chris Wilson
About the third resend of this series that tries to curtail the prolonged reference lifetimes of fences via the signaler thread when the CPUs are saturated by interrupts. -Chris

[Intel-gfx] [PATCH 02/10] drm/i915: Move i915_gem_retire_work_handler

2018-01-15 Thread Chris Wilson
In preparation for the next patch, move i915_gem_retire_work_handler() later to avoid a forward declaration. Signed-off-by: Chris Wilson --- drivers/gpu/drm/i915/i915_gem.c | 228 1 file changed, 114 insertions(+), 114

[Intel-gfx] [PATCH 04/10] drm/i915: Shrink the request kmem_cache on allocation error

2018-01-15 Thread Chris Wilson
If we fail to allocate a new request, make sure we recover the pages that are in the process of being freed by inserting an RCU barrier. Signed-off-by: Chris Wilson --- drivers/gpu/drm/i915/i915_gem_request.c | 3 +++ 1 file changed, 3 insertions(+) diff --git
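
A minimal sketch of the recovery idea in the preview: if the slab allocation fails, flush the outstanding RCU-deferred frees with rcu_barrier() so their pages become reclaimable, then retry. Names and structure here are illustrative, not the i915 request code:

    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    /* Illustrative allocation with an RCU-barrier fallback. */
    static void *alloc_request(struct kmem_cache *cache)
    {
            void *req = kmem_cache_alloc(cache, GFP_KERNEL);

            if (unlikely(!req)) {
                    /* Flush outstanding call_rcu() frees so their pages become reusable. */
                    rcu_barrier();
                    req = kmem_cache_alloc(cache, GFP_KERNEL);
            }

            return req;
    }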

[Intel-gfx] [PATCH 07/10] drm/i915: Reduce spinlock hold time during notify_ring() interrupt

2018-01-15 Thread Chris Wilson
By taking advantage of the RCU protection of the task struct, we can find the appropriate signaler under the spinlock and then release the spinlock before waking the task and signaling the fence. Signed-off-by: Chris Wilson --- drivers/gpu/drm/i915/i915_irq.c | 29

[Intel-gfx] [PATCH 03/10] drm/i915: Shrink the GEM kmem_caches upon idling

2018-01-15 Thread Chris Wilson
When we finally decide the gpu is idle, that is a good time to shrink our kmem_caches. Signed-off-by: Chris Wilson --- drivers/gpu/drm/i915/i915_gem.c | 22 ++ 1 file changed, 22 insertions(+) diff --git a/drivers/gpu/drm/i915/i915_gem.c

Re: [Intel-gfx] [PATCH v4 6/6] drm/i915: expose rcs topology through query uAPI

2018-01-15 Thread Tvrtko Ursulin
On 15/01/2018 14:41, Lionel Landwerlin wrote: With the introduction of asymmetric slices in CNL, we cannot rely on the previous SUBSLICE_MASK getparam to tell userspace what subslices are available. Here we introduce a more detailed way of querying the Gen's GPU topology that doesn't aggregate

Re: [Intel-gfx] [PATCH v4 6/6] drm/i915: expose rcs topology through query uAPI

2018-01-15 Thread Lionel Landwerlin
On 15/01/18 17:54, Tvrtko Ursulin wrote: On 15/01/2018 14:41, Lionel Landwerlin wrote: With the introduction of asymmetric slices in CNL, we cannot rely on the previous SUBSLICE_MASK getparam to tell userspace what subslices are available. Here we introduce a more detailed way of querying the

[Intel-gfx] ✗ Fi.CI.IGT: failure for drm/i915: expose RCS topology to userspace

2018-01-15 Thread Patchwork
== Series Details == Series: drm/i915: expose RCS topology to userspace URL : https://patchwork.freedesktop.org/series/36486/ State : failure == Summary == Test kms_frontbuffer_tracking: Subgroup fbc-1p-offscren-pri-shrfb-draw-blt: pass -> FAIL (shard-snb)

[Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915/selftests: Test i915_sw_fence/dma_fence interop (rev2)

2018-01-15 Thread Patchwork
== Series Details == Series: drm/i915/selftests: Test i915_sw_fence/dma_fence interop (rev2) URL : https://patchwork.freedesktop.org/series/36493/ State : success == Summary == Test kms_frontbuffer_tracking: Subgroup fbc-1p-primscrn-pri-indfb-draw-render: pass ->

[Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: Rewrite some comments around RCU-deferred object free

2018-01-15 Thread Patchwork
== Series Details == Series: drm/i915: Rewrite some comments around RCU-deferred object free URL : https://patchwork.freedesktop.org/series/36500/ State : success == Summary == Test kms_frontbuffer_tracking: Subgroup fbc-1p-offscren-pri-shrfb-draw-blt: pass ->

[Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [01/10] drm/i915: Only attempt to scan the requested number of shrinker slabs

2018-01-15 Thread Patchwork
== Series Details == Series: series starting with [01/10] drm/i915: Only attempt to scan the requested number of shrinker slabs URL : https://patchwork.freedesktop.org/series/36501/ State : success == Summary == Series 36501v1 series starting with [01/10] drm/i915: Only attempt to scan the

Re: [Intel-gfx] [PATCH 1/2] drm/i915: Add display WA #1175 for planes ending close to right screen edge

2018-01-15 Thread Imre Deak
On Fri, Jan 12, 2018 at 03:01:59PM +, Chris Wilson wrote: > Quoting Imre Deak (2018-01-12 14:54:36) > > As described in the WA on GLK and CNL planes on the right edge of the > > screen that have less than 4 pixels visible from the beginning of the > > plane to the edge of the screen can cause
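
To make the constraint concrete, here is a rough, hypothetical sketch of the kind of check the WA implies (a plane left with fewer than 4 visible pixels between its start and the right screen edge is rejected); the exact condition and remedy are in Imre's patch and are not reproduced here:

    #include <linux/errno.h>

    /* Hypothetical check for display WA #1175 (GLK/CNL): planes with fewer than
     * 4 pixels visible between the plane's start and the right screen edge can
     * cause underruns, so refuse such a configuration. */
    static int check_plane_right_edge(int crtc_x, int pipe_src_w)
    {
            int visible = pipe_src_w - crtc_x;

            if (visible > 0 && visible < 4)
                    return -EINVAL;

            return 0;
    }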

Re: [Intel-gfx] [PATCH 1/2] drm/i915: Add display WA #1175 for planes ending close to right screen edge

2018-01-15 Thread Imre Deak
On Fri, Jan 12, 2018 at 03:13:19PM +, Chris Wilson wrote: > Quoting Imre Deak (2018-01-12 14:54:36) > > As described in the WA on GLK and CNL planes on the right edge of the > > screen that have less than 4 pixels visible from the beginning of the > > plane to the edge of the screen can cause

Re: [Intel-gfx] [PATCH 1/2] drm/i915: Add display WA #1175 for planes ending close to right screen edge

2018-01-15 Thread Chris Wilson
Quoting Imre Deak (2018-01-15 13:20:37) > On Fri, Jan 12, 2018 at 03:01:59PM +, Chris Wilson wrote: > > Quoting Imre Deak (2018-01-12 14:54:36) > > > As described in the WA on GLK and CNL planes on the right edge of the > > > screen that have less than 4 pixels visible from the beginning of

Re: [Intel-gfx] [PATCH 1/2] drm/i915: Add display WA #1175 for planes ending close to right screen edge

2018-01-15 Thread Imre Deak
On Mon, Jan 15, 2018 at 01:26:48PM +, Chris Wilson wrote: > Quoting Imre Deak (2018-01-15 13:20:37) > > On Fri, Jan 12, 2018 at 03:01:59PM +, Chris Wilson wrote: > > > Quoting Imre Deak (2018-01-12 14:54:36) > > > > As described in the WA on GLK and CNL planes on the right edge of the > >

[Intel-gfx] IGT news - New mailing list, switching to meson

2018-01-15 Thread Petri Latvala
New mailing list: IGT now has its own mailing list. For a transition period, patches on both the new mailing list and intel-gfx (with the appropriate patch subject prefix) get tested on their own respective Patchwork instances. Patches that are sent to both lists will get

Re: [Intel-gfx] [PATCH v2 01/11] drm/i915: Disable preemption and sleeping while using the punit sideband

2018-01-15 Thread Mika Kuoppala
Chris Wilson writes: > Quoting Mika Kuoppala (2018-01-15 12:04:40) >> Chris Wilson writes: >> >> > While we talk to the punit over its sideband, we need to prevent the cpu >> > from sleeping in order to prevent a potential machine hang. >> >

[Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [1/2] drm/i915: Use our singlethreaded wq for freeing objects

2018-01-15 Thread Patchwork
== Series Details == Series: series starting with [1/2] drm/i915: Use our singlethreaded wq for freeing objects URL : https://patchwork.freedesktop.org/series/36480/ State : success == Summary == Series 36480v1 series starting with [1/2] drm/i915: Use our singlethreaded wq for freeing

[Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [v2,01/11] drm/i915: Disable preemption and sleeping while using the punit sideband

2018-01-15 Thread Patchwork
== Series Details == Series: series starting with [v2,01/11] drm/i915: Disable preemption and sleeping while using the punit sideband URL : https://patchwork.freedesktop.org/series/36469/ State : success == Summary == Series 36469v1 series starting with [v2,01/11] drm/i915: Disable

Re: [Intel-gfx] [PATCH 1/5] drm/vblank: Fix return type for drm_vblank_count()

2018-01-15 Thread Daniel Vetter
On Fri, Jan 12, 2018 at 01:57:03PM -0800, Dhinakaran Pandiyan wrote: > drm_vblank_count() has a u32 type returning what is a 64-bit vblank count. > The effect of this is when drm_wait_vblank_ioctl() tries to widen the user > space requested vblank sequence using this clipped 32-bit count (when the

Re: [Intel-gfx] [PATCH 1/2] drm/i915: Only defer freeing of fence callback when also using the timer

2018-01-15 Thread Tvrtko Ursulin
On 15/01/2018 09:06, Chris Wilson wrote: Without an accompanying timer (for internal fences), we can free the fence callback immediately as we do not need to employ the RCU barrier to serialise with the timer. By avoiding the RCU delay, we can avoid the extra mempressure under heavy

Re: [Intel-gfx] [PATCH 2/2] drm/i915/fence: Separate timeout mechanism for awaiting on dma-fences

2018-01-15 Thread Chris Wilson
Quoting Tvrtko Ursulin (2018-01-15 10:08:01) > > On 15/01/2018 09:06, Chris Wilson wrote: > > As the timeout mechanism has grown more and more complicated, using > > multiple deferred tasks and more than doubling the size of our struct, > > split the two implementations to streamline the simpler

[Intel-gfx] [PATCH] drm/i915: Lock out execlist tasklet while peeking inside for busy-stats

2018-01-15 Thread Chris Wilson
In order to prevent a race condition where we may end up overaccounting the active state and leaving the busy-stats believing the GPU is 100% busy, lock out the tasklet while we reconstruct the busy state. There is no direct spinlock guard for the execlists->port[], so we need to utilise
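
One plausible shape of the lock-out described above, using the standard tasklet API; the driver's actual execlists/tasklet naming is not reproduced here, this is only a sketch of the bracketing:

    #include <linux/interrupt.h>

    /* Illustrative: prevent the submission tasklet from running (and wait for a
     * currently running instance to finish) while the busy state is rebuilt. */
    static void sample_busy_state(struct tasklet_struct *submission_tasklet)
    {
            tasklet_disable(submission_tasklet);

            /* ... walk the execlist ports and reconstruct the busy-stats view ... */

            tasklet_enable(submission_tasklet);
    }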

[Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [1/2] drm/i915: Only defer freeing of fence callback when also using the timer

2018-01-15 Thread Patchwork
== Series Details == Series: series starting with [1/2] drm/i915: Only defer freeing of fence callback when also using the timer URL : https://patchwork.freedesktop.org/series/36470/ State : success == Summary == Series 36470v1 series starting with [1/2] drm/i915: Only defer freeing of fence

[Intel-gfx] ✓ Fi.CI.IGT: success for Adding NV12 support (rev5)

2018-01-15 Thread Patchwork
== Series Details == Series: Adding NV12 support (rev5) URL : https://patchwork.freedesktop.org/series/28103/ State : success == Summary == Test gem_tiled_swapping: Subgroup non-threaded: incomplete -> PASS (shard-hsw) fdo#104218 +1 Test kms_pipe_crc_basic:

Re: [Intel-gfx] [PATCH 5/5] drm/i915: Estimate and update missed vblanks.

2018-01-15 Thread Daniel Vetter
On Fri, Jan 12, 2018 at 01:57:07PM -0800, Dhinakaran Pandiyan wrote: > The frame counter may have got reset between disabling and enabling vblank > interrupts due to DMC putting the hardware to DC5/6 state if PSR was > active. The frame counter also could have stalled if PSR is active in cases >

Re: [Intel-gfx] [PATCH] drm/i915: Lock out execlist tasklet while peeking inside for busy-stats

2018-01-15 Thread Tvrtko Ursulin
On 15/01/2018 09:20, Chris Wilson wrote: In order to prevent a race condition where we may end up overaccounting the active state and leaving the busy-stats believing the GPU is 100% busy, lock out the tasklet while we reconstruct the busy state. There is no direct spinlock guard for the

Re: [Intel-gfx] [PATCH 1/2] drm/i915: Only defer freeing of fence callback when also using the timer

2018-01-15 Thread Chris Wilson
Quoting Tvrtko Ursulin (2018-01-15 10:00:48) > > On 15/01/2018 09:06, Chris Wilson wrote: > > Without an accompanying timer (for internal fences), we can free the > > fence callback immediately as we do not need to employ the RCU barrier > > to serialise with the timer. By avoiding the RCU delay,

Re: [Intel-gfx] ✗ Fi.CI.BAT: failure for series starting with [1/2] drm/i915/guc: Don't enable GuC when vGPU is active

2018-01-15 Thread Joonas Lahtinen
On Fri, 2018-01-12 at 14:08 +0800, Du, Changbin wrote: > On Fri, Jan 12, 2018 at 11:32:30AM +0530, Sagar Arun Kamble wrote: > > Is skl-gvtdvm not having vGPU active? > > > > It has the flag X86_FEATURE_HYPERVISOR set, however it might be set on the host too, > > so we rely on intel_vgpu_active(). > > > > Do

Re: [Intel-gfx] [PATCH 2/2] drm/i915/fence: Separate timeout mechanism for awaiting on dma-fences

2018-01-15 Thread Tvrtko Ursulin
On 15/01/2018 09:06, Chris Wilson wrote: As the timeout mechanism has grown more and more complicated, using multiple deferred tasks and more than doubling the size of our struct, split the two implementations to streamline the simpler no-timeout callback variant. Signed-off-by: Chris Wilson

Re: [Intel-gfx] [PATCH 2/2] drm/i915/fence: Separate timeout mechanism for awaiting on dma-fences

2018-01-15 Thread Chris Wilson
Quoting Chris Wilson (2018-01-15 10:15:45) > Quoting Tvrtko Ursulin (2018-01-15 10:08:01) > > If it compiles, and works, assuming we have test cases which exercise > > both paths, then it is obviously fine. > > The no-timeout variants are used for inter-engine signaling, the > timeout variant

[Intel-gfx] ✗ Fi.CI.IGT: failure for series starting with [v2,01/11] drm/i915: Disable preemption and sleeping while using the punit sideband

2018-01-15 Thread Patchwork
== Series Details == Series: series starting with [v2,01/11] drm/i915: Disable preemption and sleeping while using the punit sideband URL : https://patchwork.freedesktop.org/series/36469/ State : failure == Summary == Test kms_flip: Subgroup flip-vs-absolute-wf_vblank-interruptible:

Re: [Intel-gfx] [PATCH v2 01/11] drm/i915: Disable preemption and sleeping while using the punit sideband

2018-01-15 Thread Mika Kuoppala
Chris Wilson writes: > While we talk to the punit over its sideband, we need to prevent the cpu > from sleeping in order to prevent a potential machine hang. > > Note that by itself, it appears that pm_qos_update_request (via > intel_idle) doesn't provide a sufficient

[Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: Lock out execlist tasklet while peeking inside for busy-stats

2018-01-15 Thread Patchwork
== Series Details == Series: drm/i915: Lock out execlist tasklet while peeking inside for busy-stats URL : https://patchwork.freedesktop.org/series/36473/ State : success == Summary == Test kms_flip: Subgroup plain-flip-fb-recreate: fail -> PASS (shard-hsw)

Re: [Intel-gfx] [PATCH v2 01/11] drm/i915: Disable preemption and sleeping while using the punit sideband

2018-01-15 Thread Chris Wilson
Quoting Mika Kuoppala (2018-01-15 12:04:40) > Chris Wilson writes: > > > While we talk to the punit over its sideband, we need to prevent the cpu > > from sleeping in order to prevent a potential machine hang. > > > > Note that by itself, it appears that

Re: [Intel-gfx] [PATCH v2 01/11] drm/i915: Disable preemption and sleeping while using the punit sideband

2018-01-15 Thread Chris Wilson
Quoting Mika Kuoppala (2018-01-15 12:04:40) > Chris Wilson writes: > > > While we talk to the punit over its sideband, we need to prevent the cpu > > from sleeping in order to prevent a potential machine hang. > > > > Note that by itself, it appears that

Re: [Intel-gfx] ✓ Fi.CI.IGT: success for series starting with [1/2] drm/i915: Only defer freeing of fence callback when also using the timer

2018-01-15 Thread Chris Wilson
Quoting Patchwork (2018-01-15 10:59:15) > == Series Details == > > Series: series starting with [1/2] drm/i915: Only defer freeing of fence > callback when also using the timer > URL : https://patchwork.freedesktop.org/series/36470/ > State : success > > == Summary == > > Test

Re: [Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: Lock out execlist tasklet while peeking inside for busy-stats

2018-01-15 Thread Chris Wilson
Quoting Patchwork (2018-01-15 12:11:48) > == Series Details == > > Series: drm/i915: Lock out execlist tasklet while peeking inside for > busy-stats > URL : https://patchwork.freedesktop.org/series/36473/ > State : success > > == Logs == > > For more details see: >

[Intel-gfx] [PATCH 2/2] drm/i915: Only attempt to scan the requested number of shrinker slabs

2018-01-15 Thread Chris Wilson
Since commit 4e773c3a8a69 ("drm/i915: Wire up shrinkctl->nr_scanned"), we track the number of objects we scan and do not wish to exceed that as it will overly penalise our own slabs under mempressure. Given that we now know the target number of objects to scan, use that as our guide for deciding

[Intel-gfx] [PATCH 1/2] drm/i915: Use our singlethreaded wq for freeing objects

2018-01-15 Thread Chris Wilson
As freeing the objects requires serialisation on struct_mutex, we should prefer to use our singlethreaded driver wq that is dedicated to work requiring struct_mutex (hence serialised). The benefit should be less clutter on the system wq, allowing it to make progress even when the driver/struct_mutex

[Intel-gfx] ✗ Fi.CI.BAT: failure for Adding NV12 support (rev5)

2018-01-15 Thread Patchwork
== Series Details == Series: Adding NV12 support (rev5) URL : https://patchwork.freedesktop.org/series/28103/ State : failure == Summary == Series 28103v5 Adding NV12 support https://patchwork.freedesktop.org/api/1.0/series/28103/revisions/5/mbox/ Test core_auth: Subgroup basic-auth:

Re: [Intel-gfx] [PATCH v2 01/11] drm/i915: Disable preemption and sleeping while using the punit sideband

2018-01-15 Thread Hans de Goede
Hi, On 15-01-18 13:21, Chris Wilson wrote: Quoting Mika Kuoppala (2018-01-15 12:04:40) Chris Wilson writes: While we talk to the punit over its sideband, we need to prevent the cpu from sleeping in order to prevent a potential machine hang. Note that by itself, it

[Intel-gfx] ✗ Fi.CI.IGT: failure for series starting with [v2,1/6] drm/i915: Lock out execlist tasklet while peeking inside for busy-stats

2018-01-15 Thread Patchwork
== Series Details == Series: series starting with [v2,1/6] drm/i915: Lock out execlist tasklet while peeking inside for busy-stats URL : https://patchwork.freedesktop.org/series/36475/ State : failure == Summary == Test kms_flip: Subgroup plain-flip-fb-recreate: fail

[Intel-gfx] [PATCH v2 01/11] drm/i915: Disable preemption and sleeping while using the punit sideband

2018-01-15 Thread Chris Wilson
While we talk to the punit over its sideband, we need to prevent the cpu from sleeping in order to prevent a potential machine hang. Note that by itself, it appears that pm_qos_update_request (via intel_idle) doesn't provide a sufficient barrier to ensure that all core are indeed awake (out of
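
A hedged sketch of the w/a shape described in this cover text: hold a cpu-latency pm_qos request so cores stay out of deep idle, and disable preemption around the punit sideband transaction. This uses the pre-5.7 pm_qos interface; the series itself wraps the equivalent logic inside its sideband get/put helpers:

    #include <linux/pm_qos.h>
    #include <linux/preempt.h>

    static struct pm_qos_request punit_qos; /* assumed registered with pm_qos_add_request() at init */

    static void punit_transaction_example(void)
    {
            /* Keep all cpus out of deep C-states for the duration of the access. */
            pm_qos_update_request(&punit_qos, 0);
            preempt_disable();

            /* ... issue the punit sideband read/write here ... */

            preempt_enable();
            pm_qos_update_request(&punit_qos, PM_QOS_DEFAULT_VALUE);
    }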

[Intel-gfx] [PATCH v2 09/11] drm/i915: Merge sandybridge_pcode_(read|write)

2018-01-15 Thread Chris Wilson
These routines are identical except in the nature of the value parameter. For writes it is a pure in-param, but for a read, we need an out-param. Since they differ in a single line, merge the two routines into one. Signed-off-by: Chris Wilson ---

[Intel-gfx] [PATCH v2 10/11] drm/i915: Move sandybridge pcode access to intel_sideband.c

2018-01-15 Thread Chris Wilson
sandybridge_pcode is another sideband, so move it to its new home. Signed-off-by: Chris Wilson --- drivers/gpu/drm/i915/i915_drv.h | 5 - drivers/gpu/drm/i915/intel_hdcp.c | 3 +- drivers/gpu/drm/i915/intel_pm.c | 190

[Intel-gfx] [PATCH v2 02/11] drm/i915: Lift acquiring the vlv punit magic to a common sb-get

2018-01-15 Thread Chris Wilson
As we now employ a very heavy pm_qos around the punit access, we want to minimise the number of synchronous requests by performing one for the whole punit sequence rather than around individual accesses. The sideband lock is used for this, so push the pm_qos into the sideband lock acquisition and

[Intel-gfx] [PATCH v2 06/11] drm/i915: Replace pcu_lock with sb_lock

2018-01-15 Thread Chris Wilson
We now have two locks for sideband access: the general one covering sideband access across all generations, sb_lock, and a specific one covering sideband access via the punit on vlv/chv. After lifting the sb_lock around the punit into the callers, the pcu_lock is now redundant and can be separated

[Intel-gfx] Vlv punit w/a (take two)

2018-01-15 Thread Chris Wilson
I've incorporated Ville's feedback that this is highly unlikely to be a general problem and tied the w/a to only the Valleyview punit. I kept it reasonably open just in case we need to extend it, and to have an interface that makes locking all sideband access convenient. (If we make punit access

[Intel-gfx] [PATCH v2 08/11] drm/i915: Merge sbi read/write into a single accessor

2018-01-15 Thread Chris Wilson
Since intel_sideband_read and intel_sideband_write differ by only a couple of lines (depending on whether we feed the value in or out), merge the two into a single common accessor. Signed-off-by: Chris Wilson --- drivers/gpu/drm/i915/intel_sideband.c | 92

[Intel-gfx] [PATCH v2 07/11] drm/i915: Separate sideband declarations to intel_sideband.h

2018-01-15 Thread Chris Wilson
Split the sideband declarations out of the ginormous i915_drv.h. Signed-off-by: Chris Wilson --- drivers/gpu/drm/i915/i915_debugfs.c | 2 + drivers/gpu/drm/i915/i915_drv.h | 62 drivers/gpu/drm/i915/i915_sysfs.c | 2 +

[Intel-gfx] [PATCH v2 05/11] Revert "drm/i915: Avoid tweaking evaluation thresholds on Baytrail v3"

2018-01-15 Thread Chris Wilson
With the vlv sideband fixed to avoid sleeping while we talk to the punit, the system should be much more stable and be able to utilise the punit without risk. This reverts commit 6067a27d1f01 ("drm/i915: Avoid tweaking evaluation thresholds on Baytrail v3") References: 6067a27d1f01 ("drm/i915:

[Intel-gfx] ✓ Fi.CI.BAT: success for Adding NV12 support (rev5)

2018-01-15 Thread Patchwork
== Series Details == Series: Adding NV12 support (rev5) URL : https://patchwork.freedesktop.org/series/28103/ State : success == Summary == Series 28103v5 Adding NV12 support https://patchwork.freedesktop.org/api/1.0/series/28103/revisions/5/mbox/ Test debugfs_test: Subgroup
