If we don't do that, we have to wait for the job timeout to expire
before the faulting jobs get killed.
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_mmu.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c
b
Things are unlikely to resolve until we reset the GPU. Let's not wait
for other faults/timeouts to trigger this reset.
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_mmu.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm
Job headers contain an exception type field which might be read and
converted to a human-readable string by tracing tools. Let's expose
the exception type as an enum so we share the same definition.
Signed-off-by: Boris Brezillon
---
include/uapi/drm/panfrost_drm.h | 65
codes.
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_device.c | 134 +
drivers/gpu/drm/panfrost/panfrost_device.h | 1 +
2 files changed, 88 insertions(+), 47 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_device.c
b/drivers/gpu/drm/pa
Currently unused. We'll add it back if we need per-GPU definitions.
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_device.c | 2 +-
drivers/gpu/drm/panfrost/panfrost_device.h | 2 +-
drivers/gpu/drm/panfrost/panfrost_gpu.c | 2 +-
drivers/gpu/drm/panfrost/panfrost_job.c
lifetime is no longer bound to the FD lifetime and running jobs
can finish properly without generating spurious page faults.
Reported-by: Icecream95
Fixes: 7282f7645d06 ("drm/panfrost: Implement per FD address spaces")
Cc:
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfros
tch 11 and 12
Boris Brezillon (12):
drm/panfrost: Make sure MMU context lifetime is not bound to
panfrost_priv
drm/panfrost: Get rid of the unused JS_STATUS_EVENT_ACTIVE definition
drm/panfrost: Drop the pfdev argument passed to
panfrost_exception_name()
drm/panfrost: Expose ex
If the process that submitted these jobs closes the FD before the jobs
are done, it probably means it doesn't care about the result.
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_job.c | 33 +
1 file changed, 28 insertions(+), 5 deletions
If we can recover from a fault without a reset, there's no reason to
issue one.
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_device.c | 9 ++
drivers/gpu/drm/panfrost/panfrost_device.h | 2 ++
drivers/gpu/drm/panfrost/panfrost_job.c | 35 ++
3
Expose a helper to trigger a GPU reset so we can easily trigger reset
operations outside the job timeout handler.
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_device.h | 8
drivers/gpu/drm/panfrost/panfrost_job.c | 4 +---
2 files changed, 9 insertions(+), 3
Exception types will be defined as an enum in panfrost_drm.h so userspace
can use the same definitions if needed.
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_regs.h | 3 ---
1 file changed, 3 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_regs.h
b/drivers
to something
else. Any feedback on that aspect is welcome.
Regards,
Boris
Boris Brezillon (10):
drm/panfrost: Make sure MMU context lifetime is not bound to
panfrost_priv
drm/panfrost: Get rid of the unused JS_STATUS_EVENT_ACTIVE definition
drm/panfrost: Drop the pfdev ar
Make sure all bo->base.pages entries are either NULL or pointing to a
valid page before calling drm_gem_shmem_put_pages().
Reported-by: Tomeu Vizoso
Cc:
Fixes: 187d2929206e ("drm/panfrost: Add support for GPU heap allocations")
Signed-off-by: Boris Brezillon
---
drivers/gpu
On Fri, 12 Mar 2021 19:25:13 +0100
Boris Brezillon wrote:
> > So where does this leave us? Well, it depends on your submit model
> > and exactly how you handle pipeline barriers that sync between
> > engines. If you're taking option 3 above and doing two command
&g
On Fri, 12 Mar 2021 09:37:49 -0600
Jason Ekstrand wrote:
> On Fri, Mar 12, 2021 at 1:31 AM Boris Brezillon
> wrote:
> >
> > On Thu, 11 Mar 2021 12:11:48 -0600
> > Jason Ekstrand wrote:
> >
> > > > > > > > 2/ Queued
On Thu, 11 Mar 2021 12:16:33 +
Steven Price wrote:
> Also the current code completely ignores PANFROST_BO_REF_READ. So either
> that should be defined as 0, or even better we support 3 modes:
>
> * Exclusive ('write' access)
> * Shared ('read' access)
> * No fence - ensures the BO is
On Thu, 11 Mar 2021 12:11:48 -0600
Jason Ekstrand wrote:
> > > > > > 2/ Queued jobs might be executed out-of-order (unless they have
> > > > > > explicit/implicit deps between them), and Vulkan asks that the
> > > > > > out
> > > > > > fence be signaled when all jobs are done. Timeline
Hi Jason,
On Thu, 11 Mar 2021 10:58:46 -0600
Jason Ekstrand wrote:
> Hi all,
>
> Dropping in where I may or may not be wanted, so feel free to ignore. :-)
I'm glad you decided to chime in. :-)
> > > > 2/ Queued jobs might be executed out-of-order (unless they have
> > > >
Hi Steven,
On Thu, 11 Mar 2021 12:16:33 +
Steven Price wrote:
> On 11/03/2021 09:25, Boris Brezillon wrote:
> > Hello,
> >
> > I've been playing with Vulkan lately and struggled quite a bit to
> > implement VkQueueSubmit with the submit ioctl we have. There are
&
This should help limit the number of ioctls when submitting multiple
jobs. The new ioctl also supports syncobj timelines and BO access flags.
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_drv.c | 303
include/uapi/drm/panfrost_drm.h | 79
So we don't have to change the prototype if we extend the function.
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_job.c | 22 --
1 file changed, 8 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c
b/drivers/gpu/drm
Now that we have a new SUBMIT ioctl dealing with timelined syncobjs we
can advertise the feature.
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_drv.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c
b/drivers
We now have a new ioctl that allows submitting multiple jobs at once
(among other things) and we support timelined syncobjs. Bump the
minor version number to reflect those changes.
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_drv.c | 3 ++-
1 file changed, 2 insertions
Jobs reading from the same BO should not be serialized. Add access flags
so we can relax the implicit dependencies in that case. We force RW
access for now to keep the behavior unchanged, but a new SUBMIT ioctl
taking explicit access flags will be introduced.
Signed-off-by: Boris Brezillon
This way we can re-use the standard drm_gem_fence_array_add_implicit()
helper and simplify the panfrost_job_dependency() logic.
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_drv.c | 42 +++---
drivers/gpu/drm/panfrost/panfrost_job.c | 57
So we can re-use it from elsewhere.
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_drv.c | 52 ++---
1 file changed, 29 insertions(+), 23 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c
b/drivers/gpu/drm/panfrost/panfrost_drv.c
index
get
those patches merged until we have a userspace user, but I thought
starting the discussion early would be a good thing.
Feel free to suggest other approaches.
Regards,
Boris
Boris Brezillon (7):
drm/panfrost: Pass a job to panfrost_{acquire,attach_object_fences}()
drm/panfrost: Coll
ink at least, would need some deadlock and testing.
> > >>
> > >> The big problem with this sort of method for triggering the shrinkers is
> > >> that they are called without (many) locks held. Whereas it's entirely
> > >> possible for a shrinker to be ca
On Fri, 5 Feb 2021 12:17:54 +0100
Boris Brezillon wrote:
> Hello,
>
> Here are 2 fixes and one improvement for the page fault handling. Those
> bugs were found while working on indirect draw supports which requires
> the allocation of a big heap buffer for varyings, and t
heap allocations")
Signed-off-by: Boris Brezillon
Reviewed-by: Steven Price
---
drivers/gpu/drm/panfrost/panfrost_mmu.c | 9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c
b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index 90
Doing a hw-irq -> threaded-irq round-trip is counter-productive; stay
in the threaded irq handler as long as we can.
v2:
* Rework the loop to avoid a goto
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_mmu.c | 26 +
1 file changed, 14 inserti
pport for GPU heap allocations")
Signed-off-by: Boris Brezillon
Reviewed-by: Steven Price
---
drivers/gpu/drm/panfrost/panfrost_mmu.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c
b/drivers/gpu/drm/panfrost/panfrost_m
handling loop to avoid a goto
Boris Brezillon (3):
drm/panfrost: Clear MMU irqs before handling the fault
drm/panfrost: Don't try to map pages that are already mapped
drm/panfrost: Stay in the threaded MMU IRQ handler until we've handled
all IRQs
drivers/gpu/drm/panfrost/panfrost_mmu.c
On Mon, 1 Feb 2021 13:24:00 +
Steven Price wrote:
> On 01/02/2021 12:59, Boris Brezillon wrote:
> > On Mon, 1 Feb 2021 12:13:49 +
> > Steven Price wrote:
> >
> >> On 01/02/2021 08:21, Boris Brezillon wrote:
> >>> Doing a hw-irq -> thread
On Mon, 1 Feb 2021 12:13:49 +
Steven Price wrote:
> On 01/02/2021 08:21, Boris Brezillon wrote:
> > Doing a hw-irq -> threaded-irq round-trip is counter-productive, stay
> > in the threaded irq handler as long as we can.
> >
> > Signed-off-by: Boris Brez
Doing a hw-irq -> threaded-irq round-trip is counter-productive; stay
in the threaded irq handler as long as we can.
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_mmu.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c
pport for GPU heap allocations")
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_mmu.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c
b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index 7c1b3481b785..904d63450
heap allocations")
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_mmu.c | 9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c
b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index 904d63450862..21e552d1ac71 100644
---
discussing the first issue with Steve or Robin a while back,
but we never hit it before (now we do :)).
The last patch is a perf improvement: no need to re-enable hardware
interrupts if we know the threaded irq handler will be woken up right
away.
Regards,
Boris
Boris Brezillon (3):
drm/panfrost
On Fri, 4 Dec 2020 13:47:05 +0200
Tomi Valkeinen wrote:
> On 04/12/2020 13:12, Boris Brezillon wrote:
>
> >>> That'd be even better if you implement the bridge interface instead of
> >>> the encoder one so we can get rid of the encoder_{helper}_funcs and use
On Fri, 4 Dec 2020 12:56:27 +0200
Tomi Valkeinen wrote:
> Hi Boris,
>
> On 04/12/2020 12:50, Boris Brezillon wrote:
> > On Tue, 1 Dec 2020 17:48:28 +0530
> > Nikhil Devshatwar wrote:
> >
> >> Remove the old code to iterate over the bridge chain, as this i
On Tue, 1 Dec 2020 17:48:28 +0530
Nikhil Devshatwar wrote:
> Remove the old code to iterate over the bridge chain, as this is
> already done by the framework.
> The bridge state should have the negotiated bus format and flags.
> Use these from the bridge's state.
> If the bridge does not support
On Tue, 1 Dec 2020 17:48:27 +0530
Nikhil Devshatwar wrote:
> input_bus_flags are specified in drm_bridge_timings (legacy) as well
> as drm_bridge_state->input_bus_cfg.flags
>
> The flags from the timings will be deprecated. Bridges are supposed
> to validate and set the bridge state flags from
On Tue, 1 Dec 2020 17:48:26 +0530
Nikhil Devshatwar wrote:
> With new connector model, mhdp bridge will not create the connector and
> SoC driver will rely on format negotiation to setup the encoder format.
>
> Support minimal format negotiations hooks in the drm_bridge_funcs.
> Complete format
On Thu, 3 Dec 2020 18:20:48 +0530
Nikhil Devshatwar wrote:
> input_bus_flags are specified in drm_bridge_timings (legacy) as well
> as drm_bridge_state->input_bus_cfg.flags
>
> The flags from the timings will be deprecated. Bridges are supposed
> to validate and set the bridge state flags from
Vetter)
v3:
- Replace the atomic_cmpxchg() by an atomic_xchg() (Robin Murphy)
- Add Steven's R-b
v2:
- Use atomic_cmpxchg() to conditionally schedule the reset work (Steven Price)
Fixes: 1a11a88cfd9a ("drm/panfrost: Fix job timeout handling")
Cc:
Signed-off-by: Boris Brezillon
---
d
On Thu, 5 Nov 2020 13:27:04 +
Steven Price wrote:
> > + old_status = atomic_xchg(>status,
> > +PANFROST_QUEUE_STATUS_STOPPED);
> > + WARN_ON(old_status != PANFROST_QUEUE_STATUS_ACTIVE &&
> > + old_status != PANFROST_QUEUE_STATUS_STOPPED);
> > + if
+amdgpu maintainers
On Wed, 4 Nov 2020 18:07:29 +0100
Boris Brezillon wrote:
> We've fixed many races in panfrost_job_timedout() but some remain.
> Instead of trying to fix it again, let's simplify the logic and move
> the reset bits to a separate work scheduled when one of the queue
work (Steven Price)
Fixes: 1a11a88cfd9a ("drm/panfrost: Fix job timeout handling")
Cc:
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_device.c | 1 -
drivers/gpu/drm/panfrost/panfrost_device.h | 6 +-
drivers/gpu/drm/panfrost/panfrost_job.
On Tue, 3 Nov 2020 12:08:47 +0100
Daniel Vetter wrote:
> On Tue, Nov 03, 2020 at 12:03:26PM +0100, Boris Brezillon wrote:
> > On Tue, 3 Nov 2020 11:25:40 +0100
> > Daniel Vetter wrote:
> >
> > > On Tue, Nov 03, 2020 at 09:13:47AM +0100, Boris Brezillon wrote:
On Tue, 3 Nov 2020 11:25:40 +0100
Daniel Vetter wrote:
> On Tue, Nov 03, 2020 at 09:13:47AM +0100, Boris Brezillon wrote:
> > We've fixed many races in panfrost_job_timedout() but some remain.
> > Instead of trying to fix it again, let's simplify the logic and move
>
On Mon, 2 Nov 2020 08:39:29 +
Steven Price wrote:
> On 01/11/2020 17:38, Boris Brezillon wrote:
> > Commit a17d609e3e21 ("drm/panfrost: Don't corrupt the queue mutex on
> > open/close") left unused variables behind, thus generating a warning
> > at compilati
On Mon, 2 Nov 2020 08:42:49 +
Steven Price wrote:
> On 01/11/2020 17:40, Boris Brezillon wrote:
> > panfrost_ioctl_madvise() and panfrost_gem_purge() acquire the mappings
> > and shmem locks in different orders, thus leading to a potential
> > the mappings lock
On Fri, 30 Oct 2020 14:58:33 +
Steven Price wrote:
> When unloading the call to pm_runtime_put_sync_suspend() will attempt to
> turn the GPU cores off, however panfrost_device_fini() will have turned
> the clocks off. This leads to the hardware locking up.
>
> Instead don't call
Steven's R-b
v2:
- Use atomic_cmpxchg() to conditionally schedule the reset work (Steven Price)
Fixes: 1a11a88cfd9a ("drm/panfrost: Fix job timeout handling")
Cc:
Signed-off-by: Boris Brezillon
Reviewed-by: Steven Price
---
drivers/gpu/drm/panfrost/panfrost_device.c | 1 -
drive
On Fri, 30 Oct 2020 14:29:32 +
Robin Murphy wrote:
> On 2020-10-30 10:53, Boris Brezillon wrote:
> [...]
> > + /* Schedule a reset if there's no reset in progress. */
> > + if (!atomic_cmpxchg(>reset.pending, 0, 1))
>
> Nit: this could just be a simple x
Hi Stephen,
On Mon, 2 Nov 2020 12:46:37 +1100
Stephen Rothwell wrote:
> Hi all,
>
> After merging the imx-drm tree, today's linux-next build (arm
> multi_v7_defconfig) produced this warning:
>
> drivers/gpu/drm/panfrost/panfrost_job.c: In function 'panfrost_job_close':
>
ian Hewitt
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_gem.c | 4 +---
drivers/gpu/drm/panfrost/panfrost_gem.h | 2 +-
drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c | 14 +++---
3 files changed, 13 insertions(+), 7 deletions(-)
diff --git a/d
Commit a17d609e3e21 ("drm/panfrost: Don't corrupt the queue mutex on
open/close") left unused variables behind, thus generating a warning
at compilation time. Remove those variables.
Fixes: a17d609e3e21 ("drm/panfrost: Don't corrupt the queue mutex on
open/close")
Signed-off
Price)
Fixes: 1a11a88cfd9a ("drm/panfrost: Fix job timeout handling")
Cc:
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_device.c | 1 -
drivers/gpu/drm/panfrost/panfrost_device.h | 6 +-
drivers/gpu/drm/panfrost/panfrost_job.c | 127
On Fri, 30 Oct 2020 10:00:07 +
Steven Price wrote:
> On 30/10/2020 07:08, Boris Brezillon wrote:
> > We've fixed many races in panfrost_job_timedout() but some remain.
> > Instead of trying to fix it again, let's simplify the logic and move
> > the reset bits to a se
the drm node.
>
> Move the initialisation/destruction to panfrost_job_{init,fini} where it
> belongs.
>
Queued to drm-misc-next.
Thanks,
Boris
> Fixes: 1a11a88cfd9a ("drm/panfrost: Fix job timeout handling")
> Signed-off-by: Steven Price
> Reviewed-by: Boris B
On Fri, 30 Oct 2020 10:40:46 +0200
Tomi Valkeinen wrote:
> Hi Boris,
>
> On 30/10/2020 10:08, Boris Brezillon wrote:
> > The "propagate output flags" and soon to be added "use
> > timing->input_flags if present" logic should only be used as a fallback
On Fri, 30 Oct 2020 09:30:01 +0200
Tomi Valkeinen wrote:
> On 30/10/2020 00:48, Laurent Pinchart wrote:
>
> >>> And, hmm... It's too easy to get confused with these, but... If the
> >>> bridge defines timings, and
> >>> timings->input_bus_flags != 0, should we always pick that, even if we got
c:
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_device.c | 1 -
drivers/gpu/drm/panfrost/panfrost_device.h | 6 +-
drivers/gpu/drm/panfrost/panfrost_job.c | 130 -
3 files changed, 82 insertions(+), 55 deletions(-)
diff --git a/drivers/gpu/dr
blocking the whole queue.
Let's fix that by tracking timeouts occurring between the
drm_sched_resubmit_jobs() and drm_sched_start() calls.
v2:
- Fix another race (reported by Steven)
Fixes: 1a11a88cfd9a ("drm/panfrost: Fix job timeout handling")
Cc:
Signed-off-by: Boris Brezillon
---
d
> Cc: Kyungmin Park
> Fixes: 26d3ac3cb04d ("drm/shmem-helpers: Redirect mmap for imported dma-buf")
> Cc: Boris Brezillon
Reviewed-by: Boris Brezillon
> Cc: Thomas Zimmermann
> Cc: Gerd Hoffmann
> Cc: Rob Herring
> Cc: dri-devel@lists.freedesktop.org
> Cc: lin
On Mon, 26 Oct 2020 16:16:49 +
Steven Price wrote:
> On 26/10/2020 15:32, Boris Brezillon wrote:
> > In our last attempt to fix races in the panfrost_job_timedout() path we
> > overlooked the case where a re-submitted job immediately triggers a
> > fault. This lead to
blocking the whole queue.
Let's fix that by tracking timeouts occurring between the
drm_sched_resubmit_jobs() and drm_sched_start() calls.
Fixes: 1a11a88cfd9a ("drm/panfrost: Fix job timeout handling")
Cc:
Signed-off-by: Boris Brezillon
---
drivers/gpu/drm/panfrost/panfrost_
On Tue, 6 Oct 2020 10:07:39 +0300
Tomi Valkeinen wrote:
> Adding Boris who added bus format negotiation.
>
> On 06/10/2020 00:31, Nikhil Devshatwar wrote:
> > Hi all,
> >
> > I am trying to convert the upstream tidss drm driver to new
> > connector model.
> > The connector is getting created
On Mon, 5 Oct 2020 16:16:32 +0100
Steven Price wrote:
> On 05/10/2020 15:50, Boris Brezillon wrote:
> > On Tue, 22 Sep 2020 15:16:48 +0100
> > Robin Murphy wrote:
> >
> >> Midgard GPUs have ACE-Lite master interfaces which allows systems to
> >> i
On Tue, 22 Sep 2020 15:16:48 +0100
Robin Murphy wrote:
> Midgard GPUs have ACE-Lite master interfaces which allows systems to
> integrate them in an I/O-coherent manner. It seems that from the GPU's
> viewpoint, the rest of the system is its outer shareable domain, and so
> even when snoop
On Mon, 5 Oct 2020 09:34:06 +0100
Steven Price wrote:
> On 05/10/2020 09:15, Boris Brezillon wrote:
> > Hi Robin, Neil,
> >
> > On Wed, 16 Sep 2020 10:26:43 +0200
> > Neil Armstrong wrote:
> >
> >> Hi Robin,
> >>
> >>
Hi Robin, Neil,
On Wed, 16 Sep 2020 10:26:43 +0200
Neil Armstrong wrote:
> Hi Robin,
>
> On 16/09/2020 01:51, Robin Murphy wrote:
> > According to a downstream commit I found in the Khadas vendor kernel,
> > the GPU on G12b is wired up for ACE-lite, so (now that Panfrost knows
> > how to
On Fri, 2 Oct 2020 10:31:31 +0200
Christian König wrote:
> Am 02.10.20 um 08:55 schrieb Boris Brezillon:
> > If we don't initialize the entity to idle and the entity is never
> > scheduled before being destroyed we end up with an infinite wait in the
> > destroy path.
>
river")
Cc:
Signed-off-by: Boris Brezillon
Reviewed-by: Steven Price
---
drivers/gpu/drm/panfrost/panfrost_job.c | 62 +
1 file changed, 53 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c
b/drivers/gpu/drm/panfrost/panfrost_j
On Fri, 2 Oct 2020 09:10:32 +0200
Boris Brezillon wrote:
> @@ -392,19 +411,41 @@ static void panfrost_job_timedout(struct drm_sched_job
> *sched_job)
> job_read(pfdev, JS_TAIL_LO(js)),
> sched_job);
>
> + /* Scheduler is already stop
no timeout handlers are in flight when we reset the GPU (Steven Price)
- Make sure we release the reset lock before restarting the
schedulers (Steven Price)
Signed-off-by: Boris Brezillon
Fixes: f3ba91228e8e ("drm/panfrost: Add initial panfrost driver")
Cc:
---
drivers/gpu/drm/panfrost/panf
If we don't initialize the entity to idle and the entity is never
scheduled before being destroyed, we end up with an infinite wait in the
destroy path.
v2:
- Add Steven's R-b
Signed-off-by: Boris Brezillon
Reviewed-by: Steven Price
---
This is something I noticed while debugging another issue
On Thu, 1 Oct 2020 15:49:39 +0100
Steven Price wrote:
> On 01/10/2020 15:01, Boris Brezillon wrote:
> > If two or more jobs end up timing out concurrently, only one
> > of them (the one attached to the scheduler acquiring the lock) is fully
> > handled.
If we don't initialize the entity to idle and the entity is never
scheduled before being destroyed, we end up with an infinite wait in the
destroy path.
Signed-off-by: Boris Brezillon
---
This is something I noticed while debugging another issue on panfrost
causing the scheduler to be in a weird
to repetitive timeouts when new jobs are queued.
Let's make sure all bad jobs are properly handled by the thread acquiring
the lock.
Signed-off-by: Boris Brezillon
Fixes: f3ba91228e8e ("drm/panfrost: Add initial panfrost driver")
Cc:
---
drivers/gpu/drm/panfrost/panfrost_
Oops, the prefix should be "drm/panfrost", will fix that in v2.
On Thu, 1 Oct 2020 16:01:43 +0200
Boris Brezillon wrote:
> If two or more jobs end up timing out concurrently, only one
> of them (the one attached to the scheduler acquiring the lock) is fully
> han
On Mon, 18 May 2020 19:39:08 +0200
Enric Balletbo i Serra wrote:
> Convert mtk_dpi to a bridge driver with built-in encoder support for
> compatibility with existing component drivers.
>
> Signed-off-by: Enric Balletbo i Serra
> Reviewed-by: Chun-Kuang Hu
> ---
>
>
On Mon, 18 May 2020 19:39:09 +0200
Enric Balletbo i Serra wrote:
> The mtk_dpi driver uses an empty implementation for its encoder. Replace
> the code with the generic simple encoder.
>
> Signed-off-by: Enric Balletbo i Serra
> Reviewed-by: Chun-Kuang Hu
> ---
>
>
On Wed, 1 Jul 2020 13:23:03 +0200
Boris Brezillon wrote:
> On Mon, 18 May 2020 19:39:07 +0200
> Enric Balletbo i Serra wrote:
>
> > This is really a cosmetic change just to make a bit more readable the
> > code after convert the driver to drm_bridge. The bridge variable
On Mon, 18 May 2020 19:39:07 +0200
Enric Balletbo i Serra wrote:
> This is really a cosmetic change just to make a bit more readable the
> code after convert the driver to drm_bridge. The bridge variable name
> will be used by the encoder drm_bridge, and the chained bridge will be
> named
ch_entry() instead.
>
> Fixes: 033bfe7538a1 ("drm/vc4: dsi: Fix bridge chain handling")
> Signed-off-by: Dan Carpenter
Reviewed-by: Boris Brezillon
> ---
> drivers/gpu/drm/vc4/vc4_dsi.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/g
Hello Dan,
On Wed, 24 Jun 2020 20:58:06 +0300
Dan Carpenter wrote:
> Hello Boris Brezillon,
>
> The patch 033bfe7538a1: "drm/vc4: dsi: Fix bridge chain handling"
> from Dec 27, 2019, leads to the following static checker warning:
>
> drive
ase
>
> v3: I forgot to remove the page_count mangling from the free path too.
> Noticed by Boris while testing.
>
> Cc: Boris Brezillon
Tested-by: Boris Brezillon
> Acked-by: Thomas Zimmermann
> Cc: Gerd Hoffmann
> Cc: Rob Herring
> Cc: Noralf Trønnes
> Sig
pages_use_count to 1 when importing a dma-buf), this patchset seems to
work on panfrost:
Tested-by: Boris Brezillon
>
> Documentation/gpu/drm-kms-helpers.rst | 12 ---
> Documentation/gpu/drm-mm.rst| 12 +++
> drivers/gpu/drm/drm_gem.c | 8 ++
> dri
On Wed, 20 May 2020 20:02:32 +0200
Daniel Vetter wrote:
> @@ -695,36 +702,16 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device
> *dev,
> struct sg_table *sgt)
> {
> size_t size = PAGE_ALIGN(attach->dmabuf->size);
> - size_t npages = size >>