On 2018-10-01 13:29, Sean Paul wrote:
On Wed, Sep 26, 2018 at 11:51:35AM -0700, Jeykumar Sankaran wrote:
On 2018-09-19 11:56, Sean Paul wrote:
> From: Sean Paul <seanp...@chromium.org>
>
> There exists a case where a flush of a plane/dma may have been
> triggered & started from an async commit. If that plane/dma is
> subsequently disabled by the next commit, the flush register will
> continue to hold the flush bit for the disabled plane. Since the bit
> remains active, pending_kickoff_cnt will never decrement and we'll
> miss frame_done events.
>
I suppose this is the vblank in between the async commit and the next
commit (the one where the plane is disabled).

If this vblank had consumed the flush bits, it means the HW has read
the configuration and it should have cleared the bits.

If you still see the flush bit active, it means the async commit has
missed the VBLANK boundary and the HW has not yet taken the cursor
configuration. So you are not supposed to get a frame_done event.

Right, we're not getting frame_done until the next frame comes in. The
issue is that we get 2 commits in between vblanks: the first commit
triggers the cursor for flush and the second one disables it.
Unfortunately the first commit has already called CTL_START and made it
impossible for the second commit to clear that flush bit (afaict).

The frame_done events seem to flow properly, only being triggered once per
vblank and only when a non-async commit has happened.

So is there a way to clear the CTL_FLUSH register on subsequent
commits? I've poked around some more and can't seem to figure it out.

We shouldn't be explicitly clearing the FLUSH register. Uncleared flush
bits generally indicate that the config is not yet programmed. What do
you observe if there is no follow-up non-async commit? Are you seeing a
hang or a delay in frame_done? Eventually, the next VBLANK should pick
up the configuration and wipe the flush register, and the vblank irq
handler should trigger the event.


Comments outside the scope of this patch: to support async and sync
updates on the same display commit thread, we should add more
protection for concurrency scenarios, to avoid more than one CTL flush
per VBLANK period.

Yeah, certainly easier said than done. I'm not really sure how to
implement that, tbh.

There's no way to know how many commits you'll have, and there's no way
to delay the FLUSH until right before vblank. Do you have any ideas
that I might be missing?

One way to freeze HW reads of CTL_FLUSH is to write 0x1 to
CTL_FLUSH_MASK. This guarantees that no half-way programmed commit is
flushed by the HW.

The other method - probably the non-conventional one - would be to
update cursor / async commits outside the display commit thread. But of
course we would need to make architectural changes to the current
driver to support this path.

Thanks,
Jeykumar S.
Sean





> This patch limits the check of flush_register to include only those
> bits which have been updated with the latest commit.
>
> Signed-off-by: Sean Paul <seanp...@chromium.org>
> ---
>  drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
> index 84de385a9f62..60f146f02b77 100644
> --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
> +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
> @@ -331,7 +331,7 @@ static void dpu_encoder_phys_vid_vblank_irq(void *arg, int irq_idx)
>    if (hw_ctl && hw_ctl->ops.get_flush_register)
>            flush_register = hw_ctl->ops.get_flush_register(hw_ctl);
>
> -  if (flush_register == 0)
> +  if (!(flush_register & hw_ctl->ops.get_pending_flush(hw_ctl)))
>            new_cnt = atomic_add_unless(&phys_enc->pending_kickoff_cnt,
>                            -1, 0);
>    spin_unlock_irqrestore(phys_enc->enc_spinlock, lock_flags);

--
Jeykumar S

_______________________________________________
Freedreno mailing list
Freedreno@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/freedreno
