On Tue, Feb 28, 2017 at 09:28:10AM +0000, Chris Wilson wrote:
> Following the use of dma_fence_signal() from within our interrupt
> handler, we need to make guc->wq_lock also irq-safe. This was done
> previously as part of the guc scheduler patch (which also started
> mixing our fences with the interrupt handler), but is now required to
> fix the current guc submission backend.
>
> v3: Use cmpxchg for the solitary adjustments of guc->wq_rsvd
>
> Fixes: 67b807a89230 ("drm/i915: Delay disabling the user interrupt for breadcrumbs")
> Signed-off-by: Chris Wilson <[email protected]>
> Cc: Tvrtko Ursulin <[email protected]>
> Cc: Mika Kuoppala <[email protected]>
> ---
> drivers/gpu/drm/i915/i915_guc_submission.c | 25 +++++++++++++++++--------
> 1 file changed, 17 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
> index beec88a30347..4a5cee16f901 100644
> --- a/drivers/gpu/drm/i915/i915_guc_submission.c
> +++ b/drivers/gpu/drm/i915/i915_guc_submission.c
> @@ -348,7 +348,7 @@ int i915_guc_wq_reserve(struct drm_i915_gem_request *request)
> u32 freespace;
> int ret;
>
> - spin_lock(&client->wq_lock);
> + spin_lock_irq(&client->wq_lock);
> freespace = CIRC_SPACE(client->wq_tail, desc->head, client->wq_size);
> freespace -= client->wq_rsvd;
> if (likely(freespace >= wqi_size)) {
> @@ -358,21 +358,28 @@ int i915_guc_wq_reserve(struct drm_i915_gem_request *request)
> client->no_wq_space++;
> ret = -EAGAIN;
> }
> - spin_unlock(&client->wq_lock);
> + spin_unlock_irq(&client->wq_lock);
>
> return ret;
> }
>
> +static void guc_wq_add_reserved(struct i915_guc_client *client, int delta)
Hmm, as delta can be negative, the "add" verb may be a little misleading. What
about "update"?
Also, since there is an ongoing cleanup effort in parallel, can we rename this
function to match the subject/verb pattern?

static void guc_client_update_wq_reserved(struct i915_guc_client *client, int delta)
> +{
> + unsigned int old, new;
> +
> + do {
> + old = client->wq_rsvd;
> + new = old + delta;
> + } while (cmpxchg(&client->wq_rsvd, old, new) != old);
> +}
> +
> void i915_guc_wq_unreserve(struct drm_i915_gem_request *request)
> {
> - const size_t wqi_size = sizeof(struct guc_wq_item);
> + const int wqi_size = sizeof(struct guc_wq_item);
> struct i915_guc_client *client = request->i915->guc.execbuf_client;
>
> GEM_BUG_ON(READ_ONCE(client->wq_rsvd) < wqi_size);
> -
> - spin_lock(&client->wq_lock);
> - client->wq_rsvd -= wqi_size;
> - spin_unlock(&client->wq_lock);
> + guc_wq_add_reserved(client, -wqi_size);
> }
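For reference, the cmpxchg loop above can be exercised as a stand-alone
userspace sketch; cmpxchg_u32() and wq_update_reserved() are illustrative
stand-ins (built on GCC's __sync builtins, not the kernel API) for cmpxchg()
and the new helper:

```c
/* Hypothetical userspace stand-in for the kernel's cmpxchg():
 * __sync_val_compare_and_swap() returns the value that was in *ptr,
 * just as cmpxchg() does. */
static unsigned int cmpxchg_u32(unsigned int *ptr, unsigned int old,
				unsigned int new)
{
	return __sync_val_compare_and_swap(ptr, old, new);
}

/* Mirrors the guc_wq_add_reserved() loop above: retry until no one
 * raced with us, so the (possibly negative) delta is applied exactly
 * once without holding a lock. */
static void wq_update_reserved(unsigned int *wq_rsvd, int delta)
{
	unsigned int old, new;

	do {
		old = *wq_rsvd;
		new = old + delta;
	} while (cmpxchg_u32(wq_rsvd, old, new) != old);
}
```

Note the loop only guarantees the counter update itself is atomic; the
GEM_BUG_ON() above it still reads wq_rsvd separately, which is fine for a
debug check but not a synchronization point.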
>
> /* Construct a Work Item and append it to the GuC's Work Queue */
> @@ -951,10 +958,12 @@ int i915_guc_submission_enable(struct drm_i915_private *dev_priv)
> engine->schedule = NULL;
>
> /* Replay the current set of previously submitted requests */
> + spin_lock_irq(&engine->timeline->lock);
> list_for_each_entry(rq, &engine->timeline->requests, link) {
> - client->wq_rsvd += sizeof(struct guc_wq_item);
> + guc_wq_add_reserved(client, sizeof(struct guc_wq_item));
To be consistent with the "unreserve" case, maybe we should define and then
use wqi_size here?

	const int wqi_size = sizeof(struct guc_wq_item);
> __i915_guc_submit(rq);
> }
> + spin_unlock_irq(&engine->timeline->lock);
> }
>
> return 0;
As I was using a similar patch to fix the spin_locks on my local tree,
Reviewed-by: Michal Wajdeczko <[email protected]>
-Michal
_______________________________________________
Intel-gfx mailing list
[email protected]
https://lists.freedesktop.org/mailman/listinfo/intel-gfx