On Mon, 2023-11-27 at 16:51 -0500, Vivi, Rodrigo wrote:

alan: Firstly, thanks for taking the time to review this, knowing you have a 
lot on your plate right now.
> 
alan:snip
> > @@ -3301,19 +3315,38 @@ static inline void guc_lrc_desc_unpin(struct intel_context *ce)
> >     /* Seal race with Reset */
> >     spin_lock_irqsave(&ce->guc_state.lock, flags);
> >     disabled = submission_disabled(guc);
> > -   if (likely(!disabled)) {
> > -           __intel_gt_pm_get(gt);
> > -           set_context_destroyed(ce);
> > -           clr_context_registered(ce);
> > -   }
> > -   spin_unlock_irqrestore(&ce->guc_state.lock, flags);
> 
> you are now holding this spin lock for too long...
alan: my intention was to ensure that an async H2G request to change this
guc-id/context state couldn't come in while we were in the middle of attempting
to send the context-destruction H2G, only to later realize it needed to be
unrolled (the corner-case race we are solving here, as discovered on a
customer platform). But after discussing with Daniele and John, they
agreed that we should move the unlock back to before the deregister and then
take the lock again inside the unrolling code (if deregister_context
fails).
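alan: concretely, the agreed direction is roughly this shape (a sketch only,
not the final code - all names are from the existing patch):

	spin_lock_irqsave(&ce->guc_state.lock, flags);
	disabled = submission_disabled(guc);
	if (likely(!disabled)) {
		__intel_gt_pm_get(gt);
		set_context_destroyed(ce);
		clr_context_registered(ce);
	}
	spin_unlock_irqrestore(&ce->guc_state.lock, flags);
	...
	ret = deregister_context(ce, ce->guc_id.id);
	if (ret) {
		/* unroll: re-take the lock only for the state flip */
		spin_lock_irqsave(&ce->guc_state.lock, flags);
		set_context_registered(ce);
		clr_context_destroyed(ce);
		spin_unlock_irqrestore(&ce->guc_state.lock, flags);
	}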
alan:snip
> 
> > -   deregister_context(ce, ce->guc_id.id);
> > +   /* GuC is active, lets destroy this context,
> 
> for multi-line comments you need to start with '/*' only
> and start the real comment below, like:
> /*
>  * GuC is active, ...
alan: my bad. will fix (reformatted version below, after the quoted hunk).
> > +    * but at this point we can still be racing with
> > +    * suspend, so we undo everything if the H2G fails
> > +    */
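alan: for the record, the reformatted comment would then read something like:

	/*
	 * GuC is active, lets destroy this context, but at this point we
	 * can still be racing with suspend, so we undo everything if the
	 * H2G fails
	 */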
> > +
> > +   /* Change context state to destroyed and get gt-pm */
> > +   __intel_gt_pm_get(gt);
> > +   set_context_destroyed(ce);
> > +   clr_context_registered(ce);
> > +
> > +   ret = deregister_context(ce, ce->guc_id.id);
> > +   if (ret) {
> > +           /* Undo the state change and put gt-pm if that failed */
> > +           set_context_registered(ce);
> > +           clr_context_destroyed(ce);
> > +           /*
> > +            * Don't use the might_sleep / ASYNC version of put because
> > +            * CT loss in deregister_context could mean an ongoing
> > +            * reset or suspend flow. Immediately put before the unlock
> > +            */
> > +           __intel_wakeref_put(&gt->wakeref, 0);
> 
> interestingly enough, you use the '__' version to bypass the might_sleep(),
> but you are also sending 0 as the argument, which might lead you into the
> mutex_lock inside the spin lock area, which is not allowed.
alan: so one thing to note, the motivation for an alternative put was only
driven by the fact we were holding that spinlock. However, as per the review
comment and response on the spin lock above, if we move the pm-put
outside the spin lock, we can call intel_wakeref_put_async (which will not
actually trigger a delayed task, because this function, guc_lrc_desc_unpin,
starts with GEM_BUG_ON(!intel_gt_pm_is_awake(gt)), which means it would only
decrement the ref count).
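alan: i.e. the unroll path would end with something like this (a sketch,
assuming the state flip has already been undone under the lock as described
above):

	/*
	 * Safe to call here: the function asserts
	 * GEM_BUG_ON(!intel_gt_pm_is_awake(gt)) on entry and we took an
	 * extra ref via __intel_gt_pm_get before sending the H2G, so this
	 * put only decrements the ref count and never schedules the
	 * delayed power-off worker (hence no might_sleep concern).
	 */
	intel_wakeref_put_async(&gt->wakeref);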
alan:snip
> > +                   spin_lock_irqsave(&guc->submission_state.lock, flags);
> > +                   list_add_tail(&ce->destroyed_link,
> > +                                 &guc->submission_state.destroyed_contexts);
> > +                   spin_unlock_irqrestore(&guc->submission_state.lock, flags);
> > +                   /* Bail now since the list might never be emptied if h2gs fail */
> 
> For this GuC interaction part I'd like to get an ack from John Harrison 
> please.
alan: okay - will do. I might alternatively tap Daniele since he knows the
GuC just as well.
> 
alan:snip

> > @@ -3392,6 +3440,17 @@ static void destroyed_worker_func(struct work_struct *w)
> >     struct intel_gt *gt = guc_to_gt(guc);
> >     int tmp;
> >  
> > +   /*
> > +    * In rare cases we can get here via async context-free fence-signals that
> > +    * come very late in suspend flow or very early in resume flows. In these
> > +    * cases, GuC won't be ready but just skipping it here is fine as these
> > +    * pending-destroy-contexts get destroyed totally at GuC reset time at the
> > +    * end of suspend.. OR.. this worker can be picked up later on the next
> > +    * context destruction trigger after resume-completes
> > +    */
> > +   if (!intel_guc_is_ready(guc))
> > +           return;
> 
> is this racy?
alan: this is to reduce raciness. This worker function eventually calls
deregister_destroyed_contexts, which calls guc_lrc_desc_unpin, which does
all the work we discussed above (taking locks, taking refs, sending the H2G,
potentially unrolling). So this check is an optimization to skip all of that
without going through the locking. So yes, it's a bit racy, but it's trying
to reduce raciness. NOTE1: Without this line of code, in theory everything
would still work fine, with the driver potentially experiencing more unroll
cases in those thousands of cycles. NOTE2: This check using intel_guc_is_ready
is done in many places in the driver knowing that it doesn't take any locks.
Of course will get John or Daniele to chime in.
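
alan: for reference, the call chain this check short-circuits (as i read it):

	destroyed_worker_func()
	  -> deregister_destroyed_contexts(guc)
	       -> guc_lrc_desc_unpin(ce)  /* locks, pm ref, H2G, unroll */

so the early return just avoids entering that chain when GuC is down; if the
check races with a GuC state change, we simply end up in the unroll path
described earlier.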

> 
> > +
> >     with_intel_gt_pm(gt, tmp)
> >             deregister_destroyed_contexts(guc);
> >  }
> > -- 
> > 2.39.0
> > 
