* Grygorii Strashko <[email protected]> [170112 14:53]:
>
>
> On 01/12/2017 04:19 PM, Tony Lindgren wrote:
> > * Grygorii Strashko <[email protected]> [170112 13:54]:
> >> On 01/12/2017 03:30 PM, Tony Lindgren wrote:
> >>
> >> Sorry, but even if it looks better it might still race with PM runtime :(
> >>
> >>> - if (likely(pm_runtime_active(cdd->ddev.dev)))
> >>> + if (likely(atomic_read(&cdd->active)))
> >>> push_desc_queue(c);
> >>> else
> >>
> >>
> >> - CPU is here (-EINPROGRESS and active == 0) and then preempted
> >> - PM runtime will finish cppi41_runtime_resume and clean-up pending descs
> >> - CPU return here and adds desc to the pending queue
> >> - oops
> >>
> >> Am I wrong?
> >
> > We had cppi41_dma_issue_pending() getting called from atomic context and
> > cppi41_runtime_resume() getting preempted while cppi41_dma_issue_pending()
> > would add to the queue.
>
> Again, I may be mistaken, but cppi41_configure_channel() does not seem to be atomic.
> cppi41_configure_channel()->dma_async_issue_pending()
> + documentation says "This function can be called in an interrupt context"
>
> And it will definitely be preemptible on RT :(
Hmm OK. So are you thinking we should add a spinlock around the
test in cppi41_dma_issue_pending() and when modifying cdd->active?
Something like:
	spin_lock_irqsave(&cdd->lock, flags);
	if (likely(cdd->active))
		push_desc_queue(c);
	else
		list_add_tail(&c->node, &cdd->pending);
	spin_unlock_irqrestore(&cdd->lock, flags);
Or do you have something better in mind?
Regards,
Tony
--
To unsubscribe from this list: send the line "unsubscribe linux-usb" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html