Hi,

Alan Stern <st...@rowland.harvard.edu> writes:
> On Tue, 29 Nov 2016, Felipe Balbi wrote:
>
>> Hi Greg,
>> 
>> Greg KH <g...@kroah.com> writes:
>> > On Tue, Nov 29, 2016 at 03:58:57PM +0200, Felipe Balbi wrote:
>> >> 
>> >> Hi,
>> >> 
>> >> Greg KH <g...@kroah.com> writes:
>> >> > On Tue, Nov 29, 2016 at 01:48:59PM +0200, Felipe Balbi wrote:
>> >> >> 
>> >> >> Hi folks,
>> >> >> 
>> >> >> I've been staring at this part of the code for a while. To make things
>> >> >> easier here's the part of XHCI I mean:
>> >> >> 
>> >> >> > irqreturn_t xhci_irq(struct usb_hcd *hcd)
>> >> >> > {
>> >> >> >      struct xhci_hcd *xhci = hcd_to_xhci(hcd);
>> >> >> >      union xhci_trb *event_ring_deq;
>> >> >> >      irqreturn_t ret = IRQ_NONE;
>> >> >> >      dma_addr_t deq;
>> >> >> >      u64 temp_64;
>> >> >> >      u32 status;
>> >> >> >
>> >> >> >      spin_lock(&xhci->lock);
>> >> >> 
>> >> >> considering this is running in hardirq context, is this lock *really*
>> >> >> necessary? I mean, could this race with khubd or something else?
>> >> >
>> >> > It could be a threaded irq, right?
>> >> 
>> >> except that it isn't :-) Should we handle threaded_irq when we get to it?
>> >
>> > How do you know it isn't?  Doesn't the -rt patchset turn all of these
>> > irqs into threaded irqs automatically without the driver knowing that or
>> > not?
>> 
>> Yes, all IRQs are turned into threaded IRQs. You can also do the same
>> with the threadirqs CMDLINE option. The catch here is that both -RT and
>> the threadirqs option force IRQF_ONESHOT, which keeps the IRQ line
>> masked until the thread runs, so there shouldn't be any possible races
>> either :-)
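
(Side note, for illustration only -- made-up names, not actual xhci
code: a driver opting into a threaded handler explicitly would do
roughly the following; forced threading via -RT or threadirqs gives
the equivalent behind the driver's back.)

#include <linux/interrupt.h>
#include <linux/usb/hcd.h>

/* hardirq half: runs with IRQs off, must not sleep */
static irqreturn_t my_hcd_hardirq(int irq, void *dev_id)
{
	/* check/ack the hardware, then defer the real work */
	return IRQ_WAKE_THREAD;
}

/* threaded half: runs in process context, may sleep */
static irqreturn_t my_hcd_thread_fn(int irq, void *dev_id)
{
	/* actual event processing goes here */
	return IRQ_HANDLED;
}

static int my_hcd_setup_irq(struct usb_hcd *hcd, int irq)
{
	/*
	 * IRQF_ONESHOT keeps the line masked from the moment the
	 * hardirq half returns until my_hcd_thread_fn() finishes.
	 */
	return request_threaded_irq(irq, my_hcd_hardirq, my_hcd_thread_fn,
				    IRQF_ONESHOT, "my-hcd", hcd);
}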
>> 
>> > The driver shouldn't really care either way.
>> 
>> agreed :-)
>> 
>> >> > Not that this really matters, it's hard to figure out what this lock
>> >> > actually "protects", it seems like "everything for this hcd", which
>> >> > seems like a lot, but was probably safest/easiest to get up and running,
>> >> > and mirrors what ehci does from what I can tell.
>> >> 
>> >> The problem is that locking in xhci is now rather convoluted and its
>> >> usage has become really difficult to track. I don't mind having a "lock
>> >> everything from this controller" lock, but currently the usage is hard
>> >> to follow because the lock is apparently sprinkled all over the place
>> >> willy-nilly.
>> >> 
>> >> It lacks proper __releases()/__acquires() annotation where code
>> >> *actually* releases the lock and reacquires later. We also have the
>> >> inverse: annotation being added to functions which *don't* release and
>> >> reacquire locks.
>> >> 
>> >> IOW, the code is made hard to maintain because of one simple thing :-)
>> >
>> > Ok, no objection from me to clean it up :)
>> 
>> alright, thanks
>> 
>> >> >> My understanding is that this lock is completely unnecessary and it
>> >> >> should be safe to remove it. In fact we would have the benefit of
>> >> >> removing __releases()/__acquires() from a few places (some of which are
>> >> >> wrong, btw!!) and removing the actual spin_unlock(); do_something();
>> >> >> spin_lock(); which have been sprinkled in several places.
>> >> >
>> >> > Drop the lock entirely?  Ok, but what then protects the fields that this
>> >> > lock was protecting from being accessed at the same time?  How about the
>> >> > host controller removal path?
>> >> 
>> >> well, for those cases we need locking. But from the hardirq handler we
>> >> don't need any, right? Unless there's some case where the hardirq
>> >> handler itself could race with something else.
>> >
>> > irq being run on one processor and something else happening on another
>> > one that touches the driver?
>> 
>> and that's exactly my doubt. Is it even possible to have anything racing
>> with the hardirq handler in this case? I mean, sure we have khubd. But
>> that remains sleeping until an IRQ wakes it up :-)
>> 
>> Also, from the driver removal point of view, we will call free_irq(),
>> which will call synchronize_irq() and, again, wait for the hardirq
>> handler to complete.
>> 
>> So the question I have really is: can XHCI hardirq handler race with
>> anything else happening anywhere in any other CPU?
>
> Yes, of course.  It can race with URB submission and cancellation.  Not 
> to mention calls to the hub_control routine.

Fair concern. AFAICT, at least URB submission could be done mostly
locklessly for XHCI; but that would be a lot of work anyway.

I'm not familiar enough with hub_control() to figure out whether it
would race or not.

> The question of how fine-grained locking should be is a difficult one.  
> uhci-hcd, ohci-hcd, and ehci-hcd all use a coarse-grained approach: A
> single lock protects everything.  (Before I took over its
> maintainership, uhci-hcd was exactly the opposite -- separate spinlocks
> for each endpoint and each URB, maybe others, I can't remember.  
> Locking order problems.  It looked awful.)
>
> Unless you have a good reason to split up the big lock into multiple 
> sub-locks, I say don't worry about it.

I don't mean to split things down into several locks (although in some
cases that might be beneficial; for example, if we had one interrupter
for each virtual device, we could have separate handling threads and
yada-yada-yada...). What I'm questioning here is whether the hardirq
handler needs to take the lock at all.

It seems, however, that at least in its current setup, we _do_ need that
lock. This may change in the future, though.
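
Just to make the race concrete, this is -- grossly simplified, the
real bodies obviously do much more -- what the lock ends up
serializing between CPUs:

/* hardirq path, running on one CPU with local IRQs off */
irqreturn_t xhci_irq(struct usb_hcd *hcd)
{
	struct xhci_hcd *xhci = hcd_to_xhci(hcd);

	spin_lock(&xhci->lock);
	/* walk the event ring, give back completed URBs, ... */
	spin_unlock(&xhci->lock);

	return IRQ_HANDLED;
}

/* submission path, process context, possibly another CPU */
int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flags)
{
	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
	unsigned long flags;

	spin_lock_irqsave(&xhci->lock, flags);
	/* queue TRBs on the endpoint ring, ring the doorbell */
	spin_unlock_irqrestore(&xhci->lock, flags);

	return 0;
}

Drop the lock only from xhci_irq() and an enqueue on another CPU can
see the rings half-updated, so both sides have to keep taking it.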

I guess, for now, a better task would be to look into all the
release/acquire blocks and see if it's possible to provide locked and
lockless versions of the called functions, so we don't need all that
release/acquire magic all over the place.
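
Roughly, I mean the usual kernel pattern of a locked wrapper around a
lockless __ helper; a minimal sketch with made-up names, not actual
xhci code:

/* caller must hold xhci->lock */
static int __xhci_do_thing(struct xhci_hcd *xhci)
{
	lockdep_assert_held(&xhci->lock);

	/* the actual work; no locking in here */
	return 0;
}

static int xhci_do_thing(struct xhci_hcd *xhci)
{
	unsigned long flags;
	int ret;

	spin_lock_irqsave(&xhci->lock, flags);
	ret = __xhci_do_thing(xhci);
	spin_unlock_irqrestore(&xhci->lock, flags);

	return ret;
}

Callers which already hold the lock (the IRQ handler, say) call
__xhci_do_thing() directly; nothing ever unlocks behind the caller's
back, so the __releases()/__acquires() annotations go away too.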

I'll put the whole idea of lockless URB submission, a lockless hardirq
handler, etc. into the bag of "maybe-look-at-it-in-the-future" ideas.

cheers

-- 
balbi