On Fri, 21 Oct 2005, Greg KH wrote:

> > Is this worth pursuing?
>
> I think so, it looks good.  But why not use the list walk macros we have
> available instead of following the links on your own?

Because I tried to make minimal changes to the existing code and data
structures, which use a singly-linked list instead of the standard
list_head.  Are you referring to some macros for singly-linked lists that
I'm not aware of?

> And are you sure this is safe to remove an item from the list while in a
> callback?  What happens if you remove 2 of them?

As sure as I can be without actually running it.  It doesn't matter
whether the removal happens from within the callback or from another
thread; the same code gets executed either way.  Each time an item is
removed, any chain caller about to use that item gets updated to use the
next one instead.  Then if the next one is removed too, it gets updated
to use the one after that.  And so on.

On Fri, 21 Oct 2005, Pete Zaitcev wrote:

> If you go for it, prepare for talking to netdev@, and they will ask
> about taking and dropping the spinlock at every iteration when
> traversing.  They are very sensitive to performance.

If the spinlock is uncontested, the overhead will be pretty small.  It
wouldn't matter unless several threads were calling the notifier chain at
the same time; I specifically mentioned in a comment that the
implementation assumes this won't happen.

The one thing I'm not sure about is whether anyone ever uses
notifier_call_chain in an interrupt or atomic context.  The new code
doesn't use spin_lock_irq, so obviously it won't work if anyone does.  It
seems likely that nobody does, because the current code takes the
writelock without disabling interrupts.  But since there hasn't been a
readlock in notifier_call_chain for over 5 years, it's possible that
somebody is doing it.

For all I know, some notifier chains might be used only within interrupt
handlers while others might be just the opposite (i.e., the routines in
the chain might sleep).  If that's so, it will be very hard to come up
with a unified implementation capable of handling all the cases.

It might even be better to have two kinds of notifier chains (or two
versions of notifier_call_chain): one that runs entirely in an atomic
context and one that always runs in a sleepable context.

I don't know under what circumstances the network subsystem calls
notifier chains, but probably not on any fast paths.  It's worth
checking, though.

> Alan found 22 other instances, so presumably he knows the answer.
> All I know is that someone replaced a spinlock with an rwlock.
> Might be just idiocy, or perhaps a scalability issue.

Ha!  I grepped for occurrences of "notifier_call_chain", eliminated
duplicate entries (ones using the same notifier_block), and was left with
22 lines of text.  Apart from the names of the variables, I don't have a
clue what any of them do.

Replacing a spinlock with an rwlock -- did it really happen that way?  I
got the impression that the rwlock was there all along, and then was
taken out of notifier_call_chain because some people wanted their entry
to be able to remove itself.

Alan Stern

_______________________________________________
[email protected]
To unsubscribe, use the last form field at:
https://lists.sourceforge.net/lists/listinfo/linux-usb-devel
