On Mon, May 02, 2005 at 10:24:08AM -0700, David Brownell wrote:
> > No it's not.  When debouncing fails the hub driver turns off the
> > PORT_STAT_C_CONNECTION feature and leaves the port disabled.  There
> > shouldn't be any looping.
>
> Well, there _are_ such infinite-loop cases, and that transcript seemed
> to be spending quite a lot of time on debounce attempts.  Maybe I
> just extrapolated too far ... ;)
It seems to be the interaction of the two ports at the end:

4170.358423: <7>hub 1-3:1.0: debounce: port 1: total 1500ms stable 0ms status 0x100
4170.358504: <3>hub 1-3:1.0: connect-debounce failed, port 1 disabled
4170.369599: <7>hub 1-3:1.0: port 4, status 0100, change 0001, 12 Mb/s
4172.186746: <7>hub 1-3:1.0: debounce: port 4: total 1500ms stable 25ms status 0x100
4172.186815: <3>hub 1-3:1.0: connect-debounce failed, port 4 disabled
4172.189721: <7>hub 1-3:1.0: state 5 ports 4 chg 0000 evt 0002
4172.195716: <7>hub 1-3:1.0: port 1, status 0100, change 0001, 12 Mb/s
4174.024109: <7>hub 1-3:1.0: debounce: port 1: total 1500ms stable 0ms status 0x100
4178.772071: <3>hub 1-3:1.0: connect-debounce failed, port 1 disabled
4178.772153: <7>hub 1-3:1.0: port 4, status 0100, change 0001, 12 Mb/s
4178.772170: <7>hub 1-3:1.0: debounce: port 4: total 100ms stable 100ms status 0x100
4178.772185: <7>hub 1-3:1.0: state 5 ports 4 chg 0000 evt 0002
4178.772199: <7>hub 1-3:1.0: port 1, status 0100, change 0001, 12 Mb/s
4178.772212: <7>hub 1-3:1.0: debounce: port 1: total 1500ms stable 0ms status 0x100
4178.772226: <3>hub 1-3:1.0: connect-debounce failed, port 1 disabled
4178.772252: <7>hub 1-3:1.0: port 4, status 0100, change 0001, 12 Mb/s
4178.772267: <7>hub 1-3:1.0: debounce: port 4: total 450ms stable 100ms status 0x100
4178.772281: <7>hub 1-3:1.0: state 5 ports 4 chg 0000 evt 0002
4178.772294: <7>hub 1-3:1.0: port 1, status 0100, change 0001, 12 Mb/s
4178.772308: <7>hub 1-3:1.0: debounce: port 1: total 1500ms stable 0ms status 0x100
4178.772321: <3>hub 1-3:1.0: connect-debounce failed, port 1 disabled
4178.772335: <7>hub 1-3:1.0: port 4, status 0100, change 0001, 12 Mb/s
4180.141147: <7>hub 1-3:1.0: debounce: port 4: total 1500ms stable 50ms status 0x100

Before this, the light that caused the EMI was only on in brief bursts, causing
the repeated reconnections.  At one point a libusb request failed, and the
device was closed and reopened.  That left the light in its previous state, so
the interference became continuous: instead of brief bursts of disconnection,
there were brief bursts of connection, enough to trigger the debounce but
almost never long enough for it to finish successfully.

I think this normally wouldn't cause a problem: there would still be some delay
between the timeout and the next time the port changed state, releasing the
hub.  But two ports are doing it, so while port 4 is being debounced, port 1
changes state; khubd gets kicked while it's still in hub_events, never exits
the loop at all, hogs the hub's lock, and enumeration hangs.  Or that's my
conjecture, at least.

I'll try something like the connection throttling that was discussed, by
inserting a small delay in the hub_events loop when a debounce times out.  I
guess the correct way would be to do it per-port, so one messed-up port doesn't
throttle connections on other hubs, but there's probably no point while khubd
handles all hubs in the same thread.
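For concreteness, here's roughly the change I mean -- an untested sketch
against the hub.c in the tree I'm running.  The surrounding names and placement
(hub_port_connect_change, hub_port_debounce, hub_dev, the goto done) are just
how the connect-debounce failure path looks here and may differ in other trees;
the only real addition is the msleep(), and 100ms is an arbitrary guess:

	/* in hub_port_connect_change(), on the connect-debounce failure path */
	if (hub_port_debounce(hub, port1) < 0) {
		dev_err(hub_dev,
			"connect-debounce failed, port %d disabled\n",
			port1);
		/*
		 * Throttle: this port just spent ~1.5s failing to debounce.
		 * Back off briefly before khubd picks up the next port-change
		 * event, so two bouncing ports can't keep it spinning flat
		 * out in hub_events().
		 */
		msleep(100);
		goto done;
	}

That keeps the change tiny; if it turns out to help, a per-port backoff tracked
in hub_events() itself would be the cleaner version.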
(Unfortunately, this is hard to test: sometimes we'll go a whole day without
this happening, and the next day it'll happen every half hour.  I wonder how I
could simulate this ...)

-- 
Glenn Maynard