On Tue, Mar 20, 2018 at 03:34:16PM +0200, Felipe Balbi wrote:
> 
> Hi,
> 
> Bin Liu <[email protected]> writes:
> >> >> > > BTW, the issue I am trying to debug is when reading bulk IN data
> >> >> > > from a USB2.0 device: if the device doesn't have data to transmit
> >> >> > > and NAKs the IN packet, then after 4 pairs of IN-NAK transactions
> >> >> > > xhci stops sending further IN tokens until the next SOF, which
> >> >> > > leaves a ~90us gap on the bus.
> >> >> > >
> >> >> > > But when reading data from a USB2.0 thumb drive, this issue doesn't
> >> >> > > happen: even if the device NAKs the IN tokens, xhci still keeps
> >> >> > > sending IN tokens, way beyond 4 pairs of IN-NAK transactions.
> >> >> > 
> >> >> > The thumb drive has Bulk endpoints; what is the other device's
> >> >> > transfer type?
> >> >> 
> >> >> It is bulk too. I asked for the device descriptors. This is a remote
> >> >> debugging effort for me; I don't have that device...
> >> >> 
> >> >> > 
> >> >> > > Anyone have a clue about what causes xhci to stop sending IN
> >> >> > > tokens after the device has NAK'd 4 times?
> >> >
> >> > By accident, I reproduced the issue by adding a hub in the middle...
> >> > any comments about why a hub changes this xhci behavior are appreciated.
> >> 
> >> none off the top of my head. Maybe Mathias can suggest something.
> >
> > The issue doesn't seem to be related to how many bulk IN-NAK pairs
> > occur before the host stops sending IN tokens; rather, the host stops
> > sending IN tokens if 1) the device has ever NAK'd a bulk IN token, and
> > 2) there is about 90~100us left until the next SOF. Then all of the
> > remaining bandwidth is wasted.
> >
> > Is it about xhci bandwidth scheduling? /me started reading...
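
(For scale: assuming a high-speed link, SOFs are 125us apart, so a
~90-100us idle gap wastes roughly 72-80% of each microframe:
90/125 = 0.72, 100/125 = 0.80.)
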
> 
> Is this AM4 or AM5? Perhaps go after Synopsys' known errata list?

I see the issue on both AM4 & AM5. I don't have access to the errata
list; I guess I should ask internally at TI for it?
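
In case it helps anyone else look at the timing, below is a rough,
untested sketch that scans the usbmon text interface and flags large
gaps between consecutive events. Note that usbmon only records
URB-level events, not the individual IN/NAK handshakes on the wire,
so a bus analyzer is still needed to see the NAKs themselves; the
50us threshold is just a guess.

	#include <stdio.h>

	int main(void)
	{
		/* "0u" captures all buses; needs usbmon loaded and debugfs mounted */
		FILE *f = fopen("/sys/kernel/debug/usb/usbmon/0u", "r");
		char line[512], tag[32];
		unsigned long long ts, prev = 0;

		if (!f) {
			perror("usbmon");
			return 1;
		}

		while (fgets(line, sizeof(line), f)) {
			/* usbmon text format: URB tag, then timestamp in microseconds */
			if (sscanf(line, "%31s %llu", tag, &ts) != 2)
				continue;
			if (prev && ts - prev > 50)
				printf("%llu us gap before: %s", ts - prev, line);
			prev = ts;
		}

		fclose(f);
		return 0;
	}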

Regards,
-Bin.