On Tue, Sep 25, 2012 at 09:26:00AM +0300, Adrian Sandu wrote:
> On Tue, Sep 25, 2012 at 12:38 AM, Sarah Sharp
> <[email protected]> wrote:
> > Ok, so 3.4.11 doesn't work, and the log file was from 3.5.
>
> If you want I can provide a 3.4 log...
Hmm, does a 3.3 stable kernel work for you? I have a hypothesis.
Alan, I'm wondering if the xHCI ring expansion is causing issues with
USB hard drives. Testing a Buffalo USB 3.0 hard drive on an NEC
uPD720200 xHCI host, I see the usb-storage and SCSI initialization
produce I/O errors on random sectors in 3.4.0, 3.4.6, and 3.5.0. I
can't reproduce those errors on 3.3.1.
The xHCI ring expansion was added in 3.4, and at the same time we
changed the xHCI driver's sg_tablesize:
int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
{
	...
	/* Accept arbitrarily long scatter-gather lists */
	hcd->self.sg_tablesize = ~0;
The usb-storage driver sets the tablesize thus:
static unsigned int usb_stor_sg_tablesize(struct usb_interface *intf)
{
	struct usb_device *usb_dev = interface_to_usbdev(intf);

	if (usb_dev->bus->sg_tablesize) {
		return usb_dev->bus->sg_tablesize;
	}
	return SG_ALL;
}
I notice that SG_ALL is set to SCSI_MAX_SG_SEGMENTS, which is only 128.
Should we be passing an arbitrarily large number to the SCSI core?
There's some wording in include/scsi/scsi.h about also limiting the
number of chained sgs to 2048. I'm wondering if we're hitting some bugs
in the SCSI layer because we're setting the sg_tablesize so high.
Alternatively, we could be hitting bugs in the USB 3.0 firmware when we
attempt to issue a read or write that's too big. The read on Adrian's
hard drive failed on a larger read request (122880 bytes). It would be
interesting to see whether it works if the xHCI sg_tablesize is limited.
I'm going to try that with my own drive on 3.5.4 and see if it helps.
Sarah Sharp