On Mon, 29 Jul 2013, Stoddard, Nate (GE Healthcare) wrote:

> Our design's USB topology is as follows:
> 
> FS device #1 -|
> FS device #2 -|
>               |-- HS Hub #1 --|
> FS device #3 -|               |
> FS device #4 -|               |
>                               |-- HS Hub #3 -- USB 2.0 HS Port on USB Host
> FS device #5 -|               |
> FS device #6 -|               |
>               |-- HS Hub #2 --|
> FS device #7 -|
> FS device #8 -|
> 
> All of the hubs support Multi-TT.  Based on this topology, I would
> assume Hub #1 and Hub #2 perform the FS splitting, and the EHCI
> controller on the USB host performs the FS un-splitting.  Hub #3
> would only be passing high speed traffic between Hubs 1/2 and the PC.  
> Is this correct?

Yes, pretty much.  I'm not sure what you mean by "FS splitting" and "FS 
un-splitting", but it is true that all the split transactions would be 
sent to Hub #1 and #2, and Hub #3 would see only high-speed traffic.

> Does this hub topology and Multi-TT support mean that each USB
> device could support up to 15 64-byte interrupt endpoints?

Yes, in theory.  I haven't ever tried to push the limit, so I don't
know for sure what the practical upper bound is.  Probably somewhat
less than 15; maybe 10 or 11.

> If that is true, then would the high-speed limit of 63 interrupt
> transfers (@ 64 bytes) per micro-frame become the bottleneck?

The high-speed bus capacity would indeed become the bottleneck.
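
To put rough numbers on that, here is a back-of-envelope sketch in
Python.  The per-transaction overhead figure is an estimate I picked so
that the result lands on the ~63 value you cited from Table 5-8; treat
it as illustrative, not as a number taken from the spec:

# Rough high-speed periodic capacity for 64-byte interrupt transfers.
HS_BYTES_PER_UFRAME = 7500      # 480 Mbit/s over a 125 us microframe
PERIODIC_BUDGET = 0.80          # at most 80% for periodic transfers

budget = HS_BYTES_PER_UFRAME * PERIODIC_BUDGET   # 6000 bytes
per_txn = 64 + 31               # payload + ~31 bytes assumed protocol overhead
print(int(budget / per_txn))    # -> 63 transactions per microframe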

> > > 2a.  If the high speed limitation is used:  Does the scheduler
> > > multiplex each full speed device's split data packets over the 8
> > > available micro-frames?
> > 
> > To some extent.  Not all 8 microframes are available (the spec prohibits
> > sending a Start Split packet during microframe 6), and the current ehci-hcd
> > implementation is not capable of using all the ones that are available.
> > However, it is capable of using at least 4 of the
> > 8 microframes.
> 
> USB high-speed interrupt transfers can still be sent over all of the
> 8 micro-frames?

Yes.

> > > We performed some testing, but I don't want to make assumptions on
> > > these results alone.
> > > Setup #1:
> > > Kernel 3.10.0
> > > Connect 6 USB devices (each with 2 IN and 1 OUT interrupt endpoints
> > > @64 bytes) through USB 1.1 full speed hubs to a PC USB 2.0 port.  The test
> > > application can communicate with all 18 endpoints.  When we connected
> > > a 7th device, the test application was unable to open and access the
> > > device.  This makes sense because that would be 21 full speed
> > > endpoints.
> > 
> > This doesn't sound right.  What sort of host controller were you using?
> 
> The PC has XHCI, EHCI and OHCI enabled.  Test setup #1 is connected
> to a USB 2.0 port and the lsusb output shows the FS devices on the
> USB 1.1 bus.  I think this means the OHCI driver is in use, but I'm
> not certain.

Yes, it does mean that.

I don't see how you could have gotten more than 15 interrupt endpoints
running at the same time unless the endpoints' bInterval value was
larger than 1.
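
For a rough idea of where the 15 comes from, here is a back-of-envelope
sketch; the overhead and bit-stuffing figures are approximations of
mine, not exact values from the spec:

# How many 64-byte FS interrupt transfers fit in one 1 ms frame?
FS_BYTES_PER_FRAME = 1500       # 12 Mbit/s over a 1 ms frame
PERIODIC_BUDGET = 0.90          # at most 90% for periodic transfers

payload = 64
overhead = 13                   # token + data + handshake packets (approx.)
bus_bytes = overhead + payload * 7 / 6   # 7/6 ~ worst-case bit stuffing

print(int(FS_BYTES_PER_FRAME * PERIODIC_BUDGET / bus_bytes))   # -> 15

# With bInterval > 1 an endpoint needs service only every 2nd, 4th, ...
# frame, so the scheduler can interleave more than 15 endpoints by
# placing them in different frames.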

> > > Setup #2:
> > > Kernel 3.10.0
> > > Connect 7 USB devices (each with 2 IN and 1 OUT interrupt endpoints
> > > @64 bytes) through USB 2.0 high speed hubs to a PC USB 2.0 port.  The test
> > > application can communicate with all 21 endpoints.  This appears to
> > > violate the full speed limitation; however, it wouldn't be violating
> > > the high speed limitation of 63 endpoints per micro-frame.
> > 
> > The limitation you are referring to (Table 5-8 in the spec) is for
> > interrupt transfers to high-speed devices.  It does not apply to
> > interrupt transfers to full-speed devices.
> > 
> That makes sense.  When FS and HS devices are mixed on a USB 2.0 hub
> (and USB 2.0 port on a PC), how is the interrupt transfer limitation
> calculated?

The way it _is_ calculated is a mess.  I can tell you the way it's 
_supposed_ to be calculated.

For each TT, no more than 90% of the bandwidth on the FS/LS bus below
the TT can be allocated to periodic transfers.  On the high speed bus,
no more than 80% of the total bandwidth can be allocated to periodic
transfers.  Interrupt transfers to FS devices have to satisfy both
constraints.
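
In other words, an FS interrupt endpoint behind a TT has to pass two
independent budget checks.  A minimal sketch of the idea (the function
and cost parameters are mine, not anything from ehci-hcd):

FS_BUDGET = 0.90 * 1500    # bytes per 1 ms frame on the FS bus below the TT
HS_BUDGET = 0.80 * 7500    # bytes per 125 us microframe on the HS bus

def endpoint_fits(fs_allocated, hs_allocated, fs_cost, hs_cost):
    # fs_allocated/hs_allocated: bytes already committed on each bus;
    # fs_cost/hs_cost: bus time the new endpoint would add.
    return (fs_allocated + fs_cost <= FS_BUDGET and
            hs_allocated + hs_cost <= HS_BUDGET)

print(endpoint_fits(0, 0, 88, 96))   # True: a first endpoint easily fits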

> Does a FS interrupt endpoint (@ 64 bytes) count against the HS limit
> of 63 transfers?  Or does the scheduler handle multiplexing the FS
> transfers over the available 252 (4 micro-frames with 63 transfers
> available)?

You're not thinking about this the right way.  The limit is on the
bandwidth, not on the number of transfers.  The bandwidth limit could
be reached either by a small number of large transfers or by a large
number of small transfers.
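
For example, with a 6000-byte periodic budget per microframe you could
spend it on a handful of large transfers or on dozens of small ones.
The overhead figure here is again just an estimate:

BUDGET = 0.80 * 7500           # bytes per HS microframe for periodic transfers

def bus_bytes(payload, overhead=55):
    return overhead + payload * 7 / 6    # 7/6 ~ worst-case bit stuffing

print(int(BUDGET / bus_bytes(1024)))     # -> 4 large transfers
print(int(BUDGET / bus_bytes(8)))        # -> 93 small ones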

Split transfers on the high-speed bus do not occupy the same bandwidth
as regular high-speed transfers.  A high-speed interrupt transfer takes
a single transaction (except for high-bandwidth endpoints, but we're
not concerned with them here).  A split interrupt transfer requires
multiple transactions: a single Start Split and up to four Complete
Splits.  Each of these Splits requires a different amount of bandwidth
from a regular high-speed interrupt transaction, and the Splits occupy
only four or five of the eight microframes in a frame.
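
A toy model of where the Splits for one such transfer land, assuming
the Start Split goes out in microframe ss (this is a simplified picture
of the spec's rules, not ehci-hcd's actual scheduler):

def split_microframes(ss):
    # Complete Splits begin two microframes after the Start Split and
    # may continue for up to four microframes.
    return ss, [(ss + k) % 8 for k in range(2, 6)]

start, completes = split_microframes(0)
print(start, completes)    # 0 [2, 3, 4, 5] -> 5 of the 8 microframes used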

Even if the scheduling algorithm were correct and optimal, it would
still be rather difficult to figure out the maximum possible number of
transfers.

I suggest that instead of worrying about it, you divide your devices up 
among different buses.  Most PCs nowadays have two EHCI controllers, 
and you can get more by adding PCI cards.

> Thank you for your help.  I'm trying to get a better understanding of
> the USB limitations, so our design does not fail as the system
> expands.

You're welcome.

Alan Stern
