On Wednesday 16 May 2007 16:41, David Brownell wrote:
> On Wednesday 16 May 2007, Hans Petter Selasky wrote:
> > Hi,
> >
> > I'm currently working on a Linux USB emulation layer for FreeBSD. In that
> > regard I have some issues that make the stack perform suboptimally due to
> > its API design.
> >
> >...
> >
> > What I would suggest is that when you allocate an URB and DMA'able
> > memory, you have to specify which pipe {CONTROL, BULK, INTERRUPT or ISOC}
> > it belongs to.
> >
> > What do you think?
>
> Riddle me this:  When should Linux choose to adopt a design mistake
> made by a non-Linux driver stack?
>
> > The reason is that in the new USB stack on FreeBSD, the USB transfer
> > descriptors are allocated along with the data-buffer,
>
> Whereas on Linux, data buffers are not normally bound to a particular
> driver stack (such as USB).  That matches normal hardware usage, and
> provides a less restrictive system which minimizes routine requirements
> to copy data.  (And thus, structural performance limits.)
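
(For comparison, here is a rough sketch of the Linux-side pattern being
described: the buffer is allocated on its own and only attached to the URB
at submit time.  The usb_* calls are the 2007-era API; the helper functions
around them are invented for illustration.)

#include <linux/usb.h>

/* Sketch only: usb_buffer_alloc() returns a DMA-consistent buffer that is
 * not tied to any pipe type; it was renamed usb_alloc_coherent() in later
 * kernels. */
static void example_complete(struct urb *urb)
{
	/* inspect urb->status and urb->actual_length here */
}

static int example_submit(struct usb_device *udev, struct urb *urb,
			  unsigned int pipe, size_t len)
{
	dma_addr_t dma;
	void *buf = usb_buffer_alloc(udev, len, GFP_KERNEL, &dma);

	if (buf == NULL)
		return -ENOMEM;

	usb_fill_bulk_urb(urb, udev, pipe, buf, len, example_complete, NULL);
	urb->transfer_dma = dma;
	urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP; /* HCD uses dma as-is */

	return usb_submit_urb(urb, GFP_KERNEL);
}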

On the BSD platform there is something called bus_dma (BUS-DMA). Memory is 
allocated according to the PCI device's capabilities, because some PCI 
devices cannot address all of physical memory. You also need to sync the 
memory; on the Sun architecture, for example, that means issuing specific 
commands to the PCI bridge (e.g. the "Psycho" host bridge). From what I can 
see, your model will not work in all cases.
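
Here is a rough sketch of what the bus_dma(9) pattern looks like (the tag
parameters and the helper function are made up for this example):

#include <sys/param.h>
#include <sys/bus.h>
#include <machine/bus.h>

/*
 * Sketch only: the DMA tag encodes the device's addressing limits, memory
 * is allocated against that tag, and the buffer must be synced around
 * hardware access.  The 32-bit low address models a PCI device that cannot
 * reach all of physical memory.
 */
static int
example_dma_setup(bus_dma_tag_t *tag, bus_dmamap_t *map, void **vaddr,
    bus_size_t size)
{
	int error;

	error = bus_dma_tag_create(
	    NULL,			/* parent tag (NULL = root) */
	    1, 0,			/* alignment, boundary */
	    BUS_SPACE_MAXADDR_32BIT,	/* lowaddr: device is 32-bit only */
	    BUS_SPACE_MAXADDR,		/* highaddr */
	    NULL, NULL,			/* filter, filterarg */
	    size, 1, size,		/* maxsize, nsegments, maxsegsz */
	    0, NULL, NULL,		/* flags, lockfunc, lockarg */
	    tag);
	if (error)
		return (error);

	error = bus_dmamem_alloc(*tag, vaddr,
	    BUS_DMA_WAITOK | BUS_DMA_ZERO, map);
	if (error)
		bus_dma_tag_destroy(*tag);
	return (error);
}

/*
 * Before the controller reads the buffer:
 *	bus_dmamap_sync(tag, map, BUS_DMASYNC_PREWRITE);
 * After the controller has written into it:
 *	bus_dmamap_sync(tag, map, BUS_DMASYNC_POSTREAD);
 * On bridges such as Sun's Psycho, this is where the flush happens.
 */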

>
> > so that when you
> > unsetup a USB transfer, absolutely all memory related to the transfer is
> > freed. This also has a security implication
>
> Calling this "security" seems like quite a stretch to me.  Systems
> that don't behave when buffers are exhausted are buggy, sure.  And
> marginal behavior is always hard to test and debug; and bugs are
> always a potential home for exploits (including DOS).  But this has
> no more security implications than any other tradeoff.
>
> > in that when you have
> > pre-allocated all buffers and all USB host controller descriptors, you
> > will never get into the situation of not being able to allocate transfer
> > descriptors on the fly, as is done on Linux.
>
> That's not a failure mode that's been often observed on Linux.  (Never,
> in my own experience... which I admit has not focussed on that particular
> type of stress load.)  So it's hard to argue that it should motivate a
> redesign of core APIs.

That's not the real motivation for my design; it is just a by-product.

>
> Transfer descriptors are an artifact of one kind of host controller;
> it'd be wrong to assume all HCDs use them.

As long as the core information is not carried in a header on the USB 
transfer itself, the way IP packets carry a header, you cannot ignore this.
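
To make the analogy concrete, I mean something like a small,
host-controller-independent header that travels with every transfer (the
struct and its field names are purely hypothetical, for illustration only):

#include <stdint.h>

struct usb_xfer_hdr {
	uint8_t		ep_addr;	/* endpoint address and direction */
	uint8_t		xfer_type;	/* control/bulk/interrupt/isoc */
	uint16_t	max_packet;	/* wMaxPacketSize */
	uint32_t	length;		/* total transfer length */
	uint32_t	timeout;	/* milliseconds, 0 = none */
	/* ... interval/frame information, flags ... */
};

With that information always at hand, the core can reason about a transfer
without knowing which host controller, or which descriptor format, sits
underneath.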

>
> The related issue that's been discussed is how to shrink submit paths,
> giving lower overhead.
>
> Submitting URBs directly to endpoints would remove lots of dispatch
> logic.  Pre-allocating some TDs would remove logic too, but implies
> changing the URB lifecycle.  The peripheral/"gadget" API allows for
> both of those optimizations, but adopting them on the host side would
> not be particularly easy because of the "how to migrate all drivers"
> problem.  I'd expect submit-to-endpoints to be adopted more easily,
> since the low level HCD primitives already work that way.
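
(For reference, the gadget-side "submit to an endpoint" pattern the
paragraph above refers to looks roughly like this; only the usb_ep_* calls
and struct usb_request fields are real API, the helper names are invented,
and the include path is the one used by later kernels.)

#include <linux/usb/gadget.h>

/* Sketch only: the request is queued directly on the endpoint, with no
 * per-device or per-pipe dispatch in between. */
static void example_req_complete(struct usb_ep *ep, struct usb_request *req)
{
	/* inspect req->status and req->actual here */
}

static int example_queue(struct usb_ep *ep, void *buf, unsigned int len)
{
	struct usb_request *req = usb_ep_alloc_request(ep, GFP_ATOMIC);

	if (req == NULL)
		return -ENOMEM;

	req->buf = buf;
	req->length = len;
	req->complete = example_req_complete;

	return usb_ep_queue(ep, req, GFP_ATOMIC);
}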

It was hard enough to rewrite the FreeBSD USB drivers. On Linux it will 
probably be more difficult, since you have more supported drivers.

--HPS
