On Friday 29 October 2004 20:44, David Brownell wrote:
> On Friday 29 October 2004 11:13, Johannes Erdfelt wrote:
> > >    If usbfs instead grabbed a bunch of non-contiguous
> > > pages, copied the data into them from user-space, and then sent it (using
> > > scatter-gather io), then there is no longer any memory pressure problem.
> > > What's more, there would also be no reason not to increase the maximum
> > > data size from 16k to something much bigger, say 256k.  Allowing bigger
> > > buffers should increase performance.
> > 
> > Duh, that would be the obvious way to increase performance and
> > reliability.
> > 
> > Not to mention it should be relatively easy to implement.
> 
> Yep, and you'd be able to clean up properly after an error in
> the middle of your N urbs ... the scatterlist code handles
> such things.
> 
> Another tweak would be to remove the limit on transfer size.
> Use whatever algorithm you want to transfer the first chunk;
> for high speed, a scatterlist is *really* desirable.  Then
> repeat for successive chunks ... until error, or all done.
> (And if you can't get enough buffers to use big chunks, then
> you could still make progress using smaller chunks...)
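
As a minimal sketch of that approach (not from this thread or from the
actual usbfs code): copy the user buffer into individually allocated,
non-contiguous pages and hand them to the host controller as a single
scatter-gather bulk request through the USB core's usb_sg_init() and
usb_sg_wait() helpers. The function name do_bulk_sg() is invented, the
sketch uses the generic scatterlist helpers sg_init_table() and
sg_set_page(), and locking, partial completions and the usbfs ioctl
plumbing are all left out.

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/usb.h>

/* Hypothetical helper, not the real usbfs code: copy a user buffer
 * into single pages and submit them as one scatter-gather bulk
 * transfer. */
static int do_bulk_sg(struct usb_device *dev, unsigned int pipe,
                      const char __user *ubuf, size_t len)
{
        unsigned int npages = DIV_ROUND_UP(len, PAGE_SIZE);
        struct usb_sg_request io;
        struct scatterlist *sg;
        size_t done = 0;
        unsigned int i;
        int ret;

        if (!len)
                return 0;

        sg = kcalloc(npages, sizeof(*sg), GFP_KERNEL);
        if (!sg)
                return -ENOMEM;
        sg_init_table(sg, npages);

        /* grab one page at a time; nothing has to be contiguous */
        for (i = 0; i < npages; i++) {
                size_t chunk = min_t(size_t, len - done, PAGE_SIZE);
                struct page *page = alloc_page(GFP_KERNEL);

                if (!page) {
                        ret = -ENOMEM;
                        goto free_pages;
                }
                /* copy this chunk of the user buffer into its own page */
                if (copy_from_user(page_address(page), ubuf + done, chunk)) {
                        __free_page(page);
                        ret = -EFAULT;
                        goto free_pages;
                }
                sg_set_page(&sg[i], page, chunk, 0);
                done += chunk;
        }

        /* hand the whole list to the host controller as one request */
        ret = usb_sg_init(&io, dev, pipe, 0, sg, npages, len, GFP_KERNEL);
        if (ret == 0) {
                usb_sg_wait(&io);
                ret = io.status;
        }

free_pages:
        for (i = 0; i < npages; i++)
                if (sg_page(&sg[i]))
                        __free_page(sg_page(&sg[i]));
        kfree(sg);
        return ret;
}

With something along these lines the pages no longer need to be
physically contiguous, which is the reason the 16k cap on a single
usbfs bulk transfer could be raised.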

I am afraid there is a problem: the current situation cannot be
allowed to stand. There has to be a limit on the number of URBs and
buffers a user can have outstanding, or any user can eat up all of
the system's memory.
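
One way to enforce such a limit, again only as a sketch: charge every
buffer against a counter before allocating it and uncharge it once the
URBs complete and the buffers are freed. The names
usbfs_charge_memory()/usbfs_uncharge_memory() and the 16 MB cap are
made up for illustration; a real fix would probably want the
accounting per user or per open device rather than a single global
counter.

#include <linux/atomic.h>
#include <linux/errno.h>
#include <linux/types.h>

/* hypothetical cap on total buffer memory pinned through usbfs */
#define USBFS_MEM_LIMIT (16 * 1024 * 1024)

static atomic_long_t usbfs_mem_used = ATOMIC_LONG_INIT(0);

/* charge @len bytes against the cap before allocating URBs/buffers */
static int usbfs_charge_memory(size_t len)
{
        if (atomic_long_add_return(len, &usbfs_mem_used) > USBFS_MEM_LIMIT) {
                atomic_long_sub(len, &usbfs_mem_used);
                return -ENOMEM;
        }
        return 0;
}

/* give the charge back once the URBs complete and buffers are freed */
static void usbfs_uncharge_memory(size_t len)
{
        atomic_long_sub(len, &usbfs_mem_used);
}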

        Regards
                Oliver

