On Mon, 2 Jul 2007, Oliver Neukum wrote:

> > The number of interrupts is orthogonal to the question of whether HCD 
> > resources are bound to endpoints or to URBs.
> 
> But if resources are bound to an endpoint the HCD must generate an
> interrupt when the resources are no longer in use so they can be reused.

The HCD has to generate an interrupt anyway when all the queued URBs
complete.  Drivers simply have to make sure not to submit more URBs
than the preallocated resources will support.  That's true no matter
what the resources are associated with.
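
To make that concrete, here is roughly the bookkeeping I have in mind.
Everything below -- the names, the pool size, the bitmap scheme -- is
made up for illustration, not taken from any existing driver:

#include <linux/usb.h>
#include <linux/spinlock.h>
#include <linux/bitops.h>

#define MY_URB_POOL_SIZE 8		/* hypothetical preallocation count */

struct my_dev {
	struct urb *urb_pool[MY_URB_POOL_SIZE];
	int pool_size;			/* how many were actually allocated */
	unsigned long urb_busy;		/* bitmap of in-flight URBs */
	spinlock_t lock;		/* initialized in probe() */
};

/* Take a free preallocated URB; NULL means the pool is exhausted and
 * the caller must wait for a completion before submitting again. */
static struct urb *my_get_urb(struct my_dev *dev)
{
	struct urb *urb = NULL;
	unsigned long flags;
	int i;

	spin_lock_irqsave(&dev->lock, flags);
	i = find_first_zero_bit(&dev->urb_busy, dev->pool_size);
	if (i < dev->pool_size) {
		__set_bit(i, &dev->urb_busy);
		urb = dev->urb_pool[i];
	}
	spin_unlock_irqrestore(&dev->lock, flags);
	return urb;
}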

> If we do a full preallocation for the worst case, one code path will do.

What if the full preallocation fails but a partial allocation would 
succeed?
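
If the answer is to fall back to whatever you can get, the allocation
loop ends up looking something like this (again purely illustrative,
reusing the hypothetical struct my_dev from above):

static int my_prealloc_urbs(struct my_dev *dev, int wanted)
{
	int n;

	for (n = 0; n < wanted; n++) {
		dev->urb_pool[n] = usb_alloc_urb(0, GFP_KERNEL);
		if (!dev->urb_pool[n])
			break;		/* settle for a partial pool */
	}
	if (n == 0)
		return -ENOMEM;		/* not even one URB */
	dev->pool_size = n;		/* everything else must respect
					 * the smaller pool */
	return 0;
}

-- and then every code path has to cope with a pool whose size isn't
known until runtime, which is exactly the variable situation the
worst-case preallocation was supposed to avoid.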

> > > Furthermore, I am afraid
> > > of giving all remaining memory to URBs and not leaving enough for
> > > allocation private to the HCDs.
> > 
> > That's an argument for preallocating fewer URBs, not more.
> 
> Why? What is preallocated is already available. The question arises
> with dynamic allocations.

It's a general question.  We have to allocate both URBs and HCD-private 
stuff.  It can be done beforehand or dynamically.  Either way, if too 
much memory is spent on URBs there might not be enough for the 
HCD-private things.  The way to avoid the problem is to allocate fewer 
URBs.

The advantage of preallocation, as Dave pointed out, is that it can 
be done in process context and hence can use GFP_KERNEL.
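
The difference in code is trivial but important (a minimal sketch; the
function names are made up):

static int my_probe(struct usb_interface *intf,
		    const struct usb_device_id *id)
{
	/* Process context: we may sleep, so GFP_KERNEL is allowed
	 * and the allocator can wait for reclaim instead of failing. */
	struct urb *urb = usb_alloc_urb(0, GFP_KERNEL);

	if (!urb)
		return -ENOMEM;
	usb_set_intfdata(intf, urb);
	return 0;
}

static void my_complete(struct urb *urb)
{
	/* Interrupt context: we must not sleep, so any allocation
	 * here would have to use GFP_ATOMIC, which fails much more
	 * readily under memory pressure -- exactly the case that
	 * preallocating in probe() avoids. */
}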


> Why? Or rather, if the amount is determined according to the current
> granularity, or the granularity in the worst case, you associate it
> with the URBs. If you allocate less, then you cannot associate it with
> the URBs, as you don't have enough resources.

We don't want to do both!

I give up.  It probably won't make much difference in the end.


> Yes, indeed I am not sure that preallocation is the way to go for the
> storage driver. I care more about cdc-acm, the serial and the video
> drivers.

I thought you were inspired by the problems Pete described, where mass 
storage transfers failed because of memory pressure?

> > For example, let's say you decide to preallocate resources for a mass 
> > storage device during usb-storage's probe routine.  You don't know how 
> > big the transfers will end up being, so you preallocate enough for 120 
> > KB.  But the user increases max_sectors and you are faced with a 200-KB 
> > transfer.  What will you do?
> 
> Obviously the capabilities advertised to the SCSI layer would need to be
> limited. I am not sure that this is a good idea.

It isn't.  We have tried hard to avoid limiting the capabilities 
unnecessarily.
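
For completeness, the alternative to capping max_sectors is to split an
oversized transfer into chunks that fit the preallocated buffer,
roughly like this.  This is illustrative only -- the real usb-storage
driver handles large requests quite differently, via scatter-gather --
and "bounce" is assumed to be the preallocated DMA-safe buffer:

#define MY_PREALLOC_LEN	(120 * 1024)	/* the 120 KB from the example */

static int my_send_chunked(struct usb_device *udev, unsigned int pipe,
			   void *bounce, const char *data, size_t len)
{
	size_t chunk;
	int actual, ret;

	while (len) {
		chunk = min_t(size_t, len, MY_PREALLOC_LEN);
		memcpy(bounce, data, chunk);
		/* 5000 ms timeout, arbitrary for the sketch */
		ret = usb_bulk_msg(udev, pipe, bounce, chunk,
				   &actual, 5000);
		if (ret)
			return ret;
		data += chunk;
		len -= chunk;
	}
	return 0;
}

The cost is an extra copy and more round trips per request, which is
part of why limiting the advertised capabilities is so tempting -- and
why we have resisted it.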

Alan Stern

