Alan Stern wrote:
Or would it?  The urb in question has to be allocated and freed somehow,
and either the driver or the core has to take care of this.  By (2), this
ought to be handled by the core.  So the submit routine allocates the urb
and passes a pointer to it back to the caller, while the
wait-for-completion/timeout routine frees the urb.  But this leads to an
unavoidable race should the driver try to unlink the urb:  There's no way
to ensure that the unlink call arrives before the urb has been freed.

Whatever handles cancellation (and maybe timeout) just needs to synchronize so that it can't get confused like that. Just like today.


[Just like today -- what?]

Just like today, "they have to do that same synchronization"; that's not a new constraint.


I agree about the need for synchronization.  The point is that with the
only data structure available to the client being a pointer to an urb,
there _is_ no way to carry out this synchronization.  Doing so would
require using one of the synchronization objects contained within the urb,
and they aren't available once the urb has been freed.

You're overlooking the per-device state, which already needs its own synchronization (for all the different tasks and completion/irq handlers involved).


Of course, there's always the possibility of augmenting the data structure. Something containing a pointer to an urb and a spinlock, for example. Then you're starting to get close to the proposal I sent you about a week ago.

I'd put the spinlock into that per-device state, along with the urb handle ... ;)
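
Something like this, say (a rough sketch with made-up names; "mydev" and its fields are hypothetical driver-private state, not existing API):

#include <linux/spinlock.h>
#include <linux/usb.h>

/* Hypothetical per-device state; a driver already keeps something
 * like this for its other synchronization needs.  The lock is
 * spin_lock_init()'d at probe time.
 */
struct mydev {
        spinlock_t      lock;   /* guards the urb handle below */
        struct urb      *urb;   /* NULL when no request is pending */
};

/* Unlink path: take the same lock the submit/free paths take, so the
 * urb handle can't be freed or reused out from under us.
 */
static void mydev_unlink(struct mydev *dev)
{
        spin_lock_irq(&dev->lock);
        if (dev->urb)
                usb_unlink_urb(dev->urb);
        spin_unlock_irq(&dev->lock);
}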


It's easy enough to export it. My reason for using complete_all() rather than just complete() was that I can imagine situations where more than one thread is waiting for the urb to finish up. For instance, the normal driver operating thread and the khubd thread (in disconnect()).
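
For instance (a sketch, assuming the urb->cmplt completion object proposed in this thread; the function names here are made up):

#include <linux/completion.h>
#include <linux/usb.h>

/* Completion handler: wake *every* waiter, not just the first one.
 * ("cmplt" is the struct completion proposed for struct urb in this
 * thread; it isn't a field of the current urb.)
 */
static void my_urb_complete(struct urb *urb)
{
        complete_all(&urb->cmplt);
}

/* Waiter 1: the normal driver I/O path. */
static int my_wait_for_io(struct urb *urb)
{
        wait_for_completion(&urb->cmplt);
        return urb->status;
}

/* Waiter 2: disconnect(), running in khubd, also has to know the urb
 * is finished before tearing down the device state.
 */
static void my_disconnect_wait(struct urb *urb)
{
        usb_unlink_urb(urb);
        wait_for_completion(&urb->cmplt);
}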

I had similar thoughts. But I'm not sure the URB lifecycle is quite clean enough to support those models without trouble. It wasn't designed to be used that way (as you can tell), which as a rule means there are going to be failure cases there...


Putting (a) and (b) together, it's clear that complete_all() has to be called from within the completion function. That makes sense; after all, the driver knows best when the proper time is to awaken the waiting tasks.

I don't know that I'd agree with all the assumptions you were making to reach that conclusion, particularly about interactions between the synchronous calls and the asynchronous ones.


Can you spell out more precisely which assumptions you don't like? And also which synchronous calls you mean -- my discussion of complete_all() being called from within a completion routine was meant to refer to an entirely asynchronous context.

Now you're mixing up the synchronous and asynchronous scenarios again.


One assumption:  the completion handler (an async mechanism) is available
to users of synchronous API calls ... in cases other than synchronous
unlink.


        usb_unlink_urb(urb);
        wait_for_completion(&urb->cmplt);

That's actually something any driver can already ensure, with no change to usbcore, if it has a correctly synchronized completion object (or similar per-request state).
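
Something along these lines works with today's API (a sketch; "my_request" and its helpers are hypothetical driver-private code, not core support):

#include <linux/completion.h>
#include <linux/usb.h>

/* Hypothetical per-request state a driver can keep today.
 * req->done is init_completion()'d before the urb is submitted.
 */
struct my_request {
        struct urb              *urb;
        struct completion       done;
};

/* The driver's own completion handler signals the driver's own
 * completion object; nothing new is needed in struct urb.
 */
static void my_request_complete(struct urb *urb)
{
        struct my_request *req = urb->context;

        complete(&req->done);
}

/* Cancel path: after usb_unlink_urb() returns, wait until the handler
 * above has actually run before touching the urb again.
 */
static void my_request_cancel(struct my_request *req)
{
        usb_unlink_urb(req->urb);
        wait_for_completion(&req->done);
}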


Yes. But there's an advantage to formalizing and providing core support for a common but currently ad-hoc technique.

Yes and no. I think we still have too many API calls, not all of which play well together. (The synchronous calls seem to trigger the most confusion.) Adding more calls doesn't make things better.


With just these two API additions, look at what usb_bulk_msg() turns into
as a series of low-level API calls (again without error checking):

        urb = usb_allocate_urb(...);
        usb_fill_bulk_urb(urb, ...);    // No completion handler
        usb_submit_urb(urb);
        if (io_wait_completion(&urb->cmplt, timeout) == -ETIMEDOUT) {
                usb_unlink_urb(urb);
                io_wait_completion(&urb->cmplt, 0);
        }
        /* Save urb->status, urb->actual_length */
        usb_free_urb(urb);

Make that io_wait_timeout(..., MAX_SCHEDULE_TIMEOUT) the second time around, remove the urb alloc/free ...
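
Roughly this, then (same sketch, still no error checking; io_wait_timeout() and urb->cmplt are the names proposed in this thread, and my_bulk_io() is just a made-up wrapper with the urb supplied by the caller):

static int my_bulk_io(struct usb_device *dev, unsigned int pipe,
                      void *data, int len, int timeout, struct urb *urb)
{
        usb_fill_bulk_urb(urb, dev, pipe, data, len, NULL, NULL);
        usb_submit_urb(urb);            /* no completion handler, as above */
        if (io_wait_timeout(&urb->cmplt, timeout) == -ETIMEDOUT) {
                usb_unlink_urb(urb);
                io_wait_timeout(&urb->cmplt, MAX_SCHEDULE_TIMEOUT);
        }
        return urb->status;             /* caller also reads urb->actual_length */
}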


So where does the urb get allocated and freed? Is this the confusion you mentioned above?

See next:


... and it starts to look like usb_start_wait_urb() in that patch

That call currently expects someone else to have allocated the urb, and (an ugly convention, changed in that patch) frees it directly.


I sent (with io_wait_timeout).  usb_bulk_msg() or usb_control_msg()
allocate and partially initialize the URB, and usb_start_wait_urb() does
the rest ... all that "middle" code already exists.


I considered your model for the scatter-gather I/O functions, where the
first call merely initializes the data structures and the urbs aren't
actually submitted until the start-wait function is called.  It seemed
like unnecessary complication and overhead, at least when handling a
single control or bulk message.  There was no reason not to simply start
the I/O during the first function call.

The reason is to get rid of a window in SMP/preempt cases:


  thread 1:  (a) export urb to thread 2  (b) submit/wait  (c) completion
  thread 2:  (d) cancel the urb

Sequence (a) (b) (c) is normal.
For unlink, (a) (b) (d) (c) is normal.

BUT SMP/preempt can produce (a) (d) (b) (c), and that should
behave sanely ... in this case, by ensuring the submit doesn't happen.
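
One way to make the (a) (d) (b) (c) ordering behave sanely, as a sketch (hypothetical per-device state again; the point is just that the cancel path sets a flag under the lock, and the submit path checks it before ever calling usb_submit_urb()):

#include <linux/errno.h>
#include <linux/spinlock.h>
#include <linux/usb.h>

/* Hypothetical per-device state shared by both threads. */
struct mydev {
        spinlock_t      lock;
        struct urb      *urb;           /* exported in step (a) */
        int             cancelled;      /* set by step (d) */
};

/* Thread 2, step (d): may run before thread 1 ever submits. */
static void mydev_cancel(struct mydev *dev)
{
        spin_lock_irq(&dev->lock);
        dev->cancelled = 1;
        if (dev->urb)
                usb_unlink_urb(dev->urb);  /* just an error if not in flight */
        spin_unlock_irq(&dev->lock);
}

/* Thread 1, step (b): check under the same lock, so a cancel that
 * arrived first simply prevents the submit from ever happening.
 */
static int mydev_submit(struct mydev *dev)
{
        int ret = -ECONNRESET;

        spin_lock_irq(&dev->lock);
        if (!dev->cancelled)
                ret = usb_submit_urb(dev->urb);
        spin_unlock_irq(&dev->lock);
        return ret;
}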


- Dave







