Well, I say again that blocking/non-blocking really has nothing to do
with how quickly you can respond.

When you queue data up to be played, whether via a blocking call, or a
non-blocking call, at some point, you no longer have the ability to
abort. The distance in the pipeline between that point, and the actual
output, is your minimum latency.

If you queue up a transfer via a non-blocking call, you either have a
way to abort that transfer at that level, or not. Same with a blocking
call. The only difference is, in a blocking call any such abort has to
come from another thread -- and naturally would, since the thread that
is blocked is just doing transfers. The code that would sit in the
non-blocking loop to abort a queued transfer no longer needs a polling
test; it can live in another thread -- where it can safely block if
needed -- say, waiting on user input.
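
To make that concrete, here's a rough sketch of the blocking version --
a writer thread using android.media.AudioTrack in streaming mode, with
the abort coming from another thread via a flag. (Purely illustrative;
the sample rate, format, and source are placeholders.)

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
import java.io.IOException;
import java.io.InputStream;

class PcmPlayer implements Runnable {
    private volatile boolean aborted = false;
    private final InputStream source;   // assumed 16-bit mono PCM source

    PcmPlayer(InputStream source) { this.source = source; }

    // Called from any other thread (e.g. the UI thread) to abort playback.
    public void abort() { aborted = true; }

    public void run() {
        int minBuf = AudioTrack.getMinBufferSize(44100,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                minBuf, AudioTrack.MODE_STREAM);
        byte[] chunk = new byte[minBuf];
        track.play();
        try {
            int n;
            while (!aborted && (n = source.read(chunk)) > 0) {
                track.write(chunk, 0, n);   // blocks until the data is queued
            }
        } catch (IOException e) {
            // real code would log or report this
        } finally {
            track.pause();    // stop output right away
            track.flush();    // discard data queued but not yet played
            track.release();
        }
    }
}

The thread doing write() never polls anything; whatever code learns
that the user wants to stop just sets the flag, and pause()/flush()
throws away what was queued but not yet played.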

Re: 2 vs 3 buffers -- actually I mentioned the more general case, of
which 3 is a special case. Any time you want to allow the source to
get further ahead of the sink, you can increase the number of buffers
beyond 2. There's nothing magic about 3; 2 is often quite enough, if the
source is reliably fast enough to fill it before the next flip. But if
you have a variable-speed source, you may need >2 buffers -- and if
you have a slow one, and no tolerance for underruns, you might even
need to buffer the entire stream before starting.
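
For what it's worth, the general scheme I have in mind looks something
like this -- a pool of N buffers cycling between source and sink (the
buffer count and size are arbitrary; it's plain java.util.concurrent,
nothing Android-specific):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class BufferPool {
    // N buffers total; N determines how far (in buffers) the source may
    // run ahead of what the sink has consumed.
    private final BlockingQueue<byte[]> free;
    private final BlockingQueue<byte[]> filled;

    BufferPool(int n, int bufferSize) {
        free = new ArrayBlockingQueue<byte[]>(n);
        filled = new ArrayBlockingQueue<byte[]>(n);
        for (int i = 0; i < n; i++) {
            free.add(new byte[bufferSize]);
        }
    }

    // Source side: blocks once it has gotten as far ahead as N allows.
    byte[] takeEmpty() throws InterruptedException { return free.take(); }
    void putFilled(byte[] b) throws InterruptedException { filled.put(b); }

    // Sink side: blocks when the source has fallen behind (an underrun).
    byte[] takeFilled() throws InterruptedException { return filled.take(); }
    void putEmpty(byte[] b) throws InterruptedException { free.put(b); }
}

With N = 2 that's plain double-buffering; N = 3 is the arrangement you
describe; make N large enough (or the queues unbounded) and you're
buffering the whole stream before starting.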

So anyway -- I definitely don't agree that blocking is never a good
thing. It's a different thing, with distinct advantages to the coder.
I don't know of any inherent reason or case where one is faster than
the other -- when done properly, and designed to the same parameters.
E.g. if you can abort a transfer-in-progress in one, for a fair
comparison you should be able to abort a transfer-in-progress in the
other.
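
To spell out "designed to the same parameters": here's the shape the
non-blocking counterpart would take. The sink interface is made up for
the comparison -- it's not a real Android API -- the point is just
where the abort check ends up living.

import java.util.Queue;

// Hypothetical non-blocking sink, for comparison only.
interface NonBlockingSink {
    boolean tryWrite(byte[] data, int offset, int length); // false if its queue is full
    void cancelQueued();                                    // discard data already queued
}

class NonBlockingPlayer {
    private volatile boolean aborted = false;
    private byte[] pending;   // buffer we couldn't queue on the previous pass

    void abort() { aborted = true; }

    // Called repeatedly from the app's event/polling loop; never blocks.
    void pump(NonBlockingSink sink, Queue<byte[]> filled, Queue<byte[]> free) {
        if (aborted) {
            sink.cancelQueued();   // same abort the blocking sketch gets via pause()/flush()
            return;
        }
        if (pending == null) {
            pending = filled.poll();               // null if the source is behind
        }
        if (pending != null && sink.tryWrite(pending, 0, pending.length)) {
            free.offer(pending);                   // queued; recycle the buffer
            pending = null;
        }
        // otherwise nothing to do this pass; try again next time around
    }
}

Functionally it's the same abort; it just has to share the event loop
with everything else instead of sitting quietly in a thread of its own.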

Now, two different implementations of either may have different
latency. Offloading the audio onto a chip inherently deepens the
pipeline; so does the lack of a clean way to abort a transfer in
progress. Poor scheduler behavior can make a difference, too. I'm only
comparing
ideal implementations when I say they're equivalent.

Ultimately, though, it only has to be "good enough" -- beyond that, I
value programmer productivity and bug-free code very highly.

On Feb 19, 11:37 pm, Steve Lhomme <rob...@gmail.com> wrote:
> On Fri, Feb 19, 2010 at 7:15 PM, Bob Kerns <r...@acm.org> wrote:

> > But aside from that, my experience is that the code based on the
> > blocking API will be simpler and have many fewer bugs, and roughly the
> > same performance characteristics if done right. (But as I mentioned
> > earlier, the need for threads means the blocking version won't scale
> > to large numbers of streams well, which is why serious web servers use
> > non-blocking APIs).
>
> Yeah, it's just a little unusual based on what I've seen around.
> Blocking is never a good thing, especially on a phone that is supposed
> to respond fast. For example you don't want the audio to keep playing
> while receiving a call.
>
> Also about your 2 buffers explanation. 3 are often used, so you have
> the one writing, one ready to write (so as soon as one is written, you
> don't have to wait for the second one to start writing) and one being
> fed from the rest of the application.
