Things like ring buffers are still the same idea, but I won't explain
how they're the same, because it would make the explanation more
confusing. Just hold onto the core idea -- the receiver and sender of
data each have their own buffers.
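If it helps to picture it, here is a minimal single-producer/single-consumer
ring buffer in Java. This is only the core idea sketched out, not what
AudioTrack actually does internally:

    // Core idea only: the sender copies into free space, the receiver
    // drains from the other end.
    class RingBuffer {
        private final byte[] buf;
        private int readPos = 0, writePos = 0, count = 0;

        RingBuffer(int capacity) { buf = new byte[capacity]; }

        // Copy as much of src as fits; return how many bytes were accepted.
        synchronized int write(byte[] src, int off, int len) {
            int n = Math.min(len, buf.length - count);
            for (int i = 0; i < n; i++) {
                buf[writePos] = src[off + i];
                writePos = (writePos + 1) % buf.length;
            }
            count += n;
            return n;
        }

        // Drain up to len bytes into dst; return how many were available.
        synchronized int read(byte[] dst, int off, int len) {
            int n = Math.min(len, count);
            for (int i = 0; i < n; i++) {
                dst[off + i] = buf[readPos];
                readPos = (readPos + 1) % buf.length;
            }
            count -= n;
            return n;
        }
    }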
Heartfelt thanks for this great explanation.
I am a
I haven't read the code (not even the Java side). I can only guess
that in streaming mode it allocates a buffer it manages as a ring
buffer, or something like that.
But I'd expect that to only happen at startup. Any delays in write()
calls I'd expect to be simply waiting for room to copy the
Well, I say again that blocking/non-blocking really has nothing to do
with how quickly you can respond.
When you queue data up to be played, whether via a blocking call or a
non-blocking one, at some point you no longer have the ability to
abort. The distance in the pipeline between that
When you stream buffered data, you have a minimum of two buffers -- at
least conceptually. (You can implement it somewhat differently, but it
boils down to the same thing the way I look at it).
You fill up one buffer, and give it off to the system or the hardware
or another thread. While it is
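Sketched in Java (fill(), startPlayback(), waitUntilConsumed() and
moreAudio() are hypothetical stand-ins here, not a real API), the
two-buffer hand-off looks something like this:

    // While one buffer plays, refill the other, then swap.
    byte[][] bufs = { new byte[CHUNK], new byte[CHUNK] };
    int cur = 0;
    fill(bufs[cur]);                  // produce the first chunk
    startPlayback(bufs[cur]);         // asynchronous hand-off, returns at once
    while (moreAudio()) {
        int next = 1 - cur;
        fill(bufs[next]);             // overlaps with playback of bufs[cur]
        waitUntilConsumed(bufs[cur]); // block until that buffer is free again
        startPlayback(bufs[next]);
        cur = next;
    }
    waitUntilConsumed(bufs[cur]);     // let the last chunk drain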
Why do you have to call allocate/release? Why not just allocate enough
for your purposes, and suballocate within?
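For instance (a hypothetical sketch; CHUNK_SIZE and NUM_CHUNKS are
made-up constants), java.nio lets you slice one big allocation into
reusable chunks:

    import java.nio.ByteBuffer;

    // One allocation up front; every chunk is a view into it, so there
    // is no per-chunk allocate/release traffic afterwards.
    ByteBuffer pool = ByteBuffer.allocateDirect(CHUNK_SIZE * NUM_CHUNKS);
    ByteBuffer[] chunks = new ByteBuffer[NUM_CHUNKS];
    for (int i = 0; i < NUM_CHUNKS; i++) {
        pool.position(i * CHUNK_SIZE).limit((i + 1) * CHUNK_SIZE);
        chunks[i] = pool.slice();     // no copy, no new backing storage
    }
    pool.clear();                     // reset the pool's position/limit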
I can definitely see that it'd be a pain to maintain both blocking and
non-blocking versions of your code. Even the cognitive shift would be
a pain.
But aside from that, my
On Fri, Feb 19, 2010 at 7:15 PM, Bob Kerns r...@acm.org wrote:
Why do you have to call allocate/release? Why not just allocate enough
for your purposes, and suballocate within?
Because I first assumed the AudioTrack would be non-blocking (from
past experience). And so I thought that the buffer
We would really appreciate it if you could explain both buffers. Which
buffers are you referring to here?
The 70 ms here isn't due to the blocking nature, but due to the buffer
size. With a 2.5 ms buffer size, you'd be able to stop the sound in
5 ms even when both buffers were full. It really has nothing to do with
blocking/non-blocking, which simply has to do with who has to do the
blocking and
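To make that arithmetic concrete (assumed, illustrative numbers only,
with a two-buffer pipeline):

    // Worst-case stop latency is roughly two full buffers of audio.
    int sampleRate = 44100;                               // Hz, assumed
    int bufferFrames = 1544;                              // ~35 ms at 44.1 kHz
    double bufferMs = 1000.0 * bufferFrames / sampleRate; // ~35 ms
    double worstCaseStopMs = 2 * bufferMs;                // ~70 ms
    // Shrink each buffer to ~2.5 ms (about 110 frames) and the same
    // two-buffer pipeline can stop in roughly 5 ms.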
I already use a separate thread to feed the audio chunks. For
efficiency it's writing many chunks at once before going to sleep. It
works well on all the platforms I mentioned because they all support
non-blocking calls, in fact I'm not sure any of them support blocking
calls at all. As for
What is the name of your program again? Because just hearing that you
are calling the AudioTrack from inside the NDK makes me NOT want to
ever grab a copy of your program. The headers aren't stable and it
will break in the future (high risk anyway).
There's no need to do that anyway, you can
Our program is CorePlayer and is well known for being a great
audio/video player for mobile platforms.
As the title of the thread says, I use AudioTrack which is a *java*
API. Whether I call it from JNI or within dalvik doesn't make any
difference.
On Wed, Feb 17, 2010 at 10:10 PM, niko20
From your description, it sounds like everything is happening just as
designed, and as it should, and that the only problem is that you feel
your program isn't complicated enough.
Is that a fair assessment? You get the right result. Your code spends
most of its time waiting for the hardware,
First of all, nowhere is it written that AudioTrack is a blocking system.
Second, when there is already 2x the hardware minimum size buffered,
there is no reason why adding that amount should result in audio
artifacts. So whether it feels nice or not, that API is not working as
advertised.
Now about
2010/2/17 Steve Lhomme rob...@gmail.com:
First of all, nowhere is it written that AudioTrack is a blocking system.
My bad, I just read
http://developer.android.com/reference/android/media/AudioTrack.html
again. And it indeed says that the streaming mode is blocking.
In Streaming mode, the
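For reference, a minimal feeder loop matching that documented behavior
might look like this (fillNextChunk() is a hypothetical decoder call;
the rest is the public AudioTrack API):

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioTrack;

    int sampleRate = 44100;
    int minBuf = AudioTrack.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_OUT_STEREO,
            AudioFormat.ENCODING_PCM_16BIT);

    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC,
            sampleRate,
            AudioFormat.CHANNEL_OUT_STEREO,
            AudioFormat.ENCODING_PCM_16BIT,
            minBuf * 2,               // 2x the hardware minimum
            AudioTrack.MODE_STREAM);  // streaming mode: write() can block
    track.play();

    byte[] chunk = new byte[minBuf];
    while (fillNextChunk(chunk)) {            // hypothetical PCM source
        track.write(chunk, 0, chunk.length);  // blocks until there is room
    }
    track.stop();
    track.release();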