Ok, I first have to say that I'm biased, because I like to write music
apps (music generation apps, for example). However, this affects games
as well.

It's somewhat disappointing that OpenGL has made it into the NDK now,
but there aren't any NDK methods for faster access to sound. Over the
weekend I did some more testing: if you make a "drum" app, for
example, and play the sound using either SoundPool or AudioTrack, you
get about a 100 ms buffer delay, which is clearly audible as a gap.

I made a test app where, when you touch the screen, it plays a short
sound (like a drum hit) on the DOWN event. It clearly was not
real-time responsive, or even close - you could hear the delay.
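The test was essentially this kind of handler - a minimal sketch, not
my exact code; the class name and the R.raw.drum resource are
placeholders, and it assumes a short sample bundled in res/raw:

```java
import android.app.Activity;
import android.media.AudioManager;
import android.media.SoundPool;
import android.os.Bundle;
import android.view.MotionEvent;

// Minimal drum-trigger test: play a preloaded SoundPool sample on ACTION_DOWN.
public class DrumTestActivity extends Activity {
    private SoundPool pool;
    private int drumId;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // 4 simultaneous streams, music stream, default quality.
        pool = new SoundPool(4, AudioManager.STREAM_MUSIC, 0);
        drumId = pool.load(this, R.raw.drum, 1); // R.raw.drum: a short sample (placeholder name)
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            // play() returns immediately, but the sound is audibly late (~100 ms).
            pool.play(drumId, 1.0f, 1.0f, 1, 0, 1.0f);
            return true;
        }
        return super.onTouchEvent(event);
    }
}
```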

I'd be interested to know whether anyone who has made a game has had
trouble with sounds. It seems like they aren't going to trigger
anywhere near real time if you use SoundPool, for example. Am I
completely wrong? Your game is going to perform an action and play a
sound, but the sound will have latency, which I would think would
throw off the game a bit.

I tested to see if it was perhaps just the touch event itself that was
slow - I added code to change the background color of the app on touch
down, and change it back on touch up. The color changed almost
instantaneously. So it's not the event handling, but rather the sound
objects, that have the delay issue.

I found that if I used AudioTrack (I tried it in streaming mode so I
had more control over buffer size), it wouldn't let me create the
object unless the buffer was at least 2048 samples (for the format I
was using, which I think was 22050 Hz mono 16-bit - that works out to
almost 100 ms of buffering before the track begins to play).
Strangely, SoundPool must be using about the same amount of buffering,
because when it was loaded with the same sound the delay was similar.
I also tried AudioTrack in static mode, and it still had the same
delay as SoundPool or as AudioTrack in stream mode.
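The arithmetic behind that "almost 100 ms" figure is just sample count
divided by sample rate (plain Java, numbers from my test above):

```java
// Back-of-the-envelope check: how much delay does a 2048-sample buffer
// impose at 22050 Hz mono? ("Samples" here means frames, not bytes;
// 16-bit mono is 2 bytes per frame, so 2048 samples = 4096 bytes.)
public class BufferLatency {
    static double latencyMs(int samples, int sampleRateHz) {
        return samples * 1000.0 / sampleRateHz;
    }

    public static void main(String[] args) {
        // 2048 samples at 22050 Hz ≈ 92.9 ms -- right around the ~100 ms I heard.
        System.out.printf("%.1f ms%n", latencyMs(2048, 22050));
    }
}
```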

To get really good audio response we need to get down to the 30 ms
delay range or so (obviously the faster the better). The iPhone does
this with its AudioUnit objects (which are, of course, native code
that you write in C).
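Turning that arithmetic around, a 30 ms budget implies a buffer far
smaller than the 2048-sample minimum AudioTrack enforced in my tests -
this is just the inverse calculation, not an actual API call:

```java
// How many samples fit in a given latency budget at a given sample rate?
public class TargetBuffer {
    static int samplesForLatency(double ms, int sampleRateHz) {
        return (int) Math.round(ms * sampleRateHz / 1000.0);
    }

    public static void main(String[] args) {
        // 662 samples at 22050 Hz, 1323 at 44100 Hz -- both well under 2048.
        System.out.println(samplesForLatency(30, 22050));
        System.out.println(samplesForLatency(30, 44100));
    }
}
```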

I just hope that by the time we get to Flan we'll have NDK audio
access, and hopefully more direct audio streams for some real
processing power.

Hasn't this affected games much? I would think latency this high would
still be a problem in games even now. Even though the docs say
SoundPool and static AudioTrack will have the lowest latency possible,
it still seems high to me.


-niko
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google
Groups "Android Developers" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/android-developers?hl=en
-~----------~----~----~----~------~----~------~--~---
