On Tuesday, December 22, 2015 at 12:14:37 AM UTC-8, gjs wrote:
>
> Hi,
>
> Ok, here are two simple example (sine wave) tone generators: one uses the 
> 'streaming' method and the other the 'static' method. The difference is in 
> how the associated memory is allocated and managed.
>
> The tradeoffs - 
>
> 'streaming' is useful where sounds may be changed dynamically while they 
> are playing, or where they last more than a few seconds and allocating a 
> large static buffer is impractical. It suits cases where some latency 
> before the sound is heard is acceptable and accurate synchronization 
> (with an animation, for example) is not required.
>
> 'static' suits short-duration sounds that don't require much memory; it 
> has lower latency and is somewhat easier to synchronize.
>
> That said, 'streaming' is usually adequate for ordinary purposes where 
> timing accuracy better than about 100 ms is not needed, such as a 
> countdown before a camera shutter fires.
>

For what I'm doing, it sounds like static makes more sense.  Is the 
AudioTrack itself reusable?  Can I simply construct it, set its 
properties, and then play it multiple times?
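
For reference: with MODE_STATIC the answer appears to be yes. The track keeps its buffer after stop(), and AudioTrack.reloadStaticData() rewinds the playback head so play() can be called again. A minimal sketch follows; the buffer math is plain Java, the Android-specific calls are left as comments since they need a device, and the class name is invented for illustration:

```java
// Sketch: build a tone buffer once, then reuse one MODE_STATIC
// AudioTrack for repeated playback by rewinding between plays.
public class StaticToneReuse {

    // Generate a 16-bit PCM sine tone buffer (plain Java, testable anywhere).
    static short[] buildTone(int frequency, int sampleRate, int durationMs, int amplitude) {
        int n = (sampleRate / 1000) * durationMs;
        short[] samples = new short[n];
        double step = 2.0 * Math.PI * frequency / sampleRate;
        double phase = 0.0;
        for (int i = 0; i < n; i++) {
            samples[i] = (short) (amplitude * Math.sin(phase));
            phase += step;
            if (phase >= 2.0 * Math.PI) phase -= 2.0 * Math.PI; // keep precision
        }
        return samples;
    }

    public static void main(String[] args) {
        short[] tone = buildTone(440, 8000, 100, 32000);
        System.out.println("samples: " + tone.length);
        // On Android (calls commented out here, not runnable on a desktop JVM):
        //   AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 8000,
        //           AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        //           tone.length * 2, AudioTrack.MODE_STATIC);
        //   track.write(tone, 0, tone.length);   // load the buffer once
        //   track.play();                        // first playback
        //   track.stop();
        //   track.reloadStaticData();            // rewind to sample 0
        //   track.play();                        // play the same buffer again
        //   track.release();                     // when finally done
    }
}
```

Note that reloadStaticData() only works in MODE_STATIC; a MODE_STREAM track would instead need fresh write() calls for each playback.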


> These are just Java examples; high-performance, lower-latency sound would 
> probably use C/C++ code / libraries, e.g. as discussed here: 
> http://superpowered.com/androidaudiopathlatency
>
> If you hear a 'clicking' sound after each tone is played, try adding code 
> to ramp the volume (or amplitude) down dynamically before stopping the 
> AudioTrack...
>
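
An alternative to ramping the track volume is to bake a short linear fade into the tail of the sample buffer itself, so the waveform reaches zero amplitude before stop() is called. A sketch in plain Java (the 10 ms fade length is an arbitrary choice and the class name is invented for illustration):

```java
// Apply a linear fade-out to the last fadeMs milliseconds of a PCM
// buffer so the waveform reaches zero before AudioTrack.stop(),
// avoiding the audible click from a truncated cycle.
public class FadeOut {

    static void fadeTail(short[] samples, int sampleRate, int fadeMs) {
        int fadeSamples = Math.min(samples.length, (sampleRate / 1000) * fadeMs);
        int start = samples.length - fadeSamples;
        for (int i = 0; i < fadeSamples; i++) {
            // gain ramps linearly from just under 1.0 down to exactly 0.0
            double gain = 1.0 - (double) (i + 1) / fadeSamples;
            samples[start + i] = (short) (samples[start + i] * gain);
        }
    }

    public static void main(String[] args) {
        short[] buf = new short[800];            // 100 ms at 8000 Hz
        java.util.Arrays.fill(buf, (short) 20000);
        fadeTail(buf, 8000, 10);                 // fade the last 10 ms
        System.out.println("last sample: " + buf[buf.length - 1]);
    }
}
```

Calling fadeTail(samples, sampleRate, 10) before write() should remove the click without any volume calls at stop time. (For the streaming case you would fade only the final chunk.)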
> Regards
>
>
> private void playToneStreaming(final int frequency, final long duration) // duration in milliseconds
>
> {
>     new Thread(new Runnable() {
>         @Override
>         public void run() {
>             try {
>
>                 final int sampleRate = 8000;
>
>                 final int amplitude = 32000;
>
>                 final double twoPI = Math.PI * 2.0;
>
>                 int minBufferSize = AudioTrack.getMinBufferSize(sampleRate,
>                         AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
>
>                 final AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
>                         sampleRate, AudioFormat.CHANNEL_OUT_MONO,
>                         AudioFormat.ENCODING_PCM_16BIT, minBufferSize,
>                         AudioTrack.MODE_STREAM);
>
>                 int sampleSize = minBufferSize / 2;
>
>                 short[] samples = new short[sampleSize];
>
>                 double phase = 0.0;
>
>                 audioTrack.setStereoVolume(1, 1);
>
>                 audioTrack.play();
>
>                 long end = System.currentTimeMillis() + duration;
>
>                 while (System.currentTimeMillis() < end)
>                 {
>                     for (int i = 0; i < sampleSize; i++)
>                     {
>                         phase += twoPI * frequency / sampleRate;
>
>                         if (phase > twoPI) phase -= twoPI; // wrap to avoid precision loss on long tones
>
>                         samples[i] = (short) (amplitude * Math.sin(phase));
>                     }
>
>                     audioTrack.write(samples, 0, sampleSize);
>                 }
>
>                 audioTrack.setStereoVolume(0, 0);
>
>                 audioTrack.stop();
>
>                 audioTrack.release();
>
>             } catch (Exception e) {
>                 e.printStackTrace();
>             }
>         }
>
>     }).start();
> }
>
> private void playToneStatic(final int frequency, final long duration) // duration in milliseconds
> {
>     new Thread(new Runnable()
>     {
>         @Override
>         public void run()
>         {
>             try {
>
>                 final int sampleRate = 8000;
>
>                 final int amplitude = 32000;
>
>                 final double twoPI = Math.PI * 2.0;
>
>                 int sampleSize = (sampleRate / 1000) * (int) duration;
>
>                 short[] samples = new short[sampleSize];
>
>                 double phase = 0.0;
>
>                 for (int i = 0; i < sampleSize; i++)
>                 {
>                     phase += twoPI * frequency / sampleRate;
>
>                     samples[i] = (short) (amplitude * Math.sin(phase));
>                 }
>
>                 final AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
>                         sampleRate, AudioFormat.CHANNEL_OUT_MONO,
>                         AudioFormat.ENCODING_PCM_16BIT, sampleSize * 2,
>                         AudioTrack.MODE_STATIC);
>
>                 audioTrack.write(samples, 0, sampleSize);
>
>                 audioTrack.setStereoVolume(1, 1);
>
>                 audioTrack.play();
>
>                 try {
>                     Thread.sleep(duration);
>                 } catch (Exception ignore) {
>                 }
>
>                 audioTrack.setStereoVolume(0, 0);
>
>                 audioTrack.stop();
>
>                 audioTrack.release();
>
>             } catch (Exception e) {
>                 e.printStackTrace();
>             }
>         }
>
>     }).start();
> }
>
> private void testTones()
> {
>     new Thread(new Runnable()
>     {
>         @Override
>         public void run ()
>         {
>             long sleep = 500;
>
>             for ( int frequency = 3000; frequency >= 300; frequency -= 300 )
>             {
>                 playToneStreaming(frequency, sleep);
>
>                 try {
>                     Thread.sleep(sleep);
>                 } catch (Exception ignore) {
>                 }
>
>                 if ( frequency < 1000 )
>                 {
>                     sleep = 250;
>                 }
>             }
>
>             sleep = 500;
>
>             for ( int frequency = 3000; frequency >= 300; frequency -= 300 )
>             {
>                 playToneStatic(frequency, sleep);
>
>                 try {
>                     Thread.sleep(sleep);
>                 } catch (Exception ignore) {
>                 }
>
>                 if ( frequency < 1000 )
>                 {
>                     sleep = 250;
>                 }
>             }
>         }
>
>     }).start();
> }
>
>
> On Tuesday, December 22, 2015 at 5:25:07 AM UTC+11, David Karr wrote:
>>
>> On Sunday, December 20, 2015 at 11:27:14 PM UTC-8, gjs wrote:
>>>
>>> Hi,
>>>
>>> Here's some examples of using AudioTrack 
>>> http://www.programcreek.com/java-api-examples/index.php?api=android.media.AudioTrack
>>>
>>> Just be sure to run in a separate thread.
>>>
>>> Regards
>>>
>>
>> Note that I did already say that I'd found multiple ways to do this; what 
>> I was looking for was advice on the best (or at least a better) way. You 
>> pointed me to a selection of choices with no information on the tradeoffs.
>>
>> In any case, I tried the first one (creating an AudioTrack and then 
>> playing it), and wrapping it with a Runnable and a started Thread, and I 
>> heard no sound from my device when the code executed.
>>
>> If it matters, here's the method I ended up with:
>>
>> private void playTone() {
>>     new Thread(new Runnable() {
>>         @Override
>>         public void run() {
>>             int minSize = AudioTrack.getMinBufferSize(8000,
>>                     AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
>>             AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 8000,
>>                     AudioFormat.CHANNEL_CONFIGURATION_MONO,
>>                     AudioFormat.ENCODING_PCM_16BIT, minSize,
>>                     AudioTrack.MODE_STREAM);
>>             audioTrack.play();
>>         }
>>     }).start();
>> }
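
The likely reason this produces no sound: in MODE_STREAM, play() only outputs data that has been supplied via write(), and this method never writes any samples. The track needs to be fed in a loop, as in the streaming example earlier in the thread. The PCM part of such a loop in plain Java (class and method names invented for illustration; the Android write() call is commented out since it needs a device):

```java
// Why the MODE_STREAM snippet above is silent: play() starts the
// output, but with no write() calls there is nothing to output.
// A generator loop like this must keep feeding the track.
public class StreamFeed {

    // Fill buf with the next chunk of a sine tone; returns the updated phase
    // so the waveform continues seamlessly across chunks.
    static double fillSine(short[] buf, double phase, int frequency,
                           int sampleRate, int amplitude) {
        double step = 2.0 * Math.PI * frequency / sampleRate;
        for (int i = 0; i < buf.length; i++) {
            buf[i] = (short) (amplitude * Math.sin(phase));
            phase += step;
            if (phase >= 2.0 * Math.PI) phase -= 2.0 * Math.PI; // keep precision
        }
        return phase;
    }

    public static void main(String[] args) {
        short[] buf = new short[400];
        double phase = 0.0;
        for (int chunk = 0; chunk < 4; chunk++) {
            phase = fillSine(buf, phase, 440, 8000, 32000);
            // On Android: audioTrack.write(buf, 0, buf.length);
        }
        System.out.println("phase after 4 chunks: " + phase);
    }
}
```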
>>
>>
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Android Developers" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/android-developers/5a1556d3-9f22-460d-9ab6-f2380c1844c9%40googlegroups.com.