I have accomplished something by using ByteArrayOutputStream - I split each recorded short sample into 2 bytes. Again, not as efficient as it could be, but significantly faster than an ArrayList...
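For anyone else trying this, here is a minimal sketch of what I mean by splitting shorts into bytes (the class and method names are just for illustration; the low-byte-first order matches the little-endian layout Android uses for ENCODING_PCM_16BIT):

```java
import java.io.ByteArrayOutputStream;

public class PcmBuffer {
    // Append 16-bit PCM samples to a growable byte stream,
    // writing the low byte first (little-endian).
    static void writeSamples(ByteArrayOutputStream out, short[] samples, int count) {
        for (int i = 0; i < count; i++) {
            out.write(samples[i] & 0xFF);         // low byte
            out.write((samples[i] >> 8) & 0xFF);  // high byte
        }
    }

    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        short[] chunk = { 0x1234, (short) 0xABCD };
        writeSamples(out, chunk, chunk.length);
        byte[] pcm = out.toByteArray();
        System.out.println(pcm.length);                   // 4 bytes for 2 samples
        System.out.printf("%02X %02X%n", pcm[0], pcm[1]); // 34 12
    }
}
```

Because ByteArrayOutputStream grows on demand, you never have to know the recording length up front - call toByteArray() once after the stop button is pressed.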
Regarding your code:

System.arraycopy(mReceivedAudioBufferSrt, 0, mRecordingDesc.mRecordingSrt, mRecordingEndPos, mBufferFrames);

What are the declarations of:
a) mReceivedAudioBufferSrt
b) mRecordingDesc.mRecordingSrt

I guess that a) is either a byte or a short array (depending on your PCM encoding). But most importantly, what is b)? Is it an array? If it is an array, how do you set its size? (It must have a fixed size somehow...) At least in my app the recording is controlled by start/stop buttons, so I can't predict how long it will be... Or is it another type of container/stream?

I am still not able to run ENCODING_PCM_8BIT - the getMinBufferSize() function returns -2 when I pass ENCODING_PCM_8BIT as an argument. I tried setting a fixed value of e.g. 200000, but still no progress - it seems absurd that 16 bits works and 8 bits does not. Every phone uses 8-bit encoding for transferring human voice...

Regards and thanks for the contribution!!!

On 4 Jan, 19:35, Keith Wiley <[email protected]> wrote:
> On Jan 4, 12:41 am, Serdel <[email protected]> wrote:
>
> > How do you predict the size of the destination array? I mean I think
> > you also copy the samples from audiostream into a smaller temp buffer
> > and then into your dest. larger one - if you use an array for the
> > dest. one how do you set size of that?
>
> Hmmm. It's been a very long time since I looked at this code.
> I kick-start it with something like this:
>
> int minBufferBytes = AudioRecord.getMinBufferSize(sampleRate,
>     channelConfig, audioFormat);
> mBufferBytes = /*Some value >= minBufferBytes*/;
> mBufferFrames = mBufferBytes / (numChannels * sampleBytes);
> mReceivedAudioBufferSrt = new short[mBufferFrames];
> mRecordingEndPos = 0;
>
> mAudioRecord = new AudioRecord(source, sampleRate, channelConfig,
>     audioFormat, numBufferBytes);
> mAudioRecord.startRecording();
> new Thread(new Runnable() {
>     public void run() {
>         processRecording();
>     } // Runnable.run()
> }).start(); // New Runnable inside New Thread
>
> ...where processRecording(), obviously now running on its own thread
> parallel to the ongoing recording, does something like this:
>
> while (mAudioRecord.getRecordingState() ==
>         AudioRecord.RECORDSTATE_RECORDING) {
>     int numDataRead = mAudioRecord.read(mReceivedAudioBufferSrt, 0,
>         mBufferFrames);
>     System.arraycopy(mReceivedAudioBufferSrt, 0,
>         mRecordingDesc.mRecordingSrt, mRecordingEndPos, mBufferFrames);
>     mRecordingEndPos += mBufferFrames;
>     if (mRecordingEndPos == mRecordingNumFrames)
>         mRecordingEndPos = 0;
> }
>
> That's heavily pseudoized from my considerably more complicated
> overall setup, but it's the basic idea. I have to be very careful about
> the size of the data chunks I grab because I need to perform realtime
> power-of-2 FFTs on them in my app. Plus I have switches all over the
> place to handle a variety of sampling rates, sample sizes (1 or 2
> bytes), number of channels, etc. The complexity grows very quickly
> once you start worrying about all of that stuff.
>
> Cheers!

--
You received this message because you are subscribed to the Google Groups "Android Developers" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [email protected]
For more options, visit this group at
http://groups.google.com/group/android-developers?hl=en
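One caveat about the quoted loop: it copies mBufferFrames frames even when read() returned fewer, and the wrap check (mRecordingEndPos == mRecordingNumFrames) only fires when the capacity is an exact multiple of the chunk size; otherwise arraycopy will overrun the destination. A wrap-safe sketch of that circular write (my own helper class, not from Keith's code) looks like this:

```java
import java.util.Arrays;

public class RingBuffer {
    final short[] data;  // fixed-size destination, like mRecordingSrt
    int end = 0;         // next write position, like mRecordingEndPos

    RingBuffer(int frames) {
        data = new short[frames];
    }

    // Copy 'count' frames from src into the ring, splitting the copy
    // in two when the chunk straddles the end of the array.
    void write(short[] src, int count) {
        int first = Math.min(count, data.length - end);
        System.arraycopy(src, 0, data, end, first);          // up to the edge
        System.arraycopy(src, first, data, 0, count - first); // wrapped remainder
        end = (end + count) % data.length;
    }

    public static void main(String[] args) {
        RingBuffer rb = new RingBuffer(4);
        rb.write(new short[]{1, 2, 3}, 3);
        rb.write(new short[]{4, 5, 6}, 3);  // straddles the end
        System.out.println(Arrays.toString(rb.data)); // [5, 6, 3, 4]
    }
}
```

In the recording thread you would call write(mReceivedAudioBufferSrt, numDataRead) so that short reads are handled correctly too.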
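On the 8-bit problem: -2 from getMinBufferSize() is AudioRecord.ERROR_BAD_VALUE, which means the device simply does not support that parameter combination - many Android devices only expose 16-bit capture. A common workaround is to record with ENCODING_PCM_16BIT and downconvert yourself; note that 8-bit PCM (both in Android and in WAV files) is unsigned, offset by 128. A sketch of that conversion (helper name is my own invention):

```java
public class PcmDownconvert {
    // Convert signed 16-bit samples to unsigned 8-bit PCM by keeping
    // the high byte and XOR-ing the sign bit to shift into 0..255.
    static byte[] to8BitUnsigned(short[] samples) {
        byte[] out = new byte[samples.length];
        for (int i = 0; i < samples.length; i++) {
            out[i] = (byte) (((samples[i] >> 8) & 0xFF) ^ 0x80);
        }
        return out;
    }

    public static void main(String[] args) {
        short[] s = { 0, Short.MAX_VALUE, Short.MIN_VALUE };
        byte[] b = to8BitUnsigned(s);
        // silence -> 128 (midpoint), +32767 -> 255, -32768 -> 0
        System.out.println((b[0] & 0xFF) + " " + (b[1] & 0xFF) + " " + (b[2] & 0xFF)); // 128 255 0
    }
}
```

That halves your storage, which is usually the whole reason for wanting 8-bit voice recording in the first place.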

