Hi all,

Back in August, I asked:

> Does iOS support recording audio from the RemoteIO Audio Unit at a
> fixed sample rate chosen by the app, with automatic sample rate
> conversion when the app's sample rate does not match that of the
> hardware, without doing simultaneous playback?

I never got a reply on the mailing list, but for the record, an Apple
representative I managed to corner at last year's Audio Developer
Conference told me the answer is "yes".

> And if so, could someone point me at some documentation or sample
> code showing how to do it correctly?

Clearly, the answer to that question is "no".  I did find a
recording-only example for macOS in Adamson & Avila's book Learning
Core Audio, but that code is essentially the same as my iOS code and
presumably suffers from the same bug in the sample rate conversion
case.

> My experience is that simultaneous recording+playback using a render
> callback as in aurioTouch works fine, but when I try to record
> without simultaneous playback by using an input callback
> (kAudioOutputUnitProperty_SetInputCallback), the recorded audio is
> garbled when sample rate conversion occurs.  I just can't figure out
> whether this is a bug in my app, a bug in iOS, or simply something
> that was never guaranteed to work.

I finally figured out the cause of the problem.  When using an input
callback rather than a render callback, you need to provide your own
AudioBufferList, and I had duly allocated one on the heap, with
bufferList.mBuffers[0].mData pointing at a buffer sized based on the
maxFramesPerSlice property (in this case, 4096 bytes) and
mDataByteSize set to the allocated size.  This AudioBufferList was
reused in each call to AudioUnitRender(), like Adamson & Avila do in
their example.  I was also, of course, taking into account the fact
that the inNumberFrames argument to the input callback may vary from
call to call as a result of sample rate conversion.

This worked fine as long as there was no sample rate conversion
involved.  But what I hadn't realized was that AudioUnitRender()
*modifies* the AudioBuffer mDataByteSize field to reflect the amount
of data actually rendered.  This is harmless as long as the audio
block size stays constant, but if one call to AudioUnitRender()
produces, say, 740 bytes, and a subsequent call wants to produce 744
bytes, the latter call will fail because the previous call has left
mDataByteSize showing a size of only 740 bytes, even though mData
still points at the 4096-byte buffer.

To make it work correctly, you need to either reinitialize
mDataByteSize to reflect the full allocated size before each call to
AudioUnitRender(), or alternatively, simply set mData to NULL and let
the audio unit allocate a buffer for you.
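For anyone else who runs into this, here is a minimal sketch of the
kind of setup and input callback I'm describing.  It is not my actual
code: the names (RecorderState, gState, SetUpInputBuffer) and the
assumption of an interleaved 16-bit mono stream format are just for
illustration.  The essential part is resetting mDataByteSize at the
top of the callback:

#include <AudioUnit/AudioUnit.h>
#include <stdlib.h>

typedef struct {
    AudioUnit       remoteIOUnit;   /* RemoteIO unit with input enabled */
    AudioBufferList bufferList;     /* one mono buffer, reused every callback */
    UInt32          allocatedBytes; /* full size of bufferList.mBuffers[0].mData */
} RecorderState;

static RecorderState gState;

/* Called once at setup, after reading kAudioUnitProperty_MaximumFramesPerSlice. */
static void SetUpInputBuffer(UInt32 maxFramesPerSlice, UInt32 bytesPerFrame)
{
    gState.allocatedBytes = maxFramesPerSlice * bytesPerFrame;
    gState.bufferList.mNumberBuffers = 1;
    gState.bufferList.mBuffers[0].mNumberChannels = 1;
    gState.bufferList.mBuffers[0].mDataByteSize = gState.allocatedBytes;
    gState.bufferList.mBuffers[0].mData = malloc(gState.allocatedBytes);
    /* Alternative: set mData to NULL and let the audio unit supply its
       own buffer on each AudioUnitRender() call. */
}

/* Registered via kAudioOutputUnitProperty_SetInputCallback; ioData is NULL
   here, so we call AudioUnitRender() ourselves to pull the captured audio. */
static OSStatus InputCallback(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    RecorderState *state = (RecorderState *)inRefCon;

    /* The crucial step: AudioUnitRender() overwrites mDataByteSize with
       the number of bytes it actually produced, so restore the full
       allocated size before every call.  Without this, a call that needs
       more bytes than the previous call produced will fail. */
    state->bufferList.mBuffers[0].mDataByteSize = state->allocatedBytes;

    OSStatus err = AudioUnitRender(state->remoteIOUnit,
                                   ioActionFlags,
                                   inTimeStamp,
                                   1,                /* input bus of RemoteIO */
                                   inNumberFrames,
                                   &state->bufferList);
    if (err != noErr)
        return err;

    /* state->bufferList.mBuffers[0].mData now holds
       state->bufferList.mBuffers[0].mDataByteSize bytes of recorded audio. */
    return noErr;
}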
-- 
Andreas Gustafsson, [email protected]
