I’ve been seeing some odd behavior in my audio engine since the iPhone 6s and
6s Plus came out, and I was hoping someone could help clear up a couple of
points of confusion. I’m fairly sure it’s because these new models have a
microphone with a fixed sample rate of 48kHz, especially since plugging in the
Apple EarPods mic fixes the problem, but I’m still running into the following
two issues:

First, an unintentional sample rate conversion seems to be happening somewhere
before my code sees the audio. I have an AVAudioSession that maintains a 48kHz
hardwareSampleRate, which my engine happily accepts, and yet the
CMSampleBuffers I retrieve through my
AVCaptureAudioDataOutputSampleBufferDelegate’s captureOutput function are
coming in at 44.1kHz with 940 or 941 samples per buffer, as if a sample rate
converter were being inserted before I ever receive the buffer.
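
(For reference, here’s roughly how I’m inspecting the buffers in the delegate
callback; it’s only a sketch. The class name AudioTapDelegate is just for
illustration, and it assumes an AVCaptureSession is already wired up with an
AVCaptureAudioDataOutput whose sample buffer delegate is this object.)

import AVFoundation
import CoreMedia

final class AudioTapDelegate: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Pull the AudioStreamBasicDescription out of the buffer's format
        // description to see what rate AVCapture is actually delivering.
        guard let formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer),
              let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(formatDesc)?.pointee
        else { return }

        let frameCount = CMSampleBufferGetNumSamples(sampleBuffer)
        let sessionRate = AVAudioSession.sharedInstance().sampleRate

        // This is where the mismatch shows up on the 6s: sessionRate prints
        // 48000 while asbd.mSampleRate prints 44100 with 940/941 frames.
        print("session: \(sessionRate) Hz, buffer: \(asbd.mSampleRate) Hz, frames: \(frameCount)")
    }
}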

Second, what determines the number of frames that an Audio Unit’s render
callback allocates for the incoming buffer, i.e. the inNumberFrames parameter
of an AURenderCallback? I am getting very inconsistent values and cannot
figure out how they are chosen. For example, when I start recording the
callback hands me a buffer of 941 frames, then immediately jumps to 1010,
1012, or even 1024 frames. This is playing havoc with my render cycle.
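
(In case it matters, here’s the kind of session setup I’m using, again only a
rough sketch. My understanding is that inNumberFrames roughly tracks
ioBufferDuration multiplied by the hardware sample rate, but the preferred
duration is only a hint, and the actual count can vary from callback to
callback, so I realize the render code ultimately has to handle whatever size
arrives.)

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    // Ask for roughly 1024 frames at 48 kHz (1024 / 48000 ≈ 21.3 ms).
    try session.setPreferredIOBufferDuration(1024.0 / 48_000.0)
    try session.setActive(true)
} catch {
    print("session setup failed: \(error)")
}

// What the system actually granted; expect something close to, but not
// necessarily exactly, the preferred duration.
let grantedFrames = session.ioBufferDuration * session.sampleRate
print("expect roughly \(Int(grantedFrames.rounded())) frames per render callback")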