Re: How to update AVAudioEngine.inputNode to new hardware format?

2021-05-11 Thread Arshia Cont via Coreaudio-api
Hi Sven, It depends on the architecture of your engine. Something is probably evading the propagation of the stream format there. In my experience on a similar issue, and after several exchanges with Apple DTS (input from Apple engineers on this would be very welcome): Problems might arise if
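For context, a common workaround in this situation (a sketch under assumptions, not necessarily what the thread settled on; `engine` and `mixer` are hypothetical names for nodes in your own graph) is to rebuild the input connections when the route changes, so the new hardware format propagates downstream:

```swift
import AVFoundation

// Sketch: on an AVAudioSession.routeChangeNotification, re-query the
// input node's hardware format and reconnect so downstream nodes
// renegotiate. `engine` and `mixer` are assumed to exist in your code.
func handleRouteChange(engine: AVAudioEngine, mixer: AVAudioMixerNode) {
    engine.stop()
    // The input node's format reflects the new hardware only after the
    // route change has completed.
    let newFormat = engine.inputNode.inputFormat(forBus: 0)
    engine.disconnectNodeOutput(engine.inputNode)
    // Reconnect with the fresh format instead of a cached one.
    engine.connect(engine.inputNode, to: mixer, format: newFormat)
    try? engine.start()
}
```

The key point is to avoid caching the input format: any connection made with a stale `AVAudioFormat` can block propagation of the new stream format through the graph.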

Re: Recording in stereo using built-in iPhone mics

2020-05-03 Thread Arshia Cont via Coreaudio-api
Fred, To my knowledge, you cannot explicitly do that with AVAudioSession. Even setPreferredDataSource’s location setup can be misleading: everything depends on the polar pattern you choose. So an omni polar pattern on “Front” will result in using two microphones to achieve it. There is
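To illustrate the point that the polar pattern, not the data-source location, determines which microphones are used: on iOS 14+ the built-in mic exposes a `.stereo` polar pattern that can be requested explicitly. A minimal sketch, with every step guarded since availability depends on the device:

```swift
import AVFoundation

// Sketch (iOS 14+): request the built-in mic's front data source with
// a stereo polar pattern. All steps are device-dependent, so each is
// checked before use.
func configureStereoCapture() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    guard let builtIn = session.availableInputs?
            .first(where: { $0.portType == .builtInMic }) else { return }
    try session.setPreferredInput(builtIn)
    guard let front = builtIn.dataSources?
            .first(where: { $0.orientation == .front }),
          front.supportedPolarPatterns?.contains(.stereo) == true
    else { return }  // stereo capture not available on this hardware
    try front.setPreferredPolarPattern(.stereo)
    try builtIn.setPreferredDataSource(front)
    try session.setActive(true)
}
```

On hardware that predates this API, the session will simply not report `.stereo` among the supported polar patterns, which matches the limitation described above.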

Re: CoreAudio: Calculate total latency between input and output with kAudioUnitSubType_VoiceProcessingIO

2020-01-22 Thread Arshia Cont via Coreaudio-api
We also use a simple measurement mechanism which is fine for a user-facing situation. However, what Brian described remains a problem: doing such measurements when the user is using a headset doesn’t make sense! This is especially problematic for Bluetooth devices: I haven’t figured out a way to

Re: CoreAudio: Calculate total latency between input and output with kAudioUnitSubType_VoiceProcessingIO

2020-01-21 Thread Arshia Cont via Coreaudio-api
> in the second case, and it would give a different error in the two cases. I still believe 2 is the correct factor to use! >> PS: Nice Apps! ;) > Thanks! :) > /Jonatan >> On 20 Jan 2020, at 19:24, Jonatan Liljedahl wrote:

Re: CoreAudio: Calculate total latency between input and output with kAudioUnitSubType_VoiceProcessingIO

2020-01-20 Thread Arshia Cont via Coreaudio-api
http://www.antescofo.com/ PS: Nice Apps! ;) > On 20 Jan 2020, at 19:24, Jonatan Liljedahl wrote: >> On Mon, Jan 20, 2020 at 6:36 PM Arshia Cont via Coreaudio-api wrote: >>> You get the following from AVAudioSession: inputLatency, outputLatency, ioBuffe
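The round-trip estimate being debated in this thread sums the AVAudioSession properties quoted above. A sketch, with the contested buffer-duration factor (once vs. twice) left as a parameter rather than settled, since that is exactly what the thread disagrees on:

```swift
import AVFoundation

// Sketch: estimated input-to-output latency from AVAudioSession.
// `bufferFactor` reflects the thread's open question of whether the
// I/O buffer duration should be counted once or twice; Jonatan argues
// for 2, so that is the default here.
func estimatedRoundTripLatency(bufferFactor: Double = 2.0) -> TimeInterval {
    let session = AVAudioSession.sharedInstance()
    return session.inputLatency
         + session.outputLatency
         + bufferFactor * session.ioBufferDuration
}
```

Note these are the session's reported values; as discussed elsewhere in the thread, they can diverge from measured latency, particularly on Bluetooth routes.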

Re: CoreAudio: Calculate total latency between input and output with kAudioUnitSubType_VoiceProcessingIO

2020-01-20 Thread Arshia Cont via Coreaudio-api
Hi Eric, I did some reverse engineering on this issue about a year ago and this is what I found: Note that it only applies to RemoteIO in the context of AudioUnits / AudioGraphs. I believe that AVAudioEngine uses the same graph under the hood but haven’t replicated measurements there yet. Any

Re: AVAudioEngine input and output devices

2019-09-28 Thread Arshia Cont via Coreaudio-api
Thank you Dominic for sharing this. Is this general to both OSX and iOS, or an OSX-only issue? On iOS we can manage audio routing using AVAudioSession. I’m not an OSX guy; that’s why I’m asking, before I move everything to AVAudioEngine! > On 28 Sep 2019, at 22:53, Dominic Feira via

Re: CoreAudio vs. AVAudioEngine

2019-08-02 Thread Arshia Cont via Coreaudio-api
You can use both! In both cases, you should avoid everything that conflicts with real-time audio. More caution is needed with Swift (such as no Swift in real-time blocks). AVAudioEngine is of course much more Swift-friendly. > On 2 Aug 2019, at 11:50, Beinan Li wrote: > > Thanks Arshia!

Re: CoreAudio vs. AVAudioEngine

2019-08-02 Thread Arshia Cont via Coreaudio-api
Beinan, This is my understanding of the situation after talking to some of the CoreAudio people at this year’s WWDC and following the recent evolution. Of course it all depends on what you’re currently doing with CoreAudio, so my input is biased by my needs, which are low-latency real-time audio I/O.