Hi Sven,
It depends on the architecture of your engine. Something in there is probably
preventing the stream format from propagating.
In my experience with a similar issue, and after several exchanges with Apple
DTS (input from Apple engineers on this would be very welcome): problems might
arise if
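One way to localize where a format stops propagating is to pin an explicit format at each connection instead of passing nil, so no node can silently negotiate a different one. A minimal sketch (the sample rate and channel count are assumptions, not values from this thread):

```swift
import AVFoundation

// Sketch: pin an explicit stream format on a connection so that a
// mismatch surfaces at this point rather than somewhere downstream.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)

// 48 kHz stereo float; adjust to your session's actual sample rate.
let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2)

// Passing the format explicitly (instead of nil) forces this connection
// to use it, which makes propagation problems easier to pin down.
engine.connect(player, to: engine.mainMixerNode, format: format)
```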
Fred,
To my knowledge, you cannot do that explicitly with AVAudioSession. Even
setPreferredDataSource’s location setup can be misleading: everything depends
on the polar pattern you choose. So an omni polar pattern on “Front” will
result in using two microphones to achieve it. There is
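For reference, the data-source / polar-pattern selection described above looks roughly like this; the function name and error handling are mine, and the pattern is set on the data source, not on the session itself:

```swift
import AVFoundation

// Sketch: prefer the built-in mic's "Front" data source with a given
// polar pattern. Even with "Front" selected, an omni pattern may be
// synthesized from more than one physical microphone, as noted above.
func preferFrontMic(pattern: AVAudioSession.PolarPattern) throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord)
    try session.setActive(true)

    guard
        let builtIn = session.availableInputs?
            .first(where: { $0.portType == .builtInMic }),
        let front = builtIn.dataSources?
            .first(where: { $0.orientation == .front })
    else { return }

    if front.supportedPolarPatterns?.contains(pattern) == true {
        try front.setPreferredPolarPattern(pattern)
    }
    try builtIn.setPreferredDataSource(front)
    try session.setPreferredInput(builtIn)
}
```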
We also use a simple measurement mechanism, which is fine for a user-facing
situation.
However, what Brian described remains a problem: doing such measurements while
the user is on a headset doesn’t make sense!
This is especially problematic for Bluetooth devices: I haven’t figured out a
way to
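A sketch of one possible guard for that case: skipping the measurement whenever the current route includes headphones or a Bluetooth port (the function name is mine):

```swift
import AVFoundation

// Sketch: detect routes where an acoustic latency measurement makes
// no sense (headset or Bluetooth output), per the discussion above.
func routeUsesExternalDevice() -> Bool {
    let bluetoothPorts: Set<AVAudioSession.Port> =
        [.bluetoothA2DP, .bluetoothHFP, .bluetoothLE]
    return AVAudioSession.sharedInstance().currentRoute.outputs.contains {
        $0.portType == .headphones || bluetoothPorts.contains($0.portType)
    }
}
```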
in the second
> case, and it would give a different error in the two cases. I still
> believe 2 is the correct factor to use!
>
>> PS: Nice Apps! ;)
>
> Thanks! :)
>
> /Jonatan
>
>> On 20 Jan 2020, at 19:24, Jonatan Liljedahl wrote:
<http://www.antescofo.com/>
PS: Nice Apps! ;)
> On 20 Jan 2020, at 19:24, Jonatan Liljedahl wrote:
>
> On Mon, Jan 20, 2020 at 6:36 PM Arshia Cont via Coreaudio-api
> wrote:
>
>> You get the following from AVAudioSession:
>> inputLatency
>> outputLatency
>> ioBufferDuration
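These three AVAudioSession values are typically combined into a round-trip estimate. A minimal sketch, assuming the factor-of-2 reading of the buffer duration argued elsewhere in the thread (the helper name is mine):

```swift
import AVFoundation

// Pure helper so the arithmetic is separate from the session query.
// The factor of 2 on the buffer duration reflects one buffer each
// for capture and playback, as argued in this thread.
func roundTripEstimate(input: TimeInterval,
                       output: TimeInterval,
                       bufferDuration: TimeInterval) -> TimeInterval {
    input + output + 2 * bufferDuration
}

let session = AVAudioSession.sharedInstance()
let seconds = roundTripEstimate(input: session.inputLatency,
                                output: session.outputLatency,
                                bufferDuration: session.ioBufferDuration)
print("estimated round trip: \(seconds * 1000) ms")
```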
Hi Eric,
I did some reverse engineering on this issue about a year ago and this is what
I found:
Note that it only applies to RemoteIO in the context of AudioUnits /
AudioGraphs. I believe that AVAudioEngine uses the same graph under the hood
but haven’t replicated measurements there yet. Any
Thank you Dominic for sharing this.
Is this general to both OSX and iOS, or an OSX-only issue? On iOS we can manage
to route audio using AVAudioSession. I’m not an OSX guy, which is why I’m
asking, before I move everything to AVAudioEngine!
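For the iOS side, routing via AVAudioSession looks roughly like this; the category options and buffer duration are assumptions for a typical low-latency I/O app, not values from this thread:

```swift
import AVFoundation

// Sketch of iOS-side routing with AVAudioSession, as mentioned above.
func configureRouting() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            options: [.allowBluetooth, .defaultToSpeaker])
    try session.setPreferredIOBufferDuration(0.005) // ~5 ms, device permitting
    try session.setActive(true)
}
```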
> On 28 Sep 2019, at 22:53, Dominic Feira via
You can use both! In both cases, you should avoid everything that conflicts with
real-time audio. More caution is needed with Swift (such as no Swift in
real-time blocks). AVAudioEngine is of course much more Swift-friendly.
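A minimal AVAudioEngine setup for comparison; here the render path stays inside Apple’s nodes, so no Swift runs on the real-time thread (the function name is mine):

```swift
import AVFoundation

// Sketch: simple file playback through AVAudioEngine. The engine is
// returned so the caller can keep it alive for the duration of playback.
func startPlayback(of file: AVAudioFile) throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode,
                   format: file.processingFormat)
    try engine.start()
    player.scheduleFile(file, at: nil, completionHandler: nil)
    player.play()
    return engine
}
```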
> On 2 Aug 2019, at 11:50, Beinan Li wrote:
>
> Thanks Arshia!
Beinan,
This is my understanding of the situation from talking to some of the CoreAudio
people at this year’s WWDC and following the recent evolution. Of course, it
all depends on what you’re currently doing with CoreAudio, so my input is
biased toward my needs, which are low-latency real-time audio I/O.