in the VoiceProcessing mode but we would like to
avoid the overhead there and just configure Speaker route based on selected
input mic (to avoid feedback).
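Something along these lines might work (an untested sketch; `shouldOverrideToSpeaker` and `configureRoute` are names I made up, and limiting the override to the built-in mic is my assumption):

```swift
import AVFoundation

// Hypothetical helper: only force the loud speaker when capturing from the
// built-in mic; headset/Bluetooth inputs already avoid acoustic feedback.
// "MicrophoneBuiltIn" is the raw value of AVAudioSession.Port.builtInMic.
func shouldOverrideToSpeaker(inputPortType: String) -> Bool {
    return inputPortType == "MicrophoneBuiltIn"
}

func configureRoute() throws {
    let session = AVAudioSession.sharedInstance()
    // Plain playAndRecord, i.e. without the voice-processing overhead.
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    if let input = session.currentRoute.inputs.first,
       shouldOverrideToSpeaker(inputPortType: input.portType.rawValue) {
        // Route playback to the speaker instead of the receiver.
        try session.overrideOutputAudioPort(.speaker)
    }
    try session.setActive(true)
}
```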
Thanks in advance
Arshia Cont
Sent from my iPhone
___
Do not post admin requests to the list
would be an alternative for ExtAudioFileWriteAsync?
I have been looking at Audio Queue Services for months, but the
documentation and examples are so old that they keep me away from it… or should we
dive in?
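For what it's worth, one common alternative is to hand copies of the render buffers to a serial background queue and write them with AVAudioFile. A rough sketch (untested; the names are mine, and see the caveat after the code):

```swift
import AVFoundation

// A serial queue keeps disk I/O off the render thread, which is roughly
// what ExtAudioFileWriteAsync did internally.
let writeQueue = DispatchQueue(label: "audio.file.writer")

// Hypothetical sketch: `file` is an AVAudioFile opened forWriting in the
// same processing format as `buffer`; `buffer` must be a copy the render
// thread no longer touches.
func writeAsync(_ buffer: AVAudioPCMBuffer, to file: AVAudioFile) {
    writeQueue.async {
        do { try file.write(from: buffer) }
        catch { print("write failed: \(error)") } // surface this properly in real code
    }
}

// Small pure helper for sizing the copies: byte count of an interleaved
// Float32 buffer.
func bytesPerBuffer(frames: Int, channels: Int) -> Int {
    return frames * channels * MemoryLayout<Float32>.size
}
```

Note that dispatching from the real-time thread can itself allocate and lock; a lock-free ring buffer drained by the writer queue is safer, which matches the "doing your own async writes would be safer" remark elsewhere in this thread.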
Thanks in advance,
Arshia Cont
streams (stereo) at the same time
(i.e. two audio files).
Any hints?
> On 14 Feb 2018, at 19:01, Matt Ingalls <m...@8dio.com> wrote:
>
> Have you tried AVAudioFile?
>
>> On Feb 13, 2018, at 12:59 PM, Arshia Cont <arshiac...@antescofo.com> wrote:
>>
>> Hello
rty_IOBufferSizeBytes could prevent
> an overflow.
> but maybe doing your own async writes would be safer :). ExtAudioFile is a
> pretty old API..
>
> -m
>
>> On Feb 14, 2018, at 11:24 AM, Arshia Cont <arshiac...@antescofo.com> wrote:
>>
>> Matt,
>
), I stop
the AUGraph, update stream formats, make connections, initialize the AUGraph,
run AUGraphUpdate, and then restart the graph. I guess the ordering here is
also important.
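In code, that order might look roughly like this (a sketch only; node handles and the format handling are placeholders, and real code should check every OSStatus):

```swift
import AudioToolbox

// Sketch of the stop → formats → connections → initialize → update → start
// order described above.
func reconfigure(_ graph: AUGraph,
                 source: AUNode, dest: AUNode,
                 format: AudioStreamBasicDescription) {
    var fmt = format
    AUGraphStop(graph)
    // 1. Update stream formats on the unit(s) involved.
    var unit: AudioUnit?
    AUGraphNodeInfo(graph, source, nil, &unit)
    if let unit = unit {
        AudioUnitSetProperty(unit, kAudioUnitProperty_StreamFormat,
                             kAudioUnitScope_Output, 0, &fmt,
                             UInt32(MemoryLayout<AudioStreamBasicDescription>.size))
    }
    // 2. (Re)make connections.
    AUGraphConnectNodeInput(graph, source, 0, dest, 0)
    // 3. Initialize, push the pending changes, then restart.
    AUGraphInitialize(graph)
    var updated = DarwinBoolean(false)
    AUGraphUpdate(graph, &updated)
    AUGraphStart(graph)
}
```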
That said, we are both in the danger zone since AUGraph is slated for
deprecation. Any updates on this?!
Arshia
Benjamin and list,
I second Benjamin’s request. It would be great if someone from the CoreAudio
Team could respond to the question.
Two years ago, after basic tests I realised that AVAudioEngine was not ready
for Low Latency Audio analysis on iOS. So we used AUGraph. I have a feeling
that
s` was the
> successor of `RenderCallback`.
> For me it was not natural to mix old api like
> `kAudioUnitProperty_SetRenderCallback` in AVAudioEngine.
>
> So as Arshia said, I'm also looking for a way to use real-time processing
> with AVAudioEngine.
>
> On Wed.
its method
>> `installTapOnBus`:
>>
>> @param bufferSize the requested size of the incoming buffers in sample
>> frames. Supported range is [100, 400] ms.
>>
>> Maybe I'm wrong but I don't see any other way to have lower-latency audio
>> processing.
possible.
>
> Sorry I don’t know where all the mics are, I could only find four of them,
> three on the front, one on the back. My app only wants to use one mic.
>
> -Dean Reyburn
>
>> On Nov 9, 2018, at 11:36 AM, Arshia Cont <mailto:arshiac...@antescofo.com>
Hi Dean,
What AVAudioSession Category/Mode are you using? Beyond this, you should
probably tell us how you record (AVAudioEngine? AudioUnit?). My first suggestion
is to check the stream format of your audio chain against what the
system provides (sample rate, etc.).
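A quick sanity check along these lines (a sketch; the `ratesMatch` tolerance is my assumption) can catch the usual sample-rate mismatch:

```swift
import AVFoundation

// Pure helper: compare two sample rates with a small tolerance.
func ratesMatch(_ a: Double, _ b: Double, tolerance: Double = 0.5) -> Bool {
    return abs(a - b) < tolerance
}

// Sketch: compare the hardware rate the session reports with the input
// node's format; a mismatch is a classic source of distortion.
func checkFormats(engine: AVAudioEngine) {
    let hwRate = AVAudioSession.sharedInstance().sampleRate
    let nodeRate = engine.inputNode.inputFormat(forBus: 0).sampleRate
    if !ratesMatch(hwRate, nodeRate) {
        print("Mismatch: hardware \(hwRate) Hz vs input node \(nodeRate) Hz")
    }
}
```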
Do you mind sharing
>> Today's Topics:
>>
>> 1. Re: iPad Pro 3 audio distortion? (Dean Reyburn)
this design choice!
But it’d be great to let developers “pay” for what they ask (meaning I gotta
optimise more to get better stuff, if possible).
Cheers,
Arshia Cont
on these observations it would be great.
Arshia Cont
www.antescofo.com
Sent from my iPhone
> On 15 Apr 2019, at 00:10, Tamás Zahola wrote:
>
> Hi,
>
> I’m writing an iOS audio player utilizing AVAudioEngine with an
> AVAudioPlayerNode and some effect nodes.
>
> I want the audio t
Thanks Arshia! Really appreciate your pointers. That's great news.
> By the way, do you use Swift or ObjC with AVAudioEngine?
>
>
> Thanks,
> Beinan
>
>
>
> On Fri, Aug 2, 2019 at 5:40 PM Arshia Cont <mailto:arshiac...@antescofo.com> wrote:
> Beinan,
>
o APIs. But you might wanna start
shopping on AVAudioEngine as it’s gonna be time to move soon after adoption of
Catalina/iOS13 in a few months.
Any further insight from other users would be welcome. I’m going to start
testing the Sink and Source nodes for real-time use soon.
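For reference, a minimal AVAudioSinkNode setup (iOS 13+) might look like this, a sketch that assumes you only need to read input samples in the real-time callback:

```swift
import AVFoundation

// AVAudioSinkNode delivers raw input buffers on the real-time thread,
// avoiding installTap's large minimum buffer sizes.
let sink = AVAudioSinkNode { _, frameCount, audioBufferList -> OSStatus in
    // Real-time context: no allocation, no locks, no blocking calls.
    let buffers = UnsafeMutableAudioBufferListPointer(
        UnsafeMutablePointer(mutating: audioBufferList))
    for buffer in buffers {
        // buffer.mData holds frameCount samples per channel; analysis
        // would go here.
        _ = buffer.mData
    }
    return noErr
}

func attach(to engine: AVAudioEngine) {
    engine.attach(sink)
    engine.connect(engine.inputNode, to: sink, format: nil)
}
```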
Cheers,
Arshia
Thank you Dominic for sharing this.
Is this general to both OSX and iOS, or an OSX-only issue? On iOS we can manage
to route audio using AVAudioSession. I’m not an OSX guy, which is why I’m asking
before I move everything to AVAudioEngine!
> On 28 Sep 2019, at 22:53, Dominic Feira via
We also use a simple measurement mechanism which is fine for a user-facing
situation.
However, what Brian described remains a problem: doing such measurements when
the user is wearing a headset doesn’t make sense!
This is especially problematic for Bluetooth devices: I haven’t figured out a
way to
www.antescofo.com <http://www.antescofo.com/>
PS: Nice Apps! ;)
> On 20 Jan 2020, at 19:24, Jonatan Liljedahl wrote:
>
> On Mon, Jan 20, 2020 at 6:36 PM Arshia Cont via Coreaudio-api
> wrote:
>
>> You get the following from AVAudioSession:
>> inputLatency
>> outputLatency
>> ioBufferDuration
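A common back-of-the-envelope estimate sums those values, counting one I/O buffer in each direction; treat it as an estimate only, since the exact accounting is precisely what gets debated in threads like this:

```swift
// Rough round-trip latency estimate, in seconds, from the three values
// AVAudioSession reports. Whether extra buffers apply is device-dependent.
func estimatedRoundTrip(inputLatency: Double,
                        outputLatency: Double,
                        ioBufferDuration: Double) -> Double {
    return inputLatency + outputLatency + 2 * ioBufferDuration
}
```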
ery welcome!
Cheers,
Arshia Cont
www.antescofo.com <http://www.antescofo.com/>
> On 20 Jan 2020, at 15:09, Eric Herbrandson via Coreaudio-api
> wrote:
>
> I am working on an application using CoreAudio on the iPhone/iPad. The
> application both plays audio thro
Just out of curiosity, what is your AVAudioSession mode? What if you use
internal speaker/microphone?
> On 21 Jan 2020, at 09:36, Jonatan Liljedahl wrote:
>
> Hi,
>
> On Mon, Jan 20, 2020 at 9:58 PM Arshia Cont wrote:
>>
>> Jonathan,
>>
>>
Fred,
To my knowledge, you cannot explicitly do that with AVAudioSession. Even
setPreferredDataSource’s location setup can be misleading: everything depends
on the polar pattern you choose. So an omni polar pattern on “Front” will
result in using two microphones to achieve it. There is
but sometimes easily missed).
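Concretely, the selection might look like this (an untested sketch; availability of data sources and polar patterns varies per device):

```swift
import AVFoundation

// Sketch: pick the front built-in mic data source and request a cardioid
// pattern, which is more likely to isolate a single capsule than omni.
func selectFrontMic() throws {
    let session = AVAudioSession.sharedInstance()
    guard let builtIn = session.availableInputs?.first(where: {
        $0.portType == .builtInMic
    }) else { return }
    guard let front = builtIn.dataSources?.first(where: {
        $0.orientation == .front
    }) else { return }
    // Omni on "Front" may still blend several mics (the caveat above).
    if front.supportedPolarPatterns?.contains(.cardioid) == true {
        try front.setPreferredPolarPattern(.cardioid)
    }
    try builtIn.setPreferredDataSource(front)
    try session.setPreferredInput(builtIn)
}
```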
Hope this helps. Let us know!
Arshia Cont
metronautapp.com
> On 11 May 2021, at 17:35, Sven Thoennissen via Coreaudio-api
> wrote:
>
> Hello,
>
> In my iOS app I am observing the AVAudioEngineConfigurationChange
> notification. It uses AVA