Re: How to update AVAudioEngine.inputNode to new hardware format?

2021-05-11 Thread Arshia Cont via Coreaudio-api
Hi Sven,

It depends on the architecture of your engine. Something in the graph is probably
blocking the propagation of the stream format.

In my experience with a similar issue, and after several exchanges with Apple DTS
(input from Apple engineers on this would be very welcome): problems can arise if
you use AVAudioEngine's mainMixerNode singleton. I noticed that sometimes (not
always) the stream format is not coherently propagated when the mainMixerNode is in
the chain. I ended up setting up my own AVAudioMixerNode and reconstructing it in
the notification handler whenever the format changes. In my case this happened from
time to time when the device sample rate changed due to a port change.
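In code, the workaround looks roughly like the sketch below (a minimal sketch only,
assuming a single custom mixer between the input node and the main mixer; all names
are illustrative and this is not an Apple-recommended recipe):

import AVFoundation

final class InputChain {
    let engine = AVAudioEngine()
    private var inputMixer = AVAudioMixerNode()
    private var configObserver: NSObjectProtocol?

    func observeConfigurationChanges() {
        configObserver = NotificationCenter.default.addObserver(
            forName: .AVAudioEngineConfigurationChange,
            object: engine,
            queue: .main
        ) { [weak self] _ in
            self?.rebuildInputChain()
        }
    }

    private func rebuildInputChain() {
        engine.stop()

        // Rebuild the private mixer instead of relying on mainMixerNode
        // to pick up the new hardware format.
        engine.detach(inputMixer)
        inputMixer = AVAudioMixerNode()
        engine.attach(inputMixer)

        // Reconnect using the new hardware input format.
        let hwFormat = engine.inputNode.inputFormat(forBus: 0)
        engine.connect(engine.inputNode, to: inputMixer, format: hwFormat)
        engine.connect(inputMixer, to: engine.mainMixerNode, format: nil)

        try? engine.start()
    }
}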

I saw afterwards that there are similar workarounds by folks at AudioKit (v5).

If you're using a custom AU effect, also make sure it acknowledges stream format
changes (obvious, but easily missed).

Hope this helps. Let us know!

Arshia Cont
metronautapp.com



> On 11 May 2021, at 17:35, Sven Thoennissen via Coreaudio-api 
>  wrote:
> 
> Hello,
> 
> In my iOS app I am observing the AVAudioEngineConfigurationChange 
> notification. It uses AVAudioEngine, its audio input node, some samplers, 
> some effects, some mixers, and the output node.
> 
> So when I plug in an external audio interface into the iPad while the app is 
> running my observer is called. AVAudioSession.inputNumberOfChannels tells me 
> there are now 6 input channels instead of 1 (the audio interface indeed has 6 
> inputs). My AVAudioEngine's inputNode inputFormat shows the new hardware 
> format (6 channels) but the outputFormat still shows 1 channel.
> How can I update the input node's outputFormat?
> 
> Should I allocate a new engine and therefore a new inputNode? The docs forbid 
> that:
> "The engine must not be deallocated from within the client's notification 
> handler"
> https://developer.apple.com/documentation/foundation/nsnotification/name/1389078-avaudioengineconfigurationchange
>  
> <https://developer.apple.com/documentation/foundation/nsnotification/name/1389078-avaudioengineconfigurationchange>
> (okay, it does not forbid to allocate a new engine, but allocating a new one 
> also means dropping the reference to the old engine)
> 
> Or should I refrain from using AVAudioEngine.inputNode and allocate my own 
> AVAudioInputNode instance?
> Or should I call engine.inputNode.reset() ?
> 
> I have also tried to reconnect the inputNode with the new hardware format to 
> the next node in the graph, like the documentation suggests. As a result the 
> next node render call fails to pullInput() with code 
> kAudioUnitErr_CannotDoInCurrentContext (it's my own custom AU effect so I can 
> tell).
> 
> Any experience on this would be welcome.
> 
> Sven
> 



Re: Recording in stereo using built-in iPhone mics

2020-05-03 Thread Arshia Cont via Coreaudio-api
Fred,

To my knowledge, you cannot explicitly do that with AVAudioSession. Even the
setPreferredDataSource location setup can be misleading: everything depends on the
polar pattern you choose. An omnidirectional polar pattern on the "Front" data
source, for example, will use two microphones to achieve it. There is currently no
way to access a specific physical microphone.
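To illustrate the data-source/polar-pattern selection that does exist (a sketch
using iOS 13-era API; as noted above, the system still decides which physical
microphones implement the requested pattern):

import AVFoundation

func selectFrontMic() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])

    // Find the built-in microphone port among the available inputs.
    guard let builtInMic = session.availableInputs?
        .first(where: { $0.portType == .builtInMic }) else { return }

    // Pick the data source oriented towards the front, if present.
    if let frontSource = builtInMic.dataSources?
        .first(where: { $0.orientation == .front }) {
        // Request an omnidirectional pattern; achieving it may still
        // involve more than one physical microphone (beamforming).
        if frontSource.supportedPolarPatterns?.contains(.omnidirectional) == true {
            try frontSource.setPreferredPolarPattern(.omnidirectional)
        }
        try builtInMic.setPreferredDataSource(frontSource)
    }

    try session.setPreferredInput(builtInMic)
    try session.setActive(true)
}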

On a related topic: at WWDC 2019 we saw a way of accessing two microphones and
capturing them as two streams using AVCapture (link 1 below). It would be really
nice to bring the same capability to AVAudioSession and the CoreAudio frameworks
for the audio crowd to use. I talked to some engineers at WWDC and submitted a
feedback ticket, "Providing API Access in CoreAudio/AVAudioEngine for
multi-microphone capture" (FB6131819), similar to what AVCapture offers.

For me the worst part is not being able to access speaker routing on iOS devices
that have four speakers! On those devices, if you use the microphone and a speaker
simultaneously there is no way to get rid of echo, and to make things worse, the
speakers are always next to a microphone! Echo-suppression algorithms don't always
help either. There is also an old feedback ticket on this: FB6131819 (Provide API
Access to individual speaker routing on Quad Stereo Devices).

If anyone knows workarounds, it’d be great to share! :) 


Arshia
www.antescofo.com  


(1) WWDC 2019 - Introducing Multi-Camera Capture for iOS: 
https://developer.apple.com/videos/play/wwdc2019/249 
 

> On 2 May 2020, at 23:37, Fred Melbow via Coreaudio-api 
>  wrote:
> 
> Hi everyone,
> 
> the latest iPhones (ever since the XS I believe) can record in stereo using 
> two of the built-in mics, at least using the native iOS Camera app.
> 
> I’m trying to achieve this in my app too. I know that a microphone 
> (front/back/bottom) can be selected using `setPreferredDataSource`, but this 
> method only allows selection of one mic (mono).
> 
> Is there any update on this - is recording on stereo possible using the 
> `AVAudioSession` API or otherwise?
> 
> Many thanks,
> Fred
> 



Re: CoreAudio: Calculate total latency between input and output with kAudioUnitSubType_VoiceProcessingIO

2020-01-22 Thread Arshia Cont via Coreaudio-api
We also use a simple measurement mechanism which is fine for a user-facing 
situation.

However, what Brian described remains a problem: doing such measurements when the
user is wearing a headset doesn't make sense!

This is especially problematic for Bluetooth devices: I haven't figured out a way
to detect whether a Bluetooth device is a headset or not. Has anyone found a way to
do this? AVAudioSession properties do not seem to help.
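The closest thing I have to a workaround is a heuristic on the current route's
output port types. It tells you "headphone-like output", but it still cannot
distinguish a Bluetooth headset from a Bluetooth speaker, so treat it as a sketch
rather than a real solution:

import AVFoundation

// Rough heuristic: treat the route as "headset-like" when the output port
// is a headphone or Bluetooth port.
func routeLooksLikeHeadset() -> Bool {
    let outputs = AVAudioSession.sharedInstance().currentRoute.outputs
    let headsetPorts: [AVAudioSession.Port] = [
        .headphones, .bluetoothA2DP, .bluetoothHFP, .bluetoothLE
    ]
    return outputs.contains { headsetPorts.contains($0.portType) }
}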


Arshia

> On 21 Jan 2020, at 23:57, Paul Davis via Coreaudio-api 
>  wrote:
> 
> 
> 
> On Tue, Jan 21, 2020 at 3:36 PM Brian Schachter via Coreaudio-api 
> mailto:coreaudio-api@lists.apple.com>> wrote:
> 
> 
> > The most reliable way to do this is to actually measure it. Not necessarily 
> > the most convenient, however. In Ardour we make this measurement optional - 
> > users will get better capture alignment if they do it
> 
> Do you play a metronome out of the speaker (output) while simultaneously 
> recording it with the mic (input) and afterwards look at the peaks in the 
> recorded data? If so, is this method of measuring impossible if the user is 
> wearing headphones? When the output is headphones... the mic won't be able to 
> pick it up.
> 
> Correct. If you want to do the most accurate measurement, you need to be able 
> to detect and measure the signal via a "loopback" signal flow. That's why I 
> noted "not  necessarily the most convenient".
> 
> 





Re: CoreAudio: Calculate total latency between input and output with kAudioUnitSubType_VoiceProcessingIO

2020-01-21 Thread Arshia Cont via Coreaudio-api
Just out of curiosity, what is your AVAudioSession mode? What if you use 
internal speaker/microphone?

> On 21 Jan 2020, at 09:36, Jonatan Liljedahl  wrote:
> 
> Hi,
> 
> On Mon, Jan 20, 2020 at 9:58 PM Arshia Cont  wrote:
>> 
>> Jonathan,
>> 
>> First of all: Paul Davis is right. Both “internalLatency” and 
>> “outputLatency” values from AVAudioSession are estimations and not exact. 
>> They are most reliable when used without any external audio. Least reliable 
>> of course with Bluetooth!
> 
> Of course. In my testing, I get an additional 3ms round-trip latency
> that is not part of the calculation.
> 
>> Now:
>> I was also surprised to see the ‘4’ factor instead of ‘2’! My only raw guess 
>> is because my stream is stereo (which doesn’t make sense!).
>> 
>> The formula you sent is more or less correct. It actually depends on the 
>> values of ioBufferDuration and the other two latencies. Think of it this 
>> way: If the outputLatency is higher than the ioBuffer it probably means that 
>> the output buffer is also longer, then in a Pull system you need more of the 
>> system buffers to fill out that buffer. In most real-time scheduling 
>> mechanisms we introduce lags to make sure underflow or overflow doesn’t 
>> occur.
> 
> I just did a quick test here using my app AUM.
> 
> Focusrite Scarlett 6i6 connected via USB and lightning-to-USB adapter,
> with loop-back cable from out to in, 44.1kHz, using two different
> buffer sizes:
> 
> 64 frame buffer size:
> - buffer duration = 1.45 ms
> - input latency = 1.02 ms
> - output latency = 1.54 ms
> - calculated round-trip latency (inputLatency + outputLatency + 2 *
> ioBufferDuration) = 5.46 ms
> 
> 256 frame buffer size:
> - buffer duration = 5.80 ms
> - input latency = 1.02 ms
> - output latency = 1.54 ms
> - calculated round-trip latency = 14.17 ms
> 
> This is also a stereo stream BTW, which I don't think is relevant. In
> both cases I get about 3 ms extra latency, not part of the
> calculation. I don't have a setting for buffer sizes smaller than 64
> frames, so I didn't test the case where inputLatency >
> ioBufferDuration.
> 
> Using `4 * ioBufferDuration` would yield a larger error in the second
> case, and it would give a different error in the two cases. I still
> believe 2 is the correct factor to use!
> 
>> PS: Nice Apps! ;)
> 
> Thanks! :)
> 
> /Jonatan
> 
>> On 20 Jan 2020, at 19:24, Jonatan Liljedahl  wrote:
>> 
>> On Mon, Jan 20, 2020 at 6:36 PM Arshia Cont via Coreaudio-api
>>  wrote:
>> 
>> You get the following from AVAudioSession:
>> inputLatency
>> outputLatency
>> ioBufferDuration
>> 
>> Then your throughput latency, assuming a Stereo Stream, would be:  
>> inputLatency + outputLatency + 4*ioBufferDuration
>> 
>> 
>> I did the same, but I arrived at inputLatency + outputLatency + 2 *
>> ioBufferDuration!
>> 
>> In case inputLatency > ioBufferDuration, you add one more ioBufferDuration. 
>> Same with outputLatency! This means that when lowering your 
>> ioBufferDuration, your Session mode becomes important (which directly 
>> affects input and output latencies). The lowest you can achieve would thus 
>> be with the Measurement Mode.
>> 
>> 
>> So you mean the formula would be:
>> 
>> inputLatency + outputLatency + (inputLatency > ioBufferDuration ? 5 :
>> 4) * ioBufferDuration
>> 
>> Do you have an explanation why?
>> 
>> --
>> /Jonatan
>> http://kymatica.com
>> 
>> 
> 
> 
> -- 
> /Jonatan
> http://kymatica.com





Re: CoreAudio: Calculate total latency between input and output with kAudioUnitSubType_VoiceProcessingIO

2020-01-20 Thread Arshia Cont via Coreaudio-api
Jonatan,

First of all: Paul Davis is right. Both the "inputLatency" and "outputLatency"
values from AVAudioSession are estimations, not exact figures. They are most
reliable when no external audio hardware is involved, and least reliable with
Bluetooth, of course!

Now: 
I was also surprised to see the '4' factor instead of '2'! My only rough guess is
that it is because my stream is stereo (which doesn't really make sense!).

The formula you sent is more or less correct. It actually depends on the values of
ioBufferDuration and the other two latencies. Think of it this way: if
outputLatency is higher than ioBufferDuration, it probably means the output buffer
is also longer, so in a pull system you need more buffer cycles to fill it. In most
real-time scheduling mechanisms we introduce lag to make sure underflow or overflow
doesn't occur.


Arshia
www.antescofo.com <http://www.antescofo.com/> 


PS: Nice Apps! ;) 


> On 20 Jan 2020, at 19:24, Jonatan Liljedahl  wrote:
> 
> On Mon, Jan 20, 2020 at 6:36 PM Arshia Cont via Coreaudio-api
>  wrote:
> 
>> You get the following from AVAudioSession:
>> inputLatency
>> outputLatency
>> ioBufferDuration
>> 
>> Then your throughput latency, assuming a Stereo Stream, would be:  
>> inputLatency + outputLatency + 4*ioBufferDuration
> 
> I did the same, but I arrived at inputLatency + outputLatency + 2 *
> ioBufferDuration!
> 
>> In case inputLatency > ioBufferDuration, you add one more ioBufferDuration. 
>> Same with outputLatency! This means that when lowering your 
>> ioBufferDuration, your Session mode becomes important (which directly 
>> affects input and output latencies). The lowest you can achieve would thus 
>> be with the Measurement Mode.
> 
> So you mean the formula would be:
> 
> inputLatency + outputLatency + (inputLatency > ioBufferDuration ? 5 :
> 4) * ioBufferDuration
> 
> Do you have an explanation why?
> 
> -- 
> /Jonatan
> http://kymatica.com





Re: CoreAudio: Calculate total latency between input and output with kAudioUnitSubType_VoiceProcessingIO

2020-01-20 Thread Arshia Cont via Coreaudio-api
Hi Eric,

I did some reverse engineering on this issue about a year ago and this is what 
I found:

Note that this only applies to RemoteIO in the context of AudioUnits/AUGraph. I
believe AVAudioEngine uses the same graph under the hood, but I haven't replicated
the measurements there yet. Any insights from CoreAudio people would be welcome! :)

You get the following from AVAudioSession:
inputLatency
outputLatency
ioBufferDuration

Then your throughput latency, assuming a Stereo Stream, would be:  inputLatency 
+ outputLatency + 4*ioBufferDuration

In case inputLatency > ioBufferDuration, you add one more ioBufferDuration. 
Same with outputLatency! This means that when lowering your ioBufferDuration, 
your Session mode becomes important (which directly affects input and output 
latencies). The lowest you can achieve would thus be with the Measurement Mode.

Now all the above values are in Seconds (sigh!) but you should be able to 
convert back to samples easily using your Session sample rate.
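Put together, the estimate is only a few lines. This is a sketch of the formula
above; keep in mind that the '4' factor is debated elsewhere in this thread (a
factor of 2 may match your measurements better) and that the underlying values are
estimations anyway:

import AVFoundation

func estimatedRoundTripLatencyInSamples() -> Double {
    let session = AVAudioSession.sharedInstance()
    // inputLatency, outputLatency and ioBufferDuration are in seconds.
    let seconds = session.inputLatency
                + session.outputLatency
                + 4.0 * session.ioBufferDuration
    return seconds * session.sampleRate
}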

This all makes sense if you visualise the audio graph and imagine a Pull 
mechanism (which is what’s happening in CoreAudio). 

If anyone else has insights on this, it would be very welcome! 


Cheers,


Arshia Cont
www.antescofo.com <http://www.antescofo.com/> 



> On 20 Jan 2020, at 15:09, Eric Herbrandson via Coreaudio-api 
>  wrote:
> 
> I am working on an application using CoreAudio on the iPhone/iPad. The 
> application both plays audio through the speakers (output) as well as records 
> audio from the microphone (input) at the same time. For the purposes of this 
> application it is extremely important that I be able to compare the input and 
> output, specifically how well they "line up" in the time domain. Because of 
> this, correctly calculating the total latency between the input and output 
> channels is critical.
> 
> I am testing across 3 different devices. An iPhone, an iPad, and the 
> simulator. I've been able to empirically determine that the latency for the 
> iPhone is somewhere around 4050 samples, the iPad is closer to 4125 samples, 
> and the simulator is roughly 2500 samples.
> 
> After much research (aka googling) I found a smattering of discussions online 
> about calculating latency in CoreAudio, but they generally pertain to using 
> CoreAudio on OSX rather than iOS. Because of this, they refer to various 
> functions that do not exist on iOS. However, it seems that for iOS the 
> correct solution will be to use AVAudioSession and some combination of the 
> inputLatency, outputLatency, and IOBufferDuration. However, no combinations 
> of these values seem to add up to the empirically determined values above. In 
> addition, I get wildly different values for each parameter when I check them 
> before vs. after calling AudioUnitInitialize. Even more confusing is that the 
> values are much closer to the expected latency before the call to 
> AudioUnitInitialize, which is the opposite of what I would expect.
> 
> Here are the values I am seeing.
> 
> iPad (before): in 0.032375, out 0.013651, buf 0.023220, total samples 3054
> iPad (after): in 0.000136, out 0.001633, buf 0.023220, total samples 1102
> iPhone (before): in 0.065125, out 0.004500, buf 0.021333, total samples 4011
> iPhone (after): 0.000354, out 0.000292, buf 0.021333, total samples 969
> The simulator always returns 0.01 for in and out, but I suspect these aren't 
> actual/correct values and that the simulator just doesn't support this 
> functionality.
> 
> One other potentially interesting note is that I'm using 
> kAudioUnitSubType_VoiceProcessingIO rather than kAudioUnitSubType_RemoteIO 
> which I do expect to add some additional latency. My assumption is that this 
> would be included in the inputLatency value, but perhaps there's another 
> value I need to query to include this?
> 
> What's the correct way to determine the total latency between input and 
> output in iOS?
> 
> 
> 





Re: AVAudioEngine input and output devices

2019-09-28 Thread Arshia Cont via Coreaudio-api
Thank you Dominic for sharing this.

Is this general to both OSX and iOS, or is it an OSX-only issue? On iOS we can
manage audio routing with AVAudioSession. I'm not an OSX guy, which is why I'm
asking before I move everything to AVAudioEngine!

> On 28 Sep 2019, at 22:53, Dominic Feira via Coreaudio-api 
>  wrote:
> 
> Yes, this works if you are only using audio output. I am using the input and 
> output at the same time. If you do this, then you are restricted to using the 
> default input and output devices.
> 
> — Dominic
> 
>> On Sep 28, 2019, at 1:06 AM, Tamas Nagy > > wrote:
>> 
>> Hi Dominic,
>> 
>> you can change the output to an AudioDeviceID on the AudioUnit of the 
>> outputNode.
>> 
>> OSStatus err = AudioUnitSetProperty([[audioEngine outputNode] audioUnit],
>> kAudioOutputUnitProperty_CurrentDevice,
>> kAudioUnitScope_Global,
>> 0,
>> &deviceID,
>> sizeof(AudioDeviceID));
>> 
>> This works for us for years now.
>> 
>> Hope this helps.
>> 
>> Best,
>> Tamas
>> 
>>> On 2019. Sep 28., at 2:56, Dominic Feira via Coreaudio-api 
>>> mailto:coreaudio-api@lists.apple.com>> 
>>> wrote:
>>> 
>>> For the record, DTS has confirmed that everything I have posted below is 
>>> correct. AVAudioEngine can only be used with the default input and output 
>>> devices. The fact that AVAudioEngine ever shipped for the Mac like this at 
>>> all says a lot.
>>> 
>>> — Dominic
>>> 
 I have been working with AVAudioEngine. 
 
 By default the AVAudioEngine is using an aggregate audio device with the 
 default input/output as the subdevices. This makes sense so the entire 
 engine can run on a single clock.
 
 Is it possible to set the input and output devices to use something other 
 than the system's default input and output?
 
 From experimentation, if I set the input or output to anything other than 
 an aggregate device it fails. If I change the input and output to use the 
 same aggregate device (that I created with non-default devices), it works. 
 However any time the default output or input of the system changes, the 
 engine's input/output are set back to the aggregate device set up by core 
 audio that uses the default input/output, basically overriding what I told 
 the engine to do.
 
 I hope that I'm missing something obvious, but I think I have run into the 
 limits of this API. For an API that has been on the Mac for several years 
 it is very limited.
>>> 
>> 
> 



Re: CoreAudio vs. AVAudioEngine

2019-08-02 Thread Arshia Cont via Coreaudio-api
You can use both! In both cases, you should avoid everything that conflict with 
real-time Audio. More cautious is needed with Swift (such as no Swift in 
real-time blocks). AVAudioEngine is of course much more Swift Friendly.

> On 2 Aug 2019, at 11:50, Beinan Li  wrote:
> 
> Thanks Arshia! Really appreciate your pointers. That's great news. 
> By the way, do you use Swift or OBjC with AVAudioEngine?
> 
> 
> Thanks,
> Beinan
> 
> 
> 
> On Fri, Aug 2, 2019 at 5:40 PM Arshia Cont  <mailto:arshiac...@antescofo.com>> wrote:
> Beinan,
> 
> This is my understanding of the situation talking to some of the CoreAudio 
> people at this year’s WWDC and following the recent evolution. Ofcourse it 
> all depends on what you’re currently doing with CoreAudio so my input is 
> biased on my needs which are low-latency real-time Audio I/O.
> 
> AuGraph is officially marked as Deprecated starting iOS 13 (and Catalina?). 
> AVAudioEngine seems to be ready to provide the move. For my concerns, it 
> wasn’t still ready until the current Beta (again for low-latency real-time 
> audio). With the addition of the AVAudioSinkNode and AVAudioSourceNode, 
> AVAudioEngine can cope with what AUnit and AUGRAPH could do on low-latency 
> situations. In your case and from your description, you should have a look at 
> AVAudioSourceNode. 
> Here is what you need: https://developer.apple.com/videos/play/wwdc2019/510/ 
> <https://developer.apple.com/videos/play/wwdc2019/510/>
> 
> It also seems like your deprecated warnings are much older than what I 
> describe above! I’ve been using AUnit and AuGraph until now without problem 
> removing Canonical (and other) deprecated stuff as they appeared. So if 
> you’re targeting iOS 12, you can still stay on older CoreAudio APIs. But you 
> might wanna start shopping on AVAudioEngine as it’s gonna be time to move 
> soon after adoption of Catalina/iOS13 in a few months.
> 
> Any further insight from other users would be welcome. I’m going to start 
> testing the Sink and Source nodes for real-time use soon.
> 
> Cheers,
> 
> 
> Arshia Cont
> www.antescofo.com <http://www.antescofo.com/> 
>  
> 
>> On 2 Aug 2019, at 11:01, Beinan Li via Coreaudio-api 
>> mailto:coreaudio-api@lists.apple.com>> wrote:
>> 
>> Hello CoreAudio,
>> 
>> I'm seeking advice on migrating from CoreAudio/ObjC++ to 
>> AVAudioEngine/Swift/ObjC: should I do it or when should I start worry? 
>> 
>> I have a legacy macOS/iOS project that uses CoreAudio heavily, mainly just 
>> AudioQueue, but right now on Mojave / iOS 12 things start to breakdown or 
>> throw "deprecated" warnings at me, e.g.:
>>  
>> AudioSampleType is deprecated: The concept of canonical formats is 
>> deprecated. 
>> 
>> And I need to at least bring in AVFoundation. This prompts me to shop around 
>> for other solutions.
>> 
>> My project is not a synth/FX app. It would only use metering/spectrogram and 
>> some visualization at best. Low-latency is essential though. The info about 
>> latest CoreAudio stack on the internet seems increasingly scarce and I'm 
>> hesitating to adopt 3rd-party frameworks such as JUCE for such a small 
>> project. So I'd love to give the Swift stack or even just ObjC/C stack a 
>> serious try. 
>> 
>> Now my concern is mainly in the performance and latency. I've seen these 
>> posts regarding using AVAudioEngine:
>> 
>> https://stackoverflow.com/questions/1877410/whats-the-difference-between-all-these-audio-frameworks
>>  
>> <https://stackoverflow.com/questions/1877410/whats-the-difference-between-all-these-audio-frameworks>
>> 
>> https://stackoverflow.com/questions/26115626/i-want-to-call-20-times-per-second-the-installtaponbusbuffersizeformatblock/26600077#26600077
>>  
>> <https://stackoverflow.com/questions/26115626/i-want-to-call-20-times-per-second-the-installtaponbusbuffersizeformatblock/26600077#26600077>
>> 
>> https://stackoverflow.com/questions/45644079/performance-between-coreaudio-and-avfoundation
>>  
>> <https://stackoverflow.com/questions/45644079/performance-between-coreaudio-and-avfoundation>
>> 
>> It seems there aren't a lot of agreements on whether or not the Swift stack 
>> could achieve low-latency using its AU wrapper. Some says that the 
>> performance is 30-40% worse than the CoreAudio stack. But there is also a 
>> positive report:
>> 
>> https://stackoverflow.com/questions/45644079/performance-between-coreaudio-and-avfoundation
>>  
>> <https://stackoverflow.com/questions/45644079/performance-between-coreaudio-and-avfou

Re: CoreAudio vs. AVAudioEngine

2019-08-02 Thread Arshia Cont via Coreaudio-api
Beinan,

This is my understanding of the situation, from talking to some of the CoreAudio
people at this year's WWDC and following the recent evolution. Of course it all
depends on what you're currently doing with CoreAudio, so my input is biased
towards my needs, which are low-latency real-time audio I/O.

AUGraph is officially marked as deprecated starting with iOS 13 (and Catalina?).
AVAudioEngine seems ready to provide the migration path. For my concerns it still
wasn't ready until the current beta (again, for low-latency real-time audio). With
the addition of AVAudioSinkNode and AVAudioSourceNode, AVAudioEngine can cope with
what AudioUnits and AUGraph could do in low-latency situations. In your case, and
from your description, you should have a look at AVAudioSourceNode.
Here is what you need: https://developer.apple.com/videos/play/wwdc2019/510/ 
<https://developer.apple.com/videos/play/wwdc2019/510/>

It also seems like your deprecation warnings are much older than what I describe
above! I've been using AudioUnits and AUGraph until now without problems, removing
Canonical (and other) deprecated stuff as warnings appeared. So if you're targeting
iOS 12, you can still stay on the older CoreAudio APIs. But you might want to start
shopping around AVAudioEngine, as it will be time to move soon after the adoption
of Catalina/iOS 13 in a few months.

Any further insight from other users would be welcome. I’m going to start 
testing the Sink and Source nodes for real-time use soon.
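For reference, a minimal AVAudioSourceNode looks roughly like this (a sine-wave
sketch in the spirit of the WWDC session above; iOS 13+, values illustrative, and
the render block must stay real-time safe: no allocation, locks or blocking calls):

import AVFoundation

let sampleRate = 44_100.0
var phase = 0.0

let sourceNode = AVAudioSourceNode { (_, _, frameCount, audioBufferList) -> OSStatus in
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    let increment = 2.0 * Double.pi * 440.0 / sampleRate
    for frame in 0..<Int(frameCount) {
        let sample = Float(sin(phase))
        phase += increment
        if phase > 2.0 * Double.pi { phase -= 2.0 * Double.pi }
        // Write the same sample to every channel buffer.
        for buffer in buffers {
            let buf = UnsafeMutableBufferPointer<Float>(buffer)
            buf[frame] = sample
        }
    }
    return noErr
}

let engine = AVAudioEngine()
engine.attach(sourceNode)
engine.connect(sourceNode, to: engine.mainMixerNode,
               format: AVAudioFormat(standardFormatWithSampleRate: sampleRate,
                                     channels: 1))
try? engine.start()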

Cheers,


Arshia Cont
www.antescofo.com <http://www.antescofo.com/> 
 

> On 2 Aug 2019, at 11:01, Beinan Li via Coreaudio-api 
>  wrote:
> 
> Hello CoreAudio,
> 
> I'm seeking advice on migrating from CoreAudio/ObjC++ to 
> AVAudioEngine/Swift/ObjC: should I do it or when should I start worry? 
> 
> I have a legacy macOS/iOS project that uses CoreAudio heavily, mainly just 
> AudioQueue, but right now on Mojave / iOS 12 things start to breakdown or 
> throw "deprecated" warnings at me, e.g.:
>  
> AudioSampleType is deprecated: The concept of canonical formats is 
> deprecated. 
> 
> And I need to at least bring in AVFoundation. This prompts me to shop around 
> for other solutions.
> 
> My project is not a synth/FX app. It would only use metering/spectrogram and 
> some visualization at best. Low-latency is essential though. The info about 
> latest CoreAudio stack on the internet seems increasingly scarce and I'm 
> hesitating to adopt 3rd-party frameworks such as JUCE for such a small 
> project. So I'd love to give the Swift stack or even just ObjC/C stack a 
> serious try. 
> 
> Now my concern is mainly in the performance and latency. I've seen these 
> posts regarding using AVAudioEngine:
> 
> https://stackoverflow.com/questions/1877410/whats-the-difference-between-all-these-audio-frameworks
>  
> <https://stackoverflow.com/questions/1877410/whats-the-difference-between-all-these-audio-frameworks>
> 
> https://stackoverflow.com/questions/26115626/i-want-to-call-20-times-per-second-the-installtaponbusbuffersizeformatblock/26600077#26600077
>  
> <https://stackoverflow.com/questions/26115626/i-want-to-call-20-times-per-second-the-installtaponbusbuffersizeformatblock/26600077#26600077>
> 
> https://stackoverflow.com/questions/45644079/performance-between-coreaudio-and-avfoundation
>  
> <https://stackoverflow.com/questions/45644079/performance-between-coreaudio-and-avfoundation>
> 
> It seems there aren't a lot of agreements on whether or not the Swift stack 
> could achieve low-latency using its AU wrapper. Some says that the 
> performance is 30-40% worse than the CoreAudio stack. But there is also a 
> positive report:
> 
> https://stackoverflow.com/questions/45644079/performance-between-coreaudio-and-avfoundation
>  
> <https://stackoverflow.com/questions/45644079/performance-between-coreaudio-and-avfoundation>
> 
> Before I dive into a whole new world (Swift newbie), any suggestions from the 
> list would be greatly appreciated!
> 
> 
> 
> Thanks,
> Beinan
> 



Re: Playback time at AVAudioEngineConfigurationChangeNotification?

2019-04-15 Thread Arshia Cont
Tamás,

I gave up AVAudioEngine for the same reasons a while back. I achieve similar
results by relying on AVAudioSession route-change notifications (which I believe is
what AVAudioEngine uses anyway) and low-level AudioUnits to synchronise buffers
during interruptions.

My understanding is that AVAudioEngine is a wrapper on top of good old
AVAudioSession and AudioUnits! My main reason for not wanting to go further with
AVAudioEngine was its poor performance with real-time tap nodes, while classic
AudioUnits are controllable down to the slightest detail.

If anyone can shed light on these observations it would be great.
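For what it's worth, the route-change observation I rely on is just this (a sketch;
the reason handling shown is illustrative, not exhaustive):

import AVFoundation

func observeRouteChanges() -> NSObjectProtocol {
    return NotificationCenter.default.addObserver(
        forName: AVAudioSession.routeChangeNotification,
        object: nil,
        queue: .main
    ) { note in
        guard
            let info = note.userInfo,
            let reasonValue = info[AVAudioSessionRouteChangeReasonKey] as? UInt,
            let reason = AVAudioSession.RouteChangeReason(rawValue: reasonValue)
        else { return }

        switch reason {
        case .newDeviceAvailable, .oldDeviceUnavailable:
            // e.g. headphones plugged/unplugged: resynchronise or rebuild
            // the low-level AudioUnit chain here.
            break
        default:
            break
        }
    }
}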

Arshia Cont
www.antescofo.com 

Sent from my iPhone

> On 15 Apr 2019, at 00:10, Tamás Zahola  wrote:
> 
> Hi,
> 
> I’m writing an iOS audio player utilizing AVAudioEngine with an 
> AVAudioPlayerNode and some effect nodes. 
> 
> I want the audio to continue as seamlessly as possible on an audio route 
> change, in other words: when playing audio through the external speakers and 
> we plug in a headphone, then the audio in the headphones should continue 
> exactly where the speakers have left off.
> 
> According to the docs, AVAudioEngine is stopped and connections are severed 
> when such an audio route change occurs; thus the audio graph connections have 
> to be re-established and playback has to be started afresh (buffers has to be 
> enqueued again, erc.). When this happens a notification is posted 
> (AVAudioEngineConfigurationChangeNotification).
> 
> In response to this notification, I wanted to simply re-enqueue the 
> previously enqueued audio buffers, possibly skipping a bunch of samples from 
> the start of the buffer that was playing at the time of the interruption, so 
> that those parts that made it through the speaker won’t be played again in 
> the headphones.
> 
> But there’s an issue here: by the time this notification is posted, the 
> engine’s internal state seems to be torn down (the nodes are stopped and 
> their `lastRenderTime` is nil), so I can’t figure out exactly where the 
> playback was interrupted...
> 
> Have I missed an API that would let me query the playback time after such an 
> interruption? 
> 
> What is the recommended approach for handling these route/configuration 
> changes seamlessly? 
> 
> Calculating playback time from “wall time” (i.e. mach_absolute_time) feels a 
> bit icky to me when working with a high-level API like AVAudioEngine...
> 
> Best regards,
> Tamás Zahola


Re: Coreaudio-api Digest, Vol 15, Issue 9 - microphone locations

2018-11-12 Thread Arshia Cont
Thank you Jason! Very useful. I'm searching for a similar page for iPhones, in case
you have one handy! :)

> On 11 Nov 2018, at 09:27, Jason Cowling  wrote:
> 
> Hi - documentation (images) of microphone locations here: 
> https://support.apple.com/en-us/HT201471 
> <https://support.apple.com/en-us/HT201471> 
> On Nov 10, 2018, 3:00 PM -0500, coreaudio-api-requ...@lists.apple.com, wrote:
>> 
>> 
>> Today's Topics:
>> 
>> 1. Re: iPad Pro 3 audio distortion? (Dean Reyburn)
>> 2. Re: iPad Pro 3 audio distortion? (Arshia Cont)
>> 
>> 
>> --
>> 
>> Message: 1
>> Date: Fri, 09 Nov 2018 15:18:16 -0500
>> From: Dean Reyburn 
>> To: CoreAudio 
>> Subject: Re: iPad Pro 3 audio distortion?
>> Message-ID: 
>> Content-Type: text/plain; charset="utf-8"
>> 
>> Hi Arshia,
>> I’m using AVAudioSessionCategoryPlayAndRecord but maybe I should use 
>> AVAudioSessionCategoryRecord since my app does not itself do any playback. I 
>> do wish to let the user listen to background music on headphones if they 
>> wish.
>> 
>> Apple is suggesting I try the .measurement or AVAudioSessionModeMeasurement 
>> setting. Need to test that.
>> 
>> I use AudioUnits, the lowest level audio input possible.
>> 
>> Sorry I don’t know where all the mics are, I could only find four of them, 
>> three on the front, one on the back. My app only wants to use one mic.
>> 
>> -Dean Reyburn
>> 
>>> On Nov 9, 2018, at 11:36 AM, Arshia Cont  wrote:
>>> 
>>> Hi Dean,
>>> 
>>> What AVAudioSession Category/Mode are you using? Beyond this, you should 
>>> probably tell us how you record (AVAudioEngine? AUnit?).. My first 
>>> suggestion si to check the Stream format of your audio chain with that 
>>> provided by the system (such as sample rate etc.).
>>> 
>>> Do you mind sharing the positions of the 5 microphones?! Time to buy one of 
>>> those I guess! :)
>>> 
>>> Arshia
>>> 
>>>> On 9 Nov 2018, at 17:29, Dean Reyburn  wrote:
>>>> 
>>>> Hi all,
>>>> My app, CyberTuner does low level audio recording and custom DSP, then 
>>>> displays the results in a way useful to pro piano tuners. On iPad Pro 
>>>> generation 3 which was just released on Nov. 7th, the audio appears to be 
>>>> distorted. This is the first time I’ve ever seen any general recording 
>>>> issues with iOS devices.
>>>> 
>>>> There are five (!!) microphones on these new devices so I suspect that 
>>>> fact has something to do with the problem. Any suggestions or anyone else 
>>>> see issues with the new iPads?
>>>> 
>>>> My app simply uses the default microphone.
>>>> 
>>>> Thanks in advance,
>>>> 
>>>> -Dean Reyburn
>>>> d...@reyburn.com
>>>> www.cybertuner.com
>>> 
>> 
>> 1-888-SOFT-440 or 1-888-763-8440
>> Reyburn CyberTuner, Inc.
>> http://www.cybertuner.com
>> 
>> 
>> --
>> 
>> Message: 2
>> Date: Fri, 09 Nov 2018 21:36:31 +0100
>> From: Arshia Cont 
>> To: Dean Reyburn 
>> Cc: CoreAudio 

Re: iPad Pro 3 audio distortion?

2018-11-09 Thread Arshia Cont
It all depends on the type of live processing you do with AudioUnits! You would
want to use Measurement mode if you want low latency by lowering the AVAudioSession
buffer duration. Note that Measurement mode removes AGC and equalisation to achieve
low latency, so if you record audio you need to take that into account.

I would also recommend double checking all the Stream Formats in your AudioUnit 
chain.

If you are getting continuous glitches, it might be that your custom DSP is not
delivering its output before the end of the audio thread's cycle. But I'd be
surprised if that's the case, since the iPad Pro 3rd gen should naturally improve
performance compared with prior devices, I would assume!
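The session setup I have in mind is roughly this (a sketch; the 5 ms request is
illustrative and the system may grant a larger buffer):

import AVFoundation

func configureLowLatencySession() throws {
    let session = AVAudioSession.sharedInstance()
    // Measurement mode removes AGC/EQ processing on the input path.
    try session.setCategory(.playAndRecord, mode: .measurement, options: [])
    try session.setPreferredIOBufferDuration(0.005)  // request ~5 ms buffers
    try session.setActive(true)
    print("Granted IO buffer duration:", session.ioBufferDuration)
}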

> On 9 Nov 2018, at 21:18, Dean Reyburn  wrote:
> 
> Hi Arshia,
> I’m using AVAudioSessionCategoryPlayAndRecord but maybe I should use 
> AVAudioSessionCategoryRecord since my app does not itself do any playback. I 
> do wish to let the user listen to background music on headphones if they wish.
> 
> Apple is suggesting I try the .measurement or AVAudioSessionModeMeasurement 
> setting. Need to test that.
> 
> I use AudioUnits, the lowest level audio input possible.
> 
> Sorry I don’t know where all the mics are, I could only find four of them, 
> three on the front, one on the back.  My app only wants to use one mic.
> 
> -Dean Reyburn
> 
>> On Nov 9, 2018, at 11:36 AM, Arshia Cont > <mailto:arshiac...@antescofo.com>> wrote:
>> 
>> Hi Dean,
>> 
>> What AVAudioSession Category/Mode are you using? Beyond this, you should 
>> probably tell us how you record (AVAudioEngine? AUnit?).. My first 
>> suggestion si to check the Stream format of your audio chain with that 
>> provided by the system (such as sample rate etc.).
>> 
>> Do you mind sharing the positions of the 5 microphones?! Time to buy one of 
>> those I guess! :)
>> 
>> Arshia
>> 
>>> On 9 Nov 2018, at 17:29, Dean Reyburn >> <mailto:d...@reyburn.com>> wrote:
>>> 
>>> Hi all,
>>> My app, CyberTuner does low level audio recording and custom DSP, then 
>>> displays the results in a way useful to pro piano tuners.  On iPad Pro 
>>> generation 3 which was just released on Nov. 7th, the audio appears to be 
>>> distorted. This is the first time I’ve ever seen any general recording 
>>> issues with iOS devices.
>>> 
>>> There are five (!!) microphones on these new devices so I suspect that fact 
>>> has something to do with the problem. Any suggestions or anyone else see 
>>> issues with the new iPads?
>>> 
>>> My app simply uses the default microphone.
>>> 
>>> Thanks in advance,
>>> 
>>> -Dean Reyburn
>>> d...@reyburn.com <mailto:d...@reyburn.com>
>>> www.cybertuner.com
>> 
> 
> 1-888-SOFT-440  or 1-888-763-8440
> Reyburn CyberTuner, Inc.
> http://www.cybertuner.com <http://www.cybertuner.com/>



Re: iPad Pro 3 audio distortion?

2018-11-09 Thread Arshia Cont
Hi Dean,

What AVAudioSession category/mode are you using? Beyond this, you should probably
tell us how you record (AVAudioEngine? AudioUnits?). My first suggestion is to
check the stream format of your audio chain against the one provided by the system
(sample rate, etc.).
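As a quick sanity check, something like this (a sketch; adapt it to however your
chain is built, e.g. compare against your AudioUnit's stream format instead of an
AVAudioEngine input node):

import AVFoundation

func checkFormats(engine: AVAudioEngine) {
    let hardwareRate = AVAudioSession.sharedInstance().sampleRate
    let chainRate = engine.inputNode.inputFormat(forBus: 0).sampleRate
    if hardwareRate != chainRate {
        print("Stream format mismatch: hardware \(hardwareRate) Hz, chain \(chainRate) Hz")
    }
}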

Do you mind sharing the positions of the 5 microphones?! Time to buy one of 
those I guess! :)

Arshia

> On 9 Nov 2018, at 17:29, Dean Reyburn  wrote:
> 
> Hi all,
> My app, CyberTuner does low level audio recording and custom DSP, then 
> displays the results in a way useful to pro piano tuners.  On iPad Pro 
> generation 3 which was just released on Nov. 7th, the audio appears to be 
> distorted. This is the first time I’ve ever seen any general recording issues 
> with iOS devices.
> 
> There are five (!!) microphones on these new devices so I suspect that fact 
> has something to do with the problem. Any suggestions or anyone else see 
> issues with the new iPads?
> 
> My app simply uses the default microphone.
> 
> Thanks in advance,
> 
> -Dean Reyburn
> d...@reyburn.com
> www.cybertuner.com



VoiceProcessingIO Quality / Muting Control on iOS

2018-09-26 Thread Arshia Cont
Hello list,

I am testing the quality of Echo Suppression in VoiceProcessingIO subtype of 
AudioUnit.

I realise that I do not have the same quality from device to device, which I 
can partly understand.

What I see most is this: in a "double-talk" scenario, I get more muting on iPads
than on iPhones. Even among iPhones this differs from model to model, and the
iPhone X does not seem to be the best on my list! It seems to me that there is
muting on top of adaptive filtering in action here. I have been playing around with
the available parameters (such as AGC), which evidently have no effect on the
quality or muting. Thus my questions:

* Is there a way to control the Muting effect?!

* Is there a way to control the overall quality? I know that there is a deprecated
parameter for this. I assume that CoreAudio now chooses that parameter based on
device/context capabilities. I understand this design choice! But it would be great
to let developers "pay" for what they ask for (meaning I would have to optimise
more to get better results, if possible).
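For context, the few public knobs I know of are shown below (a sketch; voiceIOUnit
is assumed to be an already-created kAudioUnitSubType_VoiceProcessingIO instance,
error checks are omitted, and neither property controls the muting behaviour I am
describing):

import AudioToolbox

func configureVoiceProcessing(_ voiceIOUnit: AudioUnit,
                              enableAGC: Bool,
                              bypass: Bool) {
    // Toggle automatic gain control on the voice-processing unit.
    var agc: UInt32 = enableAGC ? 1 : 0
    AudioUnitSetProperty(voiceIOUnit,
                         kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                         kAudioUnitScope_Global, 0,
                         &agc, UInt32(MemoryLayout<UInt32>.size))

    // Bypass voice processing entirely (plain I/O, no echo cancellation).
    var bypassFlag: UInt32 = bypass ? 1 : 0
    AudioUnitSetProperty(voiceIOUnit,
                         kAUVoiceIOProperty_BypassVoiceProcessing,
                         kAudioUnitScope_Global, 0,
                         &bypassFlag, UInt32(MemoryLayout<UInt32>.size))
}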


Cheers,



Arshia Cont


Re: AUGraph deprecation

2018-07-11 Thread Arshia Cont
Bartosz,

Laurent was referring to installTapOnBus. Your published code would not need that;
you are just playing MIDI. You would only be concerned if you had to do custom
real-time audio processing on the audio output of your MIDI device (such as FFT
analysis).

Arshia

> On 11 Jul 2018, at 16:04, Bartosz Nowotny  wrote:
> 
> Laurent,
> 
> What you said about not being able to achieve latency lower than 100ms is 
> worrisome. I need a realtime MIDI synth, low latency is absolutely crucial. 
> Does the limitation you mention apply only to signal processing or other 
> applications of the API as well, in particular MIDI synthesis?
> 
> Regards,
> Bartosz
> 
> 
> On Wed, Jul 11, 2018 at 3:30 PM, Laurent Noudohounsi 
> mailto:laurent.noudohou...@gmail.com>> wrote:
> Thanks Benjamin for the precision. I thought that `installTapOnBus` was the 
> successor of `RenderCallback`. 
> For me it was not natural to mix old api like 
> `kAudioUnitProperty_SetRenderCallback` in AVAudioEngine.
> 
> So as Arshia said, I'm also looking for a way to use real-time processing 
> with AVAudioEngine.
> 
> Le mer. 11 juil. 2018 à 15:05, Arshia Cont  <mailto:arshiac...@antescofo.com>> a écrit :
> Interesting thread here!
> 
> Anyone has achieved low-latency processing on AVAudioEngine? 
> 
> The RenderCallback seems natural to me (which is the good “old” way of doing 
> it with AUGraph). But I’m curious to hear if anyone has done/achieved real 
> stuff here with AVAudioEngine real-time processing and how.
> 
> 
> Arshia 
> 
> 
>> On 11 Jul 2018, at 15:00, Benjamin Federer > <mailto:benja...@boinx.com>> wrote:
>> 
>> Laurent,
>> 
>> `installTapOnBus` is not intended for realtime processing as a tap only 
>> provides the current frame buffer but does not pass it back into the signal 
>> chain. The documentation reads `Installs an audio tap on the bus to record. 
>> monitor, and observe the output of the node`.
>> 
>> Although I have not done that myself yet my understanding is that for 
>> realtime processing you can still retrieve the underlying audio unit from an 
>> AVAudioNode (or at least some nodes?) and attach an input render callback 
>> via AudioUnitSetProperty with kAudioUnitProperty_SetRenderCallback.
>> 
>> I assume the other way would be to subclass AUAudioUnit and wrap that into 
>> an AVAudioUnit which is a subclass of AVAudioNode. Yes, it confuses me, too. 
>> Random Google result with further information: 
>> https://forums.developer.apple.com/thread/72674 
>> <https://forums.developer.apple.com/thread/72674>
>> 
>> Benjamin
>> 
>> 
>>> Am 11.07.2018 um 14:34 schrieb Laurent Noudohounsi 
>>> mailto:laurent.noudohou...@gmail.com>>:
>>> 
>>> Hi all,
>>> 
>>> I'm interested in this topic since I've not found any information about it 
>>> yet.
>>> 
>>> Correct me if I'm wrong but AVAudioEngine is not able to lower than 100ms 
>>> latency. It's what I see in the header file of `AVAudioNode` with its 
>>> method `installTapOnBus`: 
>>> 
>>> @param bufferSize the requested size of the incoming buffers in sample 
>>> frames. Supported range is [100, 400] ms.
>>> 
>>> Maybe I'm wrong but I don't see any other way to have a lower latency audio 
>>> processing in an AVAudioNode.
>>> 
>>> Best,
>>> Laurent
>>> 
>>> Le mer. 11 juil. 2018 à 13:57, Arshia Cont >> <mailto:arshiac...@antescofo.com>> a écrit :
>>> Benjamin and list,
>>> 
>>> I double Benjamin’s request. It would be great if someone from the 
>>> CoreAudio Team could respond to the question.
>>> 
>>> Two years ago, after basic tests I realised that AVAudioEngine was not 
>>> ready for Low Latency Audio analysis on iOS. So we used AUGraph. I have a 
>>> feeling that this is no longer the case on iOS and we can move to 
>>> AVAudioEngine for low-latency audio processing. Anyone can share experience 
>>> here? We do real-time spectral analysis and resynthesis of sound and go as 
>>> low as 64 samples per cycle if the device allows.
>>> 
>>> Thanks in advance.
>>> 
>>> 
>>> Arshia
>>> 
>>> 
>>> PS: I actually brought the deprecation issue of AUGraph in a local Apple 
>>> Dev meeting where the EU director of developer relation was present. 
>>> According to him, when Apple announces a deprecation, it WILL happen. My 
>>> interpretation of the conversation is that AUGraph is no longer maintained but
>>> provided as is.

Re: AUGraph deprecation

2018-07-11 Thread Arshia Cont
Interesting thread here!

Has anyone achieved low-latency processing with AVAudioEngine?

The render callback seems natural to me (it is the good "old" way of doing it with
AUGraph), but I'm curious to hear whether anyone has achieved real results with
AVAudioEngine real-time processing, and how.


Arshia 

> On 11 Jul 2018, at 15:00, Benjamin Federer  wrote:
> 
> Laurent,
> 
> `installTapOnBus` is not intended for realtime processing as a tap only 
> provides the current frame buffer but does not pass it back into the signal 
> chain. The documentation reads `Installs an audio tap on the bus to record. 
> monitor, and observe the output of the node`.
> 
> Although I have not done that myself yet my understanding is that for 
> realtime processing you can still retrieve the underlying audio unit from an 
> AVAudioNode (or at least some nodes?) and attach an input render callback via 
> AudioUnitSetProperty with kAudioUnitProperty_SetRenderCallback.
> 
> I assume the other way would be to subclass AUAudioUnit and wrap that into an 
> AVAudioUnit which is a subclass of AVAudioNode. Yes, it confuses me, too. 
> Random Google result with further information: 
> https://forums.developer.apple.com/thread/72674 
> <https://forums.developer.apple.com/thread/72674>
> 
> Benjamin
> 
> 
>> Am 11.07.2018 um 14:34 schrieb Laurent Noudohounsi 
>> mailto:laurent.noudohou...@gmail.com>>:
>> 
>> Hi all,
>> 
>> I'm interested in this topic since I've not found any information about it 
>> yet.
>> 
>> Correct me if I'm wrong but AVAudioEngine is not able to lower than 100ms 
>> latency. It's what I see in the header file of `AVAudioNode` with its method 
>> `installTapOnBus`: 
>> 
>> @param bufferSize the requested size of the incoming buffers in sample 
>> frames. Supported range is [100, 400] ms.
>> 
>> Maybe I'm wrong but I don't see any other way to have a lower latency audio 
>> processing in an AVAudioNode.
>> 
>> Best,
>> Laurent
>> 
>> Le mer. 11 juil. 2018 à 13:57, Arshia Cont > <mailto:arshiac...@antescofo.com>> a écrit :
>> Benjamin and list,
>> 
>> I double Benjamin’s request. It would be great if someone from the CoreAudio 
>> Team could respond to the question.
>> 
>> Two years ago, after basic tests I realised that AVAudioEngine was not ready 
>> for Low Latency Audio analysis on iOS. So we used AUGraph. I have a feeling 
>> that this is no longer the case on iOS and we can move to AVAudioEngine for 
>> low-latency audio processing. Anyone can share experience here? We do 
>> real-time spectral analysis and resynthesis of sound and go as low as 64 
>> samples per cycle if the device allows.
>> 
>> Thanks in advance.
>> 
>> 
>> Arshia
>> 
>> 
>> PS: I actually brought the deprecation issue of AUGraph in a local Apple Dev 
>> meeting where the EU director of developer relation was present. According 
>> to him, when Apple announces a deprecation, it WILL happen. My 
>> interpretation of the conversation is that AUGraph is no longer maintained 
>> but provided as is.
>> 
>>> On 11 Jul 2018, at 12:36, Benjamin Federer >> <mailto:benja...@boinx.com>> wrote:
>>> 
>>> Since it was mentioned in another email (thread) I’m giving this topic a 
>>> bump. Would be great if someone at Apple, or anyone else in the know, could 
>>> take the time to respond. The documentation at the link cited below still 
>>> has no indication of deprecation. Will it come with one of the next Xcode 
>>> Beta releases?
>>> 
>>> On another note I am really interested in how transitioning over to 
>>> AVAudioEngine is working out for everyone. I know AVAudioEngine on iOS. 
>>> What I am interested in is any macOS specifics or hardships.
>>> 
>>> From my experience AVAudioEngine is relatively robust in handling multiple 
>>> graphs, i.e. separate chains of audio units. I had some issues with the 
>>> AVAudioPlayerNode connecting to multiple destinations in that scenario. 
>>> Also connect:toConnectionPoints:fromBus:format: did not work for me as it 
>>> only connected to one of the destination points. Anyone else experienced 
>>> problems in that regard?
>>> 
>>> Thanks
>>> 
>>> Benjamin
>>> 
>>> 
>>>> Am 08.06.2018 um 16:59 schrieb Benjamin Federer >>> <mailto:benja...@boinx.com>>:
>>>> 
>>>> Last year at WWDC it was announced that AUGraph would be deprecated in 
>>>> 2018. I just brow

Re: AUGraph deprecation

2018-07-11 Thread Arshia Cont
Benjamin and list,

I second Benjamin's request. It would be great if someone from the CoreAudio team
could respond to the question.

Two years ago, after basic tests, I realised that AVAudioEngine was not ready for
low-latency audio analysis on iOS, so we used AUGraph. I have a feeling that this
is no longer the case on iOS and that we can move to AVAudioEngine for low-latency
audio processing. Can anyone share experience here? We do real-time spectral
analysis and resynthesis of sound, and go as low as 64 samples per cycle if the
device allows.

Thanks in advance.


Arshia


PS: I actually brought up the AUGraph deprecation issue at a local Apple dev
meeting where the EU director of developer relations was present. According to him,
when Apple announces a deprecation, it WILL happen. My interpretation of the
conversation is that AUGraph is no longer maintained but provided as is.

> On 11 Jul 2018, at 12:36, Benjamin Federer  wrote:
> 
> Since it was mentioned in another email (thread) I’m giving this topic a 
> bump. Would be great if someone at Apple, or anyone else in the know, could 
> take the time to respond. The documentation at the link cited below still has 
> no indication of deprecation. Will it come with one of the next Xcode Beta 
> releases?
> 
> On another note I am really interested in how transitioning over to 
> AVAudioEngine is working out for everyone. I know AVAudioEngine on iOS. What 
> I am interested in is any macOS specifics or hardships.
> 
> From my experience AVAudioEngine is relatively robust in handling multiple 
> graphs, i.e. separate chains of audio units. I had some issues with the 
> AVAudioPlayerNode connecting to multiple destinations in that scenario. Also 
> connect:toConnectionPoints:fromBus:format: did not work for me as it only 
> connected to one of the destination points. Anyone else experienced problems 
> in that regard?
> 
> Thanks
> 
> Benjamin
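
A minimal sketch of the multi-destination connection described above. The node
names are hypothetical, and whether both points actually receive audio is
exactly the question being raised:

import AVFoundation

// One player feeding two mixers through connect(_:to:fromBus:format:) with an
// array of AVAudioConnectionPoint.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let mixerA = AVAudioMixerNode()
let mixerB = AVAudioMixerNode()
[player, mixerA, mixerB].forEach { engine.attach($0) }
engine.connect(mixerA, to: engine.mainMixerNode, format: nil)
engine.connect(mixerB, to: engine.mainMixerNode, format: nil)

// Each destination must be an open input bus on its node.
let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 2)
let points = [
    AVAudioConnectionPoint(node: mixerA, bus: mixerA.nextAvailableInputBus),
    AVAudioConnectionPoint(node: mixerB, bus: mixerB.nextAvailableInputBus)
]
engine.connect(player, to: points, fromBus: 0, format: format)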
> 
> 
>> On 08.06.2018 at 16:59, Benjamin Federer <benja...@boinx.com> wrote:
>> 
>> Last year at WWDC it was announced that AUGraph would be deprecated in 2018. 
>> I just browsed the documentation 
>> (https://developer.apple.com/documentation/audiotoolbox?changes=latest_major) 
>> but found Audio Unit Processing Graph Services not marked for deprecation. 
>> The AUGraph 
>> header files rolled out with Xcode 10 beta also have no mention of a 
>> deprecation in 10.14. I searched for audio-specific sessions at this year’s 
>> WWDC but wasn’t able to find anything relevant. Has anyone come across new 
>> information regarding this?
>> 
>> Judging by how many changes and features Apple seems to be holding back 
>> until next year, I dare ask: has the AUGraph API deprecation been moved to a 
>> later time?
>> 
>> Benjamin
> 

 ___
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list  (Coreaudio-api@lists.apple.com)
Help/Unsubscribe/Update your Subscription:
https://lists.apple.com/mailman/options/coreaudio-api/archive%40mail-archive.com

This email sent to arch...@mail-archive.com


Re: AUGraph reconfiguration in response to route change

2018-07-10 Thread Arshia Cont
Bartosz,

Looking at your example and comparing it to mine (which is getting old by now), 
I would try two things:

(1) I believe that AUGraphUninitialize alters (or unloads) the SoundFonts and 
hence your new Note On calls end up nowhere. 

(2) In my setup for updating the sample rate (as a result of a route change), I 
stop the AUGraph, update the stream formats, make the connections, initialize 
the AUGraph, run AUGraphUpdate, and then restart the graph. I suspect the 
ordering here matters; a rough sketch follows below.
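
For what it's worth, a rough sketch of that ordering with the C API; the names
and the lack of error handling are illustrative only, not my actual code:

import AudioToolbox

// Rebuild after a route change: stop, update formats, reconnect, initialize,
// update, restart. OSStatus results are ignored here for brevity.
func rebuildAfterRouteChange(graph: AUGraph,
                             mixerUnit: AudioUnit,
                             synthNodes: [AUNode],
                             mixerNode: AUNode,
                             ioNode: AUNode,
                             sampleRate: Float64) {
    AUGraphStop(graph)

    // Update stream formats: here only the mixer output sample rate, which
    // should propagate to the IO unit's input.
    var rate = sampleRate
    AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_SampleRate,
                         kAudioUnitScope_Output, 0,
                         &rate, UInt32(MemoryLayout<Float64>.size))

    // Re-make the connections.
    AUGraphClearConnections(graph)
    for (bus, synth) in synthNodes.enumerated() {
        AUGraphConnectNodeInput(graph, synth, 0, mixerNode, UInt32(bus))
    }
    AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0)

    // If AUGraphUninitialize was called earlier, AUMIDISynth sound banks may
    // need to be reloaded around this point (see point 1 above).
    AUGraphInitialize(graph)
    AUGraphUpdate(graph, nil)
    AUGraphStart(graph)
}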

That said, we are both in the danger zone since AUGraph is slated for 
deprecation. Any updates on this?!


Arshia Cont
www.antescofo.com

> On 10 Jul 2018, at 20:53, Bartosz Nowotny  wrote:
> 
> Yes, I have tried AVAudioSession route change notification as well - same 
> result.
> 
> Even though the code can handle multiple AUMIDISynths, I have set it up so 
> that for testing only 1 is ever created.
> 
> There is definitely something weird going on: if I respond to the route 
> change and then try to play some notes - the app always crashes. If I plug in 
> my headphones while some notes are being played, it either continues to play, 
> goes to silence or crashes.
> 
> I have created a snippet that shows how my set up looks like: 
> https://hastebin.com/ugumilofid.m
> At the top, you can find a brief explanation of what the included code is 
> supposed to do, what the test scenario is, and what the app output is.
> 
> I am out of ideas as to what is wrong here. Is it possible that this is a 
> threading issue? Other than the route change handling the set up works great: 
> I can load up multiple soundfonts, play notes, unload soundfonts, shut the 
> graph down and then start it again later.
> 
> Regards,
> Bartosz
> 
> On Tue, Jul 10, 2018 at 11:20 AM, Sven Thoennissen <bioch...@me.com> wrote:
> Hello,
> 
> It may have to do something with the property listener. Have you tried using 
> AVAudioSession.routeChangeNotification instead?
> Did you connect more than 8 AUMIDISynths to your mixer? (IIRC 8 is the maximum 
> possible, at least with AVAudioMixerNode.)
> 
> In any case it is hard to tell without seeing code. I recommend setting up a 
> minimal test project with only 1 AUMIDISynth.
> 
> Have you considered using AVFoundation classes instead of the old V2 API? (at 
> least to reproduce the problem)
> 
> Best regards,
> Sven
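
For reference, a minimal sketch of the session-level route-change observation
suggested above; the reason handling and the rebuildGraph() call are
placeholders, not code from this thread:

import AVFoundation

// Observe AVAudioSession.routeChangeNotification instead of (or in addition to)
// a kAudioUnitProperty_StreamFormat property listener. Keep the returned token
// alive for as long as the observation is needed.
let routeChangeObserver = NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil,
    queue: .main
) { note in
    let reasonValue = note.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt
    let reason = reasonValue.flatMap(AVAudioSession.RouteChangeReason.init(rawValue:))
    print("route changed, reason:", String(describing: reason))
    // rebuildGraph()  // hypothetical: the stop/reformat/reconnect sequence above
}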
> 
> > On 09.07.2018 at 21:32, Bartosz Nowotny <bartosznowo...@gmail.com> wrote:
> > 
> > Hello,
> > 
> > I am trying to programmatically reconfigure AUGraph at runtime, in response 
> > to a route change.
> > 
> > My setup consists of a couple of AUMIDISynth nodes connected to a 
> > multichannel mixer node, which in turn is connected to a RemoteIO node. This 
> > setup works fine and I am able to produce audio by sending MIDI note on/off 
> > events.
> > 
> > I want to avoid audio data resampling at any point in the graph. I can 
> > start with a properly set up AUGraph that has all audio units use the same 
> > sample rate (starting from midi synths, through mixer to remote IO). Route 
> > changes (e.g. plugging in headphones) can change the output sample rate and 
> > thus introduce resampling and other side effects.
> > 
> > To respond to a route change I set up a property listener for StreamFormat 
> > on my IO unit. When the stream format changes, I call a method that 
> > reconfigures the AUGraph in the following manner:
> > 1. Stop the AUGraph
> > 2. Uninitialize the AUGraph
> > 3. Clear all graph connections
> > 4. Set mixer output sample rate (propagates to IO input sample rate)
> > 5. Set synth output sample rates (propagates to mixer input sample rates)
> > 6. Connect synth nodes to mixer node
> > 7. Connect mixer node to IO node
> > 8. Update, Initialize and Start the AUGraph
> > 
> > None of the above operations returns an error result.
> > 
> > The issue occurs when I send some note on/off events - the app crashes. 
> > What am I missing?
> > 
> > Regards,
> > Bartosz Nowotny

IPad Pro/iPhone X internal speaker route config

2018-06-11 Thread Arshia Cont
Hi list 

Has anyone here been able to configure the internal speakers (2x2) on iPad Pro 
or iPhone X so as to use 2 of them (bottom or top) instead of all 4?

AVAudioSession’s setPreferredOutputNumberOfChannels seems ineffective and the 
route shows 2 stereo outputs.

This seems to happen automatically in voice-processing mode, but we would like 
to avoid the overhead there and simply configure the speaker route based on the 
selected input mic (to avoid feedback).
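
For concreteness, here is a minimal sketch of the kind of channel request
described above; the category, options and exact call are reconstructed for
illustration, not taken from our code:

import AVFoundation

// Ask for a 2-channel output route and log what the session actually reports.
// On the 4-speaker devices this request is what appears to be ignored.
func requestStereoSpeakerOutput() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
    try session.setActive(true)
    try session.setPreferredOutputNumberOfChannels(2)
    print("output channels now:", session.outputNumberOfChannels)
    print("current outputs:", session.currentRoute.outputs.map { $0.portType })
}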

Thanks in advance 

Arshia Cont

Sent from my iPhone
 ___
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list  (Coreaudio-api@lists.apple.com)
Help/Unsubscribe/Update your Subscription:
https://lists.apple.com/mailman/options/coreaudio-api/archive%40mail-archive.com

This email sent to arch...@mail-archive.com


Re: iOS : ExtAudioFileWriteAsync alternatives / Audio Queue Services availability?

2018-02-14 Thread Arshia Cont
Huh… just needed someone else to say it… back to basics. :)

Thanks Matt for taking time to reply.

Sent from my iPhone

> On 14 Feb 2018, at 21:31, Matt Ingalls <m...@8dio.com> wrote:
> 
> OK I see what you are doing now.
> 
> Sounds like increasing kExtAudioFileProperty_IOBufferSizeBytes could prevent 
> an overflow.
> but maybe doing your own async writes would be safer :).  ExtAudioFile is a 
> pretty old API..
> 
> -m
> 
>> On Feb 14, 2018, at 11:24 AM, Arshia Cont <arshiac...@antescofo.com> wrote:
>> 
>> Matt,
>> 
>> Thanks for the reply! :)
>> 
>> AVAudioFile would be one way to go, except that I am recording live streams, 
>> and with AVAudioFile I would have to make the ‘write’ asynchronous myself and 
>> make sure the direct-to-disk stuff won’t hiccup the audio I/O thread.
>> 
>> This is supposed to be the promise of ‘ExtAudioFileWriteAsync’.
>> 
>> Has anyone tried playing around with the 
>> kExtAudioFileProperty_IOBufferSizeBytes property of Extended Audio File 
>> Services? I have a feeling that this is the way to go (increasing the buffer 
>> so that writes occur less frequently).
>> 
>> Apparently kExtAudioFileError_AsyncWriteBufferOverflow is returned when write 
>> requests arrive faster than the system can flush them to disk (which can 
>> happen at any time, all of a sudden). The bad news is that when this happens 
>> it actually halts the I/O thread! I can’t figure out why…
>> 
>> A hint that may help: I am recording two live (stereo) streams at the same 
>> time, i.e. two audio files.
>> 
>> Any hints?
>> 
>>> On 14 Feb 2018, at 19:01, Matt Ingalls <m...@8dio.com> wrote:
>>> 
>>> Have you tried AVAudioFile?
>>> 
>>>> On Feb 13, 2018, at 12:59 PM, Arshia Cont <arshiac...@antescofo.com> wrote:
>>>> 
>>>> Hello list,
>>>> 
>>>> This is my first post here so sorry if this is already asked!
>>>> 
>>>> We have been using ExtAudioFileWriteAsync on iOS successfully to write two 
>>>> PCM audio streams to disk. We have been getting reports from users 
>>>> complaining about sudden audio dropouts on some [older] devices during 
>>>> “performance”. We finally traced the root of the problem to the 
>>>> kExtAudioFileError_AsyncWriteBufferOverflow error, which (probably?) means 
>>>> that ExtAudioFileWriteAsync is not keeping pace with the system’s writes 
>>>> to disk.
>>>> 
>>>> So I believe we must use lower level calls here (?).
>>>> 
>>>> What would be an alternative for ExtAudioFileWriteAsync?
>>> 
>>>> 
>>>> I have been looking at Audio Queue Services for months but 
>>>> documentation/examples are so old that it keeps me away from it… or should 
>>>> we dive in?
>>>> 
>>>> Thanks in advance,
>>>> 
>>>> 
>>>> Arshia Cont
>>> 
>> 
> 
 ___
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list  (Coreaudio-api@lists.apple.com)
Help/Unsubscribe/Update your Subscription:
https://lists.apple.com/mailman/options/coreaudio-api/archive%40mail-archive.com

This email sent to arch...@mail-archive.com


Re: iOS : ExtAudioFileWriteAsync alternatives / Audio Queue Services availability?

2018-02-14 Thread Arshia Cont
Matt,

Thanks for the reply! :)

AVAudioFile would be one way to go, except that I am recording live streams, and 
with AVAudioFile I would have to make the ‘write’ asynchronous myself and make 
sure the direct-to-disk stuff won’t hiccup the audio I/O thread.

This is supposed to be the promise of ‘ExtAudioFileWriteAsync’.
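
For reference, a rough sketch of what “making the write asynchronous myself”
could look like with AVAudioFile and a serial queue; the class and names are
hypothetical and error handling is omitted:

import AVFoundation

// Writes PCM buffers to disk off the audio thread. Assumes the source hands
// over buffers it will not reuse (copy them first if that is not the case).
final class BackgroundRecorder {
    private let file: AVAudioFile
    private let queue = DispatchQueue(label: "recorder.disk-writes")

    init(url: URL, format: AVAudioFormat) throws {
        file = try AVAudioFile(forWriting: url, settings: format.settings)
    }

    // Call from a tap or render callback; the actual disk write happens on the
    // serial background queue, so the audio thread never blocks on I/O.
    func append(_ buffer: AVAudioPCMBuffer) {
        queue.async {
            try? self.file.write(from: buffer)
        }
    }
}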

Has anyone tried playing around with the 
kExtAudioFileProperty_IOBufferSizeBytes property of Extended Audio File 
Services? I have a feeling that this is the way to go (increasing the buffer so 
that writes occur less frequently).
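
For concreteness, a sketch of setting that property; the 512 KB value is purely
illustrative, and the zero-frame priming call is commonly used to initialize the
asynchronous write machinery from a non-time-critical thread:

import AudioToolbox

// Enlarge the async write buffer on an ExtAudioFileRef that is already open for
// writing with its client data format set.
func enlargeAsyncWriteBuffer(_ extFile: ExtAudioFileRef) -> OSStatus {
    var bufferSizeBytes: UInt32 = 512 * 1024
    let status = ExtAudioFileSetProperty(extFile,
                                         kExtAudioFileProperty_IOBufferSizeBytes,
                                         UInt32(MemoryLayout<UInt32>.size),
                                         &bufferSizeBytes)
    // An initial call with 0 frames and a nil buffer primes the asynchronous
    // write mechanism before the first real-time write.
    ExtAudioFileWriteAsync(extFile, 0, nil)
    return status
}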

Apparently kExtAudioFileError_AsyncWriteBufferOverflow is returned when write 
requests arrive faster than the system can flush them to disk (which can happen 
at any time, all of a sudden). The bad news is that when this happens it 
actually halts the I/O thread! I can’t figure out why…

A hint that may help: I am recording two live (stereo) streams at the same time, 
i.e. two audio files.

Any hints?

> On 14 Feb 2018, at 19:01, Matt Ingalls <m...@8dio.com> wrote:
> 
> Have you tried AVAudioFile?
> 
>> On Feb 13, 2018, at 12:59 PM, Arshia Cont <arshiac...@antescofo.com> wrote:
>> 
>> Hello list,
>> 
>> This is my first post here so sorry if this is already asked!
>> 
>> We have been using ExtAudioFileWriteAsync on iOS successfully to write two 
>> PCM audio streams to disk. We have been getting reports from users 
>> complaining about sudden audio dropouts on some [older] devices during 
>> “performance”. We finally traced the root of the problem to the 
>> kExtAudioFileError_AsyncWriteBufferOverflow error, which (probably?) means 
>> that ExtAudioFileWriteAsync is not keeping pace with the system’s writes to 
>> disk.
>> 
>> So I believe we must use lower level calls here (?).
>> 
>> What would be an alternative for ExtAudioFileWriteAsync?
> 
>> 
>> I have been looking at Audio Queue Services for months, but the 
>> documentation/examples are so old that they have kept me away from it… or 
>> should we dive in?
>> 
>> Thanks in advance,
>> 
>> 
>> Arshia Cont
> 

 ___
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list  (Coreaudio-api@lists.apple.com)
Help/Unsubscribe/Update your Subscription:
https://lists.apple.com/mailman/options/coreaudio-api/archive%40mail-archive.com

This email sent to arch...@mail-archive.com


iOS : ExtAudioFileWriteAsync alternatives / Audio Queue Services availability?

2018-02-13 Thread Arshia Cont
Hello list,

This is my first post here, so sorry if this has already been asked!

We have been using ExtAudioFileWriteAsync on iOS successfully to write two PCM 
audio streams to disk. We have been getting reports from users complaining about 
sudden audio dropouts on some [older] devices during “performance”. We finally 
traced the root of the problem to the kExtAudioFileError_AsyncWriteBufferOverflow 
error, which (probably?) means that ExtAudioFileWriteAsync is not keeping pace 
with the system’s writes to disk.

So I believe we must use lower level calls here (?).

What would be an alternative for ExtAudioFileWriteAsync?

I have been looking at Audio Queue Services for months, but the 
documentation/examples are so old that they have kept me away from it… or should 
we dive in?

Thanks in advance,


Arshia Cont
 ___
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list  (Coreaudio-api@lists.apple.com)
Help/Unsubscribe/Update your Subscription:
https://lists.apple.com/mailman/options/coreaudio-api/archive%40mail-archive.com

This email sent to arch...@mail-archive.com