Application crash when calling AudioUnitUninitialize

2019-01-03 Thread Bartosz Nowotny
Hello

I have an application that plays MIDI songs with different instruments.
When the user picks a song, a new MIDISynth node is added to the audio
graph and the related soundfont is loaded. When the user decides to play a
different song, they have to go back, which causes the audio unit to be
uninitialized and the graph to be updated. Then the user can pick a new song
to play.

The issue is that sometimes when the user decides to go back (which removes
and uninitializes a node), the application crashes.

When the application crashes, the last call I make is
AudioUnitUninitialize. The crashing thread is AURemoteIO::IOThread,
crashing at SamplerBaseElement::IncrementActiveLayerVoiceCount. The last
thing I see in the logs is: '238: Illegal decrement of empty layer bin count'

The uninitialization flow my app uses is this (a rough code sketch follows the list):
1. Disconnect synth node from the mixer (AUGraphDisconnectNodeInput)
2. Remove synth node from the graph (AUGraphRemoveNode)
3. Uninitialize the synth audio unit (AudioUnitUninitialize) - crash
4. Update the graph (AUGraphUpdate)
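
In code, the flow is roughly this (simplified sketch; error handling is omitted,
and graph, mixerNode, mixerInputBus, synthNode and synthUnit are the objects my
app already holds for the song being closed):

OSStatus err;
err = AUGraphDisconnectNodeInput(graph, mixerNode, mixerInputBus); // 1. disconnect from mixer
err = AUGraphRemoveNode(graph, synthNode);                         // 2. remove from graph
err = AudioUnitUninitialize(synthUnit);                            // 3. crash happens here
err = AUGraphUpdate(graph, NULL);                                  // 4. apply changes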

From my testing, the crash is seemingly random - it takes anywhere from
2 to 20 repeats to reproduce (pick a song, go back, then repeat). The music
plays as intended. Tested on an iPhone XS with iOS 12.1.2.

I haven't tested this on a mono iPhone yet. Will try to reproduce and get
back with findings.

Any ideas what's wrong here? Could this be a threading issue?

Regards,
Bartosz Nowotny


Re: AUGraph reconfiguration in response to route change

2018-07-11 Thread Bartosz Nowotny
Arshia,

Thank you for your suggestions. I double checked that soundfonts are
properly loaded and tried reordering the AUGraph operations to match yours.
Unfortunately, the problem persists.

In the meantime, I spiked an approach where, in response to a route change, I
completely destroy the AUGraph and recreate it from scratch. This approach works
fine, with the only downside being the time it takes to reconstruct the
graph from scratch - during that time no notes can play. Nonetheless, this
is good enough for my app.
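
In rough outline, the rebuild looks like this (simplified sketch only;
createAndConnectNodes and loadSoundfonts stand in for my own setup helpers,
and error handling is omitted):

OSStatus err;
err = AUGraphStop(graph);
err = AUGraphUninitialize(graph);
err = AUGraphClose(graph);
err = DisposeAUGraph(graph);
graph = NULL;

err = NewAUGraph(&graph);
err = AUGraphOpen(graph);
createAndConnectNodes(graph);   // re-add synth, mixer and remote IO nodes at the new sample rate
err = AUGraphInitialize(graph);
loadSoundfonts();               // reload the soundfonts into the fresh synth units
err = AUGraphStart(graph);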

Why changing the sample rate of AudioUnits at runtime leads to crashes when
playing notes remains a mystery. For the next iteration of my app I will
migrate to the new API, using AVAudioEngine and AVAudioUnitSampler.
Hopefully that will be enough to avoid the problem altogether.
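
For reference, my rough (untested) understanding of that setup in Objective-C is
the following; "MySoundfont.sf2" is a placeholder for one of my real soundfonts:

NSError *error = nil;
NSURL *soundfontURL = [[NSBundle mainBundle] URLForResource:@"MySoundfont"
                                             withExtension:@"sf2"];
AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioUnitSampler *sampler = [[AVAudioUnitSampler alloc] init];
[engine attachNode:sampler];
[engine connect:sampler to:engine.mainMixerNode format:nil];
[sampler loadSoundBankInstrumentAtURL:soundfontURL
                              program:0
                              bankMSB:kAUSampler_DefaultMelodicBankMSB
                              bankLSB:kAUSampler_DefaultBankLSB
                                error:&error];
[engine startAndReturnError:&error];
[sampler startNote:60 withVelocity:100 onChannel:0];   // note on, middle C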

Regards,
Bartosz


On Tue, Jul 10, 2018 at 10:17 PM, Arshia Cont 
wrote:

> Bartosz,
>
> Looking at your example and comparing it to mine (which dates now), I
> would try two things:
>
> (1) I believe that AUGraphUninitialize alters (or unloads) the SoundFonts
> and hence your new Note On calls end up nowhere.
>
> (2) In my setup for updating Sample Rate (as a result of RouteChange), I
> stop the AUGraph, Update Stream Formats, Make Connections, Initialize
> AUGraph, and run AUGraphUpdate and then re-start the graph. I guess the
> ordering here is also important.
>
> This said, we are both in the danger zone since AUGraph is doomed for
> deprecation. Any updates on this?!
>
>
> Arshia Cont
> www.antescofo.com
>
> On 10 Jul 2018, at 20:53, Bartosz Nowotny 
> wrote:
>
> Yes, I have tried AVAudioSession route change notification as well - same
> result.
>
> Even though the code can handle multiple AUMIDISynths, I have set it up so
> that for testing only 1 is ever created.
>
> There is definitely something weird going on: if I respond to the route
> change and then try to play some notes - the app always crashes. If I plug
> in my headphones while some notes are being played, it either continues to
> play, goes to silence or crashes.
>
> I have created a snippet that shows what my setup looks like:
> https://hastebin.com/ugumilofid.m
> At the top, there is a brief explanation of what the included code is
> supposed to do, what the test scenario is, and what the app output is.
>
> I am out of ideas as to what is wrong here. Is it possible that this is a
> threading issue? Other than the route change handling, the setup works
> great: I can load up multiple soundfonts, play notes, unload soundfonts,
> shut the graph down and then start it again later.
>
> Regards,
> Bartosz
>
> On Tue, Jul 10, 2018 at 11:20 AM, Sven Thoennissen 
> wrote:
>
>> Hello,
>>
>> It may have something to do with the property listener. Have you tried
>> using AVAudioSession.routeChangeNotification instead?
>> Did you connect more than 8 AUMIDISynths to your mixer? (IIRC 8 is the
>> maximum possible, at least with AVAudioMixerNode)
>>
>> In any case it is hard to tell without seeing code. I recommend setting up
>> a minimal test project with only 1 AUMIDISynth.
>>
>> Have you considered using AVFoundation classes instead of the old V2 API?
>> (at least to reproduce the problem)
>>
>> Best regards,
>> Sven
>>
>> > On 09.07.2018 at 21:32, Bartosz Nowotny <bartosznowo...@gmail.com> wrote:
>> >
>> > Hello,
>> >
>> > I am trying to programmatically reconfigure AUGraph at runtime, in
>> response to a route change.
>> >
>> > My set up consists of a couple of AUMIDISynth nodes connected to a
>> multi channel mixer node which in turn is connected to RemoteIO node. This
>> set up works fine and I am able to produce audio by sending MIDI note
>> on/off events.
>> >
>> > I want to avoid audio data resampling at any point in the graph. I can
>> start with a properly set up AUGraph that has all audio units use the same
>> sample rate (starting from midi synths, through mixer to remote IO). Route
>> changes (e.g. plugging in headphones) can change the output sample rate and
>> thus introduce resampling and other side effects.
>> >
>> > To respond to a route change I set up a property listener for
>> StreamFormat on my IO unit. When the stream format changes, I call a method
>> that reconfigures the AUGraph in the following manner:
>> > 1. Stop the AUGraph
>> > 2. Uninitialize the AUGraph
>> > 3. Clear all graph connections
>> > 4. Set mixer output sample rate (propagates to IO input sample rate)
>> > 5. Set synth output sample rates (propagates to mixer input sample
>> rates)
>> > 6. Con

Re: AUGraph deprecation

2018-07-11 Thread Bartosz Nowotny
Arshia,

Thank you for clearing that up.

On Wed, Jul 11, 2018 at 4:10 PM, Arshia Cont 
wrote:

> Bartosz,
>
> Laurent was mentioning the installTapOnBus. Your published code would not
> need that. You are just playing MIDI. You would be concerned if you had to
> do custom real-time audio processing on the audio output of your MIDI
> device (such as FFT analysis).
>
> Arshia
>
> On 11 Jul 2018, at 16:04, Bartosz Nowotny 
> wrote:
>
> Laurent,
>
> What you said about not being able to achieve latency lower than 100ms is
> worrisome. I need a realtime MIDI synth, low latency is absolutely crucial.
> Does the limitation you mention apply only to signal processing or other
> applications of the API as well, in particular MIDI synthesis?
>
> Regards,
> Bartosz
>
>
> On Wed, Jul 11, 2018 at 3:30 PM, Laurent Noudohounsi <
> laurent.noudohou...@gmail.com> wrote:
>
>> Thanks, Benjamin, for the clarification. I thought that `installTapOnBus` was
>> the successor of `RenderCallback`.
>> To me it did not feel natural to mix an old API like
>> `kAudioUnitProperty_SetRenderCallback` into AVAudioEngine.
>>
>> So as Arshia said, I'm also looking for a way to use real-time processing
>> with AVAudioEngine.
>>
>> On Wed, 11 Jul 2018 at 15:05, Arshia Cont wrote:
>>
>>> Interesting thread here!
>>>
>>> Has anyone achieved low-latency processing with AVAudioEngine?
>>>
>>> The RenderCallback seems natural to me (which is the good “old” way of
>>> doing it with AUGraph). But I’m curious to hear if anyone has done/achieved
>>> real stuff here with AVAudioEngine real-time processing and how.
>>>
>>>
>>> Arshia
>>>
>>>
>>> On 11 Jul 2018, at 15:00, Benjamin Federer  wrote:
>>>
>>> Laurent,
>>>
>>> `installTapOnBus` is not intended for realtime processing as a tap only
>>> provides the current frame buffer but does not pass it back into the signal
>>> chain. The documentation reads `Installs an audio tap on the bus to record,
>>> monitor, and observe the output of the node`.
>>>
>>> Although I have not done that myself yet, my understanding is that for
>>> realtime processing you can still retrieve the underlying audio unit from
>>> an AVAudioNode (or at least some nodes?) and attach an input render
>>> callback via AudioUnitSetProperty with kAudioUnitProperty_SetRenderCallback.
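>>>
>>> A rough, untested sketch of that approach (myAVAudioUnitNode stands in for
>>> whichever AVAudioUnit-based node you pull the unit from, MyRenderCallback is
>>> a placeholder name, and error checking is omitted):
>>>
>>> static OSStatus MyRenderCallback(void *inRefCon,
>>>                                  AudioUnitRenderActionFlags *ioActionFlags,
>>>                                  const AudioTimeStamp *inTimeStamp,
>>>                                  UInt32 inBusNumber,
>>>                                  UInt32 inNumberFrames,
>>>                                  AudioBufferList *ioData)
>>> {
>>>     // Fill ioData with audio for this render cycle.
>>>     return noErr;
>>> }
>>>
>>> // AVAudioUnit (a subclass of AVAudioNode) exposes its underlying AudioUnit.
>>> AudioUnit unit = myAVAudioUnitNode.audioUnit;
>>> AURenderCallbackStruct callback = { MyRenderCallback, NULL };
>>> AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
>>>                      kAudioUnitScope_Input, 0, &callback, sizeof(callback));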
>>>
>>> I assume the other way would be to subclass AUAudioUnit and wrap that
>>> into an AVAudioUnit which is a subclass of AVAudioNode. Yes, it confuses
>>> me, too. Random Google result with further information:
>>> https://forums.developer.apple.com/thread/72674
>>>
>>> Benjamin
>>>
>>>
>>> On 11.07.2018 at 14:34, Laurent Noudohounsi <laurent.noudohou...@gmail.com> wrote:
>>>
>>> Hi all,
>>>
>>> I'm interested in this topic since I've not found any information about
>>> it yet.
>>>
>>> Correct me if I'm wrong, but AVAudioEngine is not able to go lower than
>>> 100 ms latency. That is what I see in the header file of `AVAudioNode`, in
>>> its method `installTapOnBus`:
>>>
>>> @param bufferSize the requested size of the incoming buffers in sample
>>> frames. Supported range is [100, 400] ms.
>>>
>>> Maybe I'm wrong, but I don't see any other way to get lower-latency
>>> audio processing in an AVAudioNode.
>>>
>>> Best,
>>> Laurent
>>>
>>> On Wed, 11 Jul 2018 at 13:57, Arshia Cont wrote:
>>>
>>>> Benjamin and list,
>>>>
>>>> I double Benjamin’s request. It would be great if someone from the
>>>> CoreAudio Team could respond to the question.
>>>>
>>>> Two years ago, after basic tests, I realised that AVAudioEngine was not
>>>> ready for low-latency audio analysis on iOS, so we used AUGraph. I have a
>>>> feeling that this is no longer the case on iOS and that we can move to
>>>> AVAudioEngine for low-latency audio processing. Can anyone share experience
>>>> here? We do real-time spectral analysis and resynthesis of sound and go as
>>>> low as 64 samples per cycle if the device allows.
>>>>
>>>> Thanks in advance.
>>>>
>>>>
>>>> Arshia
>>>>
>>>>
>>>> PS: I actually brought the deprecation issue of AUGraph in a local
>>>> Apple Dev meeting 

Re: AUGraph reconfiguration in response to route change

2018-07-10 Thread Bartosz Nowotny
Yes, I have tried AVAudioSession route change notification as well - same
result.

Even though the code can handle multiple AUMIDISynths, I have set it up so
that for testing only 1 is ever created.

There is definitely something weird going on: if I respond to the route
change and then try to play some notes - the app always crashes. If I plug
in my headphones while some notes are being played, it either continues to
play, goes to silence or crashes.

I have created a snippet that shows what my setup looks like:
https://hastebin.com/ugumilofid.m
At the top, there is a brief explanation of what the included code is
supposed to do, what the test scenario is, and what the app output is.

I am out of ideas as to what is wrong here. Is it possible that this is a
threading issue? Other than the route change handling, the setup works
great: I can load up multiple soundfonts, play notes, unload soundfonts,
shut the graph down and then start it again later.

Regards,
Bartosz

On Tue, Jul 10, 2018 at 11:20 AM, Sven Thoennissen  wrote:

> Hello,
>
> It may have something to do with the property listener. Have you tried
> using AVAudioSession.routeChangeNotification instead?
> Did you connect more than 8 AUMIDISynths to your mixer? (IIRC 8 is the
> maximum possible, at least with AVAudioMixerNode)
>
> In any case it is hard to tell without seeing code. I recommend setting up
> a minimal test project with only 1 AUMIDISynth.
>
> Have you considered using AVFoundation classes instead of the old V2 API?
> (at least to reproduce the problem)
>
> Best regards,
> Sven
>
> > On 09.07.2018 at 21:32, Bartosz Nowotny wrote:
> >
> > Hello,
> >
> > I am trying to programmatically reconfigure AUGraph at runtime, in
> response to a route change.
> >
> > My set up consists of a couple of AUMIDISynth nodes connected to a multi
> channel mixer node which in turn is connected to RemoteIO node. This set up
> works fine and I am able to produce audio by sending MIDI note on/off
> events.
> >
> > I want to avoid audio data resampling at any point in the graph. I can
> start with a properly set up AUGraph that has all audio units use the same
> sample rate (starting from midi synths, through mixer to remote IO). Route
> changes (e.g. plugging in headphones) can change the output sample rate and
> thus introduce resampling and other side effects.
> >
> > To respond to a route change I set up a property listener for
> StreamFormat on my IO unit. When the stream format changes, I call a method
> that reconfigures the AUGraph in the following manner:
> > 1. Stop the AUGraph
> > 2. Uninitialize the AUGraph
> > 3. Clear all graph connections
> > 4. Set mixer output sample rate (propagates to IO input sample rate)
> > 5. Set synth output sample rates (propagates to mixer input sample rates)
> > 6. Connect synth nodes to mixer node
> > 7. Connect mixer node to IO node
> > 8. Update, Initialize and Start the AUGraph
> >
> > None of the above operations returns an error result.
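> >
> > For reference, the listener registration and the sample-rate updates look
> > roughly like this (simplified sketch; StreamFormatChanged is my own callback
> > name and error handling is omitted):
> >
> > AudioUnitAddPropertyListener(ioUnit, kAudioUnitProperty_StreamFormat,
> >                              StreamFormatChanged, (__bridge void *)self);
> >
> > // Inside the reconfiguration method (steps 4 and 5 above):
> > Float64 hwSampleRate = [AVAudioSession sharedInstance].sampleRate;
> > AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_SampleRate,
> >                      kAudioUnitScope_Output, 0, &hwSampleRate, sizeof(hwSampleRate));
> > AudioUnitSetProperty(synthUnit, kAudioUnitProperty_SampleRate,
> >                      kAudioUnitScope_Output, 0, &hwSampleRate, sizeof(hwSampleRate));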
> >
> > The issue occurs when I send some note on/off events - the app crashes.
> What am I missing?
> >
> > Regards,
> > Bartosz Nowotny


Re: iOS CoreAudio MIDISynth music device configuration

2018-07-06 Thread Bartosz Nowotny
A quick followup:

Is it possible that some of the instruments are running out of voices (e.g.
the aforementioned piano instrument)? As an experiment I switched the synth
units from AUMIDISynth to AUSampler and tried setting the voice count. It seems
the default is 64 voices, and I tried to set it to 128 or even 256, but
unfortunately that seemed to have no effect on the issue. I accomplished
that by getting the ClassInfo property with global scope, casting it to a
dictionary, assigning an NSNumber to the "voice count" key and finally setting
the ClassInfo back (rough sketch below). Is that the correct way to do this?
There were no errors logged.
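
What I do looks roughly like this (simplified sketch; samplerUnit is one of my
AUSampler units and error handling is omitted):

CFPropertyListRef classInfo = NULL;
UInt32 size = sizeof(classInfo);
AudioUnitGetProperty(samplerUnit, kAudioUnitProperty_ClassInfo,
                     kAudioUnitScope_Global, 0, &classInfo, &size);

NSMutableDictionary *preset = [(__bridge NSDictionary *)classInfo mutableCopy];
CFRelease(classInfo);
preset[@"voice count"] = @256;

CFPropertyListRef newClassInfo = (__bridge CFPropertyListRef)preset;
AudioUnitSetProperty(samplerUnit, kAudioUnitProperty_ClassInfo,
                     kAudioUnitScope_Global, 0, &newClassInfo, sizeof(newClassInfo));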

Bartosz

On Thu, Jul 5, 2018 at 12:19 PM, Bartosz Nowotny 
wrote:

> I'm using AVAudioSession to configure my audio session and set up
> preferred sample rate, buffer size. Other than that, I'm using exclusively
> the C API - AUGraph and AudioUnits.
>
> My app *sometimes* needs more than 1 MIDISynth unit running at a time
> because some songs require 2 different soundfonts to be loaded
> simultaneously. As far as my understanding goes, a MIDISynth unit can only
> load a single soundfont. Am I fundamentally mistaken here?
>
> Since my original email, I configured my audio session and all the audio
> units to use a consistent sample rate so that no resampling has to be done at
> any point. The issue still persists.
>
> The issue is clearly audible when using Yamaha 9ft Grand piano preset from
> CompiFONT (http://pphidden.wixsite.com/compifont). This particular
> soundfont is really big in size. Since I only need the piano preset, I use
> a soundfont that has just that one preset extracted (download:
> https://mega.nz/#!nYoz0YxZ!gvwd7hCibvG0_n8xEunSJlBapo9d6VhvLg7uNQFsSrw).
>
> I should also say that this issue is present regardless of the number of
> MIDISynth units running - it sounds the same with 1 MIDISynth unit or more.
> Moreover, that very same soundfont and bank/preset is used in Android
> version of the app where the backing synth is FluidSynth and it sounds
> lovely - with polyphony count set to 64!
>
> If it would be helpful, I can record how the piano sounds in my iOS app vs
> a synth on Windows or Android.
>
> Regards,
> Bartosz
>
> On Thu, Jul 5, 2018 at 4:01 AM, douglas_sc...@apple.com <
> douglas_sc...@apple.com> wrote:
>
>> Are you using the C API or the Objective C API?
>>
>> Why do you have multiple 16-channel MIDISynth units running?  You could
>> possibly run out of CPU because they cannot steal voices from each other.
>>
>> If your MIDISynth code works for one bank but not another, I find it hard
>> to imagine it is a configuration issue.
>>
>> Can you point me to the banks in question?
>>
>> -DS
>>
>> > On Jul 3, 2018, at 3:02 PM, Bartosz Nowotny 
>> wrote:
>> >
>> > Hello
>> >
>> > I need advice on how to properly configure AudioUnits in my MIDISynth
>> iOS app.
>> >
>> > In my code I start by configuring AudioSession: I set the right
>> category (playback), preferred sample rate and buffer size and then start
>> the session.
>> > Next up, I create the graph: multiple synth units
>> (kAudioUnitSubType_MIDISynth) -> multichannel mixer -> remote IO.
>> > For mixer unit, I configure number of input elements (buses) and
>> maximum frames per slice.
>> > For synth units, I configure the soundbank URL and maximum frames per
>> slice.
>> >
>> > This set up is enough for my app to successfully produce music by
>> sending MIDI note on/off events to specific synth units. For some
>> soundfonts, the produced sound is not correct, as if it was distorted.
>> Because the soundfonts I'm using are popular and publicly available
>> soundfonts, tested on multiple devices and different synths, I'm pretty
>> certain the soundfonts are not at fault here. My best guess is that I'm
>> missing parts of the configuration:
>> >
>> > 1. Is any additional configuration required for any of the AudioUnits I
>> use? In particular, should I configure synth units output stream format, so
>> that for instance, the sample rate matches what is actually used by the
>> hardware? Should I also configure stream format for the mixer or IO units?
>> What should the stream format configs look like?
>> > 2. If I do need to do the above configuration, how should I respond to
>> audio session route changes? I noticed, for instance, that plugging in
>> headphones changes the hardware output sample rate from 48kHz to 44.1kHz.
>> >
>> > Regards,
>> > Bartosz
>> >

Re: iOS CoreAudio MIDISynth music device configuration

2018-07-05 Thread Bartosz Nowotny
I'm using AVAudioSession to configure my audio session and set up the preferred
sample rate and buffer size. Other than that, I'm using exclusively the C API
- AUGraph and AudioUnits.

My app *sometimes* needs more than 1 MIDISynth unit running at a time
because some songs require 2 different soundfonts to be loaded
simultaneously. As far as my understanding goes, a MIDISynth unit can only
load a single soundfont. Am I fundamentally mistaken here?

Since my original email, I configured my audio session and all the audio
units to use a consistent sample rate so that no resampling has to be done at
any point (rough sketch below). The issue still persists.
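
The sample-rate setup looks roughly like this (simplified sketch; 44100.0 is
just the rate I currently request, mixerUnit and synthUnit are my existing
unit references, and error handling is omitted):

Float64 sampleRate = 44100.0;
[[AVAudioSession sharedInstance] setPreferredSampleRate:sampleRate error:nil];

// Mixer output sample rate (feeds the remote IO input).
AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_SampleRate,
                     kAudioUnitScope_Output, 0, &sampleRate, sizeof(sampleRate));

// Each synth unit's output sample rate (feeds the mixer inputs).
AudioUnitSetProperty(synthUnit, kAudioUnitProperty_SampleRate,
                     kAudioUnitScope_Output, 0, &sampleRate, sizeof(sampleRate));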

The issue is clearly audible when using Yamaha 9ft Grand piano preset from
CompiFONT (http://pphidden.wixsite.com/compifont). This particular
soundfont is really big in size. Since I only need the piano preset, I use
a soundfont that has just that one preset extracted (download:
https://mega.nz/#!nYoz0YxZ!gvwd7hCibvG0_n8xEunSJlBapo9d6VhvLg7uNQFsSrw).

I should also say that this issue is present regardless of the number of
MIDISynth units running - it sounds the same with 1 MIDISynth unit or more.
Moreover, that very same soundfont and bank/preset is used in the Android
version of the app, where the backing synth is FluidSynth, and it sounds
lovely - with the polyphony count set to 64!

If it would be helpful, I can record how the piano sounds in my iOS app vs
a synth on Windows or Android.

Regards,
Bartosz

On Thu, Jul 5, 2018 at 4:01 AM, douglas_sc...@apple.com <
douglas_sc...@apple.com> wrote:

> Are you using the C API or the Objective C API?
>
> Why do you have multiple 16-channel MIDISynth units running?  You could
> possibly run out of CPU because they cannot steal voices from each other.
>
> If your MIDISynth code works for one bank but not another, I find it hard
> to imagine it is a configuration issue.
>
> Can you point me to the banks in question?
>
> -DS
>
> > On Jul 3, 2018, at 3:02 PM, Bartosz Nowotny 
> wrote:
> >
> > Hello
> >
> > I need advice on how to properly configure AudioUnits in my MIDISynth
> iOS app.
> >
> > In my code I start by configuring AudioSession: I set the right category
> (playback), preferred sample rate and buffer size and then start the
> session.
> > Next up, I create the graph: multiple synth units
> (kAudioUnitSubType_MIDISynth) -> multichannel mixer -> remote IO.
> > For mixer unit, I configure number of input elements (buses) and maximum
> frames per slice.
> > For synth units, I configure the soundbank URL and maximum frames per
> slice.
> >
> > This set up is enough for my app to successfully produce music by
> sending MIDI note on/off events to specific synth units. For some
> soundfonts, the produced sound is not correct, as if it was distorted.
> Because the soundfonts I'm using are popular and publicly available
> soundfonts, tested on multiple devices and different synths, I'm pretty
> certain the soundfonts are not at fault here. My best guess is that I'm
> missing parts of the configuration:
> >
> > 1. Is any additional configuration required for any of the AudioUnits I
> use? In particular, should I configure synth units output stream format, so
> that for instance, the sample rate matches what is actually used by the
> hardware? Should I also configure stream format for the mixer or IO units?
> What should the stream format configs look like?
> > 2. If I do need to do the above configuration, how should I respond to
> audio session route changes? I noticed, for instance, that plugging in
> headphones changes the hardware output sample rate from 48kHz to 44.1kHz.
> >
> > Regards,
> > Bartosz
> >


iOS CoreAudio MIDISynth music device configuration

2018-07-03 Thread Bartosz Nowotny
 Hello

I need advice on how to properly configure AudioUnits in my MIDISynth iOS
app.

In my code I start by configuring the AudioSession: I set the right category
(playback), the preferred sample rate and buffer size, and then start the
session.
Next up, I create the graph: multiple synth units
(kAudioUnitSubType_MIDISynth) -> multichannel mixer -> remote IO.
For the mixer unit, I configure the number of input elements (buses) and the
maximum frames per slice.
For the synth units, I configure the soundbank URL and the maximum frames per
slice. (A rough sketch of this setup follows below.)
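
Roughly, the setup looks like this (simplified sketch; error handling is
omitted, "MySoundfont.sf2" is a placeholder for the real soundfont, and the
44100.0 / 256-frame values are just what I currently request):

AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayback error:nil];
[session setPreferredSampleRate:44100.0 error:nil];
[session setPreferredIOBufferDuration:(256.0 / 44100.0) error:nil];
[session setActive:YES error:nil];

UInt32 busCount = 2;                     // number of synth units in the graph
AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_ElementCount,
                     kAudioUnitScope_Input, 0, &busCount, sizeof(busCount));

UInt32 maxFrames = 4096;
AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                     kAudioUnitScope_Global, 0, &maxFrames, sizeof(maxFrames));

CFURLRef soundfontURL = (__bridge CFURLRef)[[NSBundle mainBundle]
    URLForResource:@"MySoundfont" withExtension:@"sf2"];
AudioUnitSetProperty(synthUnit, kMusicDeviceProperty_SoundBankURL,
                     kAudioUnitScope_Global, 0, &soundfontURL, sizeof(soundfontURL));
AudioUnitSetProperty(synthUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                     kAudioUnitScope_Global, 0, &maxFrames, sizeof(maxFrames));

// Notes are then triggered with plain MIDI events, e.g. note on, middle C:
MusicDeviceMIDIEvent(synthUnit, 0x90, 60, 100, 0);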

This setup is enough for my app to successfully produce music by sending
MIDI note on/off events to specific synth units. For some soundfonts, the
produced sound is not correct, as if it were distorted. Because the
soundfonts I'm using are popular, publicly available soundfonts that have been
tested on multiple devices and different synths, I'm pretty certain the
soundfonts are not at fault here. My best guess is that I'm missing parts of
the configuration:

1. Is any additional configuration required for any of the AudioUnits I
use? In particular, should I configure the synth units' output stream format
so that, for instance, the sample rate matches what the hardware actually
uses? Should I also configure the stream format for the mixer or IO units?
What should the stream format configs look like?
2. If I do need to do the above configuration, how should I respond to
audio session route changes? I noticed, for instance, that plugging in
headphones changes the hardware output sample rate from 48kHz to 44.1kHz.

Regards,
Bartosz