Hello

I need advice on how to properly configure AudioUnits in my MIDISynth iOS
app.

In my code I start by configuring the audio session: I set the appropriate
category (playback) and the preferred sample rate and buffer duration, and
then activate the session.
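Roughly like this (a trimmed sketch; the concrete rate and buffer values
are just examples of what I request, not necessarily what matters):

import AVFoundation

func configureSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default, options: [])
    try session.setPreferredSampleRate(44_100)        // example value
    try session.setPreferredIOBufferDuration(0.005)   // ~5 ms, example
    try session.setActive(true)
}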
Next, I create the graph: multiple synth units
(kAudioUnitSubType_MIDISynth) -> multichannel mixer -> RemoteIO.
For the mixer unit, I configure the number of input elements (buses) and
the maximum frames per slice.
For the synth units, I configure the soundbank URL and the maximum frames
per slice.
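The graph construction, abbreviated (OSStatus checks dropped, only one
synth node shown; the bus count, frame count, and soundfont file name are
placeholders):

import Foundation
import AudioToolbox

var graph: AUGraph?
NewAUGraph(&graph)

var synthDesc = AudioComponentDescription(
    componentType: kAudioUnitType_MusicDevice,
    componentSubType: kAudioUnitSubType_MIDISynth,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0, componentFlagsMask: 0)
var mixerDesc = AudioComponentDescription(
    componentType: kAudioUnitType_Mixer,
    componentSubType: kAudioUnitSubType_MultiChannelMixer,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0, componentFlagsMask: 0)
var ioDesc = AudioComponentDescription(
    componentType: kAudioUnitType_Output,
    componentSubType: kAudioUnitSubType_RemoteIO,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0, componentFlagsMask: 0)

var synthNode = AUNode(); var mixerNode = AUNode(); var ioNode = AUNode()
AUGraphAddNode(graph!, &synthDesc, &synthNode)  // repeated per synth
AUGraphAddNode(graph!, &mixerDesc, &mixerNode)
AUGraphAddNode(graph!, &ioDesc, &ioNode)
AUGraphOpen(graph!)

var synthUnit: AudioUnit?; var mixerUnit: AudioUnit?
AUGraphNodeInfo(graph!, synthNode, nil, &synthUnit)
AUGraphNodeInfo(graph!, mixerNode, nil, &mixerUnit)

// Mixer: number of input buses and maximum frames per slice.
var busCount: UInt32 = 8      // placeholder
AudioUnitSetProperty(mixerUnit!, kAudioUnitProperty_ElementCount,
                     kAudioUnitScope_Input, 0, &busCount,
                     UInt32(MemoryLayout<UInt32>.size))
var maxFrames: UInt32 = 4096  // placeholder
AudioUnitSetProperty(mixerUnit!, kAudioUnitProperty_MaximumFramesPerSlice,
                     kAudioUnitScope_Global, 0, &maxFrames,
                     UInt32(MemoryLayout<UInt32>.size))

// Synth: soundbank URL and maximum frames per slice.
var bankURL = Bundle.main.url(forResource: "GeneralUser",  // placeholder
                              withExtension: "sf2")! as CFURL
AudioUnitSetProperty(synthUnit!, kMusicDeviceProperty_SoundBankURL,
                     kAudioUnitScope_Global, 0, &bankURL,
                     UInt32(MemoryLayout<CFURL>.size))
AudioUnitSetProperty(synthUnit!, kAudioUnitProperty_MaximumFramesPerSlice,
                     kAudioUnitScope_Global, 0, &maxFrames,
                     UInt32(MemoryLayout<UInt32>.size))

AUGraphConnectNodeInput(graph!, synthNode, 0, mixerNode, 0)
AUGraphConnectNodeInput(graph!, mixerNode, 0, ioNode, 0)
AUGraphInitialize(graph!)
AUGraphStart(graph!)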

This setup is enough for my app to successfully produce music by sending
MIDI note on/off events to specific synth units. With some soundfonts,
however, the produced sound is wrong, as if it were distorted. Because the
soundfonts I'm using are popular, publicly available ones that I have
tested on multiple devices and in different synths, I'm fairly certain the
soundfonts are not at fault here. My best guess is that I'm missing part
of the configuration:

1. Is any additional configuration required for any of the AudioUnits I
use? In particular, should I configure the synth units' output stream
format so that, for example, the sample rate matches what the hardware
actually uses? Should I also configure the stream format on the mixer or
IO units? What should those stream format configurations look like? (A
sketch of what I imagine follows this list.)
2. If I do need the above configuration, how should I respond to audio
session route changes? I noticed, for instance, that plugging in
headphones changes the hardware output sample rate from 48 kHz to
44.1 kHz. (Again, see the sketch below.)
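
To make question 1 concrete, this is the sort of thing I imagine, using
the units from the graph sketch above (untested; I'm not sure the scope
and element choices are right, which is exactly what I'm asking):

let hwRate = AVAudioSession.sharedInstance().sampleRate

// Read the synth's current output format, patch the rate, write it back?
var asbd = AudioStreamBasicDescription()
var size = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
AudioUnitGetProperty(synthUnit!, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 0, &asbd, &size)
asbd.mSampleRate = hwRate
AudioUnitSetProperty(synthUnit!, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 0, &asbd, size)

// And/or just the sample rate on the mixer's output scope?
var rate: Float64 = hwRate
AudioUnitSetProperty(mixerUnit!, kAudioUnitProperty_SampleRate,
                     kAudioUnitScope_Output, 0, &rate,
                     UInt32(MemoryLayout<Float64>.size))

And for question 2, is observing route changes and re-applying the formats
the right idea? Something like:

NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil, queue: .main) { _ in
    // Stop the graph, re-apply the stream formats using the new
    // AVAudioSession.sharedInstance().sampleRate, then restart?
}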

Regards,
Bartosz