Hi Charles,

Now I'm beginning to understand what you need. Thanks for explaining a little 
further.

I'm fairly certain that you need an AUGraph set up for offline rendering. Then, 
users can select a range of audio to apply an effect. As soon as they hit "go" 
- or whatever UI element is supposed to start the effect rendering - your 
AUGraph can pull the selected audio through the effect or effects and save the 
resulting audio samples into your app's audio data.
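
To make that concrete, here's a rough sketch in C of the kind of graph I mean. 
It's untested, and the Delay subtype is only a stand-in for whatever third-party 
effect you actually load; the important part is that the output node is a 
GenericOutput unit, which has no connection to the hardware:

#include <AudioToolbox/AudioToolbox.h>

// Sketch: build an AUGraph whose output node is a GenericOutput unit,
// so nothing in the graph is tied to the audio hardware.
static OSStatus BuildOfflineGraph(AUGraph *outGraph,
                                  AudioUnit *outEffectUnit,
                                  AudioUnit *outOutputUnit)
{
    AudioComponentDescription effectDesc = {
        .componentType         = kAudioUnitType_Effect,
        .componentSubType      = kAudioUnitSubType_Delay,         // stand-in for your third-party effect
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };
    AudioComponentDescription outputDesc = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_GenericOutput, // offline: no hardware attached
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };

    AUGraph graph;
    AUNode effectNode, outputNode;
    OSStatus err = NewAUGraph(&graph);
    if (!err) err = AUGraphAddNode(graph, &effectDesc, &effectNode);
    if (!err) err = AUGraphAddNode(graph, &outputDesc, &outputNode);
    if (!err) err = AUGraphOpen(graph);
    if (!err) err = AUGraphConnectNodeInput(graph, effectNode, 0, outputNode, 0);
    if (!err) err = AUGraphNodeInfo(graph, effectNode, NULL, outEffectUnit);
    if (!err) err = AUGraphNodeInfo(graph, outputNode, NULL, outOutputUnit);
    if (!err) err = AUGraphInitialize(graph);

    *outGraph = graph;
    return err;   // something still has to feed the effect's input -- more on that below
}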

However, you'll also need an AUGraph set up for real-time rendering if you want 
to allow the user to tweak the third-party effect and hear the results. The 
output of this graph will probably just go to the speakers and not be stored, 
at least while the user is only previewing the effect and hasn't yet found 
settings worth saving.
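
The live-preview graph is nearly identical; only the output description changes, 
and you start the graph so the hardware begins pulling. Something like (again, 
just a sketch):

#include <AudioToolbox/AudioToolbox.h>

// Live-preview variant: same graph shape, but the output unit talks to
// the default output device and drives the pull on its own clock.
static const AudioComponentDescription kLiveOutputDesc = {
    .componentType         = kAudioUnitType_Output,
    .componentSubType      = kAudioUnitSubType_DefaultOutput,  // macOS default device
    .componentManufacturer = kAudioUnitManufacturer_Apple
};

static OSStatus StartPreview(AUGraph graph) { return AUGraphStart(graph); } // hardware pulls slices
static OSStatus StopPreview(AUGraph graph)  { return AUGraphStop(graph);  } // user is done auditioning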

One important concept is that CoreAudio is a pull model. When playing to the 
speakers for auditioning the tweaks, the audio output hardware controls the 
timing and requests samples from your application, which uses the AUGraph to 
provide those samples. When printing the effect, you need another mechanism 
besides an audio output to pull the samples through the graph so you can save 
them to memory or disk. The latter is called offline rendering, because it 
doesn't have to happen in real time. If your effects are simple enough, 
printing the effects can be much faster than listening to the entire audio 
selection.
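
For the offline case, that "other mechanism" is just a loop in your own code 
that calls AudioUnitRender() on the GenericOutput unit and advances the 
timestamp itself. Roughly like this -- untested, assuming non-interleaved 
Float32 buffers, and with a made-up consume() callback standing in for whatever 
appends the rendered slice to your app's storage:

#include <AudioToolbox/AudioToolbox.h>
#include <stddef.h>
#include <stdlib.h>

// Drive the pull yourself: render one slice at a time from the GenericOutput
// unit, hand each slice to the app, and keep the sample time moving forward.
typedef void (*SliceConsumer)(const AudioBufferList *abl, UInt32 frames, void *context);

static OSStatus RenderSelectionOffline(AudioUnit genericOutputUnit,
                                       UInt32 channels, UInt32 totalFrames,
                                       UInt32 framesPerSlice,            // e.g. 512
                                       SliceConsumer consume, void *context)
{
    AudioTimeStamp ts = {0};
    ts.mFlags      = kAudioTimeStampSampleTimeValid;
    ts.mSampleTime = 0;

    // One non-interleaved Float32 buffer per channel, reused for every slice.
    size_t ablSize = offsetof(AudioBufferList, mBuffers) + channels * sizeof(AudioBuffer);
    AudioBufferList *abl = (AudioBufferList *)malloc(ablSize);
    Float32 *scratch     = (Float32 *)malloc(channels * framesPerSlice * sizeof(Float32));

    OSStatus err = noErr;
    for (UInt32 done = 0; done < totalFrames && err == noErr; done += framesPerSlice) {
        UInt32 frames = totalFrames - done;
        if (frames > framesPerSlice) frames = framesPerSlice;

        abl->mNumberBuffers = channels;
        for (UInt32 ch = 0; ch < channels; ch++) {
            abl->mBuffers[ch].mNumberChannels = 1;
            abl->mBuffers[ch].mDataByteSize   = frames * sizeof(Float32);
            abl->mBuffers[ch].mData           = scratch + ch * framesPerSlice;
        }

        AudioUnitRenderActionFlags flags = 0;
        err = AudioUnitRender(genericOutputUnit, &flags, &ts, 0, frames, abl);
        if (err == noErr)
            consume(abl, frames, context);   // copy/append into your app's own storage

        ts.mSampleTime += frames;
    }
    free(scratch);
    free(abl);
    return err;
}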

In either case, you use the same basic AUGraph, but substitute a different 
piece as the master that pulls audio samples through the graph. Within the 
AUGraph, pulling on the master output causes all the included effects and 
generators to produce output, which in turn forces each of them to pull from 
its own inputs. You are correct that Apple provides a Generator AU that can 
read from a file, but I think you'll need something different to pull from 
your app's internal arrays instead, unless all of your audio comes unmodified 
from files on disk.
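
If the audio lives in your own arrays, the usual trick is to skip the generator 
entirely and attach a render callback to the effect node's input. The 
MySelection and MyCopySelectionSamples names below are hypothetical placeholders 
for your own storage code:

#include <AudioToolbox/AudioToolbox.h>

// Feed the effect's input from the app's own sample arrays via a render callback.
// MySelection and MyCopySelectionSamples are hypothetical; they stand for whatever
// structure holds your audio and whatever routine copies one slice out of it.
typedef struct MySelection MySelection;
extern OSStatus MyCopySelectionSamples(MySelection *sel, Float64 offsetInSelection,
                                       UInt32 inNumberFrames, AudioBufferList *ioData);

static OSStatus SelectionRenderProc(void *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp *inTimeStamp,
                                    UInt32 inBusNumber, UInt32 inNumberFrames,
                                    AudioBufferList *ioData)
{
    // Assumes the graph's timeline starts at 0 at the top of the selection,
    // so the sample time doubles as the offset within the selection.
    MySelection *sel = (MySelection *)inRefCon;
    return MyCopySelectionSamples(sel, inTimeStamp->mSampleTime, inNumberFrames, ioData);
}

static OSStatus AttachSelection(AUGraph graph, AUNode effectNode, MySelection *sel)
{
    AURenderCallbackStruct cb = { .inputProc = SelectionRenderProc, .inputProcRefCon = sel };
    return AUGraphSetNodeInputCallback(graph, effectNode, 0, &cb);
}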

I hope the above distinction between real-time rendering to live audio output 
hardware versus offline rendering to memory or a file is clear enough for you 
to start your research. However, there is another concept that you might need 
to consider when designing your application.

You've mentioned having 3 or more channels on multiple tracks. I'm going to 
assume that users are listening to a mono or stereo mix down of these tracks 
while tweaking the effects. If I understand correctly, this means that the 
AUGraph is reducing the number of tracks from 3 or more on the input to 2 or 1 
on the output. If so, that means you need to think about what it means to 
"print" the effect. Are you modifying the original tracks and replacing their 
audio with processed audio? … or are you mixing down those tracks to a new mono 
or stereo file that includes these effects as part of the mix-down?
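
If it is a mix-down, the graph typically grows a MultiChannelMixer node in 
front of the effect (or behind per-track effects, depending on what "print" 
means for you). A rough sketch, assuming the graph has already been opened but 
not yet initialized:

#include <AudioToolbox/AudioToolbox.h>

// Add an N-input mixer in front of the effect node.
// Each track then feeds its own mixer input bus (by connection or render callback).
static OSStatus AddMixdown(AUGraph graph, AUNode effectNode, UInt32 trackCount,
                           AUNode *outMixerNode, AudioUnit *outMixerUnit)
{
    AudioComponentDescription mixerDesc = {
        .componentType         = kAudioUnitType_Mixer,
        .componentSubType      = kAudioUnitSubType_MultiChannelMixer,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };
    OSStatus err = AUGraphAddNode(graph, &mixerDesc, outMixerNode);
    if (!err) err = AUGraphNodeInfo(graph, *outMixerNode, NULL, outMixerUnit);
    if (!err) err = AudioUnitSetProperty(*outMixerUnit, kAudioUnitProperty_ElementCount,
                                         kAudioUnitScope_Input, 0,
                                         &trackCount, sizeof(trackCount));   // one bus per track
    if (!err) err = AUGraphConnectNodeInput(graph, *outMixerNode, 0, effectNode, 0);
    return err;
}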

Of course, I could have misunderstood, and you may be playing back to surround 
or other multi-channel outputs, where each track stays on its own channel and 
is not mixed with any other track. That might be a little easier, provided that 
you get the multi-channel output set up correctly, but I don't know which way 
you're planning to go.

AUGraph allows you to do it either way. The beauty of AUGraph is that you can 
set up something as complex or as simple as you need. It can handle the usual 
DAW-style multitrack mixdown, as in Logic, or it can handle a bizarre surround 
processing setup. As I alluded to earlier, I used AUGraph to process a hexaphonic 
guitar input, keeping 6 channels completely separate while applying the same 
effect in parallel to each track. That particular app had no mix down. At the 
other end of the spectrum, I've also worked on systems that had 8 microphone 
channels which were all mixed down to 1 mono output for listening, and I used 
an AUGraph to compress the 8-track recordings while keeping each channel 
separate.

In your case, you'll need to manage an audio selection. That means some user 
interface, which might be in Swift, plus sample arrays that hold the audio, 
along with some structure that records the start and the duration (or end) of 
the current selection. When the user wants to "listen" to or "print" the 
effect, the AUGraph has to be able to find the audio data in your app's arrays. 
You probably don't need to write an AU for this. Either a generator AU can 
handle it, or you can probably just hook in a render callback that grabs the 
correct audio samples from the selected sample arrays as needed.
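
Here's a hypothetical shape for that, just to illustrate the idea of a central 
access point that hands out one render slice at a time. It assumes one 
contiguous Float32 array per track; your gap/silence bookkeeping would go where 
the memset is:

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

// Hypothetical selection descriptor: where the audio lives, and where the
// current selection starts and ends.
typedef struct MySelection {
    Float32 **trackSamples;    // one array of samples per track
    UInt32    trackCount;
    UInt64    selectionStart;  // absolute frame where the selection begins
    UInt64    selectionLength; // number of frames in the selection
} MySelection;

// Copy one render slice out of the selection, zero-filling anything that is
// past the end of the selection (or falls in a silent gap you track).
OSStatus MyCopySelectionSamples(MySelection *sel, Float64 offsetInSelection,
                                UInt32 inNumberFrames, AudioBufferList *ioData)
{
    for (UInt32 buf = 0; buf < ioData->mNumberBuffers; buf++) {
        Float32 *dst = (Float32 *)ioData->mBuffers[buf].mData;
        memset(dst, 0, inNumberFrames * sizeof(Float32));          // silence by default

        if (buf >= sel->trackCount) continue;
        UInt64 offset = (UInt64)offsetInSelection;
        if (offset >= sel->selectionLength) continue;              // past the end: stay silent

        UInt64 avail = sel->selectionLength - offset;
        UInt32 n = (avail < inNumberFrames) ? (UInt32)avail : inNumberFrames;
        memcpy(dst, sel->trackSamples[buf] + sel->selectionStart + offset,
               n * sizeof(Float32));
    }
    return noErr;
}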

As for displaying the third party AudioUnit window, you'll only really need 
that while the user is changing the AU parameters. It's not absolutely 
necessary for it to be visible when you're applying the effect to the audio. 
One of the key concepts of AudioUnits is the separation of UI and DSP engine. 
You can use the DSP engine without ever showing the custom UI, provided that 
you're happy with the default parameter settings. And even if you do need to 
change a few parameters, most AudioUnits allow an AU Host to discover the 
available parameters and change them without showing the UI. In that sense, you 
might save some time by sticking to the simple parameters (although some 
AudioUnits have "parameters" that can only be accessed from their custom UI - a 
bad design, but something you can't change unless you have source to the 
third-party AU). None of Apple's AUs has a custom UI - they merely enumerate 
all available parameters for the effect, along with the range and type of each 
parameter.
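
Host-side parameter discovery looks roughly like this (untested sketch, error 
handling trimmed):

#include <AudioToolbox/AudioToolbox.h>
#include <stdio.h>
#include <stdlib.h>

// Enumerate an AU's parameters and their ranges without ever showing its UI.
static void ListParameters(AudioUnit effectUnit)
{
    UInt32 size = 0;
    if (AudioUnitGetPropertyInfo(effectUnit, kAudioUnitProperty_ParameterList,
                                 kAudioUnitScope_Global, 0, &size, NULL) != noErr || size == 0)
        return;

    UInt32 count = size / sizeof(AudioUnitParameterID);
    AudioUnitParameterID *ids = (AudioUnitParameterID *)malloc(size);
    AudioUnitGetProperty(effectUnit, kAudioUnitProperty_ParameterList,
                         kAudioUnitScope_Global, 0, ids, &size);

    for (UInt32 i = 0; i < count; i++) {
        AudioUnitParameterInfo info;
        UInt32 infoSize = sizeof(info);
        if (AudioUnitGetProperty(effectUnit, kAudioUnitProperty_ParameterInfo,
                                 kAudioUnitScope_Global, ids[i], &info, &infoSize) == noErr) {
            printf("param %u: %s  [%f .. %f], default %f\n", (unsigned)ids[i],
                   info.name, info.minValue, info.maxValue, info.defaultValue);
        }
    }

    // Setting a value works the same way for any AU that exposes the parameter:
    // AudioUnitSetParameter(effectUnit, ids[0], kAudioUnitScope_Global, 0, someValue, 0);
    free(ids);
}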

And, yes, you can avoid copying the entire selection at once because the 
AUGraph will only try to pull one render slice at a time. As long as you design 
a central access point in your application that can get to all of the audio, 
your AUGraph should function in a fairly straightforward manner. Try to avoid 
multiple copies of the same audio samples and allow just-in-time access.

I think you might be on the right path, so try coding up individual pieces and 
get them working. Then you can combine all the necessary parts into your 
application.

Brian


On Jan 31, 2016, at 4:21 PM, Charles Constant <[email protected]> wrote:
> Thanks again, Brian!
> 
> > The AudioUnit contains its own buffers. The C++ classes handle the data 
> > transfer between the 
> > AU host and the unit. Your AU code merely performs DSP on the objects' own 
> > buffers.
> 
> My reasoning here is that I could create a Generator unit that takes my app's 
> audio data (which has arrays of multiple buffers for each track, gaps for 
> silences, etc) and streams out with the silence rendered. This would happen 
> when the user makes a selection of audio to apply a filter. I'm trying to 
> avoid rendering the *entire* range of frames + selection of tracks before 
> displaying the AudioUnit UI. 
> 
> And while the user is tweaking the third party effect, I need some way to 
> preview the audio, properly mixed... not sure how to do that exactly yet :( Then 
> finally, I'll need to take the rendered audio, and copy it back to my app's 
> audio data. 
> 
> > If all you need is an AU host, then why wrap any audio rendering in Swift?
> 
> I'm mainly looking for a way to avoid copying the entire selection at once 
> before displaying the third party AudioUnit window. I can't just make an 
> AudioBufferList by reference, because of the mixing, and silent areas. My 
> assumption, which might be horribly wrong, is that the best way to send data 
> would be to use an AU that knows how to convert between the custom data 
> structure I use to organize my buffers, and a normal ABL.
> 
> Not sure if my message is even making sense here. No doubt about it, I'm 
> foggy about the right way to do all this!
> 
> > Do you even need to write an AudioUnit?
> 
> It just occurred to me though, that Apple probably has some generic 
> "Generator" AU that I could send just a "render callback" to, like you can do 
> with the "Output" (i.e.: kAudioUnitType_Output). I should take a look at 
> documentation again...
> 
> I don't know enough yet about how AudioUnits work... but I'm not sure I 
> understand how an AUGraph would help me do this. I had intended to pipe a 
> custom AU generator to the third party effect, and back to output for the 
> user to hear, and later to a custom offline AU to package the output back to 
> my softfile. Sigh, confusing! 

