>> To answer re “drive audio source(1) using audio source(2)” - have you tried using AudioUnitAddRenderNotify()?
>> Recently, as a test, I successfully sub-classed and used AVAudioUnitEffect with the following init: <<

I had not considered using AudioUnitAddRenderNotify() because I’ve been unsure of a safe place to use it. It actually took me weeks to figure out how to use a callback on the input node, but seeing your response has revived my interest.

Why did it take weeks? I don’t know if this is a bug or just different behavior on iOS versus macOS, but 95% of the time my code would crash. After repeatedly getting EXC_BAD_ACCESS errors in the debugger, I tracked the issue to a single line: I needed to pass a retained reference to self to the callback on macOS.

    AudioUnitAddRenderNotify(testNode!, renderCallback, Unmanaged.passUnretained(self).toOpaque()) // This works on iOS (passUnretained)
    AudioUnitAddRenderNotify(testNode!, renderCallback, Unmanaged.passRetained(self).toOpaque())   // This works on macOS Sierra (passRetained)

During my research weeks ago, I came across this thread:
https://forums.developer.apple.com/thread/72674

This mostly waved me off from subclassing AVAudioUnitEffect, but I had been thinking about it A LOT yesterday – wondering if I should do it anyway or use AUAudioUnit. I was concerned that not all of the properties and methods I needed would be present, or that when I performed a super init there would be an unpleasant surprise from something additional that I didn’t want. Again, seeing this response has renewed my interest in subclassing, and I think I will subclass AVAudioUnitEffect; if I’m successful, I will pursue subclassing AUAudioUnit.

BTW, I love your mission and values statement.

Thank you VERY much,
W.
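[Editor's note: a likely explanation for the iOS/macOS difference above is object lifetime. passUnretained hands the callback a raw pointer without keeping self alive, so if nothing else retains the object for as long as the render notify is installed, the callback dereferences a freed object. A minimal sketch of the ownership pattern — the explicit release that balances passRetained is an addition, not from the thread:]

    import AudioToolbox

    final class TapContext {
        // Whatever state the render callback needs to read or write.
        var peak: Float = 0
    }

    let context = TapContext()

    // passRetained bumps the retain count, guaranteeing the object outlives
    // the callback registration even if no other strong reference remains.
    let opaque = Unmanaged.passRetained(context).toOpaque()
    // AudioUnitAddRenderNotify(audioUnit, renderCallback, opaque)

    // Later, after removing the notify, balance the retain or the object leaks:
    // AudioUnitRemoveRenderNotify(audioUnit, renderCallback, opaque)
    Unmanaged<TapContext>.fromOpaque(opaque).release()

[With passUnretained you must instead guarantee, e.g. via a strong property on a long-lived owner, that the object stays alive until the callback is removed.]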
From: Leo Thiessen [mailto:[email protected]]
Sent: Wednesday, June 07, 2017 3:58 PM
To: [email protected]
Subject: Re: Seeking advice for modifying audio input source

Hello W.,

To answer re “drive audio source(1) using audio source(2)” - have you tried using AudioUnitAddRenderNotify()?

Recently, as a test, I successfully sub-classed and used AVAudioUnitEffect with the following init:

    - (instancetype)init {
        AudioComponentDescription component = VEComponentDescriptionMake(kAudioUnitManufacturer_Apple,
                                                                         kAudioUnitType_Effect,
                                                                         kAudioUnitSubType_PeakLimiter);
        if ( (self = [super initWithAudioComponentDescription:component]) ) {
            _audioUnit = self.audioUnit;
            _didAddRenderNotify = VECheckOSStatus(AudioUnitAddRenderNotify(_audioUnit,
                                                                           VEAVLimiterRenderCallback,
                                                                           (__bridge void * _Nullable)(self)),
                                                  "AudioUnitAddRenderNotify");
        }
        return self;
    }

…with a render function like:

    OSStatus VEAVLimiterRenderCallback(void                       *inRefCon,
                                       AudioUnitRenderActionFlags *ioActionFlags,
                                       const AudioTimeStamp       *inTimeStamp,
                                       UInt32                      inBusNumber,
                                       UInt32                      inNumberFrames,
                                       AudioBufferList            *ioData)
    {
        if ((*ioActionFlags & kAudioUnitRenderAction_PostRender) && (inRefCon)) {
            __unsafe_unretained VEAVLimiter *THIS = (__bridge VEAVLimiter *)inRefCon;
            // This is essentially a “tap”: do something here, e.g. perform some calculations
            // such as determining peak amplitude, then store those values and use them to
            // “modulate” the “pan” on the source(1) AVAudioNode
        }
        return noErr;
    }

If I understand you correctly, I think this could work.

Kind regards,
Leo Thiessen
http://visionsencoded.com
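[Editor's note: since W. is working in Swift, Leo's Objective-C pattern above could be sketched as a Swift subclass along the following lines. This is an illustrative sketch, not code from the thread: the class name LimiterNode and the callback name limiterRenderCallback are invented, VECheckOSStatus is replaced with a plain status check, and passRetained is used per W.'s macOS finding, with the caveat that the retain must eventually be balanced by a release.]

    import AVFoundation
    import AudioToolbox

    // Hypothetical Swift counterpart of the Objective-C subclass above.
    final class LimiterNode: AVAudioUnitEffect {
        init() {
            let desc = AudioComponentDescription(componentType: kAudioUnitType_Effect,
                                                 componentSubType: kAudioUnitSubType_PeakLimiter,
                                                 componentManufacturer: kAudioUnitManufacturer_Apple,
                                                 componentFlags: 0,
                                                 componentFlagsMask: 0)
            super.init(audioComponentDescription: desc)
            // passRetained keeps self alive for the callback (the macOS fix noted
            // above); it must be balanced with a release when the notify is removed.
            let status = AudioUnitAddRenderNotify(audioUnit,
                                                  limiterRenderCallback,
                                                  Unmanaged.passRetained(self).toOpaque())
            assert(status == noErr, "AudioUnitAddRenderNotify failed: \(status)")
        }
    }

    private func limiterRenderCallback(inRefCon: UnsafeMutableRawPointer,
                                       ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
                                       inTimeStamp: UnsafePointer<AudioTimeStamp>,
                                       inBusNumber: UInt32,
                                       inNumberFrames: UInt32,
                                       ioData: UnsafeMutablePointer<AudioBufferList>?) -> OSStatus {
        if ioActionFlags.pointee.contains(.unitRenderAction_PostRender) {
            let node = Unmanaged<LimiterNode>.fromOpaque(inRefCon).takeUnretainedValue()
            // Post-render "tap": inspect ioData here (e.g. compute peak amplitude)
            // and store the result on `node` for use elsewhere.
            _ = node
        }
        return noErr
    }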
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list ([email protected])
Help/Unsubscribe/Update your Subscription:
https://lists.apple.com/mailman/options/coreaudio-api/archive%40mail-archive.com

This email sent to [email protected]
