Hi folks,

I’m new to Core Audio programming and audio programming in general.  Having read 
through some online materials, I’m trying to wrap my head around what is safe 
to do in a realtime audio render callback and what’s not.  As far as I can 
determine, I should really only be calling functions that make guarantees about 
the way they behave: fast, no memory allocation/freeing, consistent CPU demand 
rather than high spikes, etc.
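
To make that concrete, here’s a minimal sketch of the pattern I *think* is the safe one (the names, the plain AURenderCallback signature, and the non-interleaved float buffer layout are all my own assumptions, not how my real code is structured): everything the callback needs is written into plain C fields on the main thread before playback, and the render callback only reads them.

#include <AudioToolbox/AudioToolbox.h>

// Hypothetical track state: everything the render callback needs is a plain
// C value, written on the main thread before playback starts.
typedef struct {
    float gain;   // set on the main thread; read-only in the callback
} TrackState;

static OSStatus safeRenderCallback(void                       *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp       *inTimeStamp,
                                   UInt32                      inBusNumber,
                                   UInt32                      inNumberFrames,
                                   AudioBufferList            *ioData) {
    TrackState *state = (TrackState *)inRefCon;
    // Only touch preallocated buffers and plain struct fields: no malloc/free,
    // no locks, no Objective-C messaging, no file or network I/O.
    for (UInt32 b = 0; b < ioData->mNumberBuffers; b++) {
        float *samples = (float *)ioData->mBuffers[b].mData;
        for (UInt32 i = 0; i < inNumberFrames; i++) {
            samples[i] *= state->gain;
        }
    }
    return noErr;
}

Is that roughly the right mental model?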

This is maybe really obvious, but help me out: how do I know/find out what’s 
safe to do?  For example, can I call AudioUnitGetProperty() in a render thread? 
I suspect the answer is a “that depends…” type of thing, but any pointers to 
help me locate the answers would be much appreciated.  Included below is an 
example of what I’m doing in a render callback I’m working on.  Is this 
sane/OK to do?

I’m using The Amazing Audio Engine (http://theamazingaudioengine.com/), 
targeting Mac OS X 10.9+ and iOS 7+.

static OSStatus _renderCallback2(__unsafe_unretained VEAETrack         *THIS,
                                 __unsafe_unretained AEAudioController *audioController,
                                 const AudioTimeStamp                  *time,
                                 UInt32                                 frameCount,
                                 AudioBufferList                       *audio) {

    // Do the main audio processing. The superclass uses an AUAudioFilePlayer to
    // render the audio, then applies a gain and pan filter using Apple's vDSP functions.
    THIS->_superclassRenderCallback(THIS, audioController, time, frameCount, audio);

    // Get our current time data
    if(noErr == AudioUnitGetProperty(THIS->_au,
                                     kAudioUnitProperty_CurrentPlayTime,
                                     kAudioUnitScope_Global,
                                     0,
                                     &THIS->_audioTimeStamp,
                                     &THIS->_audioTimeStampSize)) {
        UInt32 currLoopCount = floor(THIS->_audioTimeStamp.mSampleTime / THIS->_mFramesToPlay);
        THIS->_currentTime = (THIS->_audioTimeStamp.mSampleTime -
                              ((float)currLoopCount * THIS->_mFramesToPlay)) / THIS->_outSampleRate;

        // Check for callbacks to be done
        if(THIS->_completionBlock) {
            if(THIS->_isLooping) {
                // If we are on a new loop number, trigger the completion callback.
                // This call does not lock/block the realtime thread.
                if(currLoopCount > THIS->_numLoopsCompleted) {
                    THIS->_numLoopsCompleted++;
                    AEAudioControllerSendAsynchronousMessageToMainThread(audioController,
                                                                         _notifyCompletion,
                                                                         &THIS,
                                                                         sizeof(VEAETrack*));
                }
            } else {
                // If we're in the last render callback of a non-looping channel,
                // trigger the completion callback.
                UInt32 remainderPlusFramesThisRender =
                    ((UInt32)THIS->_audioTimeStamp.mSampleTime % THIS->_mFramesToPlay) + frameCount;
                if(remainderPlusFramesThisRender >= THIS->_mFramesToPlay) {
                    AEAudioControllerSendAsynchronousMessageToMainThread(audioController,
                                                                         _notifyCompletion,
                                                                         &THIS,
                                                                         sizeof(VEAETrack*));
                }
            }
        }
    }

    return noErr;
}
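
(For context on the comment near the top: the superclass's gain step is essentially a vector multiply along these lines; this is my own simplification and assumes non-interleaved float buffers.)

#include <Accelerate/Accelerate.h>

// Simplified version of the gain step mentioned above: multiply every sample
// in each buffer by a scalar gain, in place, using vDSP.
static void applyGain(AudioBufferList *audio, UInt32 frameCount, float gain) {
    for (UInt32 b = 0; b < audio->mNumberBuffers; b++) {
        float *samples = (float *)audio->mBuffers[b].mData;
        vDSP_vsmul(samples, 1, &gain, samples, 1, frameCount);
    }
}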

The part I’m wondering about is my call to AudioUnitGetProperty(…) - how would 
I know or find out whether that’s OK to do from the realtime thread as above? 
How about other Core Audio functions, such as calling MusicDeviceMIDIEvent() on 
an Apple AUSampler instrument audio unit from the realtime audio thread?  Or 
must I run my own separate thread parallel to the realtime audio thread to do 
these types of things?
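
Just so that last question is concrete, here is roughly what I’d like to be able to do from inside the render callback (hypothetical helper; whether this is realtime-safe is exactly what I’m asking):

#include <AudioToolbox/AudioToolbox.h>

// Hypothetical: schedule a note-on on an AUSampler from inside the render
// callback, offset some frames into the current buffer for sample-accurate timing.
static OSStatus sendNoteOn(AudioUnit samplerUnit, UInt32 note, UInt32 velocity,
                           UInt32 offsetFrames) {
    const UInt32 kNoteOnStatusChannel0 = 0x90;   // MIDI note-on, channel 0
    return MusicDeviceMIDIEvent(samplerUnit, kNoteOnStatusChannel0,
                                note, velocity, offsetFrames);
}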




