To answer your first question, if you are developing a new audio application, you should not plan to use AUGraph, as it is set for deprecation. Use AVAudioEngine instead. (There are no current plans to deprecate ExtAudioFile or the AudioUnit v2 API. But there are newer APIs: AVAudioFile and AUAudioUnit.)
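As a rough sketch of what the AVAudioEngine equivalent of such a graph can look like (the file path here is hypothetical, and error handling is elided — this is an illustration, not production code):

```swift
import AVFoundation

// Minimal sketch: an engine with one player node feeding the main mixer.
// "/path/to/loop.caf" is a placeholder; substitute your own audio file.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()

let fileURL = URL(fileURLWithPath: "/path/to/loop.caf")
let file = try AVAudioFile(forReading: fileURL)

engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)

try engine.start()
player.scheduleFile(file, at: nil, completionHandler: nil) // whole file, from the top
player.play()
```

You would attach one AVAudioPlayerNode per source and connect each to the mixer, much as the FileAUs feed the mixer unit in the AUGraph picture below.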
Second, to implement playback from arbitrary positions in a file, use an AVAudioPlayerNode with either scheduleSegment ( https://developer.apple.com/documentation/avfoundation/avaudioplayernode/1385884-schedulesegment ) if you want to schedule segments end to end, or scheduleBuffer ( https://developer.apple.com/documentation/avfoundation/avaudioplayernode/1388422-schedulebuffer ) if you need to be able to interrupt currently playing material with new material. In the latter case you will need to load the buffers from the file yourself.

> On Jun 13, 2017, at 2:29 AM, Laurent Noudohounsi <[email protected]> wrote:
>
> Hi every Core Audio expert!
>
> I'm currently developing a professional audio application (research oriented), but I have questions about moving the playhead (a seek function).
>
> My application will be a Swift application, but following the advice of Chris Adamson I decided to build all the Core Audio parts in C/C++.
>
> My application will have one or more sources that the user can mix, and only one output, and I will use only custom DSP code. So I thought to build an AUGraph like this:
>
>   FileAU-1 - - - - - -> |            |
>   ...                   |            |
>   FileAU-n - - - - - -> | Mixer unit |
>   RenderCallback - - -> |            |
>
> Where:
> - each 'FileAU' would be a source generator for audio on disk, using ExtAudioFile (and the properties 'kAudioUnitProperty_ScheduledFileIDs', 'kAudioUnitProperty_ScheduledFileRegion', 'kAudioUnitProperty_ScheduledFilePrime' and 'kAudioUnitProperty_ScheduleStartTimeStamp')
> - the render callback would run my custom DSP processing (I use a personal library in which I can chain several DSP units, so a single render callback with the right DSP-unit initialization can do the job).
>
> My questions are:
>
> 1) Will AUGraph, ExtAudioFileRef, AudioUnit and all the other low-level APIs be deprecated, and do I need to use the AV equivalents (AVAudioEngine, AVAudioUnit and so on) instead? If not, what are the differences between AUGraph and AVAudioEngine? They look the same.
>
> 2) How can I perform a seek in this situation? I cannot manage to do something as simple as a seek. Is it possible to do a global seek using only AUGraph? Or must I use a ring buffer with a render callback, as in a few examples I have seen, even though I don't otherwise need one?
>
> _______________________________________________
> Do not post admin requests to the list. They will be ignored.
> Coreaudio-api mailing list ([email protected])
> Help/Unsubscribe/Update your Subscription:
> https://lists.apple.com/mailman/options/coreaudio-api/jmccartney%40apple.com
>
> This email sent to [email protected]

James McCartney
Apple CoreAudio
[email protected]
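To make the scheduleSegment approach described above concrete, here is one possible seek sketch (the helper name and the assumption that the player node is already attached to a running AVAudioEngine are mine, not from the thread): stop the player to flush what is scheduled, schedule the remainder of the file from the target frame, and restart.

```swift
import AVFoundation

// Hypothetical helper: move 'player' to 'time' (seconds) within 'file'.
// Assumes 'player' is already attached to a running AVAudioEngine.
func seek(player: AVAudioPlayerNode, in file: AVAudioFile, to time: TimeInterval) {
    let sampleRate = file.processingFormat.sampleRate
    let startFrame = AVAudioFramePosition(time * sampleRate)
    let remaining = AVAudioFrameCount(max(0, file.length - startFrame))
    guard remaining > 0 else { return }  // past end of file: nothing to schedule

    player.stop()  // flushes any previously scheduled segments
    player.scheduleSegment(file,
                           startingFrame: startFrame,
                           frameCount: remaining,
                           at: nil,  // nil = render as soon as possible
                           completionHandler: nil)
    player.play()
}
```

For gapless back-to-back playback you would instead schedule each segment end to end without stopping; stop-and-reschedule, as here, is the simple approach when the user drags a playhead.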
