> On Mar 8, 2015, at 9:42 AM, Patrick J. Collins 
> <[email protected]> wrote:
>> 2) Playback position is a special case where you don't even need to
>> pass data. The CoreAudio time line will progress in a predictable way.
>> You merely need to make a note of the start time, and then all time
>> stamps afterwards are relative to your start time, and thus reflect
>> your playback position. Loops will require an update to the start
>> time. There is a header that details the timing calls. If you want to
>> get fancy, you can even calculate the presentation time of your
>> AudioUnit chain, to adjust for the difference between the time audio
>> data is placed in the buffer versus when it is actually heard, and
>> then your view position will be quite accurate.
> 
> I don't see anything in <CoreAudio/HostTime.h> other than:
>  AudioGetCurrentHostTime
>  AudioGetHostClockFrequency
>  AudioGetHostClockMinimumTimeDelta
>  AudioConvertHostTimeToNanos
> 
> How would I get the "time line" of the specified audiounit?

Inside your AudioUnit, the Render time stamp already carries a HostTime (the 
mHostTime field of the AudioTimeStamp passed to your render callback). You can 
compare that value against the HostTime you sample in your code outside the 
Render calls. You'll also want to ask the output device and the other elements 
of your CoreAudio chain for their "presentation time" latency, so you know the 
difference between the HostTime at which data is rendered and the moment it is 
actually heard. With that offset applied, your visual position will line up 
exactly with what you're hearing.

Brian Willoughby
Sound Consulting


Coreaudio-api mailing list      ([email protected])