Thank you!!!!

My problem was that I was just connecting the output unit to a mixer, then
grabbing the audio data and timestamp through a render notify callback
(which must have been giving me the output timestamp). I followed your
instructions and set things up with kAudioOutputUnitProperty_SetInputCallback
instead, and I'm now getting predictable (and sensible) timestamps. Now all
I have to do is go through everything and un-hack my hacky latency
compensation.
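
In case it helps anyone searching the archives later, here's roughly what
the setup now looks like on my end. This is just a minimal sketch: I'm
assuming an AUHAL/RemoteIO unit already created as ioUnit, and the
callback and function names are placeholders.

    #include <AudioUnit/AudioUnit.h>

    // Forward declaration of the input callback installed below.
    static OSStatus MyInputCallback(void *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp *inTimeStamp,
                                    UInt32 inBusNumber,
                                    UInt32 inNumberFrames,
                                    AudioBufferList *ioData);

    // Install the input callback on an existing I/O unit (AUHAL / RemoteIO).
    static OSStatus InstallInputCallback(AudioUnit ioUnit, void *refCon)
    {
        AURenderCallbackStruct cb;
        cb.inputProc       = MyInputCallback;
        cb.inputProcRefCon = refCon;

        return AudioUnitSetProperty(ioUnit,
                                    kAudioOutputUnitProperty_SetInputCallback,
                                    kAudioUnitScope_Global,
                                    0,
                                    &cb,
                                    sizeof(cb));
    }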

Thank you,
Dave


On Mon, Apr 13, 2015 at 10:36 AM, Dan Klingler <[email protected]> wrote:

> > How can a future time stamp represent a buffer of samples from the
> > microphone that has already been captured?
>
>
> You’re correct, host time for an input buffer should be less than
> mach_absolute_time(). For input, you should look at the timestamp that’s
> passed to you as part of your input callback (the one set on the AU with
> kAudioOutputUnitProperty_SetInputCallback). I would expect this host time
> to be less than mach_absolute_time().
>
> For input, you’re the one calling AudioUnitRender (from the input
> callback), so you should call AudioUnitRender with the timestamp you get in
> the input callback.
>
> Dan
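
(For completeness, the input callback Dan describes above then pulls the
captured samples with that same timestamp, something along these lines.
Again only a sketch, assuming mono Float32 data and a fixed maximum
buffer size.)

    static OSStatus MyInputCallback(void *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp *inTimeStamp,
                                    UInt32 inBusNumber,
                                    UInt32 inNumberFrames,
                                    AudioBufferList *ioData)
    {
        // Assumption: inRefCon was set to the I/O unit itself when the
        // callback was installed.
        AudioUnit ioUnit = (AudioUnit)inRefCon;

        // Assumption: mono Float32 samples, and inNumberFrames never
        // exceeds 4096.
        static Float32 sampleBuffer[4096];

        AudioBufferList bufferList;
        bufferList.mNumberBuffers = 1;
        bufferList.mBuffers[0].mNumberChannels = 1;
        bufferList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(Float32);
        bufferList.mBuffers[0].mData = sampleBuffer;

        // Pull the captured samples using the timestamp handed to this
        // callback; its mHostTime describes when the input was captured,
        // so it should be less than mach_absolute_time() here.
        OSStatus err = AudioUnitRender(ioUnit,
                                       ioActionFlags,
                                       inTimeStamp,     // the input timestamp, as-is
                                       inBusNumber,     // input bus (1 on the I/O unit)
                                       inNumberFrames,
                                       &bufferList);
        if (err == noErr) {
            // sampleBuffer now holds inNumberFrames frames of input whose
            // capture time is inTimeStamp->mHostTime.
        }
        return err;
    }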