On Sun, Sep 11, 2016 at 9:09 AM, Hovik Melikyan <[email protected]>
wrote:

>
> On second thought, mute and gain, say on a mixer bus, are in fact
> related. If your app mutes the bus and then changes the gain, you don't
> want to hear the change of gain. So maybe a good implementation of a
> node with mute/gain should guarantee sequential consistency, and I'm
> now wondering how, for example, Apple's AUs deal with this.
>

mute, gain (and solo) have VASTLY more complex semantics than this
suggests, and most of the complications have nothing to do with
processor-level atomicity.

remember that in all cases today, whether you write an AU or some other
piece of audio processing code, you will almost certainly be processing
*blocks* of audio rather than individual samples. so another kind of
atomicity that matters quite a lot is atomicity across the block, rather
than at the processor level. fortunately this tends to be easy to
arrange: just pick up the target/requested values once at the start of
the block and use them throughout, with no possibility of their changing
mid-block.
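
in code, that pattern might look something like this (a minimal C++
sketch, not any real CoreAudio API; MixerBusStrip, requestedGain and
requestedMute are made-up names for illustration):

    #include <atomic>
    #include <cstddef>

    struct MixerBusStrip {
        // written by the control/UI thread at any time
        std::atomic<float> requestedGain{1.0f};
        std::atomic<bool>  requestedMute{false};

        // called on the render thread, once per block
        void process(float* buffer, std::size_t frames) {
            // pick up both values exactly once, at the start of the
            // block; neither can change mid-block, so if mute was seen
            // as set, this whole block renders silent no matter what
            // the control thread does to the gain in the meantime
            const bool  mute = requestedMute.load();
            const float gain = mute ? 0.0f : requestedGain.load();
            for (std::size_t i = 0; i != frames; ++i)
                buffer[i] *= gain;
        }
    };

the control thread just stores new values (strip.requestedGain.store(0.5f);)
and the render callback never sees a parameter change take effect partway
through a block.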