Hi.

I'm building a simple streaming system: I want to capture sound, FLAC-compress 
it n packets at a time, and upload it to a server in real time.
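
For context, here's roughly what I have in mind for the capture side. This is 
just a sketch from reading the docs, not working code: HandleInputBuffer is a 
placeholder name, the format (16-bit mono PCM at 44.1 kHz), buffer count, and 
buffer size are arbitrary, and I've left out all error handling.

#include <AudioToolbox/AudioToolbox.h>

// Placeholder input callback: the queue hands each filled buffer back here.
static void HandleInputBuffer(void *inUserData,
                              AudioQueueRef inAQ,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDesc)
{
    // inBuffer->mAudioData holds inBuffer->mAudioDataByteSize bytes of PCM;
    // this is where I'd hand the data off to the FLAC encoder / uploader.

    // Re-enqueue the buffer so the queue can keep recording into it.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

int main(void)
{
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger
                          | kLinearPCMFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = 2;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = 2;

    AudioQueueRef queue = NULL;
    AudioQueueNewInput(&fmt, HandleInputBuffer, NULL,
                       CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &queue);

    // Give the queue a few buffers to cycle through.
    for (int i = 0; i < 3; ++i) {
        AudioQueueBufferRef buf = NULL;
        AudioQueueAllocateBuffer(queue, 4096, &buf);
        AudioQueueEnqueueBuffer(queue, buf, 0, NULL);
    }

    AudioQueueStart(queue, NULL);
    CFRunLoopRun();   // callbacks arrive on this run loop
    return 0;
}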

I haven't used AudioQueues before, but have some experience with AudioUnits.  I 
seem to recall reading someplace (maybe in the Adamson/Avila book) that 
AudioQueues are well-suited for streaming.  Can someone explain to me why that 
is?

Down the road, I'll also need to eliminate silence from the data before I 
compress and send it (silence is relative and based on background noise).  
Would AudioQueues still be a good choice given that I'll need direct access to 
the sound data for silence detection and elimination?
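
For the silence check, I'm imagining something simple like an RMS test against 
a slowly adapting noise-floor estimate, applied to each buffer of 16-bit PCM 
before encoding. Again just a sketch; BufferIsSilent is my own name, and the 
0.95/0.05 smoothing and the 1.5x threshold are numbers I made up.

#include <math.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

// Running estimate of the background noise floor (RMS of recent buffers).
static double sNoiseFloor = 0.0;

// Return true if a buffer of 16-bit PCM looks like silence relative to
// the current noise-floor estimate.
static bool BufferIsSilent(const int16_t *samples, size_t count)
{
    if (count == 0)
        return true;

    double sumSq = 0.0;
    for (size_t i = 0; i < count; ++i)
        sumSq += (double)samples[i] * (double)samples[i];
    double rms = sqrt(sumSq / (double)count);

    // Smooth the floor slowly so it tracks the background rather than speech.
    sNoiseFloor = 0.95 * sNoiseFloor + 0.05 * rms;

    // "Silence" = not meaningfully above the background level.
    return rms < sNoiseFloor * 1.5;
}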

Thank you.

-mz