I did a great deal of experimentation and research to get the VPIO unit to work 
for me on both OS X and iOS.  The VPIO unit is not, in general, a drop-in 
replacement for RemoteIO.  There are some cases where it can drop in, and 
others where it cannot.  It is not completely the same on OS X and iOS, either 
- or rather, it cannot be, because of subtle differences between the two 
environments.  I did this work on Mavericks and iOS 7, so it was a short while 
back.

Here are some potentially helpful remarks from my notes:
* It seems to be necessary to turn off MuteOutput on the VPIO unit; that 
property doesn't exist on AHAL.  ("Seems" means I was uncertain.)  See the 
MuteOutput sketch after this list.
* The VPIO unit gives you two callbacks: the "input" callback (VPIO gives the 
callee data from the microphone) and the "render" callback (VPIO expects the 
callee to provide data destined for the speaker).  However, if you use the 
AUGraph API, the AUGraphSetNodeInputCallback() function sets the render 
callback, not the input callback.  This cost me a great deal of time.  See 
the callback-wiring sketch after this list.
* The VPIO unit does some fancy footwork so that its input and output hardware 
do not need to be synchronized or linked in any way, very unlike AHAL.
* VPIO and AHAL each have a certain amount of built-in format conversion 
capability, but they are not the same.  Hence some apps will need a converter 
unit for one but not the other.  I did not note the specifics, sorry.
* When your callback provides data to VPIO, it must fill in exactly the number 
of samples requested - no more, no less.  Also, it cannot block (or at least, 
not for long).  It's called on a real-time thread, and if it doesn't return 
quickly enough you'll get crap out the speaker.  If you think about it, this 
is obvious (to me, anyway).  The example programs I've seen all use 
CARingBuffer to buffer data, but for some purposes that's not necessary and 
ends up copying the data extra times.  See the render-callback sketch after 
this list.
* When you set up your input callback, on AHAL the element is ignored, but on 
VPIO it has to be element 1 (input, i.e. the microphone) and the global scope 
- the callback-wiring sketch below shows this.
* The VPIO unit does not have the StartTimestampsAtZero property even though 
all output units are supposed to have it.
* A number of audio unit properties are write-only.  This really sucked when 
I was trying to debug what was going on, especially since it is not mentioned 
in any Apple documentation nor anywhere else on the web that I could find.
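
Since it took me a while to find the right incantations, here are a few 
sketches reconstructed from my notes - so treat the exact scope/element 
choices as my best recollection, not gospel.  First, turning off MuteOutput:

#include <AudioUnit/AudioUnit.h>

/* Unmute the VPIO output path.  kAUVoiceIOProperty_MuteOutput is a UInt32;
   0 = unmuted.  I believe this goes on the global scope, element 0. */
static OSStatus DisableMuteOutput(AudioUnit vpioUnit)
{
    UInt32 mute = 0;
    return AudioUnitSetProperty(vpioUnit,
                                kAUVoiceIOProperty_MuteOutput,
                                kAudioUnitScope_Global,
                                0,
                                &mute,
                                sizeof(mute));
}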
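
Next, a sketch of wiring up both callbacks with the plain AudioUnit API 
rather than AUGraph, which also shows the element-1/global-scope requirement 
for the input callback.  MicInputProc and SpeakerRenderProc are hypothetical 
AURenderCallback functions you supply:

#include <AudioUnit/AudioUnit.h>

static OSStatus WireVPIOCallbacks(AudioUnit vpioUnit,
                                  AURenderCallback MicInputProc,
                                  AURenderCallback SpeakerRenderProc,
                                  void *context)
{
    AURenderCallbackStruct cb;
    OSStatus err;

    /* "Input" callback: VPIO hands you microphone data.  Note element 1
       (input) and the global scope, per the remarks above. */
    cb.inputProc = MicInputProc;
    cb.inputProcRefCon = context;
    err = AudioUnitSetProperty(vpioUnit,
                               kAudioOutputUnitProperty_SetInputCallback,
                               kAudioUnitScope_Global,
                               1,                  /* element 1 = input bus */
                               &cb, sizeof(cb));
    if (err != noErr) return err;

    /* "Render" callback: VPIO asks you for data destined for the speaker.
       This is the one AUGraphSetNodeInputCallback() actually sets. */
    cb.inputProc = SpeakerRenderProc;
    cb.inputProcRefCon = context;
    return AudioUnitSetProperty(vpioUnit,
                                kAudioUnitProperty_SetRenderCallback,
                                kAudioUnitScope_Input,
                                0,                 /* element 0 = output bus */
                                &cb, sizeof(cb));
}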
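
Last, a sketch of the shape a real-time-safe render callback needs: fill 
exactly inNumberFrames frames, never block, and pad with silence on underrun 
rather than returning short.  MyFIFORead is a hypothetical non-blocking, 
lock-free FIFO read, standing in for whatever buffering scheme you use:

#include <AudioUnit/AudioUnit.h>
#include <string.h>

/* Hypothetical non-blocking read; returns the number of bytes copied. */
extern UInt32 MyFIFORead(void *fifo, void *dst, UInt32 maxBytes);

static OSStatus SpeakerRenderProc(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer *buf = &ioData->mBuffers[i];
        UInt32 got = MyFIFORead(inRefCon, buf->mData, buf->mDataByteSize);
        if (got < buf->mDataByteSize) {
            /* Underrun: zero-fill the remainder instead of blocking.
               VPIO wants exactly inNumberFrames - no more, no less. */
            memset((char *)buf->mData + got, 0, buf->mDataByteSize - got);
        }
    }
    return noErr;
}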

I was never able to get an AUConverter unit to connect automatically to a VPIO 
unit.  I wanted data to flow from the AUConverter to the VPIO (and out the 
speaker), i.e. I wanted the VPIO's rendering to pull data from the 
AUConverter.  I tried the AUGraph API - AUGraphConnectNodeInput() - and that 
didn't work.  I tried the equivalent AudioUnit API - AudioUnitSetProperty(..., 
kAudioUnitProperty_MakeConnection, ...) - and that didn't work either.  So I 
wrote a trivial function for the render callback of the VPIO that calls 
AudioUnitRender() on the AUConverter, and it worked perfectly when set up with 
either the AUGraph or AudioUnit API (that is, with 
AUGraphSetNodeInputCallback() or AudioUnitSetProperty(..., 
kAudioUnitProperty_SetRenderCallback, ...)).  A sketch of that callback 
follows below.  I posted about it to this list and a helpful person 
eventually reproduced the problem.  I submitted a bug to Apple, and last I 
heard nothing had happened.
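
For reference, that trivial render callback amounts to something like this 
(gConverterUnit being the already-configured AUConverter instance; the names 
here are mine, not from my original code):

#include <AudioUnit/AudioUnit.h>

static AudioUnit gConverterUnit;   /* created and initialized elsewhere */

static OSStatus PullFromConverter(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    /* Render the converter's output (element 0) straight into the buffers
       VPIO handed us; VPIO then sends the result out the speaker. */
    return AudioUnitRender(gConverterUnit,
                           ioActionFlags,
                           inTimeStamp,
                           0,                 /* converter output element */
                           inNumberFrames,
                           ioData);
}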

The actual echo cancellation of VPIO is quite good.  I just wish it weren't 
such a massive research project to get it to work in an application.

Hope this helps.

Steven J. Clark
VGo Communications

-----Original Message-----
From: [email protected] 
[mailto:[email protected]] On 
Behalf Of Adit Ranadive
Sent: Monday, October 20, 2014 1:58 PM
To: [email protected]
Subject: Using VoiceProcessingIO Audio Unit

Hi,

I am trying to write a sample VoIP application that uses the VoiceProcessingIO 
audio unit to perform echo cancellation.  However, I am getting the error 
-10851 (kAudioUnitErr_InvalidPropertyValue) when calling AudioUnitRender in my 
inputCallback function.

I am running this on Mac OS X 10.9 and from the Mac developer library it seems 
that VoiceProcessing was added in OS X 10.7.

My sample code is similar to the one given here:
http://atastypixel.com/blog/using-remoteio-audio-unit/

Except that I am using a VoiceProcessingIO-type audio unit and do not call 
enableIO on the VoiceProcessing unit.  In my input callback, however, I am 
passing my own allocated buffers to AudioUnitRender.

Any suggestions for fixing the error?

Thanks,
Adit