Re: AUGraph deprecation

2018-07-11 Thread Laurent Noudohounsi
Thanks Benjamin for the clarification. I thought that `installTapOnBus` was
the successor of `RenderCallback`. It did not feel natural to me to mix an
old API like `kAudioUnitProperty_SetRenderCallback` into AVAudioEngine.

So, as Arshia said, I'm also looking for a way to do real-time processing
with AVAudioEngine.
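
On the latency side, my understanding is that on iOS the IO buffer size is
requested through AVAudioSession rather than through the engine itself. A
sketch of that request, just to make the question concrete (the system may
still grant a larger buffer):

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
// Ask for roughly 64 frames at 44.1 kHz; read session.IOBufferDuration
// after activation to see what was actually granted.
[session setPreferredIOBufferDuration:(64.0 / 44100.0) error:&error];
[session setActive:YES error:&error];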

On Wed, 11 Jul 2018 at 15:05, Arshia Cont wrote:

> Interesting thread here!
>
> Has anyone achieved low-latency processing with AVAudioEngine?
>
> The RenderCallback approach seems natural to me (it is the good “old” way of
> doing it with AUGraph). But I’m curious to hear whether anyone has achieved
> real results with real-time processing in AVAudioEngine, and how.
>
>
> Arshia
>
>
> On 11 Jul 2018, at 15:00, Benjamin Federer  wrote:
>
> Laurent,
>
> `installTapOnBus` is not intended for realtime processing, as a tap only
> provides the current frame buffer but does not pass it back into the signal
> chain. The documentation reads: `Installs an audio tap on the bus to record,
> monitor, and observe the output of the node`.
>
> Although I have not done that myself yet, my understanding is that for
> realtime processing you can still retrieve the underlying audio unit from
> an AVAudioNode (or at least from some nodes?) and attach an input render
> callback via AudioUnitSetProperty with kAudioUnitProperty_SetRenderCallback.
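>
> A rough sketch of what I mean (again, untested; `engine` would be a running
> AVAudioEngine, and the IO nodes expose their underlying v2 unit via the
> `audioUnit` property):
>
> static OSStatus MyRenderCallback(void *inRefCon,
>     AudioUnitRenderActionFlags *ioActionFlags,
>     const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber,
>     UInt32 inNumberFrames, AudioBufferList *ioData)
> {
>     // Fill ioData on the render thread; realtime-safe code only.
>     return noErr;
> }
>
> // This feeds the callback into input bus 0 of the output unit, replacing
> // whatever the engine had connected there, so treat it as an experiment.
> AudioUnit unit = engine.outputNode.audioUnit;
> AURenderCallbackStruct callback = { MyRenderCallback, NULL };
> AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
>                      kAudioUnitScope_Input, 0, &callback, sizeof(callback));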
>
> I assume the other way would be to subclass AUAudioUnit and wrap that into
> an AVAudioUnit, which is a subclass of AVAudioNode. Yes, it confuses me,
> too. A random Google result with further information:
> https://forums.developer.apple.com/thread/72674
>
> Benjamin
>
>
> On 11 Jul 2018, at 14:34, Laurent Noudohounsi
> <laurent.noudohou...@gmail.com> wrote:
>
> Hi all,
>
> I'm interested in this topic since I've not found any information about it
> yet.
>
> Correct me if I'm wrong, but AVAudioEngine is not able to go below 100 ms
> of latency. That's what I see in the header file of `AVAudioNode`, in its
> method `installTapOnBus`:
>
> @param bufferSize the requested size of the incoming buffers in sample
> frames. Supported range is [100, 400] ms.
>
> Maybe I'm wrong, but I don't see any other way to get lower-latency audio
> processing in an AVAudioNode.
>
> Best,
> Laurent
>
> On Wed, 11 Jul 2018 at 13:57, Arshia Cont wrote:
>
>> Benjamin and list,
>>
>> I second Benjamin’s request. It would be great if someone from the
>> CoreAudio team could respond to the question.
>>
>> Two years ago, after basic tests, I realised that AVAudioEngine was not
>> ready for low-latency audio analysis on iOS, so we used AUGraph. I have a
>> feeling that this is no longer the case on iOS and that we can move to
>> AVAudioEngine for low-latency audio processing. Can anyone share experience
>> here? We do real-time spectral analysis and resynthesis of sound and go as
>> low as 64 samples per cycle if the device allows.
>>
>> Thanks in advance.
>>
>>
>> Arshia
>>
>>
>> PS: I actually brought up the deprecation issue of AUGraph at a local Apple
>> Dev meeting where the EU director of developer relations was present.
>> According to him, when Apple announces a deprecation, it WILL happen. My
>> interpretation of the conversation is that AUGraph is no longer maintained
>> but provided as is.
>>
>> On 11 Jul 2018, at 12:36, Benjamin Federer  wrote:
>>
>> Since it was mentioned in another email thread, I’m giving this topic a
>> bump. It would be great if someone at Apple, or anyone else in the know,
>> could take the time to respond. The documentation at the link cited below
>> still has no indication of deprecation. Will it come with one of the next
>> Xcode beta releases?
>>
>> On another note, I am really interested in how transitioning over to
>> AVAudioEngine is working out for everyone. I know AVAudioEngine on iOS;
>> what I am interested in is any macOS specifics or hardships.
>>
>> In my experience, AVAudioEngine is relatively robust in handling
>> multiple graphs, i.e. separate chains of audio units. I had some issues
>> with the AVAudioPlayerNode connecting to multiple destinations in that
>> scenario. Also, connect:toConnectionPoints:fromBus:format: did not work for
>> me, as it only connected to one of the destination points. Has anyone else
>> experienced problems in that regard?
>>
>> Thanks
>>
>> Benjamin
>>
>>
>> On 08.06.2018, at 16:59, Benjamin Federer wrote:
>>
>> Last year at WWDC it was announced that AUGraph would be deprecated in
>> 2018. I just browsed the documentation (
>> https://developer.apple.com/documentation/audiotoolbox?changes=latest_major)
>> but found Audio Unit Processing Graph Services not marked for deprecation.
>> The AUGraph header files rolled out with Xcode 10 beta also have no mention
>> of a deprecation in 10.14. I searched for audio-specific sessions at this
>> year’s WWDC but wasn’t able to find anything relevant. Has anyone come
>> across new information regarding this?
>>
>> Judging by how many changes and features Apple seems to be holding back
>> until next year, I dare ask: has AUGraph API deprecation been moved to a
>> later time?
>>
>> Benjamin

Re: AUGraph deprecation

2018-07-11 Thread Laurent Noudohounsi
Hi all,

I'm interested in this topic since I've not found any information about it
yet.

Correct me if I'm wrong, but AVAudioEngine is not able to go below 100 ms of
latency. That's what I see in the header file of `AVAudioNode`, in its method
`installTapOnBus`:

@param bufferSize the requested size of the incoming buffers in sample
frames. Supported range is [100, 400] ms.

Maybe I'm wrong, but I don't see any other way to get lower-latency audio
processing in an AVAudioNode.

Best,
Laurent

On Wed, 11 Jul 2018 at 13:57, Arshia Cont wrote:

> Benjamin and list,
>
> I second Benjamin’s request. It would be great if someone from the
> CoreAudio team could respond to the question.
>
> Two years ago, after basic tests, I realised that AVAudioEngine was not
> ready for low-latency audio analysis on iOS, so we used AUGraph. I have a
> feeling that this is no longer the case on iOS and that we can move to
> AVAudioEngine for low-latency audio processing. Can anyone share experience
> here? We do real-time spectral analysis and resynthesis of sound and go as
> low as 64 samples per cycle if the device allows.
>
> Thanks in advance.
>
>
> Arshia
>
>
> PS: I actually brought up the deprecation issue of AUGraph at a local Apple
> Dev meeting where the EU director of developer relations was present.
> According to him, when Apple announces a deprecation, it WILL happen. My
> interpretation of the conversation is that AUGraph is no longer maintained
> but provided as is.
>
> On 11 Jul 2018, at 12:36, Benjamin Federer  wrote:
>
> Since it was mentioned in another email thread, I’m giving this topic a
> bump. It would be great if someone at Apple, or anyone else in the know,
> could take the time to respond. The documentation at the link cited below
> still has no indication of deprecation. Will it come with one of the next
> Xcode beta releases?
>
> On another note, I am really interested in how transitioning over to
> AVAudioEngine is working out for everyone. I know AVAudioEngine on iOS;
> what I am interested in is any macOS specifics or hardships.
>
> In my experience, AVAudioEngine is relatively robust in handling multiple
> graphs, i.e. separate chains of audio units. I had some issues with the
> AVAudioPlayerNode connecting to multiple destinations in that scenario.
> Also, connect:toConnectionPoints:fromBus:format: did not work for me, as it
> only connected to one of the destination points. Has anyone else experienced
> problems in that regard?
>
> Thanks
>
> Benjamin
>
>
> On 08.06.2018, at 16:59, Benjamin Federer wrote:
>
> Last year at WWDC it was announced that AUGraph would be deprecated in
> 2018. I just browsed the documentation (
> https://developer.apple.com/documentation/audiotoolbox?changes=latest_major)
> but found
> Audio Unit Processing Graph Services not marked for deprecation.
> The AUGraph header files rolled out with Xcode 10 beta also have no mention
> of a deprecation in 10.14. I searched for audio-specific sessions at this
> year’s WWDC but wasn’t able to find anything relevant. Has anyone come
> across new information regarding this?
>
> Judging by how many changes and features Apple seems to be holding back
> until next year, I dare ask: has AUGraph API deprecation been moved to a
> later time?
>
> Benjamin
>
>


Play multiple AUFilePlayers in sync

2017-08-09 Thread Laurent Noudohounsi
Hi,

I've been struggling to play multiple AUFilePlayers at the same time.

I use an AUGraph and set up all my AUFilePlayer regions like this:

memset(&audio_file_region_.mTimeStamp, 0,
       sizeof(audio_file_region_.mTimeStamp));
audio_file_region_.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
audio_file_region_.mTimeStamp.mSampleTime = 0;
audio_file_region_.mCompletionProc = NULL;
audio_file_region_.mCompletionProcUserData = NULL;
audio_file_region_.mAudioFile = audio_file_ID;
audio_file_region_.mLoopCount = 0;
audio_file_region_.mStartFrame = 0;
audio_file_region_.mFramesToPlay = 0;

Notice that `mFramesToPlay = 0`, because I Play/Pause/Stop from the
AUFilePlayers and not from the AUGraph.

This way I start the graph once, but I start playback whenever I want by
calling AUFilePlayer->Play(). Inside the Play() method I set the right
number of frames for `mFramesToPlay` and playback starts.
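
A simplified sketch of that Play() body (member names are mine):

// Restore the real region length, then hand the region back to the unit.
audio_file_region_.mFramesToPlay = (UInt32)total_frames_in_file_;
AudioUnitSetProperty(file_player_unit_,
                     kAudioUnitProperty_ScheduledFileRegion,
                     kAudioUnitScope_Global, 0,
                     &audio_file_region_, sizeof(audio_file_region_));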

Everything works well this way.

But now let's say I have two different AUFilePlayers, called file_player_1
and file_player_2, which will play the same audio file.

When I do something like this:

// Do all the setup things
file_player_1->Play();
file_player_2->Play();

I can hear the mix of my AUFilePlayers, but the playbacks are phase-shifted.
I think it's because of the delay caused by `file_player_1->Play();`: by the
time `file_player_2->Play();` runs, file_player_1's playback has already
started…


I looked at how to fix it, and the workaround I found is to change the
`kAudioUnitProperty_ScheduleStartTimeStamp` property. Instead of using
`time_stamp_.mSampleTime = -1;` (i.e. start at the next render cycle) I used
`time_stamp_.mSampleTime = 0;`, and it seems to work.
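
Concretely, a simplified sketch of the workaround (file_player_1_unit and
file_player_2_unit being each player's underlying audio unit):

AudioTimeStamp start_time;
memset(&start_time, 0, sizeof(start_time));
start_time.mFlags = kAudioTimeStampSampleTimeValid;
start_time.mSampleTime = 0;  // instead of -1 ("next render cycle")

// Give both players the exact same start time so they begin together.
AudioUnitSetProperty(file_player_1_unit,
                     kAudioUnitProperty_ScheduleStartTimeStamp,
                     kAudioUnitScope_Global, 0,
                     &start_time, sizeof(start_time));
AudioUnitSetProperty(file_player_2_unit,
                     kAudioUnitProperty_ScheduleStartTimeStamp,
                     kAudioUnitScope_Global, 0,
                     &start_time, sizeof(start_time));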

But honestly, I don't really know what
`kAudioUnitProperty_ScheduleStartTimeStamp` does, or whether it's the right
way to play multiple AUFilePlayers in sync, since everybody uses
`time_stamp_.mSampleTime = -1`.

What would be your recommendation for this kind of behaviour?


Thank you
Laurent Noudohounsi.


Re: How to set up the sample rate in an AUGraph

2017-07-22 Thread Laurent Noudohounsi
Brian, thank you so much for your quick and clear answer.

So to set the right stream format, I just need to connect the audio units
first, and only after that do:

AudioUnitSetProperty(
    audio_unit_,
    kAudioUnitProperty_StreamFormat,
    kAudioUnitScope_Global,
    0,
    &my_stream_format,
    sizeof(my_stream_format));

or even:

AudioUnitSetProperty(
    audio_unit_,
    kAudioUnitProperty_SampleRate,
    kAudioUnitScope_Global,
    0,
    &new_sample_rate,
    sizeof(new_sample_rate));

Indeed it worked; I didn't manage to make it work before because I called
these functions before connecting the nodes…

I think I'll use the default sample rate and convert every AUFilePlayer.
But:
1°) What do I need to use to have the AUFilePlayer provide sample rate
conversion? Is it another audio unit connected to the output of the
AUFilePlayer?
2°) Is this approach expensive?

And since you told me that using a callback and tracking the frames wasn't
a good solution, what would be your approach to know when the end of a file
is reached?

I don't see any callback called at the end, and using mSampleTime is not
accurate, since when I pause the AUFilePlayer I just set the field
`mFramesToPlay` of a ScheduledAudioFileRegion to 0.
Indeed, since I use an AUGraph, I start it only once, so all my audio units
are rendering all the time, and when I want to pause one I use the trick I
described. I do this because I manage to stop a player with `AudioUnitReset`,
but I don't see any function to resume playback except `AudioUnitRender`,
which asks for many parameters I don't have, like `AudioBufferList* ioData`,
since the AUGraph handles rendering for me…

2017-07-20 22:27 GMT+02:00 Brian Willoughby <bri...@audiobanshee.com>:

>
> On Jul 20, 2017, at 4:02 AM, Laurent Noudohounsi
> <laurent.noudohou...@gmail.com> wrote:
> > But I have some questions about the behaviour of a custom callback within
> > an AUGraph.
> > Indeed, I have the feeling that the AUGraph uses default settings for the
> > callback options, as if the whole graph were at 44.1 kHz with 2 channels.
>
> Yes, the documentation states that all AudioUnits have 44.1kHz stereo
> 32-bit float as the format. You must change the format after connecting all
> nodes, but before starting the graph. Note that each AU has both an input
> format and an output format. Those should all be set (there are some
> shortcuts, but when in doubt set them all).
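>
> For illustration, setting one side of one connection might look like this
> (a sketch; `mixer_unit` would come from AUGraphNodeInfo, and the flags
> assume the usual non-interleaved float format):
>
> AudioStreamBasicDescription fmt = {0};
> fmt.mSampleRate       = 44100.0;
> fmt.mFormatID         = kAudioFormatLinearPCM;
> fmt.mFormatFlags      = kAudioFormatFlagsNativeFloatPacked
>                       | kAudioFormatFlagIsNonInterleaved;
> fmt.mBitsPerChannel   = 32;
> fmt.mChannelsPerFrame = 2;
> fmt.mFramesPerPacket  = 1;
> fmt.mBytesPerFrame    = sizeof(Float32);  // per channel, non-interleaved
> fmt.mBytesPerPacket   = fmt.mBytesPerFrame;
>
> // After AUGraphConnectNodeInput, but before AUGraphInitialize/AUGraphStart:
> AudioUnitSetProperty(mixer_unit, kAudioUnitProperty_StreamFormat,
>                      kAudioUnitScope_Input, 0, &fmt, sizeof(fmt));
> AudioUnitSetProperty(mixer_unit, kAudioUnitProperty_StreamFormat,
>                      kAudioUnitScope_Output, 0, &fmt, sizeof(fmt));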
>
>
> > I use a file with a sampling rate of 11025 Hz (= 1/4 * 44100). The
> > callback tells me that the frame position reaches the number of frames
> > too early: for a 16 sec file, I reach the end at 4 sec (1/4 of 16 sec).
>
> It seems that 99% of your problem is the mismatch of sample rates.
> However, even after you solve this problem you should be wary of using this
> technique for file playback. The FilePlayer AudioUnit will still load from
> your file in advance of the playback time, which can lead to your code
> reacting too early. You need to account for the latency of your graph, and
> not stop the audio until all the samples are flushed. If you do not do
> this, your code will cut off the end of the audio file, which won't be
> heard. In other words, even after you get rid of the 1/4 rate error, you
> will still see the frame position reach the number of frames too early.
>
>
> > So how can I handle this problem? Can I set a global setting on the
> > AUGraph, or should I set the format for every audio unit?
>
> There is no global setting for an AUGraph with regard to sample rate or
> bit depth. A global setting is impossible because some nodes can have
> sample rate conversion, such that their input and output are different.
>
> You need to choose either 44.1 kHz or 11.025 kHz, and then set all of the
> AU nodes to have your chosen rate on both input and output. If you run your
> graph at 44.1 kHz, then the FilePlayer should provide sample rate
> conversion. If you run your graph at 11.025 kHz, then your output AU will
> handle conversion to the hardware sample rate.
>
> Brian Willoughby
>
>


Re: How to move a playhead (seek) in an AUGraph

2017-07-12 Thread Laurent Noudohounsi
Thank you very much, James, for your answer. Also, I apologize for the very
late reply, but I had to step away from programming this last month.

Indeed you're right about AVAudioEngine; I didn't know AUGraph would be
deprecated...
But I have a question: I've heard for many years that using Obj-C and Swift
code decreases the performance of Core Audio. Will I get lower performance
using AVAudioEngine instead of AUGraph?

Btw, for those who are interested in seeking within an AUGraph, I found the
answer: on each FilePlayer in your AUGraph, change the `mStartFrame` field of
the `ScheduledAudioFileRegion` and set up the file player again with this new
`ScheduledAudioFileRegion` structure.
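
In code, the seek looks roughly like this (names are mine):

// Drop the player's current schedule, then reschedule from the new frame.
AudioUnitReset(file_player_unit_, kAudioUnitScope_Global, 0);

audio_file_region_.mStartFrame = new_start_frame;
AudioUnitSetProperty(file_player_unit_,
                     kAudioUnitProperty_ScheduledFileRegion,
                     kAudioUnitScope_Global, 0,
                     &audio_file_region_, sizeof(audio_file_region_));

AudioTimeStamp start_time;
memset(&start_time, 0, sizeof(start_time));
start_time.mFlags = kAudioTimeStampSampleTimeValid;
start_time.mSampleTime = -1;  // start at the next render cycle
AudioUnitSetProperty(file_player_unit_,
                     kAudioUnitProperty_ScheduleStartTimeStamp,
                     kAudioUnitScope_Global, 0,
                     &start_time, sizeof(start_time));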

2017-06-19 23:36 GMT+02:00 James McCartney <jmccart...@apple.com>:

>
> To answer your first question, if you are developing a new audio
> application, you should not plan to use AUGraph, as it is set for
> deprecation. Use AVAudioEngine instead.
> (There are no current plans to deprecate ExtAudioFile or the AudioUnit v2
> API. But there are newer APIs: AVAudioFile and AUAudioUnit.)
>
> Second, to implement playing back from random places in a file, you would
> use an AVAudioPlayerNode and either scheduleSegment
> (https://developer.apple.com/documentation/avfoundation/avaudioplayernode/1385884-schedulesegment)
> if you want to schedule segments end to end, or scheduleBuffer
> (https://developer.apple.com/documentation/avfoundation/avaudioplayernode/1388422-schedulebuffer)
> if you need to be able to interrupt currently playing material with new
> material. In the latter case you will need to load buffers from the file
> yourself.
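>
> A sketch of the scheduleSegment option (assuming a player node `player`
> already attached and connected, and a source `fileURL`):
>
> NSError *error = nil;
> AVAudioFile *file = [[AVAudioFile alloc] initForReading:fileURL
>                                                   error:&error];
>
> // Seek: play from 10 seconds into the file through to its end.
> AVAudioFramePosition start =
>     (AVAudioFramePosition)(10.0 * file.processingFormat.sampleRate);
> AVAudioFrameCount count = (AVAudioFrameCount)(file.length - start);
>
> [player stop];
> [player scheduleSegment:file
>           startingFrame:start
>              frameCount:count
>                  atTime:nil
>       completionHandler:nil];
> [player play];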
>
> On Jun 13, 2017, at 2:29 AM, Laurent Noudohounsi
> <laurent.noudohou...@gmail.com> wrote:
>
> Hi every Core Audio expert!
>
> I’m currently developing a professional audio application (research
> oriented), but I have questions about moving the playhead (seek function).
>
> My application will be a Swift application, but following the advice of
> Chris Adamson, I decided to build all the Core Audio code in C/C++.
>
> My application will have one or more sources that the user can mix, and
> only one output.
> And I will use only custom DSP code.
> So I thought of an AUGraph like this:
>
>
> FileAU-1 -------------> |            |
> …                       |            |
>                         | Mixer unit |
> FileAU-n -------------> |            |
> RenderCallback -------> |            |
>
>
> Where:
> - the FileAU would be my source generator from audio on disk, using
> ExtAudioFile (and the properties kAudioUnitProperty_ScheduledFileIDs,
> kAudioUnitProperty_ScheduledFileRegion,
> kAudioUnitProperty_ScheduledFilePrime and
> kAudioUnitProperty_ScheduleStartTimeStamp)
>
> - the render callback would be my custom DSP processing (I use a personal
> lib where I can chain several DSP units, so a single render callback with
> the right DSP unit initialization can do the job).
>
>
> My questions are:
>
> 1°) Will AUGraph, ExtAudioFileRef, AudioUnit and all the low-level APIs be
> deprecated, and do I need to use the AV equivalents like AVAudioEngine,
> AVAudioUnit and so on instead? If not, what are the differences between
> AUGraph and AVAudioEngine? They look the same to me.
>
> 2°) How can I perform a seek in this situation? I cannot manage to do
> something as simple as a seek. Is it possible to do a global seek using
> only AUGraph? Or must I use a ring buffer with a render callback, as in a
> few examples I saw, even though I don’t use one?
>
>
> James McCartney
> Apple CoreAudio
> jmccart...@apple.com
>
>
>
>


How to move a playhead (seek) in an AUGraph

2017-06-13 Thread Laurent Noudohounsi
Hi every Core Audio expert!

I’m currently developing a professional audio application (research
oriented), but I have questions about moving the playhead (seek function).

My application will be a Swift application, but following the advice of
Chris Adamson, I decided to build all the Core Audio code in C/C++.

My application will have one or more sources that the user can mix, and
only one output.
And I will use only custom DSP code.
So I thought of an AUGraph like this:


FileAU-1 -------------> |            |
…                       |            |
                        | Mixer unit |
FileAU-n -------------> |            |
RenderCallback -------> |            |


Where:
- the FileAU would be my source generator from audio on disk, using
ExtAudioFile (and the properties
kAudioUnitProperty_ScheduledFileIDs,
kAudioUnitProperty_ScheduledFileRegion,
kAudioUnitProperty_ScheduledFilePrime and
kAudioUnitProperty_ScheduleStartTimeStamp)

- the render callback would be my custom DSP processing (I use a personal
lib where I can chain several DSP units, so a single render callback with
the right DSP unit initialization can do the job).
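
A skeleton of how I would build that graph (one file player shown; the
render-callback input and my DSP lib are omitted, and the output subtype
would be kAudioUnitSubType_RemoteIO on iOS):

AUGraph graph;
NewAUGraph(&graph);

AudioComponentDescription player_desc = { kAudioUnitType_Generator,
    kAudioUnitSubType_AudioFilePlayer, kAudioUnitManufacturer_Apple, 0, 0 };
AudioComponentDescription mixer_desc = { kAudioUnitType_Mixer,
    kAudioUnitSubType_MultiChannelMixer, kAudioUnitManufacturer_Apple, 0, 0 };
AudioComponentDescription output_desc = { kAudioUnitType_Output,
    kAudioUnitSubType_DefaultOutput, kAudioUnitManufacturer_Apple, 0, 0 };

AUNode player_node, mixer_node, output_node;
AUGraphAddNode(graph, &player_desc, &player_node);
AUGraphAddNode(graph, &mixer_desc, &mixer_node);
AUGraphAddNode(graph, &output_desc, &output_node);

AUGraphOpen(graph);
// FileAU-1 -> mixer input 0, mixer -> output.
AUGraphConnectNodeInput(graph, player_node, 0, mixer_node, 0);
AUGraphConnectNodeInput(graph, mixer_node, 0, output_node, 0);
AUGraphInitialize(graph);
AUGraphStart(graph);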


My questions are:

1°) Will AUGraph, ExtAudioFileRef, AudioUnit and all the low-level APIs be
deprecated, and do I need to use the AV equivalents like AVAudioEngine,
AVAudioUnit and so on instead? If not, what are the differences between
AUGraph and AVAudioEngine? They look the same to me.

2°) How can I perform a seek in this situation? I cannot manage to do
something as simple as a seek. Is it possible to do a global seek using
only AUGraph? Or must I use a ring buffer with a render callback, as in a
few examples I saw, even though I don’t use one?