GetLatency/GetTail are used to tell the host how much data you are going to buffer. The units are seconds, so you return a float value equal to your buffer size in samples divided by the sample rate.

Say you need 4096 samples for your algorithm and the host sends you data in frames of arbitrary size, from 512 to 2048 samples. You need a circular (ring) buffer large enough to hold the input, and in your Render function you append every incoming frame to it. Once the ring buffer holds 4096 samples, you hand them to your FFT algorithm and fill the output buffers with the result; until it holds 4096 samples, you fill the output buffers with zeroes.

On the frame where the buffer first reaches 4096 samples, you may need to fill the output partly with zeroes and partly with your data, so that the total silence you emit is exactly 4096 samples, matching the latency you report.
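A rough sketch of that logic in C++ (framework-agnostic, to be called from wherever your unit receives audio; BlockProcessor, kFFTSize and DoFFT are placeholder names of mine, not AU SDK ones, and DoFFT just stands in for your real transform):

#include <cstddef>
#include <cstring>
#include <vector>

static const size_t kFFTSize = 4096;   // samples needed per transform (placeholder)

class BlockProcessor {
public:
    BlockProcessor()
        : mIn(kFFTSize, 0.0f), mOut(kFFTSize, 0.0f),
          mInFill(0), mOutRead(kFFTSize) {}   // mOutRead == kFFTSize: nothing processed yet

    // Called once per render call; inFrames can be anything (512..2048 in the example).
    void Process(const float* in, float* out, size_t inFrames)
    {
        for (size_t i = 0; i < inFrames; ++i) {
            mIn[mInFill++] = in[i];   // accumulate input

            // Emit one output sample per input sample: processed data if any is
            // available, silence otherwise. Exactly kFFTSize samples of silence
            // come out before the first processed sample, matching the latency.
            out[i] = (mOutRead < kFFTSize) ? mOut[mOutRead++] : 0.0f;

            if (mInFill == kFFTSize) {                      // a full block is ready
                DoFFT(mIn.data(), mOut.data(), kFFTSize);   // your FFT/IFFT goes here
                mInFill = 0;
                mOutRead = 0;
            }
        }
    }

private:
    // Stand-in for the real transform: here it just copies input to output.
    static void DoFFT(const float* in, float* out, size_t n)
    {
        std::memcpy(out, in, n * sizeof(float));
    }

    std::vector<float> mIn, mOut;   // input accumulator and processed-output buffer
    size_t mInFill;                 // how many input samples have been gathered
    size_t mOutRead;                // next processed sample to emit
};

In an AUEffectBase-style effect you would keep one of these per channel, call Process() from your render path with whatever frame count the host passes, and report kFFTSize divided by the sample rate from GetLatency().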


On 30.11.2015 23:34, Daniel Wilson wrote:
Thank you Roman! How do I generate a buffer from the GetLatency function? I 
have it declared in my default template and it defaults to zero. Fortunately 
the DSP isn't my issue; I just can't figure out how to get the buffer to do the 
actual FFT on :(

Sent from my iPhone.

On Nov 30, 2015, at 2:00 PM, [email protected] wrote:



Today's Topics:

   1. Frame Size for Audio Unit Rendering (ex. FFT/IFFT) (Daniel Wilson)
   2. Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT) (Roman)
   3. Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
      (Paul Davis)
   4. Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
      (Daniel Wilson)
   5. Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
      (Paul Davis)


----------------------------------------------------------------------

Message: 1
Date: Sun, 29 Nov 2015 23:08:01 -0600
From: Daniel Wilson <[email protected]>
To: [email protected]
Subject: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
Message-ID: <[email protected]>
Content-Type: text/plain; charset=windows-1252

Does anyone know how to change the frame size when doing the digital signal 
processing on an audio unit? Currently my audio unit is set up so that it 
receives a single sample, does the signal processing, outputs the sample, and 
repeats the process for each sample of the audio signal. I have created quite a 
few audio units with this setup, but now I want to process multiple samples at 
the same time to do the FFT/IFFT, etc. Does anyone know how to do this? It 
seems like most people are using audio units for iOS, but my audio units are 
for OS X to be used in programs like Logic Pro. Don’t know if that makes a 
difference.

-Daniel


------------------------------

Message: 2
Date: Mon, 30 Nov 2015 13:54:04 +0300
From: Roman <[email protected]>
To: Daniel Wilson <[email protected]>,
    [email protected]
Subject: Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
Message-ID: <[email protected]>
Content-Type: text/plain; charset=utf-8; format=flowed

Hi Daniel,

You need to implement buffering and output silence while you don't have
enough audio samples for your FFT/IFFT transformation. You also need to
return the correct values from the GetLatency/GetTail functions.
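
With the C++ AU SDK base classes that might look roughly like this (a sketch only: MyEffect stands for your AUEffectBase subclass, kFFTSize is a placeholder for your transform size, and the method names are from memory, so check them against your SDK version):

// Report the buffering delay to the host, in seconds.
// If your effect also rings out after the input stops, override
// GetTailTime() the same way.
Float64 MyEffect::GetLatency()
{
    return (Float64)kFFTSize / GetSampleRate();
}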

On 30.11.2015 08:08, Daniel Wilson wrote:
Does anyone know how to change the frame size when doing the digital signal 
processing on an audio unit? Currently my audio unit is set up so that it 
receives a single sample, does the signal processing, outputs the sample, and 
repeats the process for each sample of the audio signal. I have created quite a 
few audio units with this setup, but now I want to process multiple samples at 
the same time to do the FFT/IFFT, etc. Does anyone know how to do this? It 
seems like most people are using audio units for iOS, but my audio units are 
for OS X to be used in programs like Logic Pro. Don’t know if that makes a 
difference.

-Daniel
--
Best regards,
Roman



------------------------------

Message: 3
Date: Mon, 30 Nov 2015 08:43:48 -0500
From: Paul Davis <[email protected]>
To: Daniel Wilson <[email protected]>
Cc: CoreAudio API <[email protected]>
Subject: Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
Message-ID:
    <CAFa_cKk0PEaVFzw3Uv2jFAJ=z4yfyvg0rxlc2uyg4pwxq+k...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

AudioUnits do not get to control the buffer size delivered via a render
call. The host decides this.

On Mon, Nov 30, 2015 at 12:08 AM, Daniel Wilson <[email protected]>
wrote:

Does anyone know how to change the frame size when doing the digital
signal processing on an audio unit? Currently my audio unit is set up so
that it receives a single sample, does the signal processing, outputs the
sample, and repeats the process for each sample of the audio signal. I have
created quite a few audio units with this setup, but now I want to process
multiple samples at the same time to do the FFT/IFFT, etc. Does anyone know
how to do this? It seems like most people are using audio units for iOS,
but my audio units are for OS X to be used in programs like Logic Pro.
Don’t know if that makes a difference.

-Daniel

------------------------------

Message: 4
Date: Mon, 30 Nov 2015 07:52:26 -0600
From: Daniel Wilson <[email protected]>
To: Paul Davis <[email protected]>
Cc: CoreAudio API <[email protected]>
Subject: Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
Message-ID: <[email protected]>
Content-Type: text/plain; charset="utf-8"

Paul, thank you. That makes perfect sense. How do I switch my processing to 
process the entire buffer at once and not just one sample at a time?

Sent from my iPhone.

On Nov 30, 2015, at 7:43 AM, Paul Davis <[email protected]> wrote:

AudioUnits do not get to control the buffer size delivered via a render call. 
The host decides this.

On Mon, Nov 30, 2015 at 12:08 AM, Daniel Wilson <[email protected]> 
wrote:
Does anyone know how to change the frame size when doing the digital signal 
processing on an audio unit? Currently my audio unit is set up so that it 
receives a single sample, does the signal processing, outputs the sample, and 
repeats the process for each sample of the audio signal. I have created quite a 
few audio units with this setup, but now I want to process multiple samples at 
the same time to do the FFT/IFFT, etc. Does anyone know how to do this? It 
seems like most people are using audio units for iOS, but my audio units are 
for OS X to be used in programs like Logic Pro. Don’t know if that makes a 
difference.

-Daniel

------------------------------

Message: 5
Date: Mon, 30 Nov 2015 09:07:14 -0500
From: Paul Davis <[email protected]>
To: Daniel Wilson <[email protected]>
Cc: CoreAudio API <[email protected]>
Subject: Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
Message-ID:
    <cafa_ckn7dd3hzscfrxwhp7g9gbj-nt+zbjg4urcmjirfdud...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Sorry, no idea. I'm a host author (Ardour / Mixbus / Tracks Live), not a
plugin writer. A host just gives you a block of samples, with the size of
its own choosing. What you do with them is up to you. As Roman mentioned,
you need to plan on buffering them and running your FFT periodically.
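
To illustrate the contract: whatever block size the host picks, the plugin has to fill exactly that many output frames on every render call, and when the FFT actually runs is purely internal to the plugin. A toy host-side loop (made-up names; Plugin stands for whatever buffering processor you build, with a Process(in, out, frames) method like the sketch earlier in this thread):

#include <cstddef>
#include <vector>

template <typename Plugin>
void RunToyHost(Plugin& plugin)
{
    // The host, not the plugin, decides these sizes, and they may differ per call.
    const size_t blockSizes[] = { 512, 1024, 2048, 768 };
    std::vector<float> in(2048, 0.0f), out(2048, 0.0f);

    for (size_t frames : blockSizes) {
        plugin.Process(in.data(), out.data(), frames);
        // 'out' now holds exactly 'frames' samples: silence until a full FFT
        // block has been gathered, processed audio afterwards.
    }
}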

On Mon, Nov 30, 2015 at 8:52 AM, Daniel Wilson <[email protected]>
wrote:

Paul, thank you. That makes perfect sense. How do I switch my processing to
process the entire buffer at once and not just one sample at a time?

Sent from my iPhone.

On Nov 30, 2015, at 7:43 AM, Paul Davis <[email protected]>
wrote:

AudioUnits do not get to control the buffer size delivered via a render
call. The host decides this.

On Mon, Nov 30, 2015 at 12:08 AM, Daniel Wilson <[email protected]>
wrote:
Does anyone know how to change the frame size when doing the digital
signal processing on an audio unit? Currently my audio unit is set up so
that it receives a single sample, does the signal processing, outputs the
sample, and repeats the process for each sample of the audio signal. I have
created quite a few audio units with this setup, but now I want to process
multiple samples at the same time to do the FFT/IFFT, etc. Does anyone know
how to do this? It seems like most people are using audio units for iOS,
but my audio units are for OS X to be used in programs like Logic Pro.
Don’t know if that makes a difference.

-Daniel

------------------------------

_______________________________________________
Coreaudio-api mailing list
[email protected]
https://lists.apple.com/mailman/listinfo/coreaudio-api

End of Coreaudio-api Digest, Vol 12, Issue 198
**********************************************

--
Best regards,
Roman

