Re: [music-dsp] Practical filter programming questions

2020-01-12 Thread Stefan Sullivan
My 2 cents on 1: yes, we always pre-compute filter coefficients,
especially since they often involve trig functions, which are expensive.
I've rarely seen them actually stored in a table, but if your
application has many filters operating in parallel it can be a good
idea, though it requires FIR filters rather than IIR filters. I'm used
to seeing implementations where you instantiate objects that represent
filters and set their coefficients in some sort of initialization phase,
which doesn't particularly benefit from storing the coefficients
together in memory. If you _are_ using FIR filters, and you are
computing many outputs from one or more inputs, you can benefit from
matrix math libraries, but again that's under pretty specific
conditions.
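To make the "compute coefficients in an initialization phase" point concrete, here is a minimal Python sketch (illustrative only; it uses the standard RBJ cookbook lowpass formulas). The trig lives in the setter, and the per-sample loop is just multiplies and adds:

```python
import math

class Biquad:
    """Direct form I biquad; all trig is done once at (re)configuration time."""
    def __init__(self, fs, f0, q):
        self.x1 = self.x2 = self.y1 = self.y2 = 0.0
        self.set_lowpass(fs, f0, q)

    def set_lowpass(self, fs, f0, q):
        # The expensive part (sin/cos) lives here, in the init/update phase.
        w0 = 2.0 * math.pi * f0 / fs
        cw, alpha = math.cos(w0), math.sin(w0) / (2.0 * q)
        a0 = 1.0 + alpha
        self.b0 = (1.0 - cw) / (2.0 * a0)
        self.b1 = (1.0 - cw) / a0
        self.b2 = self.b0
        self.a1 = -2.0 * cw / a0
        self.a2 = (1.0 - alpha) / a0

    def process(self, x):
        # Per-sample cost: 5 multiplies and 4 adds, no trig.
        y = (self.b0 * x + self.b1 * self.x1 + self.b2 * self.x2
             - self.a1 * self.y1 - self.a2 * self.y2)
        self.x1, self.x2 = x, self.x1
        self.y1, self.y2 = y, self.y1
        return y
```

The class name and method names are made up for illustration; the point is only the split between the expensive update path and the cheap audio path.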

To contribute to the conversation on 2:
In theory, when you change the filter coefficients, the stored states
still belong to the old filter; computing the correct new states is
expensive and non-trivial, and I've never come across anything in the
literature that explains how to do it. In my experience some topologies
are worse than others. If you have very large state variables, which
happens when you compute the poles before the zeros, you end up with a
higher probability of audible clicks/pops. See this Stack Exchange
question where I asked about changing filter parameters
https://dsp.stackexchange.com/questions/11242/how-can-i-prove-stability-of-a-biquad-filter-with-non-zero-initial-conditions/11267#11267
(especially Hilmar's answer, which gives a lot of practical advice).
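If you want to see (or plot) the transient this causes, here is a small Python sketch, illustrative only and not a recommendation of topology: a transposed direct form II biquad (RBJ cookbook lowpass coefficients) whose coefficients are swapped abruptly mid-stream while the states are kept. The output stays bounded here, but the glitch around the switch is exactly the kind of thing that can click:

```python
import math

class BiquadTDF2:
    """Transposed direct form II biquad; states z1/z2 survive coefficient swaps."""
    def __init__(self, coeffs):
        self.set(coeffs)
        self.z1 = self.z2 = 0.0

    def set(self, coeffs):
        # Deliberately does NOT touch z1/z2 -- the old filter's states are
        # reused by the new filter, which is what the discussion is about.
        self.b0, self.b1, self.b2, self.a1, self.a2 = coeffs

    def process(self, x):
        y = self.b0 * x + self.z1
        self.z1 = self.b1 * x - self.a1 * y + self.z2
        self.z2 = self.b2 * x - self.a2 * y
        return y

def rbj_lowpass(fs, f0, q):
    # Standard RBJ cookbook lowpass, normalized by a0.
    w0 = 2.0 * math.pi * f0 / fs
    cw, alpha = math.cos(w0), math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha
    return ((1.0 - cw) / (2.0 * a0), (1.0 - cw) / a0, (1.0 - cw) / (2.0 * a0),
            -2.0 * cw / a0, (1.0 - alpha) / a0)

f = BiquadTDF2(rbj_lowpass(48000.0, 200.0, 8.0))
out = []
for n in range(2000):
    if n == 1000:
        # Abrupt cutoff jump at high Q; inspect out[] around n == 1000
        # for the transient caused by the retained states.
        f.set(rbj_lowpass(48000.0, 8000.0, 8.0))
    out.append(f.process(math.sin(2.0 * math.pi * 1000.0 * n / 48000.0)))
```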

Regarding SIMD and GPUs, the crux of what makes a DSP processor
different from a CPU is highly-optimized SIMD, so yes, typically DSP
engineers are optimizing for SIMD, but not by using the GPU. Take this
part with a grain of salt, but if I understand correctly GPUs have much
higher parallelism than typical CPUs (something like 128 vs 4
operations). GPUs do, however, have their own memory, and transferring
between the CPU and GPU has always been described as expensive. When you
want low-latency operation (which you always do for audio), that memory
transfer is at least something to think about. Those are the typical
arguments against using GPUs for audio DSP, but I'm not aware of anybody
who's actually tried it. If you have the skills, I highly encourage
trying it and sharing your results with the list. Something tells me
those arguments may well be outweighed by the computational efficiency
of a GPU, especially if you're not changing filter coefficients very
often.

Anecdotally, I've started to hear whispers from some audio DSP folks who
are starting to prefer MISD operations over SIMD operations, reducing
buffer sizes as low as a single sample to get high performance out of
their CPUs at very low latency. I haven't heard yet whether this is
easier or not. It might make implementing algorithms and optimizing them
much more decoupled for audio DSP, but I haven't really tried it out on
anything I've made yet. Again, if you have the skills for that type of
optimization, I'd highly encourage trying it and sharing your results
with the list. Has anybody else tested MISD vs SIMD optimization
techniques on DSP?

-Stefan



Re: [music-dsp] Practical filter programming questions

2020-01-12 Thread Ross Bencina

On 12/01/2020 5:06 PM, Frank Sheeran wrote:
> I have a couple audio programming books (Zolzer DAFX and Pirkle
> Designing Audio Effect Plugins in C++).  All the filters they describe
> were easy enough to program.
>
> However, they don't discuss having the frequency and resonance (or
> whatever inputs a given filter has--parametric EQ etc.) CHANGE.
>
> I am doing the expensive thing of recalculating all the coefficients
> every sample, but that uses a lot of CPU.
>
> My questions are:
>
> 1. Is there a cheaper way to do this?  For instance can one
> pre-calculate a big matrix of filter coefficients, say 128 cutoffs
> (about enough for each semitone of human hearing) and maybe 10
> resonances, and simply interpolating between them?  Does that even work?


It depends on the filter topology. Coefficient space is not the same as 
linear frequency or resonance space. Interpolating in coefficient space 
may or may not produce the desired results -- but like any other 
interpolation situation, the more pre-computed points that you have, the 
closer you get to the original function. One question that you need to 
resolve is whether all of the interpolated coefficient sets produce 
stable filters (e.g., keep all the poles inside the unit circle).
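For biquads there is a cheap way to check that, sketched here in Python (illustrative only): the poles of a denominator z^2 + a1*z + a2 are strictly inside the unit circle iff |a2| < 1 and |a1| < 1 + a2, the so-called stability triangle. Since those constraints are linear in (a1, a2), the region is convex, so linearly interpolating the denominator between two stable settings stays stable, even though the response in between may still not be the one you wanted:

```python
import math

def lowpass_denominator(fs, f0, q):
    # Denominator (a1, a2) of an RBJ cookbook lowpass, normalized by a0.
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha
    return (-2.0 * math.cos(w0) / a0, (1.0 - alpha) / a0)

def biquad_is_stable(a1, a2):
    # Poles of z^2 + a1*z + a2 are strictly inside the unit circle
    # iff (a1, a2) lies inside the "stability triangle".
    return abs(a2) < 1.0 and abs(a1) < 1.0 + a2

# Linearly interpolate the denominator between two stable settings and
# verify that every intermediate coefficient set is still stable.
lo = lowpass_denominator(48000.0, 100.0, 0.7)
hi = lowpass_denominator(48000.0, 15000.0, 10.0)
ok = all(biquad_is_stable(lo[0] + (hi[0] - lo[0]) * t / 100.0,
                          lo[1] + (hi[1] - lo[1]) * t / 100.0)
         for t in range(101))
```

Note this only checks stability of the denominator; it says nothing about how the interpolated magnitude response tracks cutoff and resonance, which still needs the kind of analysis mentioned above.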



> 2. when filter coefficients change, are the t-1 and t-2 values in the
> pipeline still good to use?


Really there are two questions:

- Are the filter states still valid after coefficient change (Not in 
general)
- Is the filter unconditionally stable if you change the components at 
audio rate (maybe)


To some extent it depends how frequently you intend to update the 
coefficients. Jean Laroche's paper is the one to read for an 
introduction "On the stability of time-varying recursive filters".


There is a more recent DAFx paper that addresses the stability of the
trapezoidally integrated SVF. See the references linked here:


http://www.rossbencina.com/code/time-varying-bibo-stability-analysis-of-trapezoidal-integrated-optimised-svf-v2


> 3. Would you guess that most commercial software is using SIMD or GPU
> for this nowadays?  Can anyone confirm at least some implementations use
> SIMD or GPU?


I don't have an answer for this, but my guesses are that most commercial
audio software doesn't use the GPU, and that data parallelism (GPU
and/or SIMD) is not very helpful for evaluating a single IIR filter,
since there are tight data dependencies between each iteration of the
filter. Multiple independent channels could be evaluated in parallel, of
course.
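A toy Python illustration of that last point (names made up for illustration): stepping N independent biquads one sample at a time, where each channel reads and writes only its own state. The inner loop has no cross-channel dependencies, which is exactly the shape a lane-per-channel SIMD implementation can vectorize; the serial dependency is only sample-to-sample within a channel:

```python
def step_channels(states, coeffs, xs):
    """Advance N independent direct form I biquads by one sample.

    states: list of mutable [x1, x2, y1, y2] per channel
    coeffs: list of (b0, b1, b2, a1, a2) per channel
    xs:     one input sample per channel
    """
    ys = []
    for st, (b0, b1, b2, a1, a2), x in zip(states, coeffs, xs):
        x1, x2, y1, y2 = st
        # Each channel's update touches only its own state: SIMD-friendly.
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        st[:] = [x, x1, y, y1]
        ys.append(y)
    return ys
```

In C/C++ the same loop over a struct-of-arrays layout is what auto-vectorizers or intrinsics would turn into one filter step across all lanes at once.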


Ross.


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Practical filter programming questions

2020-01-12 Thread Nigel Redmon
Hi Frank. As far as I know, GPU processing of audio in plug-ins is pretty
rare, and I don't know that it would be a great fit for filters. You can
recall coefficients as needed (as Davide pointed out), and interpolation
between points may be OK, though it would take some analysis and depends
on the interpolation accuracy. Mainly I wanted to say you might try the
KVR Audio DSP forum, where practical synth filters have been discussed
quite a bit over the years.

> On Jan 11, 2020, at 10:06 PM, Frank Sheeran  wrote:
> 
> I have a couple audio programming books (Zolzer DAFX and Pirkle Designing 
> Audio Effect Plugins in C++).  All the filters they describe were easy enough 
> to program.
> 
> However, they don't discuss having the frequency and resonance (or whatever 
> inputs a given filter has--parametric EQ etc.) CHANGE.
> 
> I am doing the expensive thing of recalculating all the coefficients every 
> sample, but that uses a lot of CPU.
> 
> My questions are:
> 
> 1. Is there a cheaper way to do this?  For instance can one pre-calculate a 
> big matrix of filter coefficients, say 128 cutoffs (about enough for each 
> semitone of human hearing) and maybe 10 resonances, and simply interpolating 
> between them?  Does that even work?
> 
> 2. when filter coefficients change, are the t-1 and t-2 values in the 
> pipeline still good to use?  I am using them and it SEEMS fine but now and 
> then the filters in rare cases go to infinity (maybe fast changes with high 
> resonance?) and I wonder if this is the cause.
> 
> 3. Would you guess that most commercial software is using SIMD or GPU for 
> this nowadays?  Can anyone confirm at least some implementations use SIMD or 
> GPU?
> 
> Frank


Re: [music-dsp] An example video: C to FPGA programming

2020-01-12 Thread raito
Andrew,

I think that the information at http://www.clifford.at/icestorm/ would be
useful to you. It describes the icestorm toolchain and uses Lattice's
development boards. I've done some musical things using both the smaller
and larger boards.

It's a pretty inexpensive way to get started.

Neil Gilmore
ra...@raito.com

On Fri, January 10, 2020 5:48 am, Andrew Luke Nesbit wrote:
> On 10/01/2020 10:18, Theo Verelst wrote:
>
>> Hi all
>>
>
> Hi Theo,
>
>
>> Maybe it's not everybody's cup of tea, but I recall some here are
>> (like me) interested in music applications of FPGA based signal
>> processing.
> Lately I have been researching exactly this topic.  It's one of the
> primary areas of DSP research that I am considering directing my career
> towards and making a significant investment of resources in learning.
>
>
> There is a lot of meaningful context in all of this.  I'm looking
> forward to deploying my new website that should explain it.
>
> I have a strong background in music and audio signal processing.  Not
> with FPGA, however.
>
>> I made a video showing a real time "Silicon Compile" and test program
>> run on a Zynq board using Xilinx's Vivado HLS to create an FPGA bit file
> I am overwhelmed by where to start in FPGA.  This includes finding a
> hardware recommendation for a beginner development kit.
>
> Nevertheless, I have yet to look up a vendor of this FPGA development
> kit and toolchain, and then to find out the prices.
>
>> that initializes a 64k short integer fixed point sine lookup table
>> (-pi/2 .. pi/2) which can be used like a C function with argument
>> passing by a simple test program running on ARM processors.
> This is great!  It's simple, useful, and can be visualized with known
> expected results.  It seems like a perfect starting project.
>
>> The main point is the power the C compilation can provide the FPGA
>> with, and to see the use of the latest 2019.2 tools at work with the
>> board,
> Might I rephrase this as the following?
>
>
> -   It's an exercise in selecting an appropriate FPGA development kit.
> This kit would be a good investment and sufficiently repurposable for
> future DSP projects.
>
> -   Setting up the toolchain; learning a workflow; and acquainting
> oneself with the ecosystems of:
>
> -   FPGA-based DSP;
>
>
> -   the Xilinx and FPGA support communities;
>
>
> -   edge computing; and...
>
>
> circling back to the beginning...
>
> perhaps even providing a basic introduction to FPGA for somebody (like
> me?).
>
> In this last case what would be an appropriate "Step 1. Introduction to
> FPGA"?
>
>
> I guess that Xilinx's own documentation for new users of FPGA technology
> would be a good place to start.
>
> If anybody has recommendations for additional books, blogs, forums, etc,
> please let me know.  Thank you!!
>
> In summary: Is Xilinx a good company to invest time into learning its
> ecosystem?  This obviously includes spending money on dev kits with the
> aim of FPGA-based DSP.  For example, is Xilinx's support good?  Is the
> community ecosystem healthy?
>
> Kind regards,
>
>
> Andrew
> --
> OpenPGP key: EB28 0338 28B7 19DA DAB0  B193 D21D 996E 883B E5B9




Re: [music-dsp] Practical filter programming questions

2020-01-12 Thread Davide Busacca
Hi Frank,
I think your main CPU issue is that you don't actually need to
recalculate the coefficients on every sample. You can set a
variable/flag that tells you when a change of the parameters is needed,
and use it to trigger the computation.
In addition, you can do this on a window/frame basis: at the beginning
of each frame, check whether a change is needed and, based on that,
compute and update the parameters.
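A minimal Python sketch of the flag-plus-frame idea (hypothetical names; a one-pole lowpass stands in for the real filter to keep it short). The control side only records the request and sets a dirty flag; the coefficient is recomputed at most once per frame:

```python
import math

class FramedFilter:
    """One-pole lowpass whose coefficient is recomputed at most once per
    frame, and only when a parameter change has been flagged."""
    def __init__(self, fs, f0):
        self.fs, self.f0 = fs, f0
        self.dirty = True       # force computation on the first frame
        self.recomputes = 0     # for demonstration/inspection only
        self.a = 0.0
        self.y = 0.0

    def set_cutoff(self, f0):
        # Control/UI side: just record the request, no DSP math here.
        self.f0 = f0
        self.dirty = True

    def process_frame(self, frame):
        if self.dirty:          # checked once per frame, not per sample
            self.a = math.exp(-2.0 * math.pi * self.f0 / self.fs)
            self.recomputes += 1
            self.dirty = False
        out = []
        for x in frame:
            self.y = (1.0 - self.a) * x + self.a * self.y
            out.append(self.y)
        return out
```

With a 64-sample frame this turns one transcendental call per sample into at most one per 64 samples, and typically far fewer since parameters change only occasionally.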

In the past, I implemented audio classes based on the base class shown
at point 3 (end of the page) of
http://www.redwoodaudio.net/Tutorials/juce_for_vst_development__intro6.html.
Maybe you can have a look at it.

Hope this helps,
Davide

On Sun, 12 Jan 2020 at 07:07, Frank Sheeran  wrote:

> I have a couple audio programming books (Zolzer DAFX and Pirkle Designing
> Audio Effect Plugins in C++).  All the filters they describe were easy
> enough to program.
>
> However, they don't discuss having the frequency and resonance (or
> whatever inputs a given filter has--parametric EQ etc.) CHANGE.
>
> I am doing the expensive thing of recalculating all the coefficients every
> sample, but that uses a lot of CPU.
>
> My questions are:
>
> 1. Is there a cheaper way to do this?  For instance can one pre-calculate
> a big matrix of filter coefficients, say 128 cutoffs (about enough for each
> semitone of human hearing) and maybe 10 resonances, and simply
> interpolating between them?  Does that even work?
>
> 2. when filter coefficients change, are the t-1 and t-2 values in the
> pipeline still good to use?  I am using them and it SEEMS fine but now and
> then the filters in rare cases go to infinity (maybe fast changes with high
> resonance?) and I wonder if this is the cause.
>
> 3. Would you guess that most commercial software is using SIMD or GPU for
> this nowadays?  Can anyone confirm at least some implementations use SIMD
> or GPU?
>
> Frank
>