On 12/01/2020 5:06 PM, Frank Sheeran wrote:
I have a couple of audio programming books (Zolzer's DAFX and Pirkle's Designing Audio Effect Plugins in C++). All the filters they describe were easy enough to program.

However, they don't discuss having the frequency and resonance (or whatever inputs a given filter has--parametric EQ etc.) CHANGE.

I am doing the expensive thing of recalculating all the coefficients every sample, but that uses a lot of CPU.

My questions are:

1. Is there a cheaper way to do this?  For instance, can one pre-calculate a big matrix of filter coefficients, say 128 cutoffs (about enough for each semitone of human hearing) and maybe 10 resonances, and simply interpolate between them?  Does that even work?

It depends on the filter topology. Coefficient space is not the same as linear frequency or resonance space. Interpolating in coefficient space may or may not produce the desired results -- but like any other interpolation situation, the more pre-computed points you have, the closer you get to the original function. One question that you need to resolve is whether all of the interpolated coefficient sets produce stable filters (e.g., keep all the poles inside the unit circle).
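
To make that concrete, here is a minimal sketch of the table approach, assuming an RBJ-cookbook-style lowpass biquad and plain bilinear interpolation between the stored coefficient sets. The grid sizes and the BiquadCoeffs / designLowpass / lookup names are just illustrative, not from any particular library:

#include <array>
#include <cmath>

struct BiquadCoeffs { double b0, b1, b2, a1, a2; };  // a0 normalised to 1

constexpr int kNumCutoffs = 128;   // roughly one per semitone of hearing range
constexpr int kNumResos   = 10;

// RBJ-cookbook lowpass design, normalised so that a0 == 1.
BiquadCoeffs designLowpass(double fc, double q, double fs)
{
    const double pi    = 3.14159265358979323846;
    const double w0    = 2.0 * pi * fc / fs;
    const double alpha = std::sin(w0) / (2.0 * q);
    const double cosw0 = std::cos(w0);
    const double a0    = 1.0 + alpha;
    return { (1.0 - cosw0) * 0.5 / a0,
             (1.0 - cosw0)       / a0,
             (1.0 - cosw0) * 0.5 / a0,
             (-2.0 * cosw0)      / a0,
             (1.0 - alpha)       / a0 };
}

using CoeffTable = std::array<std::array<BiquadCoeffs, kNumResos>, kNumCutoffs>;

// Bilinear interpolation between the four surrounding table entries.
// x is a fractional index on the cutoff axis (0 .. kNumCutoffs - 2),
// y a fractional index on the resonance axis (0 .. kNumResos - 2).
BiquadCoeffs lookup(const CoeffTable& table, double x, double y)
{
    const int    ix = static_cast<int>(x), iy = static_cast<int>(y);
    const double fx = x - ix,              fy = y - iy;
    auto lerp = [](const BiquadCoeffs& p, const BiquadCoeffs& q, double t) {
        return BiquadCoeffs{ p.b0 + t * (q.b0 - p.b0), p.b1 + t * (q.b1 - p.b1),
                             p.b2 + t * (q.b2 - p.b2), p.a1 + t * (q.a1 - p.a1),
                             p.a2 + t * (q.a2 - p.a2) };
    };
    return lerp(lerp(table[ix][iy],     table[ix][iy + 1],     fy),
                lerp(table[ix + 1][iy], table[ix + 1][iy + 1], fy), fx);
}

For a second-order section you can at least check the interpolated denominators directly: the poles stay inside the unit circle when |a2| < 1 and |a1| < 1 + a2 (the usual stability triangle).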


2. When filter coefficients change, are the t-1 and t-2 values in the pipeline still good to use?

Really there are two questions:

- Are the filter states still valid after a coefficient change? (Not in general.)
- Is the filter unconditionally stable if you change the coefficients at audio rate? (Maybe.)

To some extent it depends on how frequently you intend to update the coefficients. Jean Laroche's paper "On the Stability of Time-Varying Recursive Filters" is the one to read for an introduction.
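
As a rough idea of the usual compromise (common practice, not anything specific from Laroche's paper): smooth the incoming parameter every sample, but only redesign the filter every few dozen samples, so the coefficients never jump far in one step. The Biquad struct, block size and smoothing constant below are illustrative; designLowpass and BiquadCoeffs are the hypothetical helpers from the earlier sketch:

struct Biquad {
    BiquadCoeffs c{};
    double z1 = 0.0, z2 = 0.0;               // transposed direct form II state
    double process(double x) {
        const double y = c.b0 * x + z1;
        z1 = c.b1 * x - c.a1 * y + z2;
        z2 = c.b2 * x - c.a2 * y;
        return y;
    }
};

void processBlock(Biquad& filter, const double* in, double* out, int numSamples,
                  double targetCutoff, double q, double fs, double& smoothedCutoff)
{
    constexpr int    kBlock  = 32;           // control-rate update interval
    constexpr double kSmooth = 0.999;        // one-pole parameter smoother

    for (int n = 0; n < numSamples; ++n) {
        // Smooth the user parameter every sample (cheap).
        smoothedCutoff = kSmooth * smoothedCutoff + (1.0 - kSmooth) * targetCutoff;

        // Redesign the filter only at block boundaries (the expensive part).
        if (n % kBlock == 0)
            filter.c = designLowpass(smoothedCutoff, q, fs);

        out[n] = filter.process(in[n]);
    }
}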

There is a more recent DAFx paper that addresses the stability of the trapezoidally integrated SVF. See the references linked here:

http://www.rossbencina.com/code/time-varying-bibo-stability-analysis-of-trapezoidal-integrated-optimised-svf-v2
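
For reference, the linear form of that trapezoidally integrated (TPT/ZDF) SVF looks roughly like this, following Andrew Simper's published derivation; check his paper for the authoritative version, since any transcription slips here are mine:

#include <cmath>

struct TptSvf {
    double g = 0.0, k = 1.0;                 // g = tan(pi*fc/fs), k = 1/Q
    double a1 = 0.0, a2 = 0.0, a3 = 0.0;     // derived gains
    double ic1eq = 0.0, ic2eq = 0.0;         // integrator states

    void setParams(double fc, double q, double fs) {
        g  = std::tan(3.14159265358979323846 * fc / fs);
        k  = 1.0 / q;
        a1 = 1.0 / (1.0 + g * (g + k));
        a2 = g * a1;
        a3 = g * a2;
    }

    // Returns the lowpass output; bandpass is v1, highpass is v0 - k*v1 - v2.
    double process(double v0) {
        const double v3 = v0 - ic2eq;
        const double v1 = a1 * ic1eq + a2 * v3;          // bandpass
        const double v2 = ic2eq + a2 * ic1eq + a3 * v3;  // lowpass
        ic1eq = 2.0 * v1 - ic1eq;
        ic2eq = 2.0 * v2 - ic2eq;
        return v2;
    }
};

Retuning costs one tan() plus a handful of multiplies per update, and the structure holds up much better under audio-rate cutoff modulation than a direct-form biquad.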


3. Would you guess that most commercial software is using SIMD or GPU for this nowadays?  Can anyone confirm that at least some implementations use SIMD or GPU?

I don't have an answer for this, but my guess is that most commercial audio software doesn't use the GPU, and that data parallelism (GPU and/or SIMD) is not very helpful for evaluating a single IIR filter, since there are tight data dependencies between each iteration of the filter. Multiple independent channels could be evaluated in parallel, of course.
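
To illustrate the channel-parallel case: four independent channels can share one 4-wide SSE register, because the serial dependency is only within each filter, never across channels. A rough sketch with illustrative names, using a transposed direct form II biquad per lane:

#include <immintrin.h>   // SSE intrinsics

struct Biquad4 {
    __m128 b0, b1, b2, a1, a2;   // one coefficient set per lane (channel)
    __m128 z1, z2;               // per-lane filter state

    // in/out each point at one sample from each of the four channels.
    void process(const float* in, float* out) {
        const __m128 x = _mm_loadu_ps(in);
        const __m128 y = _mm_add_ps(_mm_mul_ps(b0, x), z1);
        z1 = _mm_add_ps(_mm_sub_ps(_mm_mul_ps(b1, x), _mm_mul_ps(a1, y)), z2);
        z2 = _mm_sub_ps(_mm_mul_ps(b2, x), _mm_mul_ps(a2, y));
        _mm_storeu_ps(out, y);
    }
};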

Ross.


_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
