Yeah, zeroing out the state is going to lead to a transient, since the
filter has to ring up.

If you want to go that route, one possibility is to use two filters in
parallel: one that keeps the old state/coeffs but gets zero input, and
another that has zero state and gets the new input/coeffs. You then add
their outputs together, combining the ring-out of the old state/coeffs with
the ring-up of the new coeffs/input. This is the zero-state/zero-input
decomposition. However, this can still result in transient artifacts if the
filter has changed a lot (e.g., the old coeffs might have a short ring-out
time but the new ones a long ring-up time, or vice versa). And if
your coeffs aren't changing much, then probably you can get away with more
direct methods. But a worthwhile exercise for theoretical edification, I
think.
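
To make that concrete, here's a rough sketch in C of the parallel
arrangement (Direct Form I, and the struct/function names are just mine,
not from any particular library):

/* Zero-input/zero-state decomposition around a coefficient change.
 * Direct Form I biquad; a0 is assumed normalized to 1. */
typedef struct {
    double b0, b1, b2, a1, a2;   /* coefficients */
    double x1, x2, y1, y2;       /* previous inputs/outputs */
} biquad;

static double biquad_tick(biquad *f, double x)
{
    double y = f->b0 * x + f->b1 * f->x1 + f->b2 * f->x2
             - f->a1 * f->y1 - f->a2 * f->y2;
    f->x2 = f->x1;  f->x1 = x;
    f->y2 = f->y1;  f->y1 = y;
    return y;
}

/* After the change: old_f keeps the old coeffs and old state but is fed
 * zeros (pure ring-out); new_f has the new coeffs, zero state, and gets
 * the input (pure ring-up). The sum is the decomposed output. */
static double tick_after_change(biquad *old_f, biquad *new_f, double x)
{
    return biquad_tick(old_f, 0.0) + biquad_tick(new_f, x);
}

Once the old filter's ring-out has decayed below audibility you can drop
it and just run the new one.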

Another thing to consider is how to interpret the state variables for
different filter topologies. If you use Direct Form I then the state
variables are simply the previous inputs and outputs to/from the filter,
which still make sense if you change the coeffs. Other topologies have
state variables that correspond to history samples multiplied by the
coeffs, so when you change the coeffs they cease to make sense and you have
problems. If any of the coeffs are 0 you lose info and then can't "convert"
to the state corresponding to a different set of coeffs. Not that Direct
Form I is immune to artifacts when you change the coeffs, but they tend to
be much less severe than with other topologies for a given set of coeffs/inputs.
This is because it's only a matter of the coefficient mismatch, and not the
additional factor of the state interpretation mismatch.
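
For contrast, here's what the state looks like in a transposed Direct
Form II, in the same sketchy C as above (again just an illustration):

/* Transposed Direct Form II biquad. The state variables s1/s2 are sums
 * of coefficient-weighted history, so they only mean something relative
 * to the coefficients that produced them. */
typedef struct {
    double b0, b1, b2, a1, a2;
    double s1, s2;
} biquad_df2t;

static double biquad_df2t_tick(biquad_df2t *f, double x)
{
    double y = f->b0 * x + f->s1;
    f->s1 = f->b1 * x - f->a1 * y + f->s2;
    f->s2 = f->b2 * x - f->a2 * y;
    return y;
}

/* Overwriting b0..a2 here while keeping s1/s2 mixes old-coefficient state
 * with new coefficients; in the DF1 version above, x1/x2/y1/y2 are plain
 * history samples, so a coefficient swap leaves them meaningful. Note
 * also that if, say, b2 and a2 are both 0, s2 carries no information
 * about the sample two steps back, so the raw history can't be recovered
 * from the state. */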

> Now if we're going to change to new filter coefficients, we have two
> degrees of freedom in the state space. A reasonable goal seems to be to
> match the value and derivative of the output after we change the
> coefficients.

I'm not quite sure how this would work in discrete time. Is the idea to
interpret them as continuous-time filters for the purposes of the state
update?
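
If instead "value and derivative" just means the next two output samples
of the free-ringing filter, then for the transposed DF2 sketch above the
remapping comes out as a simple linear map on the state. This is only my
reading of the proposal, so treat it as a sketch:

/* With zero input, the transposed-DF2 filter above puts out
 *     y[n]   = s1
 *     y[n+1] = s2 - a1*s1
 * so matching "value and first difference" across the change means
 * matching those two samples. Solving for the new state: */
static void biquad_df2t_retarget(biquad_df2t *f,
                                 double b0, double b1, double b2,
                                 double a1, double a2)
{
    /* next two zero-input outputs under the OLD coefficients */
    double y0 = f->s1;
    double y1 = f->s2 - f->a1 * f->s1;

    /* install the new coefficients */
    f->b0 = b0;  f->b1 = b1;  f->b2 = b2;
    f->a1 = a1;  f->a2 = a2;

    /* choose the new state so the new filter reproduces y0 and y1 */
    f->s1 = y0;
    f->s2 = y1 + a1 * y0;
}

That only pins down the zero-input behavior at the switch instant, of
course; whether it actually sounds better than just leaving the state
alone probably depends on how different the two filters are.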

E

On Thu, Mar 3, 2016 at 11:34 AM, Ethan Fenn <et...@polyspectral.com> wrote:

> As a simple fix, I would also try just leaving the state alone rather than
> zeroing it out. I've done this plenty of times before and it's always
> sounded okay for moderate/gradual changes of the coefficients.
>
> As for doing it "correctly" -- I haven't read up on this but my thinking
> would go like so... let's suppose we have a biquad in a free ringing state,
> no longer receiving any input. Its output will be a sum of two complex
> exponentials. Given the current state variables, we can compute the
> weighting of these two exponentials, giving us the current value and first
> derivative of the output.
>
> Now if we're going to change to new filter coefficients, we have two
> degrees of freedom in the state space. A reasonable goal seems to be to
> match the value and derivative of the output after we change the
> coefficients. So we can perform the same analysis with the new
> coefficients, getting new exponents in the output. We can figure out what
> new state variables will give us the desired match. I haven't done the math
> yet but I don't see any complications and this would give us a matrix
> mapping old state variables to new ones.
>
> Is this the kind of reasoning I would find in the references, or is there
> a better way to think about this?
>
> -Ethan
>
>
>
> On Wed, Mar 2, 2016 at 12:49 PM, Theo Verelst <theo...@theover.org> wrote:
>
>> Paul Stoffregen wrote:
>>
>>> Does anyone have any suggestions or publications or references to best
>>> practices for what
>>> to do with the state variables of a biquad filter when changing the
>>> coefficients?
>>> ...
>>>
>>
>> I am not directly familiar with the programming of this particular
>> biquad filter variation, though I am with some others, and like many
>> have said, I suspect that varying the external cutoff and/or resonance
>> controls, computing their effect on the various filter coefficients,
>> and avoiding changes across any sort of singularity (if there is one)
>> as well as overly rapid changes, should leave the audio running through
>> fine.
>>
>> The state variable filter theory and its origins in audio equipment
>> like the early analog synthesizers aren't exactly the same as the
>> digital implementations, so there are things to consider more carefully
>> if you want those wonderful sweeping resonant filter sounds on a sound
>> source: there are non-linearities, for instance, in a Moog ladder
>> filter, and the "state" remembered in an analog filter's capacitors
>> gets warped when the voltage controls change. A mostly linearized
>> digital filter simulation will probably not automatically exhibit the
>> same behavior as the well-known analog filters. Also, all kinds of
>> signal subtleties are lost in the approximation of sampling the time
>> axis, which may or may not be a problem.
>>
>> I looked at the code quickly, couldn't find the data-structure
>> definition in the .h file, and didn't look much further, but I suppose
>> you should reuse the part where you initially compute all the (5?)
>> biquad coefficients to gradually change the coefficients of the actual
>> filter code. I do not know what the relation is between the biquad
>> coefficients (at sufficiently high sampling frequency) and the
>> equivalent analog "state" of the filter. Maybe someone else knows how
>> the biquads behave in that comparison.
>>
>> T.
>>
>>
>
_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
