Hi Dario,

Those numbers look good except for the NaNs in the last case.

- Julius

On Sun, Dec 20, 2020 at 7:12 PM Dario Sanfilippo <sanfilippo.da...@gmail.com>
wrote:

> Hi, Julius and Oleg.
>
>
>> The question remains as to which filter form is more accurate.
>>
>> Another thing worth mentioning, by the way, is that tf2snp can be
>> modulated, even using white noise for its coefficients, without affecting
>> signal energy.
>> That's the usual and original principal benefit of going to the
>> normalized ladder form - "power-invariant modulatability".
>> The superior numerical robustness for closely spaced poles was observed
>> later and documented in the literature by Gray and Markel, as I recall.
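>>
>> Something like this is the kind of modulation I mean (a quick, untested
>> sketch, using the closely related tf2np whose definition appears in the
>> diff further down rather than tf2snp itself):
>>
>>   import("stdfaust.lib");
>>   // Scaled white noise drives the a1/a2 inputs of the protected
>>   // normalized-ladder form while it filters a sine; the coefficients
>>   // are thus modulated at audio rate.
>>   process = os.osc(440) : fi.tf2np(1, 0, 0, no.noise*0.5, no.noise*0.5);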
>>
>> - Julius
>>
>
> Below we have a simple test to see how much the filters diverge from the
> expected 1/sqrt(2) attenuation at cut-off. The filters are tested with 12
> frequencies, the first 12 octaves starting at ~15.85 Hz.
>
> The values show the attenuation error in dB. I'm calculating the RMS with
> an.rms_envelope_tau(10) for a steady reading, and I'm taking the values after
> a thousand seconds to be sure that the RMS filters are fully charged. I ran
> the test at 96 kHz SR, double precision.
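>
> Roughly, each per-frequency measurement looks like this (a simplified sketch
> of the idea, not the exact test code):
>
>   import("stdfaust.lib");
>   cf = 1000.0;  // one of the 12 probe/cut-off frequencies
>   // A unit sine at the cut-off has RMS 1/sqrt(2); after the expected
>   // 1/sqrt(2) attenuation the steady-state RMS should be 0.5, so the
>   // error in dB is the measured RMS relative to 0.5.
>   err(filter) = os.osc(cf) : filter : an.rms_envelope_tau(10) : /(0.5) : ba.linear2db;
>   process = err(fi.highpass(2, cf));  // likewise for the snp and SVF versions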
>
>
> The four error columns below are, in order: highpass(2), snp, svf_oleg, and
> svf_zava. At least for the audible spectrum, they all seem to perform pretty
> well. Hard to tell which one is best for the time-invariant case, isn't it?
>
> Ciao,
> Dario
>
> Freq (Hz)   highpass(2)               snp                       svf_oleg                  svf_zava
> 15.8489323  0.0035875327635620513     0.0035875324371487105     0.0035875327635620513     0.0035875327636175625
> 31.622776   0.0013189248524392294     0.0013189248481326743     0.0013189248529810182     0.0013189248528688857
> 63.0957336  -0.00014836658605854591   -0.00014836659250561102   -0.00014836658607297881   -0.00014836658600858588
> 125.89254   -0.00029188057887152841   -0.00029188057380447052   -0.00029188057895590536   -0.00029188057892148844
> 251.188644  0.00010734229401176965    0.00010734229400843898    0.00010734229401621054    0.00010734229399067541
> 501.187225  -0.0001285532826483804    -0.00012855328314131942   -0.00012855328263283727   -0.00012855328265393151
> 1000        -2.7386120134975656e-05   -2.7386120134975656e-05   -2.7386120134975656e-05   -2.7386120134975656e-05
> 1995.26233  5.9325435514123726e-06    5.9325435203261279e-06    5.9325435536328186e-06    5.9325435569634877e-06
> 3981.07178  1.4404616551222382e-05    1.4404616547891713e-05    1.4404616545671267e-05    1.4404616547891713e-05
> 7943.28223  -1.0056763050103612e-06   -1.0056763061205842e-06   -1.0056763072308073e-06   -1.0056763061205842e-06
> 15848.9316  -1.1356025741982023e-05   -1.1356025744202469e-05   -1.13560257408718e-05     -1.1356025743092246e-05
> 31622.7773  nan                       173.00064800561148        nan                       nan
>
>
>>
>>
>> On Sun, Dec 20, 2020 at 12:07 PM Oleg Nesterov <o...@redhat.com> wrote:
>>
>>> On 12/20, Dario Sanfilippo wrote:
>>> >
>>> > > --- a/filters.lib
>>> > > +++ b/filters.lib
>>> > > @@ -1004,7 +1004,7 @@ declare tf2np copyright "Copyright (C) 2003-2019 by Julius O. Smith III <jos@ccr
>>> > >  declare tf2np license "MIT-style STK-4.3 license";
>>> > >  tf2np(b0,b1,b2,a1,a2) = allpassnnlt(M,sv) : sum(i,M+1,*(tghr(i)))
>>> > >  with {
>>> > > -  smax = 0.9999; // maximum reflection-coefficient magnitude allowed
>>> > > +  smax = 0.999999999; // maximum reflection-coefficient magnitude allowed
>>> > >    s2 = max(-smax, min(smax,a2)); // Project both reflection-coefficients
>>> > >    s1 = max(-smax, min(smax,a1/(1+a2))); // into the defined stability-region.
>>> > >    sv = (s1,s2); // vector of sin(theta) reflection coefficients
>>> > >
>>> > >
>>> > If I'm not wrong, anything above 0.9999999 would be rounded to 1 in
>>> > single precision, right?
>>>
>>> Quite possibly, I didn't even bother to check.
>>>
>>> In case it was not clear, I didn't try to propose a fix; I just tried to
>>> identify where the problem comes from.
>>>
>>> > Would it be possible to choose different constants based on different
>>> > options given to the compiler?
>>>
>>> Yes, perhaps we should use singleprecision/doubleprecision, I dunno. (Can't
>>> resist: I think this feature was a mistake, but this is off-topic ;)
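>>>
>>> Something like this, maybe (just a sketch; the limits below are only
>>> illustrative guesses and would still need to be chosen carefully):
>>>
>>>   // Precision-dependent clamp for the reflection-coefficient magnitude.
>>>   singleprecision smax = 0.9999999;
>>>   doubleprecision smax = 0.999999999;
>>>   quadprecision   smax = 0.999999999;
>>>   // Project an incoming coefficient into the stability region.
>>>   process = min(smax) : max(-smax);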
>>>
>>> Even if we forget about single precision, I simply do not know how much
>>> we can enlarge this limit. The 0.999999999 value I used is just a "random
>>> number closer to 1".
>>>
>>> > If not, a philosophical question (not really) for these situations might
>>> > be: should we prioritise single precision or double precision
>>> > performance/stability?
>>>
>>> Good question! Please inform me when you know the answer ;)
>>>
>>> Oleg.
>>>
>>>
>>
>> --
>> "Anybody who knows all about nothing knows everything" -- Leonard Susskind
>>
>
>
> --
> Dr Dario Sanfilippo
> http://dariosanfilippo.com
>


-- 
"Anybody who knows all about nothing knows everything" -- Leonard Susskind
_______________________________________________
Faudiostream-users mailing list
Faudiostream-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/faudiostream-users
