Re: [music-dsp] FIR blog post & interactive demo

2020-03-12 Thread robert bristow-johnson



> On March 12, 2020 5:35 PM Ethan Duni  wrote:
> 
> 
> Hi Robert
> 
> 
> On Wed, Mar 11, 2020 at 4:19 PM robert bristow-johnson 
>  wrote:
> > 
> >  i don't think it's too generic for "STFT processing". step #4 is pretty 
> > generic.
> 
> I think the part that chafes my intuition is more that the windows in steps 
> #2 and #6 should "match" in some way, and obey an appropriate perfect 
> reconstruction condition.

i think we're supposed to multiply the analysis window with the synthesis 
window to get a net effective window, but i am not always persuaded that the 
analysis window is preserved in the frequency-domain modification operation.  
if it's a phase vocoder and you do the Miller Puckette thing and apply the same 
phase to an entire spectral peak, then supposedly the window shape is preserved 
on each sinusoidal component.  then i would use no synthesis window.  that's 
sorta what i was thinking in that Stack Exchange thing that i pointed to, but 
in a more general STFT process, i might want to use the Gaussian window, 
because the result for each sinusoidal component is a single peak with the 
Gaussian shape.  we can even measure the rate of change of frequency associated 
with each peak.  being Gaussian, there shouldn't be side lobes to worry about.
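
just to illustrate (a numpy sketch with made-up numbers, not from any 
particular implementation): a Gaussian-windowed, zero-padded DFT of one frame 
of a sinusoid gives a peak whose log magnitude is very nearly parabolic, which 
is what makes these peak measurements behave so well.

import numpy as np

N = 1024                                  # frame length (illustrative)
fs = 48000.0
n = np.arange(N)
sigma = N / 8.0                           # Gaussian width, a free design choice
w = np.exp(-0.5 * ((n - (N - 1) / 2.0) / sigma) ** 2)   # Gaussian analysis window

x = np.cos(2 * np.pi * 1000.0 * n / fs)   # one frame of a 1 kHz test sinusoid
X = np.fft.rfft(w * x, 4 * N)             # zero-padded DFT of the windowed frame

# near the 1 kHz bin, 20*log10(abs(X)) is close to a parabola (the transform of
# a Gaussian is Gaussian), so quadratic interpolation of the peak is nearly
# exact and there are no real side lobes to fight with.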

> I think of STFT as intentionally wiping out any spill-over effects between 
> frames with synthesis windowing, to impose a particular time-frequency tiling.

yup.  and unlike wavelets, the tiles all have the same widths in time and 
frequency.

> Whereas fast convolution is defined by how it explicitly accounts for 
> spill-over between frames.

yup.  you don't even think of "windowing effects", even with overlap-add in 
which you are multiplying by 1 or 0, which is a rectangular window.  but we 
consider the operation in the time domain, confirm linearity, and can treat 
each frame as its own linear time-domain output and add the FIR output from 
frame m to that of frame m+1.  the end tail of frame m adds to the beginning 
tail of frame m+1, and the result isn't affected by windowing effects.
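
in code it's just this (a minimal numpy sketch; frame length and FFT size are 
arbitrary choices, not anyone's reference implementation):

import numpy as np

def ola_fast_convolution(x, h, frame_len=1024):
    # overlap-add fast convolution: rectangular "windowing" of the input,
    # linear convolution of each frame via a zero-padded FFT, and the tail of
    # frame m added into the head of frame m+1.
    nfft = 1
    while nfft < frame_len + len(h) - 1:
        nfft *= 2
    H = np.fft.rfft(h, nfft)
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), frame_len):
        frame = x[start:start + frame_len]
        Y = np.fft.rfft(frame, nfft) * H                   # frequency-domain FIR
        seg = np.fft.irfft(Y, nfft)[:len(frame) + len(h) - 1]
        y[start:start + len(seg)] += seg                   # tails overlap-add
    return y

# sanity check (equal to direct convolution up to round-off):
# np.allclose(ola_fast_convolution(x, h), np.convolve(x, h))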

> 
> My intuition isn't definitive, but that's what comes to mind. In any case, 
> "STFT processing" is a very generic term.

i think of it as the series of DFTs of windowed frames of audio.

> > 
> >  here is my attempt to quantitatively define and describe the STFT:
> >  
> >  
> > https://dsp.stackexchange.com/questions/45625/is-windowed-fourier-transform-a-synonym-for-stft/45631#45631
> >  
> 
> 
> Cool, that's a helpful reference for this stuff.

but i didn't account for both analysis and synthesis windows.  but whatever 
the resultant window on the output grain y_m[n] is, you just add the grains up, 
properly positioned in time.


> In terms of "what even is STFT", it seems there is more consensus on the 
> analysis part. Many STFT applications don't involve any synthesis or 
> filtering, but only frequency domain parameter estimation.

that's right.

> For analysis-only, probably everyone agrees that STFT consists of some 
> Constant OverLap Add (COLA) window scheme, followed by DFT.

well, no, i don't agree.  for analysis-only, i don't know why you need 
complementary windows (which is what i think you mean by COLA).  it's in the 
synthesis where overlap-adding is done.  assuming the sinusoidal components in 
your output are phase aligned between adjacent frames, if you don't want a dip 
in the amplitude of each sinusoidal component, you want their windows to add to 
1.  but for analysis, there might be other properties of the window that are 
more important than being complementary.

again, i like the Gaussian window for analysis (because it produces a smooth 
Gaussian pulse for each sinusoidal component in the frequency domain), but it's 
not complementary.  if, after analysis, i am modifying each Gaussian pulse and 
inverse DFTing back to the time domain, i will effectively have a Gaussian 
window on the output frame.  by multiplying by a Hann window and dividing by 
the original Gaussian window, the result has a Hann window shape, and that 
should be complementary in the overlap-add.
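
i.e. something like this (a sketch; frame length, hop, and sigma are 
illustrative choices):

import numpy as np

N, hop = 1024, 512
n = np.arange(N)
gauss = np.exp(-0.5 * ((n - (N - 1) / 2.0) / (N / 8.0)) ** 2)   # analysis window
hann = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)                    # desired net window

# the output grain still (approximately) carries the Gaussian shape, so
# multiply by hann/gauss to leave a net Hann window on each grain; Hann is
# complementary at hop = N/2, i.e. hann[k] + hann[k + N//2] == 1 for all k.
synth_correction = hann / gauss   # gauss is never exactly 0, but the gain near
                                  # the frame edges gets large, which is the
                                  # usual numerical caveat with this trick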

> Rectangular windows are a perfectly valid choice here, albeit one with poor 
> sidelobe suppression.

but it doesn't matter with overlap-add fast convolution.  somehow, the sidelobe 
effects come out in the wash, because we can ensure (to finite precision) the 
correctness of the output with a time-domain analysis.

> Note that there are two potential layers of oversampling available: one from 
> overlapped windows, and another from zero-padding.
> 
> To summarize my understanding of your earlier remarks, the situation gets 
> fuzzier for synthesis. Broadly, there are two basic approaches. One is to 
> keep the COLA analysis and use raw (unwindowed) overlap-add for synthesis. 
> The other is to add synthesis windows, in which case the PR condition becomes 
> COLA on the product of the analysis and synthesis windows (I'd call this 
> "STFT filter bank" or maybe "FFT phase vocoder" depending on 

Re: [music-dsp] FIR blog post & interactive demo

2020-03-12 Thread Ethan Duni
Hi Robert

On Wed, Mar 11, 2020 at 4:19 PM robert bristow-johnson <
r...@audioimagination.com> wrote:

>
> i don't think it's too generic for "STFT processing".  step #4 is pretty
> generic.
>

I think the part that chafes my intuition is more that the windows in steps
#2 and #6 should "match" in some way, and obey an appropriate perfect
reconstruction condition. I think of STFT as intentionally wiping out any
spill-over effects between frames with synthesis windowing, to impose a
particular time-frequency tiling. Whereas fast convolution is defined by
how it explicitly accounts for spill-over between frames.

My intuition isn't definitive, but that's what comes to mind. In any case,
"STFT processing" is a very generic term.


>
> here is my attempt to quantitatively define and describe the STFT:
>
>
> https://dsp.stackexchange.com/questions/45625/is-windowed-fourier-transform-a-synonym-for-stft/45631#45631
>


Cool, that's a helpful reference for this stuff.

In terms of "what even is STFT", it seems there is more consensus on the
analysis part. Many STFT applications don't involve any synthesis or
filtering, but only frequency domain parameter estimation. For
analysis-only, probably everyone agrees that STFT consists of some Constant
OverLap Add (COLA) window scheme, followed by DFT. Rectangular windows are
a perfectly valid choice here, albeit one with poor sidelobe suppression.
Note that there are two potential layers of oversampling available: one
from overlapped windows, and another from zero-padding.
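
For concreteness, a minimal analysis-only sketch along those lines (frame
length, hop, and FFT size are illustrative numbers, nothing more):

import numpy as np

def stft_analysis(x, frame_len=1024, hop=256, nfft=4096):
    # overlapped Hann frames give the first layer of oversampling (75% overlap
    # here); the zero-padded DFT (nfft > frame_len) gives the second layer.
    w = np.hanning(frame_len)
    frames = []
    for start in range(0, len(x) - frame_len + 1, hop):
        frames.append(np.fft.rfft(w * x[start:start + frame_len], nfft))
    return np.array(frames)          # shape: (num_frames, nfft // 2 + 1)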

To summarize my understanding of your earlier remarks, the situation gets
fuzzier for synthesis. Broadly, there are two basic approaches. One is to
keep the COLA analysis and use raw (unwindowed) overlap-add for synthesis.
The other is to add synthesis windows, in which case the PR condition
becomes COLA on the product of the analysis and synthesis windows (I'd call
this "STFT filter bank" or maybe "FFT phase vocoder" depending on the
audience/application).
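
As a sanity check of that product-COLA condition, here is a sketch for one
common choice, sqrt-Hann analysis and synthesis windows at 50% overlap (all
numbers illustrative):

import numpy as np

N, hop = 1024, 512
n = np.arange(N)
hann = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)
w_analysis = np.sqrt(hann)
w_synthesis = np.sqrt(hann)
prod = w_analysis * w_synthesis          # equals hann

# overlap-add the product window at the chosen hop; away from the first and
# last frames the sum is constant, which is the PR/COLA condition on the product
acc = np.zeros(6 * hop + N)
for start in range(0, len(acc) - N + 1, hop):
    acc[start:start + N] += prod
print(acc[N:-N])                         # ~1.0 throughout the steady-state region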

The first approach has immediate problems if the DFT values are modified,
because the COLA condition is not enforced on the output. For the special
case that the modification is multiplication by a DFT kernel that
corresponds to a length-K FIR filter, this can be accommodated by
zero-padding type oversampling, which results in the Overlap-Add flavor of
fast convolution to account for the inter-frame effects. Note that this
implicitly extends the (raw) overlap-add region in synthesis accordingly -
the analysis windows obey COLA, but the synthesis "windows" have different
support and are not part of the PR condition.

As you point out, this works for any COLA analysis window scheme, not just
rectangular, although the efficiency is correspondingly reduced with
overlap. This system is equivalent to a SISO FIR, up to finite word length
effects. Note that this equivalence happens because we are adding an
additional time-variant stage (zero-padding/raw OLA), to explicitly correct
for the time-variant effects of the underlying DFT operation. This is the
block processing analog of upsampling a scalar signal by K so that we can
apply an order-K polynomial nonlinearity without aliasing.

The synthesis window approach is more general in the types of modifications
that can be accommodated (spectral subtraction, nonlinear operations,
etc.). This is because it allows time domain aliasing to occur, but
explicitly suppresses it by attenuating the frame edges. This is also
throwing oversampling at the problem, but of the overlap type instead of
the zero-padding type.
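
A sketch of that synthesis-window (WOLA) approach, assuming the frames came
from a matching sqrt-Hann analysis at 50% overlap (function and parameter names
are mine, not a reference implementation):

import numpy as np

def wola_resynthesis(frames, frame_len=1024, hop=512, nfft=1024):
    # inverse-DFT each (possibly modified) frame, apply the synthesis window so
    # that whatever circular/time-domain aliasing has piled up near the frame
    # edges is attenuated, then overlap-add.  With sqrt-Hann analysis and
    # synthesis at 50% overlap the window product satisfies COLA.
    n = np.arange(frame_len)
    w_synth = np.sqrt(0.5 - 0.5 * np.cos(2 * np.pi * n / frame_len))
    y = np.zeros(hop * (len(frames) - 1) + frame_len)
    for m, Y in enumerate(frames):
        grain = np.fft.irfft(Y, nfft)[:frame_len]
        y[m * hop:m * hop + frame_len] += w_synth * grain
    return y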

You can also apply zero-padding on top of synthesis windows to further
increase the margin for circular aliasing. However, unlike fast convolution,
you would still apply the synthesis windows to remove spill-over between
frames rather than using raw OLA. This is required for the filterbank PR
condition. There is no equivalent SISO system in this case. The level of
aliasing is determined by how hard you push on the response, and how much
overlap/zero-padding you can afford. I.e., it's ultimately engineered/tuned
rather than designed out explicitly as in fast convolution.

We're all on the same page on this stuff, I hope?

Ethan

Re: [music-dsp] Virtual Analog Models of Audio Circuitry (Stefano D'Angelo)

2020-03-12 Thread Stefano D'Angelo
Thank you Kurt for your kind words.

Anyway I need to give credit to Rafael Cauduro Dias de Paiva for the
original idea and the modeling of the opamp-based topology. My contribution
in that paper was mostly related to diode modeling and usage of the Lambert
W function.
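
For anyone curious what the Lambert W buys you here, a minimal sketch (my own
illustrative component values and scipy call, not the formulation from that
paper) of the closed-form solution for a single diode in series with a
resistor:

import numpy as np
from scipy.special import lambertw

def diode_clipper_sample(v_in, R=2.2e3, Is=2.52e-9, nVt=1.752 * 25.85e-3):
    # solve i = Is*(exp((v_in - R*i)/(n*Vt)) - 1) in closed form via Lambert W,
    # for a single forward-biased diode; an antiparallel pair needs the
    # symmetric counterpart for negative inputs.
    i = (nVt / R) * np.real(lambertw((R * Is / nVt) * np.exp((v_in + R * Is) / nVt))) - Is
    return v_in - R * i          # voltage across the diode (the clipped output)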

And since we're here, I want to say that I completely agree with all you've
said and suggest everybody take a deep look at your excellent work.

Stefano

Il giorno gio 12 mar 2020 alle ore 16:43 Kurt James Werner <
kurt.james.wer...@gmail.com> ha scritto:

> I agree that Stefano's dissertation is a very good introduction to WDFs.
> Much of my own doctoral work was inspired by his excellent paper on WDF
> diode and op-amp modeling, which is as far as I know the first time active
> devices (op-amp) and complex non-series/parallel topologies (the op-amp's
> feedback structure) had been modeled using WDFs—a very exciting
> development because historically (Fettweis era) they had been associated
> with passive circuits only.
>
> Most of my own papers and dissertation relate to expanding the class of
> circuits that can be modeled using WDFs, building on Stefano's work
> to enable the WDF formalism to handle complex topologies ~in general~ and
> circuits involving active devices ~in general~ in a systematic fashion. In
> my opinion, this work brings WDFs up to the same level of suitability as
> State Space and MNA modeling. In my opinion, WDFs are not the clear winner:
> each one has advantages and disadvantages which come into play in different
> ways in different circuits, meaning none of the formalisms is the best for
> ~all~ circuits.
>
> As a side effect of my topological findings, I also came up with a new
> technique for modeling circuits with multiple nonlinearities. Again, I
> would not say that the technique is ~superior~ to state-space or MNA
> modeling, just that it allows WDFs to be structurally compatible with
> multiple NLs and brings them up to a similar level of potential usefulness.
> It still requires a Newton solver or table lookup, just like all the other
> formalisms. There are also articles by Alberto Bernardini and
> Tim Schwerdtfeger that take a similar approach to handling multiple
> NLs, which seem to have some advantages over my technique.
>
> Here are links to some of my work on WDFs for your reference:
> —PhD diss, w/ most results up to 2016, TR808 bass drum as overarching case
> study: https://searchworks.stanford.edu/view/11891203
> —IEEE article with most up-to-date and general formulation of handling
> complex topologies with generalized wave definition:
> https://www.researchgate.net/publication/32555_Modeling_Circuits_With_Arbitrary_Topologies_and_Active_Linear_Multiports_Using_Wave_Digital_Filters
> —DAFx proceedings (many WDF papers including mine):
> https://www.dafx.de/paper-archive/
>
> All the best,
> Kurt James Werner

Re: [music-dsp] Virtual Analog Models of Audio Circuitry (Stefano D'Angelo)

2020-03-12 Thread Kurt James Werner
I agree that Stefano's dissertation is a very good introduction to WDFs.
Much of my own doctoral work was inspired by his excellent paper on WDF
diode and op-amp modeling, which is as far as I know the first time active
devices (op-amp) and complex non-series/parallel topologies (the op-amp's
feedback structure) had been modeled using WDFs—a very exciting
development because historically (Fettweis era) they had been associated
with passive circuits only.

Most of my own papers and dissertation relate to expanding the class of
circuits that can be modeled using WDFs, building on Stefano's work
to enable the WDF formalism to handle complex topologies ~in general~ and
circuits involving active devices ~in general~ in a systematic fashion. In
my opinion, this work brings WDFs up to the same level of suitability as
State Space and MNA modeling. In my opinion, WDFs are not the clear winner:
each one has advantages and disadvantages which come into play in different
ways in different circuits, meaning none of the formalisms is the best for
~all~ circuits.

As a side effect of my topological findings, I also came up with a new
technique for modeling circuits with multiple nonlinearities. Again, I
would not say that the technique is ~superior~ to state-space or MNA
modeling, just that it allows WDFs to be structurally compatible with
multiple NLs and brings them up to a similar level of potential usefulness.
It still requires a Newton solver or table lookup, just like all the other
formalisms. There are also articles by Alberto Bernardini and
Tim Schwerdtfeger that take a similar approach to handling multiple
NLs, which seem to have some advantages over my technique.

Here are links to some of my work on WDFs for your reference:
—PhD diss, w/ most results up to 2016, TR808 bass drum as overarching case
study: https://searchworks.stanford.edu/view/11891203
—IEEE article with most up-to-date and general formulation of handling
complex topologies with generalized wave definition:
https://www.researchgate.net/publication/32555_Modeling_Circuits_With_Arbitrary_Topologies_and_Active_Linear_Multiports_Using_Wave_Digital_Filters
—DAFx proceedings (many WDF papers including mine):
https://www.dafx.de/paper-archive/

All the best,
Kurt James Werner

Re: [music-dsp] Virtual Analog Models of Audio Circuitry

2020-03-12 Thread Andrew Simper
I concur with Steffan. Wave digital works great for linear circuits, but as
soon as you start adding non-linearities things get awkward. It is much
easier to use direct MNA for larger circuits, or, for smaller ones, to solve
the system of equations manually. The next step is to look into things like
the DK-Method, either directly using MNA (which I favour) or via the regular
state-space formulation.

Andy

On Wed, 11 Mar 2020 at 19:59, STEFFAN DIEDRICHSEN 
wrote:

> The method being taught in that workshop is the wave-digital-filter
> approach, developed by Fettweis.
> I saw a tutorial at DAFx 2019 given by Kurt James Werner and I have to
> admit that this method is quite awkward to apply and the results are
> somewhat underwhelming. Well, they’re OK, but there’s still a matching error.
>
> Just search for papers that cite A. Fettweis, “Wave Digital Filters: Theory
> and Practice.”
>
> Best,
>
> Steffan
>
>
>
> > On 11.03.2020|KW11, at 12:27, Jerry Evans  wrote:
> >
> > In 2017 CCRMA ran a short workshop:
> > https://ccrma.stanford.edu/workshops/virtualanalogmodeling-2017.
> >
> > Are there any papers or examples etc. that are generally available?
> >
> > TIA
> >
> > Jerry.
> >

Re: [music-dsp] Virtual Analog Models of Audio Circuitry

2020-03-12 Thread Stefano D'Angelo
If I may add a shameless plug: I have dedicated much of my academic and
industrial career to the topic, and I give free access to (almost) all of my
papers and related code at https://www.dangelo.audio/. In particular, I
believe my doctoral dissertation is sufficiently short and up-to-date to be
used as an introduction.

Stefano

Il giorno gio 12 mar 2020 alle ore 09:38 Ross Bencina <
rossb-li...@audiomulch.com> ha scritto:

> I am not familiar with the workshop, but maybe these:
>
> https://ccrma.stanford.edu/~stilti/papers/Welcome.html
> https://ccrma.stanford.edu/~dtyeh/papers/pubs.html
>
> I always thought this was a good place to start:
>
> "Simulation of the diode limiter in guitar distortion circuits by
> numerical solution of ordinary differential equations."
> https://ccrma.stanford.edu/~dtyeh/papers/yeh07_dafx_clipode.pdf
>
> I'm sure there's many more in the user pages linked off:
> https://ccrma.stanford.edu/people
>
> Not specific to the CCRMA workshop, but  there are plenty of papers on
> this topic in DAFx proceedings:
>
> https://www.dafx.de/paper-archive/
>
> And maybe even some in ICMC proceedings since mid-90s:
>
> https://quod.lib.umich.edu/i/icmc
>
> Ross.
>
>
> On 11/03/2020 10:27 PM, Jerry Evans wrote:
> > In 2017 CCRMA ran a short workshop:
> > https://ccrma.stanford.edu/workshops/virtualanalogmodeling-2017.
> >
> > Are there any papers or examples etc. that are generally available?
> >
> > TIA
> >
> > Jerry.

-- 
Stefano D'Angelo
http://www.dangelo.audio/

Re: [music-dsp] Virtual Analog Models of Audio Circuitry

2020-03-12 Thread Ross Bencina

I am not familiar with the workshop, but maybe these:

https://ccrma.stanford.edu/~stilti/papers/Welcome.html
https://ccrma.stanford.edu/~dtyeh/papers/pubs.html

I always thought this was a good place to start:

"Simulation of the diode limiter in guitar distortion circuits by 
numerical solution of ordinary differential equations."

https://ccrma.stanford.edu/~dtyeh/papers/yeh07_dafx_clipode.pdf

I'm sure there are many more in the user pages linked off:
https://ccrma.stanford.edu/people

Not specific to the CCRMA workshop, but there are plenty of papers on 
this topic in DAFx proceedings:

https://www.dafx.de/paper-archive/

And maybe even some in ICMC proceedings since the mid-90s:

https://quod.lib.umich.edu/i/icmc

Ross.


On 11/03/2020 10:27 PM, Jerry Evans wrote:

In 2017 CCRMA ran a short workshop:
https://ccrma.stanford.edu/workshops/virtualanalogmodeling-2017.

Are there any papers or examples etc. that are generally available?

TIA

Jerry.
