On 6/22/14 2:19 AM, Andrew Simper wrote:
the underlying physics: Kirchhoff's current law, Kirchhoff's voltage law,
and the volt-amp characteristics of each element *are* applicable to any
"lumped element" circuit, one with any combination of linear or nonlinear
elements, memoryless or not.  but the Loop-current (sometimes called
"mesh-current" analysis) and Node-voltage methods allow one to write the
matrix of linear equations governing the circuit directly from inspection
of the circuit.  it's an analysis discipline we learn in EE class.  write
the matrix equation down on a big piece of paper, then go about solving the
system of linear equations.  but the Loop-current and Node-voltage methods
do not work with nonlinear volt-amp characteristics.
An example of non-linear nodal analysis using linear equivalent
components and Newton-Raphson to solve the implicit equations:
http://www.ecircuitcenter.com/SpiceTopics/Non-Linear%20Analysis/Non-Linear%20Analysis.htm
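
(To make that concrete, here is a minimal sketch in C of the same idea as
that page: a series resistor driving a diode to ground, with the diode's
exponential V-I law linearised around the current guess on each
Newton-Raphson iteration until the node's KCL residual converges.  The
component values, initial guess, and tolerance below are placeholders, not
taken from the linked article:)

    #include <math.h>
    #include <stdio.h>

    /* solve (vin - v)/R = Is*(exp(v/Vt) - 1) for the diode voltage v
       by Newton-Raphson on f(v) = Is*(exp(v/Vt) - 1) - (vin - v)/R    */
    static double solve_diode_node(double vin, double R, double Is, double Vt)
    {
        double v = 0.6;                        /* initial guess near a diode drop */
        for (int k = 0; k < 50; k++) {
            double e  = Is * (exp(v / Vt) - 1.0);
            double f  = e - (vin - v) / R;               /* KCL residual  */
            double df = Is / Vt * exp(v / Vt) + 1.0 / R; /* f'(v)         */
            double dv = f / df;
            v -= dv;                           /* Newton-Raphson step     */
            if (fabs(dv) < 1e-12) break;       /* converged               */
        }
        return v;
    }

    int main(void)
    {
        /* placeholder values: 1k series resistor, generic small-signal diode */
        printf("v_diode = %g V\n", solve_diode_node(1.0, 1e3, 1e-14, 0.02585));
        return 0;
    }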


they're not using the loop-current method or the node-voltage method with components modeled as nonlinear. they can't be.


when you use the term "nodal analysis" you do not mean what the first six hits from google say it is. (i stopped at "Modified Nodal Analysis", since i dunno what it is.)

evidently, your semantic of "nodal analysis" is using Kirchhoff's laws and V-I characteristics. it's hard to take issue with Kirchhoff's laws.


But of course - in a linear filter, such as one composed of RC
networks. But what about voltage-controlled filters?
This is a guitar amp EQ we are talking about: there is no voltage
control, only pots that someone's hand has to turn


of course a VCF driven by a constantly changing LFO waveform (or its digital
model) is a different thing.  i was responding to the case where there is an
otherwise-stable filter connected to a knob.  sometimes the knob gets
adjusted before the song or the set or gig starts and never gets moved after
that for the evening.  for filter applications like that, i am not as
worried myself about the "right" time-varying behavior (whatever "right"
is).
I agree that if the knobs aren't touched, and there is either double
precision or some error-correction feedback to make a fixed-point
implementation work properly, then a DF1 or other abstract filter
realisation works well and has a low CPU overhead.


for the case of modulated filters, we gotta worry about it and a few
different papers have made that case (i remember a good one from Laroche).
in that case usually my worry ends when i implement a lattice or normalized
ladder filter topology (with cos(w0) = 1 - 2*(sin(w0/2))^2 substituted to
deal with the "cosine problem").  the filter work i have seen of Andrew's
is, as best as i remember, substituting finite difference equations (perhaps
Euler's forward difference? can't remember, Andrew) in for the derivatives
we get at the capacitors and inductors of existing analog circuits.
I just apply direct numerical integration to the circuit equations.
Each different type of integration (explicit or implicit: forward,
backward, trapezoidal, Gear, BDF2, ...) leads to a different
"equivalent current" definition for the capacitor, which is updated
slightly differently, but in the end the thing being solved in the
implicit case is always gc*vc + iceq.
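
(To put that in concrete terms, here is one common way the trapezoidal
"equivalent current" for a capacitor works out.  This is an illustrative
sketch in C of the general companion-model idea, not Andrew's code, and the
sign convention for iceq differs between write-ups:)

    /* trapezoidal companion model of a capacitor:
       each sample the capacitor is replaced by a conductance gc in parallel
       with a current source iceq, so that  ic[n] = gc*vc[n] + iceq[n].    */
    typedef struct {
        double gc;    /* 2*C/T for the trapezoidal rule            */
        double iceq;  /* equivalent current, updated every sample  */
        double vc;    /* capacitor voltage from the previous solve */
        double ic;    /* capacitor current from the previous solve */
    } cap_trap;

    static void cap_trap_init(cap_trap *c, double C, double T)
    {
        c->gc = 2.0 * C / T;
        c->iceq = 0.0;
        c->vc = 0.0;
        c->ic = 0.0;
    }

    /* call once per sample, before the (possibly nonlinear) circuit solve */
    static void cap_trap_prepare(cap_trap *c)
    {
        c->iceq = -(c->gc * c->vc + c->ic);
    }

    /* call after the solve has produced the new capacitor voltage vc_new */
    static void cap_trap_update(cap_trap *c, double vc_new)
    {
        c->ic = c->gc * vc_new + c->iceq;   /* ic[n] = gc*vc[n] + iceq[n] */
        c->vc = vc_new;
    }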




for circuits with nonlinear elements *and* elements with memory, i don't
see any decent alternative, except i wouldn't model every goddamn part.  i
would model a few *specific* nodes in the stages of the amp and the parts
connected between them, and i might lump some nonlinear things together.  i
think even if you don't lump, and you model every part, some (but not all)
of this lumping comes out in the wash.

perhaps, the only decent way to model a CryBaby is to model each circuit
part and apply Euler's forward differences (and have a high sample rate, so
that "dt" is sufficiently modeled by "Delta t") to the capacitors.
Sure, if you oversample a lot then a basic forward Euler
implementation will be easy to implement, but you are still stuck
with implicit equations to solve for every transistor / diode in the
circuit, as per the example I posted a link to before (which is
actually a memoryless waveshaper, but the CryBaby has capacitors /
inductors as well).
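
(For reference, the forward Euler update being talked about is just an
explicit state update like the sketch below; the RC lowpass and the 8x
oversampling factor are placeholder choices, not anything taken from the
CryBaby circuit:)

    /* forward Euler on an RC lowpass:  dv/dt = (vin - v)/(R*C)
       the state update is explicit, so no equation solving is needed here,
       but T must be small compared to R*C (hence the oversampling).       */
    static double rc_forward_euler(double *v, double vin,
                                   double R, double C, double T)
    {
        *v += T * (vin - *v) / (R * C);
        return *v;
    }

    /* example: run 8 sub-steps per audio sample of period Ts */
    static double process_sample(double *v, double vin,
                                  double R, double C, double Ts)
    {
        const int os = 8;
        for (int k = 0; k < os; k++)
            rc_forward_euler(v, vin, R, C, Ts / os);
        return *v;
    }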


i know.  this goes back a few years when i first saw Andy's hand-written
work emailed to me by a third-party (i think it was legit with
authorization, and since i've seen this in the public domain, maybe here, so
i think we can talk about Andy's work without fear of exposing any secrets)
but i will not identify the third party, if that's okay.  this is the only
"caveat" or issue i thought of at the time, and it's not a big nasty
criticism, it's just this:  when modeling an op-amp circuit with resistors
and capacitors that isn't clipping (so it's an "H(s)"), with difference
equations replacing the derivatives at each capacitor, *if* the knobs ain't
getting twisted, it's LTI and the analysis boils down to an H(z).  now
Euler's forward difference (or backward differences or predictor-corrector
or whatever) does quite well at low frequencies because the "dt" and "delta
t" are very close to each other.  but up by Nyquist, maybe less so.  but
that's fine, other methods like bilinear screw up at high frequencies close
to Nyquist.  but there are different kinds of screwups.
I think the important thing to note here as well is the phase.
Trapezoidal keeps the phase and amplitude correct at DC, cutoff, and
Nyquist.



Nyquist?  are you sure about that?

since you seem to concede that trapezoidal is equivalent to BLT for the 1-pole (you're wrong about the limitation), check to see if it's the same at Nyquist. in fact, consider a sinusoid at Nyquist and integrate that trapezoidally. what's it gonna be?

(and since it's equivalent to BLT, if the cutoff is close enough to Nyquist, you get errors there also, unless you prewarp.)
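
(Working that thought experiment through numerically: a sinusoid at Nyquist
alternates +1, -1, so consecutive samples cancel inside the trapezoid rule
and the running integral never moves.  In other words the trapezoidal, i.e.
bilinear, integrator has a zero at Nyquist, while the ideal analog
integrator 1/s has finite gain there.  A tiny check:)

    #include <stdio.h>

    /* trapezoidally integrate a sinusoid at Nyquist, x[n] = (-1)^n.
       since x[n] + x[n-1] == 0 for every n, the running sum never moves. */
    int main(void)
    {
        const double T = 1.0;   /* sample period, arbitrary units */
        double y = 0.0, xprev = 1.0;
        for (int n = 1; n < 8; n++) {
            double x = (n % 2) ? -1.0 : 1.0;      /* cos(pi*n)      */
            y += 0.5 * T * (x + xprev);           /* trapezoid rule */
            xprev = x;
            printf("n=%d  y=%g\n", n, y);         /* prints 0 every time */
        }
        return 0;
    }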

emulating with finite differences may come up with a frequency response that
deviates from the analog target in ways that are, how shall i put it, less
predictable.  on the other hand, bilinear transform guarantees that every
single bump and ripple in the analog frequency response has a corresponding
bump or ripple in the digital frequency response, but perhaps at a slightly
different frequency.  and with pre-warping, for every degree of freedom in
the filter specs, you can make sure that one frequency is *exactly* mapped
from analog to digital.  with other methods, you might have a 2 dB bump and
it comes out a little different than 2 dB.  or, because you don't have
frequency warping you might not be able to map the resonant frequency spot
on like you do with bilinear (or you have to derive a *specific* frequency
warping for that circuit, sorta like Hal Chamberlin did with the SVF, which
mitigates my concern but i couldn't tell that Andrew did that in his
analysis).
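
(The pre-warping referred to here is, in formula form, the standard
relation below; a small sketch with placeholder variable names:)

    #include <math.h>

    /* bilinear-transform pre-warping: to make the digital response hit the
       analog target exactly at the digital frequency wd (rad/s), design the
       analog prototype at the "pre-warped" frequency wa instead, since the
       BLT maps analog wa = (2/T)*tan(wd*T/2) onto digital wd.  then
       substitute s = (2/T)*(z-1)/(z+1) as usual.                          */
    static double prewarp(double wd, double T)
    {
        return (2.0 / T) * tan(wd * T / 2.0);
    }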
You can look at the workings and plots of using trapezoidal
integration on standard filter topologies here:
www.cytomic.com/technical-papers




again, i've looked. i've seen this from you years ago. doesn't change the mathematics.

selling a product doesn't change the mathematics. something Edison had to learn more than a century ago.

however, when things are modulated, the sound of the modulated filters
really depends on the topology.  but i am not sure that the best analog
topology is a Sallen-Key circuit (i can't remember what Andy modeled
anymore) and i dunno if i would bother modeling some analog circuits just to
get a static filter somewhere.  but maybe i would model the CryBaby circuit
to make sure the emulation sounds the same.
Sallen-Key, SVF, and cascade all sound great when modulated; I'm not
sure about other topologies.


i remember we were discussing this a while ago.  i hadn't really understood
the difference between "trapezoidal integration" and the bilinear transform
without any prewarping.   consider emulating a capacitor.

Trapezoidal integration is drawing a straight line between the two
points on the curve you are integrating, which can be applied directly
in the time domain. The bilinear transform is one that does not
preserve the topology: instead you transform a filter into the Laplace
domain and then apply trapezoidal integration to this abstract filter
via a transform, so you don't preserve the memory storage components
of the original circuit.



    Fs = 1/T

                         t
    v(t)  =  1/C  integral{ i(u) du }
                      -inf

in the s-domain it's

    V(s)  =  1/C *  1/s  * I(s)
yep, Laplace space, then apply a transform to that. For the 1-pole
case I think it comes out the same, so no point in this example. Try
this again for a Sallen-Key.


doesn't matter. unless you use the trapezoid rule for one integrator and something other than the trapezoid rule for the other integrator.


so trapezoidal integration at discrete times is:

                               n
    v(nT) =approx   1/C * T * SUM {  1/2 * ( i((k-1)T) + i(kT) )  }
                             k=-inf


          =  v((n-1)T)  +   1/C * T * 1/2 * ( i((n-1)T) + i(nT) )


or as discrete-time sample values

    v[n]  =  v[n-1]  +   T/(2C) * (i[n] + i[n-1])

applying the Z transform

    V(z)  =  z^(-1)*V(z)  +  T/(2C) * (I(z) + z^(-1)*I(z))

solving for V


    V(z)  =  T/(2C) * (1 + z^(-1))/(1 - z^(-1)) * I(z)

looks like we're substituting

    1/s  <---  T/2 * (1 + z^(-1))/(1 - z^(-1))

or

    s  <---  2/T * (z-1)/(z+1)


how is that any different from the bilinear transform without prewarping?
if it's an LTI circuit with an H(s), then what's the difference?
It is different for a circuit that isn't a 1-pole RC.

no, it's whenever an integrator (1/s in the s universe) is implemented numerically with the trapezoid rule. doesn't matter whether it's a C or anything else.
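
(A quick numerical check of that claim for a circuit with more than one
reactive element: a series RLC driven by a voltage source, output taken as
the capacitor voltage.  Discretise the two coupled state equations with the
trapezoid rule, solve the resulting implicit 2x2 system in the frequency
domain, and compare against H(s) with s = 2/T * (z-1)/(z+1) substituted.
The two agree to machine precision; the component values and test frequency
below are arbitrary placeholders:)

    #include <complex.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* series RLC, input voltage vin, output = capacitor voltage vc.
           states: x1 = iL, x2 = vc
             L*diL/dt = vin - R*iL - vc
             C*dvc/dt = iL
           so  x' = A*x + B*vin,  y = vc = x2                              */
        const double R = 220.0, L = 0.5, C = 1e-6;
        const double T = 1.0 / 48000.0;
        const double A[2][2] = { { -R / L, -1.0 / L },
                                 { 1.0 / C, 0.0     } };
        const double B[2]    = { 1.0 / L, 0.0 };

        const double w = 6.283185307179586 * 3000.0;   /* 3 kHz test point */
        const double complex z = cexp(I * w * T);

        /* (1) trapezoid rule applied directly to the state equations:
               x[n] = x[n-1] + T/2 * (x'[n] + x'[n-1])
           in the z domain:
               ((1 - 1/z)*I - T/2*(1 + 1/z)*A) * X = T/2*(1 + 1/z)*B * Vin */
        const double complex a = 1.0 - 1.0 / z;        /* (1 - z^-1)       */
        const double complex b = 0.5 * T * (1.0 + 1.0 / z);
        double complex M[2][2] = { { a - b * A[0][0],     - b * A[0][1] },
                                   {   - b * A[1][0],   a - b * A[1][1] } };
        double complex rhs[2]  = { b * B[0], b * B[1] };
        double complex det = M[0][0] * M[1][1] - M[0][1] * M[1][0];
        /* Cramer's rule for the 2x2 solve; output is x2 = vc              */
        double complex H_trap = (M[0][0] * rhs[1] - M[1][0] * rhs[0]) / det;

        /* (2) bilinear transform of the analog transfer function
               H(s) = (1/(L*C)) / (s^2 + (R/L)*s + 1/(L*C))                */
        const double complex s = (2.0 / T) * (z - 1.0) / (z + 1.0);
        double complex H_blt = (1.0 / (L * C))
                             / (s * s + (R / L) * s + 1.0 / (L * C));

        printf("trapezoid on the circuit: %.15g %+.15gi\n",
               creal(H_trap), cimag(H_trap));
        printf("bilinear on H(s):         %.15g %+.15gi\n",
               creal(H_blt), cimag(H_blt));
        return 0;
    }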


  Also you can
pre-warp the resistor values with direct trapezoidal integration; just
use tan.

and again, how is that different from pre-warping the bilinear transform?
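
(For the 1-pole case, which is the simplest place to check, the tan
pre-warp and the pre-warped bilinear transform do come out the same:
setting the integrator gain to g = tan(wc*T/2) puts the discrete response
exactly on the analog value at wc, magnitude 1/sqrt(2) and phase -45
degrees, and that is the same g a pre-warped bilinear design of the same
1-pole produces.  A small sketch with placeholder values:)

    #include <complex.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double pi = 3.141592653589793;
        const double fs = 48000.0, T = 1.0 / fs;
        const double wc = 2.0 * pi * 10000.0;   /* cutoff pushed up near Nyquist */

        /* 1-pole lowpass by trapezoidal integration of dv/dt = a*(vin - v):
             v[n] = v[n-1] + g*((vin[n] - v[n]) + (vin[n-1] - v[n-1]))
           which gives H(z) = g*(1 + z^-1) / ((1 - z^-1) + g*(1 + z^-1)).
           g = wc*T/2 without pre-warp, g = tan(wc*T/2) with the tan pre-warp. */
        const double g_plain  = wc * T / 2.0;
        const double g_warped = tan(wc * T / 2.0);

        const double complex zi = 1.0 / cexp(I * wc * T);   /* z^-1 at wc */

        double complex H_plain  = g_plain  * (1.0 + zi)
                                / ((1.0 - zi) + g_plain  * (1.0 + zi));
        double complex H_warped = g_warped * (1.0 + zi)
                                / ((1.0 - zi) + g_warped * (1.0 + zi));

        /* the analog 1-pole at its own cutoff: |H| = 1/sqrt(2), phase -45 deg */
        printf("no pre-warp:  |H| = %.6f  phase = %.2f deg\n",
               cabs(H_plain),  carg(H_plain)  * 180.0 / pi);
        printf("tan pre-warp: |H| = %.6f  phase = %.2f deg\n",
               cabs(H_warped), carg(H_warped) * 180.0 / pi);
        return 0;
    }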

--

r b-j                  r...@audioimagination.com

"Imagination is more important than knowledge."



