Hello everybody!

I may be able to clear up a little of the confusion here...

--------------------------------------------------------------------

First, MNA is a tool which may allow you to write and solve the equations of any analog circuit made of resistors, capacitors, inductors, mutual inductors, and voltage and current sources. If any of these elements is nonlinear, you can still write and solve the equations, using root-finding algorithms such as the Newton-Raphson algorithm, together with dependent resistances/capacitances/inductances/sources. It's a fact. Every version of SPICE uses this, and the DK-method from David Yeh and others is based on it too.

--------------------------------------------------------------------

Second, I think a lot of people don't understand the difference between the bilinear transform and the trapezoidal rule of integration... Let's assume we don't care about pre-warping here.

The bilinear transform is a tool that lets you go from a Laplace transfer function H(s) to an equivalent Z transfer function H(z). It is based on the fact that z is precisely equivalent to exp(sT), with T the sampling period. So s equals 1/T * ln(z), which may be approximated by the first term of its Taylor series, 2/T * (z - 1) / (z + 1).

The trapezoidal rule of integration is a numerical integration method which allows you to discretize and solve the differential equation dX/dt = f(X, U), with X a state variable and U an input. With this method, you can write:

X[n+1] = X[n] + T/2 (f(X[n], U[n]) + f(X[n+1], U[n+1]))

Using the state-space representation of any equation system, in the LTI case, it is possible to see that we get in both cases the same equivalent H(z) transfer function, which is why the bilinear transform and the trapezoidal rule are often thought to be the same thing. But they are not, since they are not used in the same context. For example, the function f(X, U) we have just seen may be a nonlinear function of X and U, and we may still use the trapezoidal rule there, whereas applying the bilinear transform to nonlinear stuff does not make any sense at all...
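
To make the LTI equivalence concrete, here is a small sketch (not from the original post; the one-pole RC lowpass and the component values are my own choice) that discretizes H(s) = 1/(1 + sRC) both ways and compares the resulting H(z) coefficients:

```python
import numpy as np

# One-pole RC lowpass: H(s) = 1 / (1 + s*R*C), state equation dX/dt = (U - X)/(R*C).
# R, C and the sampling rate are assumed values, just for the comparison.
R, C, T = 1e3, 1e-6, 1.0 / 48000.0
a = T / (2 * R * C)

# Trapezoidal rule on the state equation:
# (1 + a) X[n+1] = (1 - a) X[n] + a (U[n] + U[n+1])
# => H_trap(z) = a (1 + z^-1) / ((1 + a) - (1 - a) z^-1), normalized below
b_trap = np.array([a, a]) / (1 + a)
a_trap = np.array([1.0, -(1 - a) / (1 + a)])

# Bilinear transform: substitute s = 2/T * (z - 1)/(z + 1) into H(s), with k = 2RC/T:
# H_blt(z) = (1 + z^-1) / ((1 + k) + (1 - k) z^-1), normalized below
k = 2 * R * C / T
b_blt = np.array([1.0, 1.0]) / (1 + k)
a_blt = np.array([1.0, (1 - k) / (1 + k)])

print(np.allclose(b_trap, b_blt), np.allclose(a_trap, a_blt))  # both True
```

Same H(z) from both routes, coefficient by coefficient, as claimed above.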

Moreover, as Urs, Andy and Vadim have discussed at length in the past, reducing the behaviour of a linear system to its transfer function H(z) may be a mistake, since there are a lot of ways to simulate such a system. For example, simulating it with the Direct Form 1 or the Transposed Direct Form 2 may not give the same result when one parameter of the filter is modulated. And when we use Vadim's TPT paradigm, where the simulation structure explicitly displays the integrators to preserve the time-modulated behaviour, the equivalent transfer function H(z) of the system is no longer apparent in the simulation structure. But it can still be calculated if you want, and it gives you exactly the same result as the one you get with other ways of simulating the linear system. And when we don't care about parameter modulation, everything here may be considered the same thing without too much commotion...

--------------------------------------------------------------------

Third, "iterative processes" are used when there is a nonlinearity in an equation, making it impossible to solve explicitly for one of the wanted variables. We talk here about implicit equations or transcendental equations. For example, the equation x + 5 + exp(3x) = 0 is transcendental, since you can't write the solution for x explicitly. To get the solution, you need an iterative root-finding method, such as the bisection method, or the class of Newton-Raphson methods, which need to calculate at each iteration the local derivative of the equation to solve.

Moreover, even when the differential equation does not contain anything implicit, you may still have to solve an implicit equation once you apply the integration method. Some integration methods are called explicit, such as forward Euler or Runge-Kutta, because the discretization makes only already-known terms (X[n], U[n], U[n+1]) appear on the right-hand side of the equation:

Continuous: dX(t)/dt = f(X(t), U(t))

Euler forward: X[n+1] = X[n] + T f(X[n], U[n])
Runge-Kutta 2: k1 = T f(X[n], U[n]), then X[n+1] = X[n] + T f(X[n] + k1/2, (U[n] + U[n+1])/2)
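
A quick numerical sketch of why one would bother with Runge-Kutta over forward Euler (the test equation dX/dt = -X, the step size and the step count are assumed here, just for illustration):

```python
import math

# Accuracy comparison on dX/dt = -X (exact solution exp(-t)), no input U.
f = lambda x: -x
T, steps = 0.1, 50

x_euler, x_rk2 = 1.0, 1.0
for _ in range(steps):
    x_euler = x_euler + T * f(x_euler)        # Euler forward
    k1 = T * f(x_rk2)                         # Runge-Kutta 2 (midpoint method)
    x_rk2 = x_rk2 + T * f(x_rk2 + k1 / 2)

exact = math.exp(-T * steps)
print(abs(x_euler - exact), abs(x_rk2 - exact))  # RK2 error is much smaller
```

Both methods are explicit (everything on the right-hand side is already known), but RK2 is an order more accurate at the same step size.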

But see what happens with "implicit integration" methods:

Euler backward: X[n+1] = X[n] + T f(X[n+1], U[n+1])
Trapezoidal: X[n+1] = X[n] + T/2 f(X[n], U[n]) + T/2 f(X[n+1], U[n+1])

If I take a dumb example for f, such as f(X, U) = exp(X + U), then we get an implicit equation again...
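
Here is a sketch of one such implicit trapezoidal step for that dumb f, solved with Newton-Raphson (the values of X[n], U[n], U[n+1] and T are made up for the example):

```python
import math

# One implicit trapezoidal step for dX/dt = f(X, U) with f(X, U) = exp(X + U).
Xn, Un, Un1, T = 0.0, -1.0, -1.0, 0.01

# Solve g(x) = x - Xn - T/2 * (f(Xn, Un) + f(x, Un1)) = 0 for x = X[n+1]
g  = lambda x: x - Xn - T / 2 * (math.exp(Xn + Un) + math.exp(x + Un1))
dg = lambda x: 1 - T / 2 * math.exp(x + Un1)

x = Xn                       # initial guess: the previous state value
for _ in range(20):
    x = x - g(x) / dg(x)     # Newton-Raphson update

print(x, g(x))               # X[n+1] and the (near-zero) residual
```

The unknown X[n+1] appears inside the exponential, so there is no closed form; the iteration drives the residual g(x) to zero instead.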

So, how do we solve that? Let's say we want to solve the equation g(x) = x + 5 + exp(3x) = 0 with the Newton-Raphson method. To do that, we need an initial value x_0, and a condition to stop the iterations (which may be a convergence criterion, or a fixed number of iterations). At iteration k, it works this way:

x_k = x_(k-1) - (dg(x_(k-1))/dx)^-1 * g(x_(k-1))

The derivative of the function g(x) is dg(x)/dx = 3 exp(3x) + 1. So the Newton-Raphson update can be written here this way:

x_k = x_(k-1) - (x_(k-1) + 5 + exp(3 x_(k-1))) / (3 exp (3 x_(k-1)) + 1)

If I use x_0 = 2, here are the results I get:

iteration 1 / 10 : 1.6611628760
iteration 2 / 10 : 1.3134136554
iteration 3 / 10 : 0.9415719466
iteration 4 / 10 : 0.4994853512
iteration 5 / 10 : -0.1920035354
iteration 6 / 10 : -2.1910037064
iteration 7 / 10 : -4.9896635065
iteration 8 / 10 : -5.0000003058
iteration 9 / 10 : -5.0000003059
iteration 10 / 10 : -5.0000003059
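
For anyone who wants to reproduce the table above, a few lines of Python are enough:

```python
import math

# Newton-Raphson on g(x) = x + 5 + exp(3x), starting from x_0 = 2
g  = lambda x: x + 5 + math.exp(3 * x)
dg = lambda x: 3 * math.exp(3 * x) + 1

x = 2.0
for k in range(10):
    x = x - g(x) / dg(x)
    print("iteration %d / 10 : %.10f" % (k + 1, x))
```

The iterates converge to about -5.0000003059, since at the root exp(3x) is the tiny correction exp(-15) ~ 3.06e-7 below -5.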

This works very well if the initial value is close enough to the solution. It may not work in some special cases, for example when the function g is not monotonic, or when the derivative dg(x)/dx can be zero. These issues can be very problematic for analog circuit simulation. The initial value we tend to use is the value of the variable at the previous sample (if we want x[n+1], then x[n+1]_0 is chosen as x[n]). So when the sampling frequency is too low and the amplitude of the signal changes quickly (for example with a square-wave oscillator having a 20 V amplitude), it is easy to get a bad initial value and run into convergence issues. Even with an infinite number of iterations, you may never reach the solution. That's why other methods may be used, even if they converge more slowly, like the bisection method, which does not need the derivative...
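
For comparison, here is bisection on the same g(x) (the initial bracket [-10, 0] and the iteration count are my own choices): slower, but derivative-free, and guaranteed to converge once a sign change is bracketed.

```python
import math

# Bisection on g(x) = x + 5 + exp(3x); g(-10) < 0 < g(0)
g = lambda x: x + 5 + math.exp(3 * x)

lo, hi = -10.0, 0.0
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid             # root is in the upper half
    else:
        hi = mid             # root is in the lower half

print((lo + hi) / 2)         # same root Newton-Raphson found above
```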

Now I would like to answer a question: is it possible to write the equivalent closed-form solution of the Newton-Raphson process if the iteration number is fixed (and low)? The answer is yes! I don't know if it is very useful in general. For example, with the previous function g, here are the expressions of the solution for various numbers of iterations:

iteration 0 : x = x_0 = 2
iteration 1 : x = x_0 - (exp(3 x_0) + x_0 + 5) / (3 exp(3 x_0) + 1)
iteration 2 : x = x_0 - (exp(3 x_0) + x_0 + 5) / (3 exp(3 x_0) + 1) - (exp(3 (x_0 - (exp(3 x_0) + x_0 + 5) / (3 exp(3 x_0) + 1))) + x_0 - (exp(3 x_0) + x_0 + 5) / (3 exp(3 x_0) + 1) + 5) / (3 exp(3 (x_0 - (exp(3 x_0) + x_0 + 5) / (3 exp(3 x_0) + 1))) + 1)
iteration 3 : ...

One observation here: it is better to keep the iterative form of things, rather than unrolling everything, when the number of iterations needed is large... And this is a very simple example! Very often, g(x) isn't a function of one variable only...
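
A quick numerical sanity check (my own, not from the original post) that the unrolled iteration-2 expression and two actual Newton-Raphson iterations are the same thing:

```python
import math

# Newton-Raphson update for g(x) = x + 5 + exp(3x), starting from x_0 = 2
x0 = 2.0
step = lambda x: x - (math.exp(3 * x) + x + 5) / (3 * math.exp(3 * x) + 1)

# iterative version: apply the update twice
x_iter = step(step(x0))

# unrolled version: substitute x_1 literally into the update, as in the
# "iteration 2" expression above
x1 = x0 - (math.exp(3 * x0) + x0 + 5) / (3 * math.exp(3 * x0) + 1)
x_unrolled = x1 - (math.exp(3 * x1) + x1 + 5) / (3 * math.exp(3 * x1) + 1)

print(x_iter, x_unrolled)    # identical; matches iteration 2 of the table
```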

--------------------------------------------------------------------

And a last point: the reasons why I tend to use implicit trapezoidal integration schemes instead of simple forward Euler are the following. First, the forward Euler scheme is not very accurate: you need a very high sampling rate to get acceptable results, and if you really want an explicit scheme, you should use Runge-Kutta methods instead, in my opinion... Second, the stability issue. If you look into the numerical methods literature, you will find definitions of "A-stable" methods and of the stability domains of numerical methods. In short, knowing the eigenvalues of df(X(t), U(t))/dX(t), the sampling frequency, and the numerical method, it is possible to predict whether the equation will be "stiff" or not for that method. If it is, you won't be able to solve the system with that numerical method at that sampling frequency, and you will get nonsense results very quickly. You will have to change one of these parameters to succeed. Since it is not possible to change the capacitance in the circuit, you will need to take an implicit integration scheme, or to upsample like hell. For example, I once calculated that I would need Fs = 500 kHz to simulate a simple common-cathode triode amplifier circuit with the Miller capacitance (2 pF), if I wanted to use an explicit integration method. But it works like a charm with the trapezoidal method...
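
The stability point can be seen on the classic stiff test equation dX/dt = -lam*X (the values of lam and the sampling rate below are assumed, picked so that lam*T > 2, i.e. outside forward Euler's stability region but inside the trapezoidal rule's):

```python
# Stiff test equation dX/dt = -lam * X, true solution decays to zero.
lam, T = 1e5, 1.0 / 48000.0   # lam * T ~ 2.08 > 2

x_euler, x_trap = 1.0, 1.0
for _ in range(200):
    # forward Euler: multiplies by (1 - lam*T), |1 - lam*T| > 1 => blows up
    x_euler = x_euler + T * (-lam * x_euler)
    # trapezoidal: multiplies by (1 - lam*T/2)/(1 + lam*T/2), magnitude < 1
    x_trap = x_trap * (1 - lam * T / 2) / (1 + lam * T / 2)

print(x_euler, x_trap)        # huge vs. essentially zero
```

The true solution decays, yet forward Euler diverges at this rate; you would have to upsample (or switch to an implicit scheme) to fix it.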


You can find more information about this stuff in the articles I have written during my thesis, or in the extensive bibliographies of Jaromir Macak, David Yeh, Kristjan Dempwolf, etc. ;) This is well known by people who have worked on guitar amplifier simulation for a while. Anyway, I have really appreciated being able to compare this stuff with what the "synth filter guys" do, thanks to KVR and the "zero delay feedback filters" discussions in general, and I'm sure all these approaches will be improved by mixing their best elements.

Tell me if I have made a mistake somewhere.

Cordially,

Ivan COHEN
http://musicalentropy.wordpress.com

On 23/06/2014 06:37, robert bristow-johnson wrote:
On 6/23/14 12:16 AM, Andrew Simper wrote:
you
have a function of two variables that you can explicitly evaluate
using your favourite root-finding mechanism, and then use an
approximation to avoid evaluating this at run time. This 2D
approximation is pretty efficient and will be enough to solve this
very basic case. But each non-linearity that is added increases the
space by at least one dimension, so your function gets big very
quickly and you have to start using a non-linear mapping into the
space to keep things under control.

i haven't been able to decode what you just wrote.
This is important to understand if you want to use massive tables to
model this stuff. Perhaps read Yeh's thesis paper? I have already
posted this to another thread, but here it is again:

"... but missed David Yeh's dissertation
https://ccrma.stanford.edu/~dtyeh/papers/DavidYehThesissinglesided.pdf
which contains a great description of MNA and how it relates to the
DK-method. I highly recommend everyone read it, thanks David!!

I really hope for an improved DK-method that handles multiple
nonlinearities more elegantly than the current one does. A couple of
things to note here: in general, this method uses multi-dimensional
tables to pre-calculate the difficult implicit equations that solve the
non-linearities, but as the number of non-linearities increases so
does the size of your table, as noted in 6.2.2:

"The dimension of the table lookup for the stored nonlinearity in
K-method grows with the number of nonlinear devices in the circuit. A
straightforward table lookup is thus impractical for circuits with
more than two transistors or vacuum tubes. However, function
approximation approaches such as neural networks or nonlinear
regression may hold promise for efficiently providing means to
implement these high-dimensional lookup functions."
"

As you add more non-linearities to the circuit it becomes less
practical to pre-calculate tables to handle the situation. Also
changes caused by things like potentiometers add extra dimensions, and
you will spend all your time doing very high dimensional table
interpolation and completely blow the cache.

<sigh> it's a function. given his parameters, g and s, then x[n] goes in,
iterate the thing 50 times, and an unambiguous y[n] comes out.  doesn't
matter what the initial guess is (start with 0, what the hell). i am saying that *this* net function is just as deserving a candidate for modeling as is the original tanh() or whatever. just run an offline program using MATLAB
or python or C or the language of your delight.  get the points of that
function defined with a dense look-up-table. then consider ways of modeling *that* directly. maybe leave it as a table lookup. whatever. but at least you can see what you're dealing with and use that net function to help you
decide how much you need to upsample.
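
[To make the table-lookup idea concrete, here is a small sketch; the implicit equation y = tanh(x - s*y), the feedback amount s, the grid and the iteration count are all assumed for illustration, not taken from the thread:]

```python
import math

s = 0.5                                       # assumed feedback amount

def net_function(x, iterations=50):
    # iterate the implicit equation y = tanh(x - s*y) to convergence
    y = 0.0
    for _ in range(iterations):
        y = math.tanh(x - s * y)
    return y

# dense offline table over the input range of interest
x_min, x_max, n = -5.0, 5.0, 1001
table = [net_function(x_min + i * (x_max - x_min) / (n - 1)) for i in range(n)]

def lookup(x):
    # linear interpolation into the precomputed table (runtime replacement)
    pos = (x - x_min) / (x_max - x_min) * (n - 1)
    i = min(int(pos), n - 2)
    frac = pos - i
    return table[i] * (1 - frac) + table[i + 1] * frac

print(lookup(1.0), net_function(1.0))         # table vs direct iteration
```
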
<sigh>  <sigh>  <sigh>  please at least try and understand what I wrote
before sighing at me! Yes, I agree that for low dimensional cases this
is a good approach, but for any realistic circuit things get
complicated and inefficient really quickly and you are better off with
other methods.

Andy and Urs, i have been making consistent and clear points and challenges and the response is not addressing these squarely.

let's do the Sallen-Key challenge, Andy.  that's pretty concrete.

you pick the circuit (i suggested the one at wikipedia) so we have a common reference. then you pick an R1, R2, C1, C2 (or an f0 and Q, i don't care). let's leave the DC gain at 0 dB. then we'll have a common and unambiguous H(s).

then let's agree on a common sampling rate, Fs.

then you do your trapezoid rule to both of the integrators (C1 and C2) in the circuit and solve for your discrete-time model. then spell out to the rest of us the difference equations.

i will take those difference equations and turn them into an unambiguous H(z) in the z-plane. then the question is if it will be the same H(z) we get from the original H(s) and applying the bilinear transform without any prewarping of anything. i am saying they will be the same, you had been saying they will not be the same. this is a falsifiable experiment, and we can see who's correct.

now that's one thing. it's about applying the trapezoid-rule integration or the bilinear transform (without prewarping) to the same continuous-time LTI circuit.

the other thing Urs brought up for discussion is an iterative and recursive process that converges on a result value, given an input. i am saying that this can be rolled out into a non-recursive equivalent, if the number of iterations needed for convergence is finite. this is a totally different issue.

i'm happy to address either (or both) issues directly without deferring to any other distraction. i can state clearly what my claim is and we can test that claim. likewise, you can state clearly what your claim is, and we can test that.


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
