since you opened this topic, I thought I'd try to share the intermediate
results of my findings, as much as I can remember them (that was a few
years back). Most of them concern the continuous-time case.
The first thing to note regarding the continuous-time case is that cutoff
modulation does not affect BIBO stability at all. More rigorously:
- if the cutoff modulation is done by varying the gains *in front of*
(rather than behind) *all* integrators in the system
- if the cutoff function w(t) is always positive
- if the system is BIBO-stable for some cutoff function w(t)
then the system is also BIBO-stable for any other positive cutoff function.
In particular, if a linear system is BIBO-stable in the time-invariant case
(for a constant cutoff function), then it's also stable for any
time-varying positive cutoff.
This is very easy to obtain from the state-space equation:

du/dt = w(t)*F(u,x,t)

where u(t) is the state vector, x(t) is the input vector, w(t) is the
cutoff scalar function and F(u,x,t) is the nonlinear time-varying
version of A*u+B*x. Without loss of generality we can assume w(t)=1
for the given stable case. Then, we simply rewrite the equation as

du/(w(t)*dt) = F(u,x,t)

and substitute the time parameter:

d tau = w(t)*dt

Now in "tau" time coordinates the modulated system is exactly the same
as the unmodulated one in the "t" coordinates.
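This time substitution is easy to check numerically. Below is a minimal
sketch (my own example, not from the original derivation) using a one-pole
lowpass du/dt = w(t)*(x - u), i.e. with the cutoff gain in front of the
integrator; the modulated solution should match the unmodulated closed-form
solution sampled at the warped time tau(t) = integral of w dt:

```python
import numpy as np

# one-pole lowpass du/dt = w(t)*(x - u), cutoff gain in front
# of the integrator; x held constant for simplicity
dt = 1e-4
t = np.arange(0.0, 2.0, dt)
w = 2.0 + np.sin(2 * np.pi * 3 * t)   # positive, time-varying cutoff
x = 1.0

# modulated system, forward Euler
u = np.zeros_like(t)
for n in range(1, len(t)):
    u[n] = u[n-1] + dt * w[n-1] * (x - u[n-1])

# unmodulated closed form u_ref(tau) = 1 - exp(-tau),
# sampled at the warped time tau(t) = integral of w dt
tau = np.concatenate(([0.0], np.cumsum(w[:-1]) * dt))
u_ref = 1.0 - np.exp(-tau)

print(np.max(np.abs(u - u_ref)))      # small discretization error only
```

The remaining mismatch is just the Euler discretization error; it shrinks
as dt is reduced, as the time-substitution argument predicts.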
The same doesn't seem to hold for the TPT discrete-time version, though.
In a more general case for *linear* continuous time, IIRC, we have a
sufficient (but it seems, not necessary) time-varying stability
criterion: all eigenvalues of the matrix A+A^T must be "uniformly
negative", that is they must be bounded by some negative number from
above. It is essential to require this uniform negativity, otherwise the
eigenvalues can get arbitrarily close to the self-oscillation case. This
condition is simply obtained from the fact that, in the absence of the
input signal, you want the absolute value of the state to decay with a
relative speed which is uniformly less than 1. This makes sure that,
whatever the bound of the input signal is, a large enough state will
decay sufficiently fast to win over the input vector B(t)*x(t).
Indeed, ignoring the B*x term, we have

d/dt |u|^2 = u^T*(A+A^T)*u <= (max_i lambda_i)*|u|^2

where lambda_i are the eigenvalues of A+A^T. Now on the other hand,
keeping the B*x term,

d/dt |u|^2 <= (max_i lambda_i)*|u|^2 + 2*|u|*|B(t)*x(t)|

so with all lambda_i uniformly negative, the decay term wins as soon as
|u| is large enough compared to the bound on |B(t)*x(t)|.
Obviously, you don't have to satisfy the condition in the original
state-space coordinates. Instead, you can satisfy it in any other
coordinates, which corresponds to using P^T*A*P instead of A for some
nonsingular matrix P.
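As a concrete check (my own example: the SVF-style matrix, the parameter
values and the particular Lyapunov solution Q below are my assumptions,
not from the original work), here is what the criterion looks like for a
state matrix A = [[-2*R*w, -w], [w, 0]] with cutoff w and damping R. In the
native coordinates eig(A + A^T) contains an exact zero, so the test fails
as-is; for a *fixed* damping, a coordinate change built from a Lyapunov
solution makes it pass:

```python
import numpy as np

w, R = 1.0, 0.5
A = np.array([[-2*R*w, -w],
              [ w,     0.0]])

# criterion in the original coordinates: eig(A + A^T) must be
# uniformly negative; here one eigenvalue is exactly 0
print(np.linalg.eigvalsh(A + A.T))    # -> [-2., 0.], criterion fails

# for fixed damping A is still Hurwitz, so a suitable coordinate
# change exists: Q solves the Lyapunov equation A^T*Q + Q*A = -I
# (solved by hand for this particular A)
Q = np.array([[1.0, 0.5],
              [0.5, 1.5]])
C = np.linalg.cholesky(Q)             # Q = C*C^T
P = np.linalg.inv(C.T)                # then P^{-T}*P^{-1} = Q
Ap = np.linalg.inv(P) @ A @ P
print(np.linalg.eigvalsh(Ap + Ap.T))  # both strictly negative now
```

The catch, of course, is that this P is built for one fixed damping value;
the time-varying question is whether a single P works for the whole range
of dampings at once.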
Now, I didn't manage to get this condition satisfied for the
continuous-time SVF. Reading your post, I admit that I could have made
a mistake there, but FWIW... First, I discarded the consideration of
varying cutoff, as explained above, and concentrated on the varying
damping. Not managing to find a matrix P, I constructed an input signal
requiring the maximum possible growth of the state vector. The signal,
IIRC, was either sgn(s_1) or -sgn(s_1), where s_1 is the first of the
state components (or it could have been s_2). Then I noticed that for
low damping the state vector moves in almost a circle, while for
higher damping (but still with complex poles) it turns into an ellipse.
This was exactly the problem: "in principle" the circle has a bigger
size than the ellipse, but by switching the damping from low to high
you could "shoot" the state point into a much "higher orbit". Much
worse, in certain cases the system state can increase even in the full
absence of the input signal! However, IIRC, I managed to show that
for a sufficiently large "elliptic" orbit (with high damping),
(d/dt)|u|^2 <= 0 regardless of the current damping. Since we are already
considering the "worst possible" input signal, the system state can't
cross this boundary "orbit" to the outside.
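The energy-pumping effect of such a worst-case input is easy to observe
in simulation. This sketch (my own reconstruction; the parameter values
are arbitrary and it only covers constant damping, not the damping
switching itself) drives a continuous-time SVF with x = sgn(s_1) and
watches the state norm:

```python
import numpy as np

# continuous-time SVF, states s1 (bandpass) and s2 (lowpass):
#   ds1/dt = w*(x - 2*R*s1 - s2),   ds2/dt = w*s1
# worst-case-style input x = sgn(s1) gives
#   d/dt |s|^2 = 2*w*s1*x - 4*R*w*s1^2,
# which is positive whenever |s1| < 1/(2*R)
w, R, dt = 1.0, 0.1, 1e-3
s = np.array([0.1, 0.0])
norms = []
for _ in range(200_000):              # 200 "seconds", forward Euler
    x = np.sign(s[0]) if s[0] != 0.0 else 1.0
    ds = np.array([w * (x - 2*R*s[0] - s[1]), w * s[0]])
    s = s + dt * ds
    norms.append(np.hypot(s[0], s[1]))
norms = np.asarray(norms)
print(norms.max())                    # grows well beyond the input bound 1
```

With constant damping the state settles onto a bounded "orbit" whose size
is several times the input bound; the open question in the post is whether
switching the damping can push the state from one such orbit to an ever
higher one.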
For the discrete-time case the situation is more complicated, because we
can't use the continuity of the state vector function. IIRC, I also
didn't manage to build the "worst-case" signal, but there was the same
problem of the state vector becoming larger in the absence of the input
signal. That's why I was somewhat surprised that you simply managed to
restrict the eigenvalues of the system matrix in some coordinates.
Particularly suspicious is that your coordinate transformation matrix is
"built for the smallest damping", while the more problematic case seems
to occur "at the larger damping". But, as I said, I didn't finish that
research and I could have been wrong. So just take my input FWIW.
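For reference, here is a per-sample step of the discrete-time TPT SVF as I
understand it (my reconstruction after Zavalishin's "The Art of VA Filter
Design", with g = tan(pi*fc/fs); not code from this thread), together with
a check that its zero-input transition matrix has spectral radius below 1.
This only confirms stability for *fixed* parameters; it says nothing about
the time-varying case, which is the actual open question here:

```python
import numpy as np

def tpt_svf_step(s1, s2, x, g, R):
    # trapezoidal (TPT) SVF: solve the zero-delay feedback for hp,
    # then update the two trapezoidal integrator states
    hp = (x - (2*R + g) * s1 - s2) / (1 + 2*R*g + g*g)
    bp = g * hp + s1
    lp = g * bp + s2
    s1 = g * hp + bp
    s2 = g * bp + lp
    return s1, s2

def transition_matrix(g, R):
    # zero-input response of the two basis states gives the columns
    cols = [tpt_svf_step(e[0], e[1], 0.0, g, R)
            for e in ((1.0, 0.0), (0.0, 1.0))]
    return np.array(cols).T

for g, R in [(0.1, 0.05), (0.5, 0.5), (2.0, 1.0)]:
    rho = max(abs(np.linalg.eigvals(transition_matrix(g, R))))
    print(g, R, rho)                  # spectral radius, below 1 here
```

Restricting the eigenvalues of this matrix (in some coordinates) uniformly
over the parameter range is exactly the step I never managed to complete.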
Reaktor Application Architect
Native Instruments GmbH
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp