On 2022-01-30, Ethan Duni wrote:

IMO the physics part is the realization that (wavefunctions of) certain physical quantities (position/momentum, time/energy) can be related as Fourier transform pairs (and the associated scaling with Planck's constant). Once you make that assumption, uncertainty relationships follow immediately from the properties of the FT, as you say. 

Yes, and the whole idea of position/momentum pairs being related via Fourier machinery (and further via the theory of non-commuting operators) kind of begs the question. Where's the pairing?


Well, it actually kind of suggests itself from the empirical evidence, from the start. When you do the double-slit experiment with different kinds of photons, and then electrons, the only tunable parameter is the energy (or, separately, in the case of massive particles, the momentum). And the only natural way to analyze the experiment is in a planewave basis, which naturally connects wavelength to momentum. It took some time for Planck and Einstein to make the connection, via de Broglie's intuitive ideas, but the connection is still natural once you understand wave interference, yet see single electrons land on a plate. I think it was first thought of in the context of matrix mechanics, with wave mechanics coming into the fray later, when the two were shown to be mathematically equivalent.
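
To put the pairing in symbols -- just the standard textbook sketch in my own notation, nothing specific to the experiments above:

   A plane wave $\psi(x,t) = e^{i(kx-\omega t)}$ carries momentum $p = \hbar k$ and
   energy $E = \hbar\omega$ (de Broglie, Planck--Einstein). Writing a general state
   as a superposition over that basis,
   \[
     \psi(x) \;=\; \frac{1}{\sqrt{2\pi\hbar}} \int \tilde\psi(p)\, e^{ipx/\hbar}\, dp,
   \]
   the position-space and momentum-space wavefunctions form a Fourier transform
   pair, and the usual FT bound on the spreads of a function and its transform
   gives $\Delta x\,\Delta p \ge \hbar/2$.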

Only after that came the idea that actually what we're talking about are "phased probability waves". And much after that, quantum field theory and the Standard Model, which posits a couple of dozen independent, mutually interacting fields, all of which propagate as phased probability waves in locally Minkowski space (an idea coming from the first-order, complex Dirac equation, as against Schrödinger's), and all of which carry a local gauge symmetry.

You know, the Schrödinger and Dirac equations are both highly dispersive. They really cannot, even in theory, describe a point-like particle. The only way you can interpret them is as evolution operators of a stochastic, phased wavefunction -- a complex-valued distribution. And you cannot really interpret the results coming out of them as anything real, having to do with particles, unless you make an interpretive assumption: 1) the Copenhagen interpretation, where there's a separate measurement operator (a mathematical, so-called projection operator), and the idea that it somehow magically affects the observed state (because it does; you can't e.g. see the state of an electron's spin without affecting it, empirically), or 2) for instance Everett's relative-state interpretation, which naturally leads to a many-worlds model of quantum physics. (And there are many others, like Bohm's, whatnot.)
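
To spell out the "highly dispersive" part -- the free-particle textbook case, as a sketch:

   The free Schr\"odinger equation $i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m}\,\partial_x^2 \psi$
   admits the plane wave $e^{i(kx-\omega t)}$ only with
   \[
     \omega(k) \;=\; \frac{\hbar k^2}{2m},
   \]
   so the group velocity $d\omega/dk = \hbar k/m$ depends on $k$: the different
   wavelength components of a localized packet travel at different speeds, and the
   packet inevitably spreads. There is no rigid, point-like solution to be had.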

This is a very helpful perspective, thanks for this.

I neglected to show you what happens here, by the way. So...

What happens when you sample a continuous-time signal by projecting it onto a basis of equispaced polynomial splines is that you get a well-invertible problem, true, but only at the discretely spaced sampling instants. What happens in between the points is rather different from the usual framework. Bandlimitation no longer automatically applies, even if you *can* easily shift your sampling basis via linear operations. (E.g. shifting the first-order term f(x)=x forward by some time t really *is* just adding a constant, so that f(x) becomes f(x-t) = x - t. The same analysis carries over to the higher-order terms via a Taylor series centered at the shifted point, which converges for any series of compact support in either time or frequency.)
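
As a throwaway illustration of the "shifting the basis is a linear operation" point (my own toy code, no particular spline basis assumed): shifting a polynomial p(x) to p(x+t) is a linear map on its coefficient vector, via the binomial/Taylor expansion.

   from math import comb
   import numpy as np

   def shift_poly(coeffs, t):
       """Coefficients of p(x + t), given those of p(x) in increasing degree.
       The shift is a *linear* map on the coefficient vector: each output
       coefficient is a fixed, t-dependent combination of the inputs."""
       out = np.zeros(len(coeffs))
       for k, c in enumerate(coeffs):            # expand c * (x + t)**k
           for j in range(k + 1):
               out[j] += c * comb(k, j) * t ** (k - j)
       return out

   print(shift_poly([0.0, 1.0], 2.0))        # p(x)=x    -> [2. 1.]:    x + 2, just a constant added
   print(shift_poly([0.0, 0.0, 1.0], 2.0))   # p(x)=x**2 -> [4. 4. 1.]: x**2 + 4x + 4

(For a shift in the other direction, flip the sign of t.)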

So, you *can* invert, and as long as your analysis basis is nice enough, your biorthogonal reconstruction basis assumes a not-too-nasty form as well. It might even be orthonormal itself, and repetitive. But in general it won't be bandlimited, and it won't lead to perfect reconstruction of bandlimited signals. That's because your basis is now composed of things which have infinite bandwidth, so that they can *only* cancel down to finite bandwidth via periodic destructive interference -- at the sampling instants, and not between them. The whole sampling apparatus necessarily becomes time-variant, and of uncompact spectral support. While you can fully invert what happens at the sampling instants, you cannot control what happens *between* them, and what happens there does not happen in a shift-invariant fashion that would let you control the aliasing and intermodulation-like products.
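
A quick numerical sanity check of the "exact at the sampling instants, not bandlimited in between" claim -- my own toy setup (one sinusoid, first-order / linear-interpolation spline), nothing more:

   import numpy as np

   fs, f0 = 100.0, 3.0                      # sample rate and a safely bandlimited test tone
   n = np.arange(64)
   x = np.sin(2 * np.pi * f0 * n / fs)      # the samples

   up = 16                                  # evaluate the reconstruction on a 16x finer grid
   t_fine = np.arange(len(n) * up) / (fs * up)
   x_fine = np.interp(t_fine, n / fs, x)    # first-order spline (linear) reconstruction

   # 1) Fully invertible at the sampling instants...
   print(np.allclose(x_fine[::up], x))      # True

   # 2) ...but the reconstruction is not bandlimited: there is energy far above f0
   #    (the spline images around multiples of fs), which ideal sinc reconstruction
   #    would not leave there.
   spec = np.abs(np.fft.rfft(x_fine * np.hanning(len(x_fine))))
   freqs = np.fft.rfftfreq(len(x_fine), d=1.0 / (fs * up))
   print(spec[freqs > 2 * f0].max() / spec.max())   # on the order of 1e-3, not ~0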

       (since a product of minimum phase filters
      is minimum phase, and the phase response of a minimum phase filter is
      wholly determined by amplitude response, multiplying two all-pass
      minimum phase filters has to be idempotent). 


Wait, what? Isn't a minimum-phase allpass necessarily trivial? Or was that your 
point?

No, it is not. I thought as much in my yoot, but really, the theorem is that any stable linear filter can be decomposed into a cascade of a stable all-pass and a stable minimum-phase filter. Crucially, both factors in the decomposition can be more or less arbitrary, and when you cascade systems, the minimum-phase parts and the all-pass parts multiply independently of each other. Minimum phase stays minimum phase even after compounding, and all-pass stays all-pass. Then, for a real-valued minimum-phase filter, its amplitude response uniquely determines its phase response, and vice versa; as such, if you ask for a minimum-phase all-pass, the problem is *extremely* rigid.
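
In symbols, the factorization I mean (the standard statement, as I remember it):

   \[
     H(z) \;=\; H_{\min}(z)\, H_{\mathrm{ap}}(z), \qquad
     |H_{\mathrm{ap}}(e^{j\omega})| = 1 \ \text{for all } \omega,
   \]
   with $H_{\min}$ minimum phase (all poles and zeros inside the unit circle).
   Cascading two systems multiplies the two factors separately,
   \[
     H_1 H_2 \;=\; (H_{1,\min} H_{2,\min})\,(H_{1,\mathrm{ap}} H_{2,\mathrm{ap}}),
   \]
   because a product of minimum-phase filters is minimum phase and a product of
   allpasses is an allpass.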

The way you best see how you can trade stuff off is by doing pole-zero analysis. A minimum-phase discrete-time filter is analyzed by where its poles and zeros sit in the z-plane: it ought to have all of its poles *and* zeros inside the unit circle. In real arithmetic, the poles and zeros furthermore come either 1) on the real line, or 2) in complex conjugate pairs.
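
A small numerical check of that criterion, with a made-up first-order example (numpy only, purely illustrative):

   import numpy as np

   def is_minimum_phase(b, a):
       """Real rational H(z) = B(z)/A(z), coefficients in descending powers of z:
       minimum phase iff every zero *and* every pole lies strictly inside the unit circle."""
       return bool(np.all(np.abs(np.roots(b)) < 1) and np.all(np.abs(np.roots(a)) < 1))

   # Zero at 0.5, pole at 0.25: minimum phase.
   print(is_minimum_phase([1.0, -0.5], [1.0, -0.25]))   # True
   # Reflect the zero out to 2.0: same magnitude response up to a constant gain,
   # but no longer minimum phase.
   print(is_minimum_phase([1.0, -2.0], [1.0, -0.25]))   # False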

An exact allpass, on the other hand, necessarily comes in a form where poles and zeroes pair up to compensate for each other. Though not by sitting on top of each other: via circular reflection symmetry (a Möbius transformation), you *can* compensate exactly for an inside pole via a strategically placed outside zero -- its reflection in the unit circle -- and vice versa. So, in the design of your filters the allpass side, via the theorem, is much laxer than the minimum-phase side: you can add all kinds of terms to your system function without departing any further from the allpass characteristic, while at the same time the minimum-phase side of the decomposition necessarily stays the same.
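
And the reflection trick, numerically -- a first-order allpass of my own choosing, nothing more:

   import numpy as np

   # Pole at a (inside the unit circle), zero at 1/conj(a) -- the pole reflected
   # in the unit circle:  H(z) = (z^-1 - conj(a)) / (1 - a z^-1).
   a = 0.7
   w = np.linspace(0.0, np.pi, 512)
   z = np.exp(1j * w)
   H = (1.0 / z - np.conj(a)) / (1.0 - a / z)

   print(np.allclose(np.abs(H), 1.0))            # True: exactly allpass on the unit circle
   print(np.abs(np.roots([1.0, -a])))            # pole magnitude 0.7   (inside)
   print(np.abs(np.roots([-np.conj(a), 1.0])))   # zero magnitude ~1.43 (its reflection, outside)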

This is how we do reverb networks, and this is also why they are hard to do on the real-arithmetic side of things: their necessary buildup is difficult to achieve without complex phasing.
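
For concreteness, the classic building block I have in mind is the Schroeder allpass section; a sketch from memory, with delay lengths and gain picked arbitrarily:

   import numpy as np

   def schroeder_allpass(x, delay, g):
       """One Schroeder allpass section: y[n] = -g*x[n] + x[n-delay] + g*y[n-delay].
       Flat magnitude response, but a long, dense impulse-response tail -- the
       thing a reverb network wants to stack up."""
       y = np.zeros(len(x))
       for n in range(len(x)):
           x_d = x[n - delay] if n >= delay else 0.0
           y_d = y[n - delay] if n >= delay else 0.0
           y[n] = -g * x[n] + x_d + g * y_d
       return y

   # Per the decomposition theorem, a cascade of allpasses is still an allpass,
   # yet its tail grows denser with every section.
   impulse = np.zeros(2000); impulse[0] = 1.0
   tail = schroeder_allpass(schroeder_allpass(impulse, 142, 0.7), 107, 0.7)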

I find the translation invariance appealing intuitively, but is there a way to quantify its importance in audio terms?

I do not know. To me that comes from the early lectures I attended at Helsinki University, while attempting to do my Master's. Those were about advanced linear algebra, functional analysis, and measure theory, back then.

The theorem I alluded to would be found around this Wikipedia article -- which of course says the general problem in Banach spaces is as yet unsolved. But the L² and L¹ linear ones music-dsp guys worry about *have been*. They're of undergraduate complexity, and were taught to me as such, a decade ago. https://en.wikipedia.org/wiki/Invariant_subspace_problem

I enjoyed this story a great deal, reminds me of my undergrad intern days :P

If you don't mind me asking, what was your area of expertise, and where do you move now?

Here, long term unemployed. ;)
--
Sampo Syreeni, aka decoy - [email protected], http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
