okay, i can't resist jumping back in here.
i have O&S but i don't have O&W, although i once taught an elective class on 
audio and signal processing in which we used O&W as the text (but i was using 
the department copy).

i've always considered the "canonical EE approach to the subject" to be 
multiplying by the uniformly-spaced impulse train (a.k.a. dirac comb) and 
recognizing that this is multiplication by a periodic function, so it has a 
Fourier series (which is covered pedagogically before the Fourier Transform). 
this was the approach i put in Wikipedia that survived until this revision: 
https://en.wikipedia.org/w/index.php?title=Nyquist%E2%80%93Shannon_sampling_theorem&oldid=234842277#Mathematical_basis_for_the_theorem 
and i was just too tired to try to defend it as an anonymous IP (i think that 
BobK and DickLyons knew who it was anyway).
the "canonical EE approach" does not worry about the dirac impulse not being a 
function. unlike in a Real Analysis course in a mathematics curriculum, EEs are 
fine with this "function" that is zero almost everywhere yet integrates to 1. 
what i am not happy with in the "canonical EE approach" is that they do not 
normally scale this impulse train by T as they should. then all of the Fourier 
coefficients are 1/T (instead of 1) and they end up putting this necessary 
factor of T into the passband gain of the ideal brickwall reconstruction LPF. 
then the neophytes ask us how to build an LPF with passband gain of T, and 
people start asking "T in what units?".
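to spell out that scaling point, here is a sketch in LaTeX (my notation, using 
the T-scaled comb):

```latex
% the T-scaled dirac comb is periodic with period T:
s(t) \;=\; T \sum_{n=-\infty}^{\infty} \delta(t - nT)
% so its Fourier-series coefficients are all exactly 1:
c_k \;=\; \frac{1}{T} \int_{-T/2}^{T/2} T\,\delta(t)\, e^{-j 2\pi k t/T}\, dt \;=\; 1
% and multiplying x(t) by s(t) replicates the spectrum with no 1/T factor:
x(t)\,s(t) \;\longleftrightarrow\; \sum_{k=-\infty}^{\infty} X\!\left(f - \tfrac{k}{T}\right)
% leaving the ideal brickwall reconstruction LPF with unity passband gain.
```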
so, other than suggesting a look at that older Wikipedia version cited above, 
the other responses motivated by the comments are:
1. resampling is LTI **if**, for the TI portion, one appropriately scales time. 
it is **not** TI if the unit of time is a sample period and one does not scale 
that to invariant physical time.
2. unless someone is resampling using an IIR for the reconstruction filter, no 
one *really* zero-stuffs samples into the stream, even if it's upsampling by an 
integer ratio. that's a sorta wasteful way to do it. it is unlikely that the 
phase-nonlinear IIR will be as good as a 16- or 32-tap FIR (non-zero taps) 
unless you make the IIR order so high that, along with the zero-stuffing (and 
processing those zero samples with the IIR), this approach costs more than 
basic polyphase resampling.
3. remember that, even if it is not continuous, the dirac comb and the ideally 
sampled function (obtained by multiplying by the dirac comb) are *both* 
continuous-time functions (if you can accept that the dirac impulse is a 
"function"). multiplying x(t) by the dirac comb does not *immediately* give 
you a discrete-time function. but it *does* throw away all of the information 
between sampling instants, leaving only the information that can be perfectly 
represented by a discrete-time sequence.
4. discrete-time "functions" (a.k.a. "sequences") denoted x[n] are, in and of 
themselves, *undefined* between samples. there is no meaning to x[5/2], no 
x[pi]. only x[n] for integer n.
5. for discrete-time functions there is no meaning to delays (or advances) 
other than an integer number of samples of delay. but the ideally sampled 
function (which is the input times the dirac comb) can be delayed by however 
much delay you want. **if** you delay by an integer number of samples, you can 
consider the sequence delayed by that integer multiplying the undelayed dirac 
comb (but only if that is the assumption). i.e. (using LaTeX)

    x(t-mT) \sum_n \delta(t-nT-mT) = x(t-mT) \sum_n \delta(t-nT) = \sum_n x[n-m] \delta(t-nT)
6. if one wants to be totally anal about this and rejects "naked dirac delta 
functions" (i.e. dirac impulses must only exist inside integrals to have 
meaning, and a dirac delta "function" is *not* a function), then, as best as i 
can tell, the only way to derive and understand the Sampling Theorem is the way 
that Shannon originally presented it, which is nothing other than the Poisson 
Summation Formula. but that has a restriction that no delta functions exist in 
the frequency domain (x(t) has to be finite energy), so sinusoids, even those 
below Nyquist, are not allowed in the derivation. i like the "canonical EE 
approach" better and i am happy to treat the dirac impulse just like a function 
which is a limit of nascent delta functions, even if the formal mathematicians 
object. i still think that, pedagogically, that is the best and "correct" way 
to present it to EE and DSP students.
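for reference, that Poisson-summation route sketched in LaTeX (my notation):

```latex
% Poisson Summation Formula: for finite-energy x(t) with transform X(f),
T \sum_{n=-\infty}^{\infty} x(nT)\, e^{-j 2\pi f n T}
  \;=\; \sum_{k=-\infty}^{\infty} X\!\left(f - \tfrac{k}{T}\right)
% if X(f) = 0 for |f| \ge \frac{1}{2T}, then on |f| < \frac{1}{2T} the right
% side reduces to X(f) alone, so the samples x(nT) fully determine X(f) and
% hence x(t), with no dirac comb ever appearing.
```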
7. and i disagree with the statement: "The other big pedagogical problem with 
impulse train representation is that it can't be graphed in a useful way." 
graphing functions is an abstract representation to begin with, so we can use 
these abstract vertical arrows to represent impulses. and we all know that 
dirac impulses are idealizations of physical impulses that are very thin, 
finite pulses.
8. lastly, i would only approach the very real problem of bandlimited 
interpolation, whether it's for sample-rate conversion or for precision delay 
(two different applications), as a practical application of the Sampling and 
Reconstruction Theorem. that is the way you can quantify the strength of the 
images and get a handle on the processing error resulting from the potential 
foldback of those images, and get an S/N ratio for the operation. Duane Wise 
and i did a paper trying to demonstrate this thinking in the 1990s. you can 
get it from my researchgate.
if linear interpolation is done between the subsamples, i have found that 
upsampling by a factor of 512 (and, again, one need not insert 511 zeros to do 
this), followed by linear interpolation between those teeny-little upsampled 
samples, will result in 120 dB S/N. if a 32-tap FIR is used (that's half the 
number of taps an ADI ASRC chip uses), that means (taking advantage of 
symmetry) 8K coefficients needed in a table, 64 MAC instructions, and one 
linear interpolation per output sample. doesn't matter what the sample-rate 
conversion ratio is (as long as we don't worry about aliasing in downsampling).
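a rough sketch of the kind of structure i mean (everything here is my own 
illustrative assumption: the names, the Hann-windowed-sinc prototype, the crude 
edge and phase-wrap handling; it is not the ADI design and the table below 
stores the full 16K coefficients rather than exploiting symmetry):

```python
import numpy as np

TAPS, PHASES = 32, 512  # 16K coefficients; symmetry would let a real table store 8K

def make_polyphase_table(taps=TAPS, phases=PHASES):
    """Illustrative windowed-sinc prototype, split into `phases` subfilters."""
    N = taps * phases
    n = np.arange(N) - N / 2 + 0.5            # half-sample-offset symmetric design
    h = np.sinc(n / phases) * np.hanning(N)   # cutoff near the input Nyquist
    return h.reshape(taps, phases)            # table[k, p] = h[k*phases + p]

def resample_at(x, t, table):
    """One output sample at fractional input time t (in input-sample units):
    two 32-tap subfilters blended by one linear interpolation."""
    taps, phases = table.shape
    n0 = int(np.floor(t))
    frac = (t - n0) * phases
    p = int(frac)
    a = frac - p                              # weight between adjacent phases
    h = (1 - a) * table[:, p] + a * table[:, (p + 1) % phases]  # crude phase wrap
    idx = np.clip(n0 + taps // 2 - np.arange(taps), 0, len(x) - 1)  # crude edges
    return float(np.dot(h, x[idx]))
```

since the output time t is arbitrary, the same table serves any conversion 
ratio; only the sequence of t values per output sample changes.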
bestest,
r
b-j
---------------------------- Original Message ----------------------------
Subject: Re: [music-dsp] Sampling theory "best" explanation
From: "Ethan Fenn" <et...@polyspectral.com>
Date: Mon, September 4, 2017 3:14 pm
To: music-dsp@music.columbia.edu
--------------------------------------------------------------------------

>> First, I want to be clear that I don't think people are crippled by a
>> certain viewpoint—I've said this elsewhere before, maybe not in this thread
>> or the article so much.
>
> In that case I'd suggest some more editing is in order, since the article
> stated this pretty overtly at least a couple times.
>
>> It's more that some things that come up as questions become trivially
>> obvious when you understand that samples represent impulses (this is not so
>> much a viewpoint as the basis of sampling).
>
> Here's the way I see it. There are three classes of interesting objects
> here:
>
> 1) Discrete time signals, which are sequences of numbers.
> 2) Scaled, equally-spaced ideal impulse trains, which are a sort of
> generalized function of a real number.
> 3) Appropriately bandlimited functions of a real number.
>
> None of these are exactly identical, as sequences of numbers are not the
> same sort of beast as functions of a real number. But obviously there is a
> one-to-one correspondence between objects in classes 1 and 2. Less
> obviously -- but more interestingly and importantly! -- there is a
> one-to-one correspondence between objects in classes 1 and 3. So any
> operation on any of these three classes will have a corresponding operation
> in the other two.
>
> This is what the math tells us. It does not tell us that any of these
> classes are identical to each other or that thinking of one correspondence
> is more correct than the other.
>
>> The fact that 5,17,-12,2 at sample rate 1X and
>> 5,0,0,0,17,0,0,0,-12,0,0,0,2,0,0,0 at sample rate 4X are identical is
>> obvious only for samples representing impulses.
>
> I agree that the zero-stuff-then-lowpass technique is much more obvious
> when you consider the impulse train corresponding to the signal. But I
> find it peculiar to assert that these two sequences are "identical." If
> they're identical in any meaningful sense, why don't we just stop there and
> call it a resampler? The reason is that what we actually care about in the
> end is what the corresponding bandlimited functions look like, and
> zero-stuffing is far from being an identity operation in this domain. We're
> instead done constructing a resampler when we end up with an operation that
> preserves the bandlimited function -- or preserves as much of it as
> possible in the case of downsampling.
>
> This is why it is more natural for me to think of the discrete signal and
> the bandlimited function as being more closely identified. The impulse
> train is a related mathematical entity which is useful to pull out of the
> toolbox on some occasions.
>
> I'm not really interested in arguing that the way I think about things is
> superior -- as I've stated above I think the math is neutral on this point,
> and what mental model works best is different from person to person. It can
> be a bit like arguing what shoe size is best. But I do think it's
> counterproductive to discourage people from thinking about the discrete
> signal <-> bandlimited function correspondence. I think real insight and
> intuition in DSP is built up by comparing what basic operations look like
> in each of these different universes (as well as in their frequency domain
> equivalents).
>
> -Ethan
>

> On Mon, Sep 4, 2017 at 2:14 PM, Ethan Fenn <et...@polyspectral.com> wrote:
>
>>> Time variance is a bit subtle in the multi-rate context. For integer
>>> downsampling, as you point out, it might make more sense to replace the
>>> classic n-shift-in/n-shift-out definition of time invariance with one that
>>> works in terms of the common real time represented by the different
>>> sampling rates. So an integer shift into a 2x downsampler should be a
>>> half-sample shift in the output. In ideal terms (brickwall filters/sinc
>>> functions) this all clearly works out.
>>
>>> I think the thing to say about integer downsampling with respect to time
>>> variance is that it partitions the space of input shifts, where if you
>>> restrict yourself to shifts from a given partition you will see time
>>> invariance (in a certain sense).
>>
>> So this to me is a good example of how thinking of discrete time signals
>> as representing bandlimited functions is useful. Because if we're thinking
>> of things this way, we can simply define an operation in the space of
>> discrete signals as being LTI iff the corresponding operation in the space
>> of bandlimited functions is LTI. This generalizes the usual definition, and
>> your partitioned-shift concept, in exactly the way we want, and we find
>> that ideal resamplers (of any ratio, integer/rational/irrational) are in
>> fact LTI as our intuition suggests they should be.
>>
>> -Ethan F
>>

>> On Mon, Sep 4, 2017 at 1:00 AM, Ethan Duni <ethan.d...@gmail.com> wrote:
>>
>>> Hmm, this is quite a few discussions of LTI with respect to resampling
>>> that have gone badly on the list over the years...
>>>
>>> Time variance is a bit subtle in the multi-rate context. For integer
>>> downsampling, as you point out, it might make more sense to replace the
>>> classic n-shift-in/n-shift-out definition of time invariance with one that
>>> works in terms of the common real time represented by the different
>>> sampling rates. So an integer shift into a 2x downsampler should be a
>>> half-sample shift in the output. In ideal terms (brickwall filters/sinc
>>> functions) this all clearly works out.
>>>
>>> On the other hand, I hesitate to say "resampling is LTI" because that
>>> seems to imply that resampling doesn't produce aliasing. And of course
>>> aliasing is a central concern in the design of resamplers. So I can see how
>>> this rubs people the wrong way.
>>>
>>> It's not clear to me that a realizable downsampler (i.e., with non-zero
>>> aliasing) passes the "real time" definition of LTI?
>>>
>>> I think the thing to say about integer downsampling with respect to time
>>> variance is that it partitions the space of input shifts, where if you
>>> restrict yourself to shifts from a given partition you will see time
>>> invariance (in a certain sense).
>>>
>>> More generally, resampling is kind of an edge case with respect to time
>>> invariance, in the sense that resamplers are time-variant systems that are
>>> trying as hard as they can to act like time invariant systems. As opposed
>>> to, say, modulators or envelopes or such.
>>>
>>> Ethan D
>>>

>>> On Fri, Sep 1, 2017 at 10:09 PM, Nigel Redmon <earle...@earlevel.com>
>>> wrote:
>>>
>>>> Interesting comments, Ethan.
>>>>
>>>> Somewhat related to your points, I also had a situation on this board
>>>> years ago where I said that sample rate conversion was LTI. It was a
>>>> specific context, regarding downsampling, so a number of people, one by
>>>> one, basically quoted back the reason I was wrong. That is, basically that
>>>> for downsampling 2:1, you'd get a different result depending on which set
>>>> of points you discard (decimation), and that alone meant it isn't LTI. Of
>>>> course, the fact that the sample values are different doesn't mean what
>>>> they represent is different—one is just a half-sample delay of the other.
>>>> I was surprised a bit that they accepted so easily that SRC couldn't be
>>>> used in a system that required LTI, just because it seemed to violate the
>>>> definition of LTI they were taught.
>>>>
>>>> On Sep 1, 2017, at 3:46 PM, Ethan Duni <ethan.d...@gmail.com> wrote:
>>>>
>>>> Ethan F wrote:
>>>> >I see your nitpick and raise you. :o) Surely there are uncountably many
>>>> >such functions, as the power at any apparent frequency can be
>>>> >distributed arbitrarily among the bands.
>>>>
>>>> Ah, good point. Uncountable it is!
>>>>
>>>> Nigel R wrote:
>>>> >But I think there are good reasons to understand the fact that samples
>>>> >represent a modulated impulse train.
>>>>
>>>> I entirely agree, and this is exactly how sampling was introduced to me
>>>> back in college (we used Oppenheim and Willsky's book "Signals and
>>>> Systems"). I've always considered it the canonical EE approach to the
>>>> subject, and am surprised to learn that anyone thinks otherwise.
>>>>
>>>> Nigel R wrote:
>>>> >That sounds like a dumb observation, but I once had an argument on this
>>>> >board: After I explained why we stuff zeros for integer SRC, a guy said
>>>> >my explanation was BS.
>>>>
>>>> I dunno, this can work the other way as well. There was a guy a while
>>>> back who was arguing that the zero-stuffing used in integer upsampling is
>>>> actually not a time-variant operation, on the basis that the zeros "are
>>>> already there" in the impulse train representation (so it's a "null
>>>> operation" basically). He could not explain how this putatively-LTI
>>>> system was introducing aliasing into the output. Or was this the same
>>>> guy?
>>>>
>>>> So that's one drawback to the impulse train representation - you need
>>>> the sample rate metadata to do *any* meaningful processing on such a
>>>> signal. Otherwise you don't know which locations are "real" zeros and
>>>> which are just "filler." Of course knowledge of sample rate is always
>>>> required to make final sense of a discrete-time audio signal, but in the
>>>> usual sequence representation we don't need it just to do basic
>>>> operations, only for converting back to analog or interpreting discrete
>>>> time operations in analog terms (i.e., what physical frequency is the
>>>> filter cut-off at, etc.).
>>>>
>>>> The other big pedagogical problem with impulse train representation is
>>>> that it can't be graphed in a useful way.
>>>>
>>>> People will also complain that it is poorly defined mathematically (and
>>>> indeed the usual treatments handwave these concerns), but my rejoinder
>>>> would be that it can all be made rigorous by adopting non-standard
>>>> analysis/hyperreal numbers. So, no harm no foul, as far as "correctness"
>>>> is concerned, although it does hobble the subject as a gateway into
>>>> "real math."
>>>>
>>>> Ethan D
>>>>

>>>> On Fri, Sep 1, 2017 at 2:38 PM, Ethan Fenn <et...@polyspectral.com>
>>>> wrote:
>>>>
>>>>>> This needs an additional qualifier, something about the bandlimited
>>>>>> function with the lowest possible bandwidth, or containing DC, or
>>>>>> "baseband," or such.
>>>>>
>>>>> Yes, by bandlimited here I mean bandlimited to [-Nyquist, Nyquist].
>>>>>
>>>>>> Otherwise, there are a countably infinite number of bandlimited
>>>>>> functions that interpolate any given set of samples. These get used in
>>>>>> "bandpass sampling," which is uncommon in audio but commonplace in
>>>>>> radio applications.
>>>>>
>>>>> I see your nitpick and raise you. :o) Surely there are uncountably many
>>>>> such functions, as the power at any apparent frequency can be
>>>>> distributed arbitrarily among the bands.
>>>>>
>>>>> -Ethan F
>>>>>
>>>>> On Fri, Sep 1, 2017 at 5:30 PM, Ethan Duni <ethan.d...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> >I'm one of those people who prefer to think of a discrete-time
>>>>>> >signal as representing the unique bandlimited function interpolating
>>>>>> >its samples.
>>>>>>
>>>>>> This needs an additional qualifier, something about the bandlimited
>>>>>> function with the lowest possible bandwidth, or containing DC, or
>>>>>> "baseband," or such.
>>>>>>
>>>>>> Otherwise, there are a countably infinite number of bandlimited
>>>>>> functions that interpolate any given set of samples. These get used in
>>>>>> "bandpass sampling," which is uncommon in audio but commonplace in
>>>>>> radio applications.
>>>>>>
>>>>>> Ethan D
>>>>>>
>>>>>> On Fri, Sep 1, 2017 at 1:31 PM, Ethan Fenn <et...@polyspectral.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Thanks for posting this! It's always interesting to get such a good
>>>>>>> glimpse at someone else's mental model.
>>>>>>>
>>>>>>> I'm one of those people who prefer to think of a discrete-time signal
>>>>>>> as representing the unique bandlimited function interpolating its
>>>>>>> samples. And I don't think this point of view has crippled my
>>>>>>> understanding of resampling or any other DSP techniques!
>>>>>>>
>>>>>>> I'm curious -- from the impulse train point of view, how do you
>>>>>>> understand fractional delays? Or taking the derivative of a signal?
>>>>>>> Do you have to pass into the frequency domain in order to understand
>>>>>>> these? Thinking of a signal as a bandlimited function, I find it
>>>>>>> pretty easy to understand both of these processes from first
>>>>>>> principles in the time domain, which is one reason I like to think
>>>>>>> about things this way.
>>>>>>>
>>>>>>> -Ethan
>>>>>>>

>>>>>>> On Mon, Aug 28, 2017 at 12:15 PM, Nigel Redmon <earle...@earlevel.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi Remy,
>>>>>>>>
>>>>>>>> On Aug 28, 2017, at 2:16 AM, Remy Muller <muller.r...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> I second Sampo about giving some more hints about Hilbert spaces,
>>>>>>>> shift-invariance, Riesz representation theorem… etc.
>>>>>>>>
>>>>>>>> I think you've hit upon precisely what my blog isn't, and why it
>>>>>>>> exists at all. ;-)
>>>>>>>>
>>>>>>>> Correct me if you said it somewhere and I didn't see it, but an
>>>>>>>> important *implicit* assumption in your explanation is that you are
>>>>>>>> talking about "uniform bandlimited sampling".
>>>>>>>>
>>>>>>>> Sure, like the tag line in the upper right says, it's a blog about
>>>>>>>> "practical digital audio signal processing".
>>>>>>>>
>>>>>>>> Personally, my biggest enlightening moment regarding sampling was
>>>>>>>> when I read these 2 articles:
>>>>>>>>
>>>>>>>> Nice, thanks for sharing.
>>>>>>>>
>>>>>>>> "Sampling—50 Years After Shannon"
>>>>>>>> http://bigwww.epfl.ch/publications/unser0001.pdf
>>>>>>>>
>>>>>>>> and
>>>>>>>>
>>>>>>>> "Sampling Moments and Reconstructing Signals of Finite Rate of
>>>>>>>> Innovation: Shannon Meets Strang–Fix"
>>>>>>>> https://infoscience.epfl.ch/record/104246/files/DragottiVB07.pdf
>>>>>>>>
>>>>>>>> I wish I had discovered them much earlier during my signal
>>>>>>>> processing classes.
>>>>>>>>
>>>>>>>> Talking about generalized sampling may seem abstract and beyond what
>>>>>>>> you are trying to explain. However, in my personal experience,
>>>>>>>> sampling seen through the lens of approximation theory as 'just a
>>>>>>>> projection' onto a signal subspace made everything clearer by giving
>>>>>>>> more perspective:
>>>>>>>>
>>>>>>>> - The choice of basis functions and norms is wide. The sinc function
>>>>>>>> is just one of them, and not a causal, realizable one (infinite
>>>>>>>> temporal support).
>>>>>>>> - Analysis and synthesis functions don't have to be the same (cf.
>>>>>>>> wavelet bi-orthogonal filterbanks).
>>>>>>>> - Perfect reconstruction is possible without requiring
>>>>>>>> bandlimitedness!
>>>>>>>> - The key concept is 'consistent sampling': *one seeks a signal
>>>>>>>> approximation that is such that it would yield exactly the same
>>>>>>>> measurements if it was reinjected into the system*.
>>>>>>>> - All that is required is a "finite rate of innovation" (in the
>>>>>>>> statistical sense).
>>>>>>>> - Finite-support kernels are easier to deal with in real life
>>>>>>>> because they can be realized (FIR) (reminder: time-limited <=>
>>>>>>>> non-bandlimited).
>>>>>>>> - Using the L2 norm is convenient because we can reason about best
>>>>>>>> approximations in the least-squares sense and solve the projection
>>>>>>>> problem with Linear Algebra using the standard L2 inner product.
>>>>>>>> - Shift-invariance is even nicer since it enables *efficient* signal
>>>>>>>> processing.
>>>>>>>> - Using sparser norms like the L1 norm enables sparse sampling and
>>>>>>>> the whole field of compressed sensing. But it comes at a price: we
>>>>>>>> have to use iterative projections to get there.
>>>>>>>>
>>>>>>>> All of this is beyond your original purpose, but from a pedagogical
>>>>>>>> viewpoint, I wish these 2 articles were systematically cited in a
>>>>>>>> "Further Reading" section at the end of any explanation regarding
>>>>>>>> the sampling theorem(s).
>>>>>>>>
>>>>>>>> At least the wikipedia page cites the first article and has a
>>>>>>>> section about non-uniform and sub-nyquist sampling, but it's easy to
>>>>>>>> miss the big picture for a newcomer.
>>>>>>>>
>>>>>>>> Here's a condensed presentation by Michael Unser for those who would
>>>>>>>> like to have a quick historical overview:
>>>>>>>> http://bigwww.epfl.ch/tutorials/unser0906.pdf
>>>>>>>>
>>>>>>>> On 27/08/17 08:20, Sampo Syreeni wrote:
>>>>>>>>
>>>>>>>> On 2017-08-25, Nigel Redmon wrote:
>>>>>>>>
>>>>>>>> http://www.earlevel.com/main/tag/sampling-theory-series/?order=asc
>>>>>>>>
>>>>>>>> Personally I'd make it much simpler at the top. Just tell them
>>>>>>>> sampling is what it is: taking an instantaneous value of a signal at
>>>>>>>> regular intervals. Then tell them that is all it takes to reconstruct
>>>>>>>> the waveform under the assumption of bandlimitation -- a high-falutin
>>>>>>>> term for "doesn't change too fast between your samples".
>>>>>>>>
>>>>>>>> Even a simpleton can grasp that idea.
>>>>>>>>
>>>>>>>> Then if somebody wants to go into the nitty-gritty of it, start
>>>>>>>> talking about shift-invariant spaces, eigenfunctions, harmonical
>>>>>>>> analysis, and the rest of the cool stuff.

_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
