Re: [music-dsp] Sampling theory "best" explanation

2017-09-12 Thread Ethan Duni
Thanks for the reference and explanation, Robert. Now that you've jogged my
memory I realize you covered this before, some time back. If I can find a few
minutes I will try to work out a version including the images from the
resampling. The trade-off between resampling and interpolation is not
entirely clear to me in this analysis.

So linearly interpolating two adjacent FIR polyphases is equivalent to a
single FIR with interpolated coefficients. I.e., we're using an affine
approximation of the underlying resampling kernel. And at high resampling
ratios this will be a good approximation. IIRC this has been covered on the
list before as well. However, it costs twice as much CPU as running a
single phase - so isn't the fair comparison to an FIR of double the order
(and so, double the resampling ratio)?
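
As a quick sanity check of that equivalence, here is a numpy sketch (h0, h1 and
the 0.37 fraction are arbitrary stand-ins for two adjacent polyphases):

    import numpy as np

    h0, h1 = np.random.randn(32), np.random.randn(32)  # two adjacent polyphase filters (stand-ins)
    x = np.random.randn(256)                           # any input signal
    a = 0.37                                           # fractional position between the phases

    y_blend = (1 - a) * np.convolve(x, h0) + a * np.convolve(x, h1)  # interpolate the two outputs
    y_coef  = np.convolve(x, (1 - a) * h0 + a * h1)                  # one FIR with interpolated coefficients
    print(np.allclose(y_blend, y_coef))                              # True: convolution is linear in the coefficients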

Ethan D

On Wed, Sep 6, 2017 at 9:57 PM, robert bristow-johnson <
r...@audioimagination.com> wrote:

>
>
>  Original Message ----
> Subject: Re: [music-dsp] Sampling theory "best" explanation
> From: "Ethan Duni" <ethan.d...@gmail.com>
> Date: Wed, September 6, 2017 4:49 pm
> To: "robert bristow-johnson" <r...@audioimagination.com>
> "A discussion list for music-related DSP" <music-dsp@music.columbia.edu>
> --
>
> > rbj wrote:
> >>what do you mean by "non-ideal"? that it's not an ideal brick wall LPF?
> > it's still LTI if it's some other filter **unless** you're meaning
> > the possible aliasing.
> >
> > Yes, that is exactly what I am talking about. LTI systems cannot produce
> > aliasing.
> >
> > Without an ideal bandlimiting filter, resampling doesn't fulfill either
> > definition of time invariance. Not the classic one in terms of sample
> > shifts, and not the "common real time" one suggested for multirate cases.
> >
> > It's easy to demonstrate this by constructing a counterexample. Consider
> > downsampling by 2, and an input signal that contains only a single
> sinusoid
> > with frequency above half the (input) Nyquist rate, and at a frequency
> that
> > the non-ideal bandlimiting filter fails to completely suppress. To be
> LTI,
> > shifting the input by one sample should result in a half-sample shift in
> > the output (i.e., bandlimited interpolation). But this doesn't happen,
> due
> > to aliasing. This becomes obvious if you push the frequency of the input
> > sinusoid close to the (input) Nyquist frequency - instead of a
> half-sample
> > shift in the output, you get negation!
> >
> >>we draw the little arrows with different heights and we draw the impulses
> > scaled with samples of negative value as arrows pointing down
> >
> > But that's just a graph of the discrete time sequence.
>
> well, even if the *information* necessary is the same, a graph of x[n]
> need only be little dots, one per sample.  or discrete lines (without
> arrowheads).
>
> but the use of the symbol of an arrow for an impulse is a symbol of
> something difficult to graph for a continuous-time function (not to be
> confused with a continuous function).  if the impulse heights and
> directions (up or down) are analog to the sample value magnitude and
> polarity, those graphing objects suffice to depict these *hypothetical*
> impulses in the continuous-time domain.
>
>
> >
> >>you could do SRC without linear interpolation (ZOH a.k.a. "drop-sample")
> > but you would need a much larger table
> >>(if i recall correctly, 1024 times larger, so it would be 512Kx
> > oversampling) to get the same S/N. if you use 512x
> >>oversampling and ZOH interpolation, you'll only get about 55 dB S/N for
> an
> > arbitrary conversion ratio.
> >
> > Interesting stuff, it didn't occur to me that the SNR would be that low.
> > How do you estimate SNR for a particular configuration (i.e., target
> > resampling ratio, fixed upsampling factor, etc)? Is that for ideal 512x
> > resampling, or does it include the effects of a particular filter design
> > choice?
>
> this is what Duane Wise and i ( https://www.researchgate.net/publication/266675823_Performance_of_Low-Order_Polynomial_Interpolators_in_the_Presence_of_Oversampled_Input ) were
> trying to show and Olli Niemitalo (in his pink elephant paper
> http://yehar.com/blog/wp-content/uploads/2009/08/deip.pdf ).
>
> so let's say that you're oversampling by a factor of R.  if the sample
> rate is 96 kHz and the audio is limited to 20 kHz, the oversampling ratio
> is 2.4. but now imagine it's *highly* oversampled (which we can

Re: [music-dsp] Sampling theory "best" explanation

2017-09-08 Thread Theo Verelst

So when is a "system", or better put, its mathematical model, a linear system?

In a way the scalar high-school definition suffices: when the inputs are added, the result 
must be the same as when the inputs are processed separately, and a multiplied input gives 
a multiplied output. Simple enough, though I'd prefer the integral definition, because 
then it would be a proper definition for the question at hand.
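
In code, that scalar definition is just a superposition check. A minimal sketch,
assuming an arbitrary example system T (a Butterworth filter here, via scipy):

    import numpy as np
    from scipy import signal

    b, a = signal.butter(4, 0.3)                    # an arbitrary example system
    T = lambda x: signal.lfilter(b, a, x)

    x1, x2 = np.random.randn(1000), np.random.randn(1000)
    print(np.allclose(T(x1 + x2), T(x1) + T(x2)))   # additivity
    print(np.allclose(T(3.0 * x1), 3.0 * T(x1)))    # homogeneity (scaling)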


The problem is: is your re-sampling and filtering going to be "linear" when the samples 
have shifted a bit? If you resample by having a second AD converter running on the 
same input signal, but say half a sample shifted, there isn't an easy way to correlate 
that sampled signal with the first, unless you do a lengthy sinc-based resampling 
(preferably with error analysis).
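
A minimal sketch of that sinc-based alignment (a windowed sinc; the 65-tap
length, Hamming window, and test tone are arbitrary choices):

    import numpy as np

    def frac_delay(x, d, half_len=32):
        # windowed-sinc fractional delay: y[n] ~= x[n - d]
        k = np.arange(-half_len, half_len + 1)
        h = np.sinc(k - d) * np.hamming(len(k))
        return np.convolve(x, h, mode='same')

    n = np.arange(512)
    x2 = np.cos(2 * np.pi * 0.05 * n)   # the second converter's capture
    aligned = frac_delay(x2, 0.5)       # line it up with the first, half a sample later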


From that point it becomes a difficult process of explaining theory, it seems. Then, 
there's the idea of linearity in the digital signal processing domain: do two filters 
applied to a set of samples satisfy the "law" for LTI?


T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Sampling theory "best" explanation

2017-09-07 Thread Richard Dobson
It is a most fascinating thread. The more one looks into it, the more 
one has to marvel that the process works at all.


Richard Dobson

On 07/09/2017 07:16, Nigel Redmon wrote:
Somehow, combining the term "rat's ass" with a clear and concise 
explanation of your viewpoint makes it especially satisfying.



...
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Sampling theory "best" explanation

2017-09-07 Thread Nigel Redmon
Somehow, combining the term "rat's ass" with a clear and concise explanation of 
your viewpoint makes it especially satisfying.

> On Sep 7, 2017, at 11:57 AM, robert bristow-johnson 
> <r...@audioimagination.com> wrote:
> 
> 
> 
>  Original Message ----------------
> Subject: Re: [music-dsp] Sampling theory "best" explanation
> From: "Ethan Duni" <ethan.d...@gmail.com>
> Date: Wed, September 6, 2017 4:49 pm
> To: "robert bristow-johnson" <r...@audioimagination.com>
> "A discussion list for music-related DSP" <music-dsp@music.columbia.edu>
> --
> 
> > rbj wrote:
> >>what do you mean by "non-ideal"? that it's not an ideal brick wall LPF?
> > it's still LTI if it's some other filter **unless** you're meaning
> > the possible aliasing.
> >
> > Yes, that is exactly what I am talking about. LTI systems cannot produce
> > aliasing.
> >
> > Without an ideal bandlimiting filter, resampling doesn't fulfill either
> > definition of time invariance. Not the classic one in terms of sample
> > shifts, and not the "common real time" one suggested for multirate cases.
> >
> > It's easy to demonstrate this by constructing a counterexample. Consider
> > downsampling by 2, and an input signal that contains only a single sinusoid
> > with frequency above half the (input) Nyquist rate, and at a frequency that
> > the non-ideal bandlimiting filter fails to completely suppress. To be LTI,
> > shifting the input by one sample should result in a half-sample shift in
> > the output (i.e., bandlimited interpolation). But this doesn't happen, due
> > to aliasing. This becomes obvious if you push the frequency of the input
> > sinusoid close to the (input) Nyquist frequency - instead of a half-sample
> > shift in the output, you get negation!
> >
> >>we draw the little arrows with different heights and we draw the impulses
> > scaled with samples of negative value as arrows pointing down
> >
> > But that's just a graph of the discrete time sequence.
> 
> well, even if the *information* necessary is the same, a graph of x[n] need 
> only be little dots, one per sample.  or discrete lines (without arrowheads).
> 
> but the use of the symbol of an arrow for an impulse is a symbol of something 
> difficult to graph for a continuous-time function (not to be confused with a 
> continuous function).  if the impulse heights and directions (up or down) are 
> analog to the sample value magnitude and polarity, those graphing objects 
> suffice to depict these *hypothetical* impulses in the continuous-time domain.
> 
> 
> >
> >>you could do SRC without linear interpolation (ZOH a.k.a. "drop-sample")
> > but you would need a much larger table
> >>(if i recall correctly, 1024 times larger, so it would be 512Kx
> > oversampling) to get the same S/N. if you use 512x
> >>oversampling and ZOH interpolation, you'll only get about 55 dB S/N for an
> > arbitrary conversion ratio.
> >
> > Interesting stuff, it didn't occur to me that the SNR would be that low.
> > How do you estimate SNR for a particular configuration (i.e., target
> > resampling ratio, fixed upsampling factor, etc)? Is that for ideal 512x
> > resampling, or does it include the effects of a particular filter design
> > choice?
> 
> this is what Duane Wise and i ( 
> https://www.researchgate.net/publication/266675823_Performance_of_Low-Order_Polynomial_Interpolators_in_the_Presence_of_Oversampled_Input
>  ) were trying to show and Olli Niemitalo (in his pink elephant paper 
> http://yehar.com/blog/wp-content/uploads/2009/08/deip.pdf ).
> 
> so let's say that you're oversampling by a factor of R.  if the sample rate 
> is 96 kHz and the audio is limited to 20 kHz, the oversampling ratio is 2.4. 
> but now imagine it's *highly* oversampled (which we can get from polyphase 
> FIR resampling) like R=32 or R=512 or R=512K.
> 
> when it's upsampled (like hypothetically stuffing 31 zeros or 511 zeros or 
> (512K-1) zeros into the stream and brick-wall low-pass filtering) then the 
> spectrum has energy at the baseband (from -Nyquist to +Nyquist of the 
> original sample rate, Fs) and is empty for 31 (or 511 or (512K-1)) image 
> widths of Nyquist, and the first non-zero image is at 32 or 512 or 512K x Fs.
> 
> now if you're drop-sample or ZOH interpolating it's convolving the train of 
> weighted impulses with a rect() pulse function and in the frequency domain 
> you are multiplying by a sinc() function

Re: [music-dsp] Sampling theory "best" explanation

2017-09-06 Thread robert bristow-johnson







 Original Message 

Subject: Re: [music-dsp] Sampling theory "best" explanation

From: "Ethan Duni" <ethan.d...@gmail.com>

Date: Wed, September 6, 2017 4:49 pm

To: "robert bristow-johnson" <r...@audioimagination.com>

"A discussion list for music-related DSP" <music-dsp@music.columbia.edu>

--



> rbj wrote:

>>what do you mean by "non-ideal"? that it's not an ideal brick wall LPF?

> it's still LTI if it's some other filter **unless** you're meaning

> the possible aliasing.

>

> Yes, that is exactly what I am talking about. LTI systems cannot produce

> aliasing.

>

> Without an ideal bandlimiting filter, resampling doesn't fulfill either

> definition of time invariance. Not the classic one in terms of sample

> shifts, and not the "common real time" one suggested for multirate cases.

>

> It's easy to demonstrate this by constructing a counterexample. Consider

> downsampling by 2, and an input signal that contains only a single sinusoid

> with frequency above half the (input) Nyquist rate, and at a frequency that

> the non-ideal bandlimiting filter fails to completely suppress. To be LTI,

> shifting the input by one sample should result in a half-sample shift in

> the output (i.e., bandlimited interpolation). But this doesn't happen, due

> to aliasing. This becomes obvious if you push the frequency of the input

> sinusoid close to the (input) Nyquist frequency - instead of a half-sample

> shift in the output, you get negation!

>

>>we draw the little arrows with different heights and we draw the impulses

> scaled with samples of negative value as arrows pointing down

>

> But that's just a graph of the discrete time sequence.
well, even if the *information* necessary is the same, a graph of x[n] need 
only be little dots, one per sample. or discrete lines (without arrowheads).
but the use of the symbol of an arrow for an impulse is a symbol of
something difficult to graph for a continuous-time function (not to be confused 
with a continuous function). if the impulse heights and directions (up or 
down) are analog to the sample value magnitude and polarity, those graphing 
objects suffice to depict these *hypothetical* impulses in the
continuous-time domain.

>

>>you could do SRC without linear interpolation (ZOH a.k.a. "drop-sample")

> but you would need a much larger table

>>(if i recall correctly, 1024 times larger, so it would be 512Kx

> oversampling) to get the same S/N. if you use 512x

>>oversampling and ZOH interpolation, you'll only get about 55 dB S/N for an

> arbitrary conversion ratio.

>

> Interesting stuff, it didn't occur to me that the SNR would be that low.

> How do you estimate SNR for a particular configuration (i.e., target

> resampling ratio, fixed upsampling factor, etc)? Is that for ideal 512x

> resampling, or does it include the effects of a particular filter design

> choice?
this is what Duane Wise and i ( https://www.researchgate.net/publication/266675823_Performance_of_Low-Order_Polynomial_Interpolators_in_the_Presence_of_Oversampled_Input ) were trying to show and Olli Niemitalo (in his pink elephant paper http://yehar.com/blog/wp-content/uploads/2009/08/deip.pdf ).
so let's say that you're oversampling by a factor of R. if the sample rate is 
96 kHz and the audio is limited to 20 kHz, the oversampling ratio is 2.4. but 
now imagine it's *highly* oversampled (which we can get
from polyphase FIR resampling) like R=32 or R=512 or R=512K.
when it's upsampled (like hypothetically stuffing 31 zeros or 511 zeros or 
(512K-1) zeros into the stream and brick-wall low-pass filtering) then the 
spectrum has energy at the baseband (from -Nyquist to +Nyquist of the original
sample rate, Fs) and is empty for 31 (or 511 or (512K-1)) image widths of 
Nyquist, and the first non-zero image is at 32 or 512 or 512K x Fs.
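(a little numpy sketch of those images, for R=8 and an arbitrary bin-aligned 
test tone, if you want to see them before the low-pass filtering:

    import numpy as np

    R, N = 8, 256
    n = np.arange(N)
    x = np.cos(2 * np.pi * (26 / N) * n)    # a tone at the old sample rate
    y = np.zeros(N * R); y[::R] = x         # stuff R-1 zeros between the samples
    f = np.fft.rfftfreq(N * R)              # frequency in cycles/sample at the new rate
    Y = np.abs(np.fft.rfft(y))
    print(np.sort(f[np.argsort(Y)[-8:]]))   # the baseband tone at 26/(N*R) plus images around k/R

the brick-wall LPF then removes everything but the first of those peaks.)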
now if you're drop-sample or ZOH interpolating it's convolving the train of 
weighted impulses with a rect() pulse function and in the frequency domain
you are multiplying by a sinc() function with zeros through every integer times 
R x Fs except for the one at 0 x Fs (the baseband, where the sinc multiplies by 
virtually 1). those reduce your image by a known amount. multiplying the 
magnitude by sinc() is the same as multiplying the power
spectrum by sinc^2().
with linear interpolation, you're convolving with a triangular pulse, 
multiplying the sample values by the triangular pulse function and in the 
frequency domain you're multiplying by a sinc^2() function and in the power 
spectrum you're multiplying by a sinc^4()
function.
now that sinc^2 or sinc^4 functions really puts a hole in t
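
a back-of-the-envelope check of those rolloffs in numpy (a sketch: it evaluates 
the ZOH sinc() and linear-interpolation sinc^2() magnitude at the worst-case 
image, just below R x Fs, and assumes that single image dominates):

    import numpy as np

    R = 512
    f = 1 - 1 / (2 * R)                        # worst-case image, as a fraction of R*Fs
    zoh_db = 20 * np.log10(abs(np.sinc(f)))    # ZOH: sinc() rolloff, about -60 dB
    lin_db = 40 * np.log10(abs(np.sinc(f)))    # linear: sinc^2() rolloff, about -120 dB
    print(zoh_db, lin_db)

the -120 dB agrees with the linear-interpolation figure quoted earlier in the 
thread, and the single-image ZOH estimate of -60 dB lands near the quoted 
~55 dB once the remaining images are summed in.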

Re: [music-dsp] Sampling theory "best" explanation

2017-09-06 Thread Ethan Duni
Okay, no big deal. It's easy to come off the wrong way in complicated, fast
moving email threads.

Ethan D

On Wed, Sep 6, 2017 at 6:37 PM, Nigel Redmon  wrote:

> Ethan, I wasn't taking a swipe at you, by any stretch. In fact, I wasn't
> even addressing your ADC comment. It was actually about things like the
> idea of making DACs with impulses. As I noted, we don't because there are
> ways that are easier and accomplish the same goal, but it is feasible. I've
> had people say to me in the past that it's absurd, and I've assured them that a
> reasonable and practical approximation of it would indeed produce a
> reasonable approximation of a decent DAC. That's a pretty relative
> statement because the quality depends on how hard you want to try, but I
> subsequently saw Julius Smith make the same assertion on his website.
>
> Sorry you misinterpreted it.
>
> On Sep 7, 2017, at 5:34 AM, Ethan Duni  wrote:
>
> Nigel Redmon wrote:
> >As electrical engineers, we find great humor when people say we can't
> do impulses.
>
> I'm the electrical engineer who pointed out that impulses don't exist and
> are not found in actual ADCs. If you have some issue with anything I've
> posted, I'll thank you to address it to me directly and respectfully.
>
> Taking oblique swipes at fellow list members, impugning their standing as
> engineers, etc. is poisonous to the list community.
>
> >What constitutes an impulse depends on the context—nano seconds,
> milliseconds...
>
> If it has non-zero pulse width, it isn't an impulse in the relevant sense:
> multiplying by such a function would not model the sampling process. You
> would need to introduce additional operations to describe how this finite
> region of non-zero signal around each sample time is translated into a
> unique sample value.
>
> >For ADC, we effectively measure an instantaneous voltage and store it as
> an impulse.
>
> I don't know of any ADC design that stores voltages as "impulse" signals,
> even approximately. The measured voltage is represented through modulation
> schemes such as PDM, PWM, PCM, etc.
>
> Impulse trains are a convenient pedagogical model for understanding
> aliasing, reconstruction filters, etc., but there is a considerable gap
> between that model and what actually goes on in a real ADC.
>
> >If you can make a downsampler that has no audible aliasing (and you
> can), I think the process has to be called linear, even if you can make a
> poor quality one that isn't.
>
> I'm not sure how you got onto linearity, but the subject is
> time-invariance.
>
> I have no objection to calling resamplers "approximately time-invariant"
> or "asymptotically time-invariant" or somesuch, in the sense that you can
> get as close to time-invariant behavior as you like by throwing resources
> at the bandlimiting filter. This is qualitatively different from other
> archetypical examples of time-variant systems (modulation, envelopes, etc.)
> where explicitly time-variant behavior is the goal, even in the ideal case.
> Moreover, I agree that this distinction is important and worth
> highlighting.
>
> However, there needs to be *some* qualifier - the bare statement
> "(re)sampling is LTI" is incorrect and misleading. It obscures that fact
> that addressing the aliasing caused by the system's time-variance is the
> principle concern in the design of resamplers. The fact that a given design
> does a good job is great and all - but that only happens because the
> designer recognizes that the system is time-invariant, and dedicates
> resources to mitigating the impact of aliasing.
>
> >If you get too picky and call something non-linear, when for practical
> decision-making purposes it clearly is, it seems you've defeated the purpose.
>
> If you insist on labelling all resamplers as "time-invariant," without any
> further qualification, then it will mess up practical decision making.
> There will be no reason to consider the effects of aliasing - LTI systems
> cannot produce aliasing - when making practical system design decisions.
> You only end up with approximately-LTI behavior because you recognize at
> the outset that the system is *not* LTI, and make appropriate design
> decisions to limit the impact of aliasing. So this is putting the cart
> before the horse.
>
> The appropriate way to deal with this is not to get hung up on the label
> "LTI" (or any specialized variations thereof), but to simply quote the
> actual performance of the system (SNR, spurious-free dynamic range, etc.).
> In that way, everything is clear to the designers and clients: the system
> is fundamentally non-LTI, and deviation from LTI behavior is bounded by the
> performance figures. Then the client can look at that, and make an
> informed, practical decision about whether they need to worry about
> aliasing in their specific context. If not, they are free to say to
> themselves "close enough to LTI for me!" If so, they can dig into the
> non-LTI 

Re: [music-dsp] Sampling theory "best" explanation

2017-09-06 Thread Nigel Redmon
Of course I mean that we store a representation of an impulse. I've said many 
times that the sample values "represent" impulses.

> On Sep 7, 2017, at 5:34 AM, Ethan Duni  wrote:
> 
> >For ADC, we effectively measure an instantaneous voltage and store it as an 
> >impulse.
> 
> I don't know of any ADC design that stores voltages as "impulse" signals, 
> even approximately. The measured voltage is represented through modulation 
> schemes such as PDM, PWM, PCM, etc. 
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Sampling theory "best" explanation

2017-09-06 Thread Ethan Duni
Nigel Redmon wrote:
>As electrical engineers, we find great humor when people say we can't do
impulses.

I'm the electrical engineer who pointed out that impulses don't exist and
are not found in actual ADCs. If you have some issue with anything I've
posted, I'll thank you to address it to me directly and respectfully.

Taking oblique swipes at fellow list members, impugning their standing as
engineers, etc. is poisonous to the list community.

>What constitutes an impulse depends on the context—nano seconds,
milliseconds...

If it has non-zero pulse width, it isn't an impulse in the relevant sense:
multiplying by such a function would not model the sampling process. You
would need to introduce additional operations to describe how this finite
region of non-zero signal around each sample time is translated into a
unique sample value.

>For ADC, we effectively measure an instantaneous voltage and store it as
an impulse.

I don't know of any ADC design that stores voltages as "impulse" signals,
even approximately. The measured voltage is represented through modulation
schemes such as PDM, PWM, PCM, etc.

Impulse trains are a convenient pedagogical model for understanding
aliasing, reconstruction filters, etc., but there is a considerable gap
between that model and what actually goes on in a real ADC.

>If you can make a downsampler that has no audible aliasing (and you can),
I think the process has to be called linear, even if you can make a poor
quality one that isn't.

I'm not sure how you got onto linearity, but the subject is
time-invariance.

I have no objection to calling resamplers "approximately time-invariant" or
"asymptotically time-invariant" or somesuch, in the sense that you can get
as close to time-invariant behavior as you like by throwing resources at
the bandlimiting filter. This is qualitatively different from other
archetypical examples of time-variant systems (modulation, envelopes, etc.)
where explicitly time-variant behavior is the goal, even in the ideal case.
Moreover, I agree that this distinction is important and worth
highlighting.

However, there needs to be *some* qualifier - the bare statement
"(re)sampling is LTI" is incorrect and misleading. It obscures that fact
that addressing the aliasing caused by the system's time-variance is the
principle concern in the design of resamplers. The fact that a given design
does a good job is great and all - but that only happens because the
designer recognizes that the system is time-invariant, and dedicates
resources to mitigating the impact of aliasing.

>If you get too picky and call something non-linear, when for practical
decision-making purposes it clearly is, it seems you've defeated the purpose.

If you insist on labelling all resamplers as "time-invariant," without any
further qualification, then it will mess up practical decision making.
There will be no reason to consider the effects of aliasing - LTI systems
cannot produce aliasing - when making practical system design decisions.
You only end up with approximately-LTI behavior because you recognize at
the outset that the system is *not* LTI, and make appropriate design
decisions to limit the impact of aliasing. So this is putting the cart
before the horse.

The appropriate way to deal with this is not to get hung up on the label
"LTI" (or any specialized variations thereof), but to simply quote the
actual performance of the system (SNR, spurious-free dynamic range, etc.).
In that way, everything is clear to the designers and clients: the system
is fundamentally non-LTI, and deviation from LTI behavior is bounded by the
performance figures. Then the client can look at that, and make an
informed, practical decision about whether they need to worry about
aliasing in their specific context. If not, they are free to say to
themselves "close enough to LTI for me!" If so, they can dig into the
non-LTI behavior and figure out how to deal with it. Insisting that
everyone mislabel time-variant systems as LTI short-circuits that whole
process and so undermines practical decision-making.

Ethan D

On Tue, Sep 5, 2017 at 1:05 AM, Nigel Redmon  wrote:

> As electrical engineers, we find great humor when people say we can't do
> impulses. What constitutes an impulse depends on the context—nano seconds,
> milliseconds...
>
> For ADC, we effectively measure an instantaneous voltage and store it as
> an impulse. Arguing that we don't really do that...well, Amazon didn't
> really ship that Chinese garlic press to me, because they really relayed an
> order to some warehouse, the shipper did some crazy thing like send it in
> the wrong direction to a hub, to be more efficient...and it was on my
> doorstep when I checked the mail. What's the diff...
>
> Well, that's the most important detail (ADC), because that defined what
> we're dealing with when we do "music-dsp". But as far as DAC not using
> impulses, it's only because the shortcut is trivial. Like I said, audio
> 

Re: [music-dsp] Sampling theory "best" explanation

2017-09-06 Thread Nigel Redmon
Ooo, I like that, better than being vague...

I was implying that what constitutes an impulse depends on the context, but I 
like your idea.

Btw, interesting that when the LTI topic with downsampling came up years ago, 
several people shot down the TI part, and this time the discussion has been 
around L.

However, if you take L too literally, even a fixed-point Butterworth lowpass 
fails to be "linear". I think we have to limit ourselves to practicality on a 
mailing list called "music-dsp". If you can make a downsampler that has no 
audible aliasing (and you can), I think the process has to be called linear, 
even if you can make a poor quality one that isn't.

Linear and Time Invariant are classifications, and we use them to help make 
decisions about how we might use a process. No? If you get too picky and call 
something non-linear, when for practical decision-making purposes it clearly 
is, it seems you've defeated the purpose.

> On Sep 5, 2017, at 11:57 PM, robert bristow-johnson 
> <r...@audioimagination.com> wrote:
> 
> 
> 
>  Original Message ------------
> Subject: Re: [music-dsp] Sampling theory "best" explanation
> From: "Nigel Redmon" <earle...@earlevel.com>
> Date: Tue, September 5, 2017 4:05 am
> To: music-dsp@music.columbia.edu
> --
> 
> > As electrical engineers, we find great humor when people say we can't do 
> > impulses. What constitutes an impulse depends on the context—nano seconds, 
> > milliseconds...
>  
> 
> how 'bout a Planck Time.  i will define *my* rbj-dirac-impulse as a nascent 
> impulse that has area of 1 and a width of 1 Planck time.  Is that close 
> enough?  and the math guys cannot deny it's a real "function".
> 
> 
> --
> 
> r b-j  r...@audioimagination.com
> 
> "Imagination is more important than knowledge."
> 
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Sampling theory "best" explanation

2017-09-05 Thread robert bristow-johnson







 Original Message 

Subject: Re: [music-dsp] Sampling theory "best" explanation

From: "Nigel Redmon" <earle...@earlevel.com>

Date: Tue, September 5, 2017 4:05 am

To: music-dsp@music.columbia.edu

--



> As electrical engineers, we find great humor when people say we can't do 
> impulses. What constitutes an impulse depends on the context—nano 
> seconds, milliseconds...

how 'bout a Planck Time. i will define *my* rbj-dirac-impulse as a nascent 
impulse that has area of 1 and a width of 1 Planck time. Is that close enough? 
and the math guys cannot deny it's a real "function".


--
r b-j  r...@audioimagination.com
"Imagination is more important than knowledge."
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Sampling theory "best" explanation

2017-09-05 Thread robert bristow-johnson







 Original Message 

Subject: Re: [music-dsp] Sampling theory "best" explanation

From: "Ethan Duni" <ethan.d...@gmail.com>

Date: Tue, September 5, 2017 1:07 am

To: "A discussion list for music-related DSP" <music-dsp@music.columbia.edu>

--



> rbj wrote:

>

>>1. resampling is LTI **if**, for the TI portion, one appropriately scales

> time.

>

> Have we established that this holds for non-ideal resampling? It doesn't

> seem like it does, in general.
what do you mean by "non-ideal"? that it's not an ideal brick wall LPF? it's 
still LTI if it's some other filter **unless** you're meaning the possible 
aliasing.


> If not, then the phrase "resampling is LTI" - without some kind of "ideal"

> qualifier - seems misleading. If it's LTI then what are all these aliases

> doing in my outputs?

>

>>no one *really* zero-stuffs samples into the stream

>

> Nobody does it *explicitly*
people using an IIR filter for reconstruction might be putting in the zeros 
explicitly.
> but it seems misleading to say we don't

> *really* do it. We employ optimizations to handle this part implicitly, but

> the starting point for that is exactly to *really* stuff zeroes into the

> stream. This is true in the same sense that the FFT *really* computes the

> DFT.

>

> Contrast that with pedagogical abstractions like the impulse train model of

> sampling. Nobody has ever *really* sampled a signal this way, because

> impulses do not exist in reality.
it's the only direct way i can think of to demonstrate that we are discarding 
all of the information between samples, yet keeping the information at the 
sampling instants. it's what dirac impulses are for the "sampling" or "sifting"
property (but the math guys are unhappy if we don't immediately surround that 
with an integral; they don't like naked dirac impulse functions).
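for reference, that sifting property in the integral form the math guys insist 
on (standard notation, in LaTeX):

    \int_{-\infty}^{\infty} x(t)\,\delta(t - nT)\,dt = x(nT),
    \qquad
    x(t)\sum_{n=-\infty}^{\infty}\delta(t - nT)
      = \sum_{n=-\infty}^{\infty} x(nT)\,\delta(t - nT)

the right-hand side keeps exactly the values at the sampling instants and 
nothing in between.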

>

>>7. and i disagree with the statement: "The other big pedagogical problem

> with impulse train representation is that it can't be graphed in a useful

> way." graphing functions is an abstract representation to begin with, so

> we can use these abstract vertical arrows to represent impulses.

>

> That is my statement, so I'll clarify: you can graph an impulse train with

> a particular period. But how do you graph the product of the impulse train

> with a continuous-time function (i.e., the sampling operation)? Draw a

> graph of a generic impulse train, with the scaling of each impulse written

> out next to it? That's not useful.
and that's not how we do it, of course. we draw the little arrows with 
different heights and we draw the impulses scaled with samples of negative 
value as arrows pointing down. just as it might look if you had nascent deltas 
of *fixed* width
in time, and multiplied those times your continuous input signal.
>
>>if linear interpolation is done between the subsamples, i have found that

> upsampling by a factor of 512 followed by linear interpolation >between

> those teeny-little upsampled samples, that this will result in 120 dB S/N

>

> What is the audio use case wherein 512x upsampling is not already

> sufficient time resolution? I'm curious why you'd need additional

> interpolation at that point.

>
asynchronous sample rate conversion. SRC with a conversion ratio of 1.0001 
(this is the ASRC case when connecting two devices each with their own master 
clocks, no one is a slave) or a ratio of 48000/44056.01 or a ratio of pi (dunno 
why anyone would want that). just an arbitrary
SRC ratio that is not k/512 where k is some integer. if k *is* known to be an 
integer, you would not need to compute *two* polyphase output subsamples and 
linearly interpolate.
oh, and also for an arbitrary precision delay that is not expressed as k/512 
samples delay where k is some
integer.
see, the deal is that any of the polynomial interpolations; Lagrange, Hermite, 
B-spline (all of which include linear interpolation as their 1st-order case) 
has infinite resolution, but the polyphase FIR lookup table does not. but you 
can combine the two to get an
optimally-designed brickwall LPF (using Parks-McClellan or using firls()) *and* 
get arbitrarily fine resolution.
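a sketch of that combination in numpy/scipy (assumptions: a windowed-sinc 
prototype from firwin stands in for the Parks-McClellan or firls() design, 
R=512 phases with 32 taps each, and the filter's group delay is ignored):

    import numpy as np
    from scipy import signal

    R, taps = 512, 32
    h = signal.firwin(R * taps, 0.9 / R) * R   # prototype LPF (firls/Parks-McClellan in practice)
    phases = h.reshape(taps, R).T              # phases[p][k] = h[k*R + p]

    def interp_at(x, t):
        # bandlimited interpolant of x at fractional index t
        n = int(np.floor(t)); frac = t - n
        p = frac * R
        p0 = int(p); a = p - p0                # position between two adjacent phases
        seg = x[n - taps + 1 : n + 1][::-1]    # newest-first history: seg[k] = x[n-k]
        y0 = seg @ phases[p0]
        if p0 + 1 < R:
            y1 = seg @ phases[p0 + 1]
        else:                                  # the next phase wraps onto the next input sample
            y1 = x[n - taps + 2 : n + 2][::-1] @ phases[0]
        return (1 - a) * y0 + a * y1           # the linear interpolation between subsamples

for an arbitrary conversion ratio you just step t by Fs_in/Fs_out per output 
sample.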
you could do SRC without linear interpolation (ZOH a.k.a. "drop-sample") but 
you would need a much larger table (if i recall correctly, 1024 times larger, 
so it would be
512Kx oversampling) to get the same S/N. if you use 512x oversampling and ZOH 
interpolation, you'll only get about 55 dB S/N for an arbitrary conversion 
ratio. and you can do it with a *smaller* upsample ratio than 512 if you use 
3rd o

Re: [music-dsp] Sampling theory "best" explanation

2017-09-05 Thread Ethan Fenn
>
> If not, then the phrase "resampling is LTI" - without some kind of "ideal"
> qualifier - seems misleading. If it's LTI then what are all these aliases
> doing in my outputs?
>
Yeah, I think you had it right when you pointed out that the existence of
aliasing shows that resampling is not LTI if it's not ideal. Resampling is
only approximately LTI in practice.
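
That's easy to see numerically. A sketch of Ethan D's counterexample (the 
short firwin filter is an arbitrary stand-in for a realizable anti-alias LPF):

    import numpy as np
    from scipy import signal

    n = np.arange(4096)
    f0 = 0.26                                # just above the output Nyquist of 0.25
    h = signal.firwin(31, 0.5)               # short LPF, cutoff 0.25 cycles/sample
    down2 = lambda u: signal.lfilter(h, 1, u)[::2]

    y0 = down2(np.cos(2 * np.pi * f0 * n))
    y1 = down2(np.cos(2 * np.pi * f0 * (n + 1)))   # input advanced by one sample

    # a (rescaled-)time-invariant system would give y1 = y0 advanced by half an
    # output sample; approximate that ideal half-sample shift with a windowed sinc:
    k = np.arange(-32, 33)
    g = np.sinc(k + 0.5) * np.hamming(65)
    y0_half = np.convolve(y0, g, mode='same')

    print(np.max(np.abs((y1 - y0_half)[100:-100])))  # nowhere near zero

Whatever leaks through the filter aliases instead of shifting; push f0 toward 
0.5 and the leaked tone comes back negated rather than half-sample shifted.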

Nobody does it *explicitly* but it seems misleading to say we don't
> *really* do it. We employ optimizations to handle this part implicitly, but
> the starting point for that is exactly to *really* stuff zeroes into the
> stream. This is true in the same sense that the FFT *really* computes the
> DFT.
>
That's one starting point. My preferred starting point, and ending point,
is that resampling is bandlimited interpolation. No mention or
justification of zero stuffing is necessary -- not to imply it isn't a
tremendously useful way to think about it, especially when trying to
quantify the error. But I like the interpolation point of view because it
immediately and obviously generalizes to any sampling ratio.

(And of course when downsampling it's not just bandlimited interpolation.
We first have to lowpass to remove frequencies that won't be baseband at
the new sample rate.)

-Ethan F



On Tue, Sep 5, 2017 at 1:07 AM, Ethan Duni  wrote:

> rbj wrote:
>
> >1. resampling is LTI **if**, for the TI portion, one appropriately scales
> time.
>
> Have we established that this holds for non-ideal resampling? It doesn't
> seem like it does, in general.
>
> If not, then the phrase "resampling is LTI" - without some kind of "ideal"
> qualifier - seems misleading. If it's LTI then what are all these aliases
> doing in my outputs?
>
> >no one *really* zero-stuffs samples into the stream
>
> Nobody does it *explicitly* but it seems misleading to say we don't
> *really* do it. We employ optimizations to handle this part implicitly, but
> the starting point for that is exactly to *really* stuff zeroes into the
> stream. This is true in the same sense that the FFT *really* computes the
> DFT.
>
> Contrast that with pedagogical abstractions like the impulse train model
> of sampling. Nobody has ever *really* sampled a signal this way, because
> impulses do not exist in reality.
>
> >7. and i disagree with the statement: "The other big pedagogical problem
> with impulse train representation is that it can't be graphed in a useful
> way."  graphing functions is an abstract representation to begin with, so
> we can use these abstract vertical arrows to represent impulses.
>
> That is my statement, so I'll clarify: you can graph an impulse train with
> a particular period. But how do you graph the product of the impulse train
> with a continuous-time function (i.e., the sampling operation)? Draw a
> graph of a generic impulse train, with the scaling of each impulse written
> out next to it? That's not useful. That's just a generic impulse train
> graph and a print-out of the sequence values. The useful graph here is of
> the sample sequence itself.
>
> >if linear interpolation is done between the subsamples, i have found that
> upsampling by a factor of 512 followed by linear interpolation >between
> those teeny-little upsampled samples, that this will result in 120 dB S/N
>
> What is the audio use case wherein 512x upsampling is not already
> sufficient time resolution? I'm curious why you'd need additional
> interpolation at that point.
>
> Ethan D
>
> On Mon, Sep 4, 2017 at 1:49 PM, Nigel Redmon 
> wrote:
>
>> The fact that 5,17,-12,2 at sample rate 1X and
>>> 5,0,0,0,17,0,0,0,-12,0,0,0,2,0,0,0 at sample rate 4X are identical is
>>> obvious only for samples representing impulses.
>>
>>
>> I agree that the zero-stuff-then-lowpass technique is much more obvious
>> when we you consider the impulse train corresponding to the signal. But I
>> find it peculiar to assert that these two sequences are "identical." If
>> they're identical in any meaningful sense, why don't we just stop there and
>> call it a resampler? The reason is that what we actually care about in the
>> end is what the corresponding bandlimited functions look like, and
>> zero-stuffing is far from being an identity operation in this domain. We're
>> instead done constructing a resampler when we end up with an operation that
>> preserves the bandlimited function -- or preserves as much of it as
>> possible in the case of downsampling.
>>
>>
>> Well, when I say they are identical, the spectrum is identical. In other
>> words, they represent the same signal. The fact that it doesn’t make it
>> a resampler is a different thing—an additional constraint. We only have
>> changed the data rate (not the signal) when we insert zeros. Most of the
>> time, we want to also change the signal (by getting rid of the aliases,
>> that were above half the sample rate and now below). That’s why my article
>> made a big deal  (point #3) of pointing out that the digital 

Re: [music-dsp] Sampling theory "best" explanation

2017-09-05 Thread Nigel Redmon
As electrical engineers, we find great humor when people say we can't do 
impulses. What constitutes an impulse depends on the context—nano seconds, 
milliseconds...

For ADC, we effectively measure an instantaneous voltage and store it as an 
impulse. Arguing that we don't really do that...well, Amazon didn't really ship 
that Chinese garlic press to me, because they really relayed an order to some 
warehouse, the shipper did some crazy thing like send it in the wrong direction 
to a hub, to be more efficient...and it was on my doorstep when I checked the 
mail. What's the diff...

Well, that's the most important detail (ADC), because that defined what we're 
dealing with when we do "music-dsp". But as far as DAC not using impulses, it's 
only because the shortcut is trivial. Like I said, audio sample rates are slow, 
not that hard to do a good enough job for demonstration with "close enough" 
impulses.

Don't anyone get mad at me, please. Just sitting on a plane at LAX at 1AM, 
waiting to fly 14 hours...on the first leg...amusing myself before going 
offline for a while

;-)


> On Sep 4, 2017, at 10:07 PM, Ethan Duni  wrote:
> 
> rbj wrote:
> >1. resampling is LTI **if**, for the TI portion, one appropriately scales 
> >time. 
> 
> Have we established that this holds for non-ideal resampling? It doesn't seem 
> like it does, in general. 
> 
> If not, then the phrase "resampling is LTI" - without some kind of "ideal" 
> qualifier - seems misleading. If it's LTI then what are all these aliases 
> doing in my outputs?
> 
> >no one *really* zero-stuffs samples into the stream
> 
> Nobody does it *explicitly* but it seems misleading to say we don't *really* 
> do it. We employ optimizations to handle this part implicitly, but the 
> starting point for that is exactly to *really* stuff zeroes into the stream. 
> This is true in the same sense that the FFT *really* computes the DFT. 
> 
> Contrast that with pedagogical abstractions like the impulse train model of 
> sampling. Nobody has ever *really* sampled a signal this way, because 
> impulses do not exist in reality. 
> 
> >7. and i disagree with the statement: "The other big pedagogical problem 
> >with impulse train representation is that it can't be graphed in a useful 
> >way."  graphing functions is an abstract representation to begin with, so we 
> >can use these abstract vertical arrows to represent impulses.
> 
> That is my statement, so I'll clarify: you can graph an impulse train with a 
> particular period. But how do you graph the product of the impulse train with 
> a continuous-time function (i.e., the sampling operation)? Draw a graph of a 
> generic impulse train, with the scaling of each impulse written out next to 
> it? That's not useful. That's just a generic impulse train graph and a 
> print-out of the sequence values. The useful graph here is of the sample 
> sequence itself.
> 
> >if linear interpolation is done between the subsamples, i have found that 
> >upsampling by a factor of 512 followed by linear interpolation >between 
> >those teeny-little upsampled samples, that this will result in 120 dB S/N
> 
> What is the audio use case wherein 512x upsampling is not already sufficient 
> time resolution? I'm curious why you'd need additional interpolation at that 
> point. 
> 
> Ethan D
> 
> 
> On Mon, Sep 4, 2017 at 1:49 PM, Nigel Redmon  wrote:
 The fact that 5,17,-12,2 at sample rate 1X and 
 5,0,0,0,17,0,0,0,-12,0,0,0,2,0,0,0 at sample rate 4X are identical is 
 obvious only for samples representing impulses.
>>> 
>>> 
>>> I agree that the zero-stuff-then-lowpass technique is much more obvious 
>>> when you consider the impulse train corresponding to the signal. But I 
>>> find it peculiar to assert that these two sequences are "identical." If 
>>> they're identical in any meaningful sense, why don't we just stop there and 
>>> call it a resampler? The reason is that what we actually care about in the 
>>> end is what the corresponding bandlimited functions look like, and 
>>> zero-stuffing is far from being an identity operation in this domain. We're 
>>> instead done constructing a resampler when we end up with an operation that 
>>> preserves the bandlimited function -- or preserves as much of it as 
>>> possible in the case of downsampling.
>> 
>> 
>> Well, when I say they are identical, the spectrum is identical. In other 
>> words, they represent the same signal. The fact that it doesn’t make it a 
>> resampler is a different thing—an additional constraint. We only have 
>> changed the data rate (not the signal) when we insert zeros. Most of the 
>> time, we want to also change the signal (by getting rid of the aliases, that 
>> were above half the sample rate and now below). That’s why my article made a 
>> big deal  (point #3) of pointing out that the digital samples represent not 
>> the original analog signal, but a modulated version of it.
>> 
>> Of 

Re: [music-dsp] Sampling theory "best" explanation

2017-09-04 Thread Ethan Duni
rbj wrote:

>1. resampling is LTI **if**, for the TI portion, one appropriately scales
time.

Have we established that this holds for non-ideal resampling? It doesn't
seem like it does, in general.

If not, then the phrase "resampling is LTI" - without some kind of "ideal"
qualifier - seems misleading. If it's LTI then what are all these aliases
doing in my outputs?

>no one *really* zero-stuffs samples into the stream

Nobody does it *explicitly* but it seems misleading to say we don't
*really* do it. We employ optimizations to handle this part implicitly, but
the starting point for that is exactly to *really* stuff zeroes into the
stream. This is true in the same sense that the FFT *really* computes the
DFT.

Contrast that with pedagogical abstractions like the impulse train model of
sampling. Nobody has ever *really* sampled a signal this way, because
impulses do not exist in reality.

>7. and i disagree with the statement: "The other big pedagogical problem
with impulse train representation is that it can't be graphed in a useful
way."  graphing functions is an abstract representation to begin with, so
we can use these abstract vertical arrows to represent impulses.

That is my statement, so I'll clarify: you can graph an impulse train with
a particular period. But how do you graph the product of the impulse train
with a continuous-time function (i.e., the sampling operation)? Draw a
graph of a generic impulse train, with the scaling of each impulse written
out next to it? That's not useful. That's just a generic impulse train
graph and a print-out of the sequence values. The useful graph here is of
the sample sequence itself.

>if linear interpolation is done between the subsamples, i have found that
upsampling by a factor of 512 followed by linear interpolation >between
those teeny-little upsampled samples, that this will result in 120 dB S/N

What is the audio use case wherein 512x upsampling is not already
sufficient time resolution? I'm curious why you'd need additional
interpolation at that point.

Ethan D

On Mon, Sep 4, 2017 at 1:49 PM, Nigel Redmon  wrote:

> The fact that 5,17,-12,2 at sample rate 1X and
>> 5,0,0,0,17,0,0,0,-12,0,0,0,2,0,0,0 at sample rate 4X are identical is
>> obvious only for samples representing impulses.
>
>
> I agree that the zero-stuff-then-lowpass technique is much more obvious
> when you consider the impulse train corresponding to the signal. But I
> find it peculiar to assert that these two sequences are "identical." If
> they're identical in any meaningful sense, why don't we just stop there and
> call it a resampler? The reason is that what we actually care about in the
> end is what the corresponding bandlimited functions look like, and
> zero-stuffing is far from being an identity operation in this domain. We're
> instead done constructing a resampler when we end up with an operation that
> preserves the bandlimited function -- or preserves as much of it as
> possible in the case of downsampling.
>
>
> Well, when I say they are identical, the spectrum is identical. In other
> words, they represent the same signal. The fact that it doesn’t make it
> a resampler is a different thing—an additional constraint. We only have
> changed the data rate (not the signal) when we insert zeros. Most of the
> time, we want to also change the signal (by getting rid of the aliases,
> that were above half the sample rate and now below). That’s why my article
> made a big deal  (point #3) of pointing out that the digital samples
> represent not the original analog signal, but a modulated version of it.
>
> Of course, we differ only in semantics, just making mine clear. When I say
> they represent the same signal, I don’t just mean the portion of the
> spectrum in the audio band or below half the sample rate—I mean the whole
> thing.
>
>
> On Sep 4, 2017, at 12:14 PM, Ethan Fenn  wrote:
>
> First, I want to be clear that I don’t think people are crippled by
>> certain viewpoint—I've said this elsewhere before, maybe not in this thread
>> or the article so much.
>
>
> In that case I'd suggest some more editing is in order, since the article
> stated this pretty overtly at least a couple times.
>
> It’s more than some things that come up as questions become trivially
>> obvious when you understand that samples represent impulses (this is not so
>> much a viewpoint as the basis of sampling).
>
>
>  Here's the way I see it. There are three classes of interesting objects
> here:
>
> 1) Discrete time signals, which are sequences of numbers.
> 2) Scaled, equally-spaced ideal impulse trains, which are a sort of
> generalized function of a real number.
> 3) Appropriately bandlimited functions of a real number.
>
> None of these are exactly identical, as sequences of numbers are not the
> same sort of beast as functions of a real number. But obviously there is a
> one-to-one correspondence between objects in classes 1 and 2. Less
> 

Re: [music-dsp] Sampling theory "best" explanation

2017-09-04 Thread Nigel Redmon
> The fact that 5,17,-12,2 at sample rate 1X and 
> 5,0,0,0,17,0,0,0,-12,0,0,0,2,0,0,0 at sample rate 4X are identical is obvious 
> only for samples representing impulses.
> 
> I agree that the zero-stuff-then-lowpass technique is much more obvious when 
> you consider the impulse train corresponding to the signal. But I find it 
> peculiar to assert that these two sequences are "identical." If they're 
> identical in any meaningful sense, why don't we just stop there and call it a 
> resampler? The reason is that what we actually care about in the end is what 
> the corresponding bandlimited functions look like, and zero-stuffing is far 
> from being an identity operation in this domain. We're instead done 
> constructing a resampler when we end up with an operation that preserves the 
> bandlimited function -- or preserves as much of it as possible in the case of 
> downsampling.

Well, when I say they are identical, the spectrum is identical. In other words, 
they represent the same signal. The fact that it doesn’t make it a resampler is 
a different thing—an additional constraint. We only have changed the data rate 
(not the signal) when we insert zeros. Most of the time, we want to also change 
the signal (by getting rid of the aliases, that were above half the sample rate 
and now below). That’s why my article made a big deal  (point #3) of pointing 
out that the digital samples represent not the original analog signal, but a 
modulated version of it.

Of course, we differ only in semantics, just making mine clear. When I say they 
represent the same signal, I don’t just mean the portion of the spectrum in the 
audio band or below half the sample rate—I mean the whole thing.
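
(A quick check of "the spectrum is identical", as a numpy sketch: the DFT of 
the zero-stuffed sequence is exactly the original 4-point spectrum repeated 
four times across the new, wider frequency range.)

    import numpy as np

    x = np.array([5., 17., -12., 2.])
    y = np.zeros(16); y[::4] = x        # 5,0,0,0,17,0,0,0,-12,0,0,0,2,0,0,0
    print(np.allclose(np.fft.fft(y), np.tile(np.fft.fft(x), 4)))   # True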


> On Sep 4, 2017, at 12:14 PM, Ethan Fenn  wrote:
> 
> First, I want to be clear that I don’t think people are crippled by certain 
> viewpoint—I’ve said this elsewhere before, maybe not it this thread or the 
> article so much.
> 
> In that case I'd suggest some more editing is in order, since the article 
> stated this pretty overtly at least a couple times.
> 
> It’s more than some things that come up as questions become trivially obvious 
> when you understand that samples represent impulses (this is not so much a 
> viewpoint as the basis of sampling).
>  
>  Here's the way I see it. There are three classes of interesting objects here:
> 
> 1) Discrete time signals, which are sequences of numbers.
> 2) Scaled, equally-spaced ideal impulse trains, which are a sort of 
> generalized function of a real number.
> 3) Appropriately bandlimited functions of a real number.
> 
> None of these are exactly identical, as sequences of numbers are not the same 
> sort of beast as functions of a real number. But obviously there is a 
> one-to-one correspondence between objects in classes 1 and 2. Less obviously 
> -- but more interestingly and importantly! -- there is a one-to-one 
> correspondence between objects in classes 1 and 3. So any operation on any of 
> these three classes will have a corresponding operation in the other two.
> 
> This is what the math tells us. It does not tell us that any of these classes 
> are identical to each other or that thinking of one correspondence is more 
> correct than the other.
> 
> The fact that 5,17,-12,2 at sample rate 1X and 
> 5,0,0,0,17,0,0,0,-12,0,0,0,2,0,0,0 at sample rate 4X are identical is obvious 
> only for samples representing impulses.
> 
> I agree that the zero-stuff-then-lowpass technique is much more obvious when 
> you consider the impulse train corresponding to the signal. But I find it 
> peculiar to assert that these two sequences are "identical." If they're 
> identical in any meaningful sense, why don't we just stop there and call it a 
> resampler? The reason is that what we actually care about in the end is what 
> the corresponding bandlimited functions look like, and zero-stuffing is far 
> from being an identity operation in this domain. We're instead done 
> constructing a resampler when we end up with an operation that preserves the 
> bandlimited function -- or preserves as much of it as possible in the case of 
> downsampling.
> 
> This is why it is more natural for me to think of the discrete signal and the 
> bandlimited function as being more closely identified. The impulse train is a 
> related mathematical entity which is useful to pull out of the toolbox on 
> some occasions.
> 
> I'm not really interested in arguing that the way I think about things is 
> superior -- as I've stated above I think the math is neutral on this point, 
> and what mental model works best is different from person to person. It can 
> be a bit like arguing what shoe size is best. But I do think it's 
> counterproductive to discourage people from thinking about the discrete 
> signal <-> bandlimited function correspondence. I think real insight and 
> intuition in DSP is built up by comparing what basic operations look like in 
> each of these 

Re: [music-dsp] Sampling theory "best" explanation

2017-09-04 Thread Nigel Redmon
OTOH, just about everything we do with digital audio doesn’t exactly work. 
Start with sampling. Do we give up if we can’t ensure absolutely no signal at 
and above half the sample rate? Fortunately, our ears have limitations (whew!). 
;-) Anyway, the aliasing occurred to me as I wrote that, but it’s always about 
the ideal, the objective, and what we can achieve practically. And we can get 
awfully good at the rate conversion, good enough that people can't hear the 
difference in ABX under ideal conditions. At the other end, the Ensoniq Mirage, 
for goodness sake. But it still managed to make it on albums people bought. We 
have some wiggle room. :-D

> On Sep 3, 2017, at 10:00 PM, Ethan Duni  wrote:
> 
> Hmm, this makes quite a few discussions of LTI with respect to resampling that 
> have gone badly on the list over the years...
> 
> Time variance is a bit subtle in the multi-rate context. For integer 
> downsampling, as you point out, it might make more sense to replace the 
> classic n-shift-in/n-shift-out definition of time invariance with one that 
> works in terms of the common real time represented by the different sampling 
> rates. So an integer shift into a 2x downsampler should be a half-sample 
> shift in the output. In ideal terms (brickwall filters/sinc functions) this 
> all clearly works out. 
> 
> On the other hand, I hesitate to say "resampling is LTI" because that seems 
> to imply that resampling doesn't produce aliasing. And of course aliasing is 
> a central concern in the design of resamplers. So I can see how this rubs 
> people the wrong way. 
> 
> It's not clear to me whether a realizable downsampler (i.e., with non-zero 
> aliasing) passes the "real time" definition of LTI.
> 
> I think the thing to say about integer downsampling with respect to time 
> variance is that it partitions the space of input shifts, where if you 
> restrict yourself to shifts from a given partition you will see time 
> invariance (in a certain sense). 
> 
> More generally, resampling is kind of an edge case with respect to time 
> invariance, in the sense that resamplers are time-variant systems that are 
> trying as hard as they can to act like time invariant systems. As opposed to, 
> say, modulators or envelopes or such.
> 
> Ethan D
> 
> 
> On Fri, Sep 1, 2017 at 10:09 PM, Nigel Redmon  wrote:
> Interesting comments, Ethan.
> 
> Somewhat related to your points, I also had a situation on this board years 
> ago where I said that sample rate conversion was LTI. It was a specific 
> context, regarding downsampling, so a number of people, one by one, basically 
> quoted back the reason I was wrong. That is, basically that for downsampling 
> 2:1, you’d get a different result depending on which set of points you 
> discard (decimation), and that alone meant it isn’t LTI. Of course, the fact 
> that the sample values are different doesn’t mean what they represent is 
> different—one is just a half-sample delay of the other. I was surprised a bit 
> that they accepted so easily that SRC couldn’t be used in a system that 
> required LTI, just because it seemed to violate the definition of LTI they 
> were taught.
> 
>> On Sep 1, 2017, at 3:46 PM, Ethan Duni wrote:
>> 
>> Ethan F wrote:
>> >I see your nitpick and raise you. :o) Surely there are uncountably many 
>> >such functions, 
>> >as the power at any apparent frequency can be distributed arbitrarily among 
>> >the bands.
>> 
>> Ah, good point. Uncountable it is! 
>> 
>> Nigel R wrote:
>> >But I think there are good reasons to understand the fact that samples 
>> >represent a 
>> >modulated impulse train.
>> 
>> I entirely agree, and this is exactly how sampling was introduced to me back 
>> in college (we used Oppenheim and Willsky's book "Signals and Systems"). 
>> I've always considered it the canonical EE approach to the subject, and am 
>> surprised to learn that anyone thinks otherwise. 
>> 
>> Nigel R wrote:
>> >That sounds like a dumb observation, but I once had an argument on this 
>> >board: 
>> >After I explained why we stuff zeros in integer SRC, a guy said my 
>> >explanation was BS.
>> 
>> I dunno, this can work the other way as well. There was a guy a while back 
>> who was arguing that the zero-stuffing used in integer upsampling is 
>> actually not a time-variant operation, on the basis that the zeros "are 
>> already there" in the impulse train representation (so it's a "null 
>> operation" basically). He could not explain how this putatively-LTI system 
>> was introducing aliasing into the output. Or was this the same guy?
>> 
>> So that's one drawback to the impulse train representation - you need the 
>> sample rate metadata to do *any* meaningful processing on such a signal. 
>> Otherwise you don't know which locations are "real" zeros and which are just 
>> "filler." Of course knowledge of sample 

Re: [music-dsp] Sampling theory "best" explanation

2017-09-04 Thread robert bristow-johnson
impulses are idealizations of physical impulses that are very thin, finite 
pulses.
8. lastly, i would only approach the very real problem of bandlimited 
interpolation, whether it's for sample-rate-conversion or for precision-delay 
(two different applications), as a practical application of the Sampling and 
Reconstruction Theorem.  that is the way you can quantify the strength of the 
images and get a handle on the processing error resulting from the potential 
foldback of the images, and get a S/N ratio for the operation.  Duane Wise and 
i did a paper trying to demonstrate this thinking in the 1990s.  you can get it 
from my researchgate.

if linear interpolation is done between the subsamples, i have found that 
upsampling by a factor of 512 (and, again, one need not insert 511 zeros to do 
this) followed by linear interpolation between those teeny-little upsampled 
samples will result in 120 dB S/N.  if a 32-tap FIR is used (that's half the 
number of taps an ADI ASRC chip uses), that means (taking advantage of 
symmetry) 8K coefficients needed in a table, 64 MAC instructions, and one 
linear interpolation per output sample.  doesn't matter what the sample-rate 
conversion ratio is (as long as we don't worry about aliasing in downsampling).
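
To make that concrete, here is a minimal numpy sketch of the kind of resampler 
described above: a 32-tap windowed-sinc kernel tabulated at 512 phases, with 
linear interpolation between adjacent phases. The names and the Blackman window 
are illustrative choices, not rbj's or the ADI chip's actual design; the kernel 
cuts off at the input Nyquist, so, as noted, it ignores aliasing when 
downsampling.

import numpy as np

TAPS, PHASES = 32, 512                      # 32-tap FIR, 512x-oversampled kernel table
M = TAPS * PHASES
u = np.arange(M + 1) / PHASES - TAPS / 2    # kernel argument, spanning -16 .. +16
h = np.sinc(u) * np.blackman(M + 1)         # windowed sinc, cutoff at input Nyquist
                                            # (16K+1 entries; ~8K if symmetry is exploited)

def resample_at(x, t):
    """One output sample of the signal behind x at fractional time t (in input samples)."""
    i = int(np.floor(t))                    # integer part: which input samples to use
    frac = (t - i) * PHASES                 # fractional part, in units of 1/512 sample
    p = int(frac)
    a = frac - p                            # linear-interpolation weight between phases
    y = 0.0
    for k in range(TAPS):                   # 2 * TAPS = 64 MACs per output sample
        j = (TAPS - 1 - k) * PHASES + p
        w = (1.0 - a) * h[j] + a * h[j + 1] # interpolated kernel value
        y += w * x[i - TAPS // 2 + 1 + k]
    return y

Any conversion ratio, rational or not, just changes the sequence of t values 
requested; the table and the inner loop never change.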

bestest,
r b-j
 Original Message 
Subject: Re: [music-dsp] Sampling theory "best" explanation
From: "Ethan Fenn" <et...@polyspectral.com>

Date: Mon, September 4, 2017 3:14 pm

To: music-dsp@music.columbia.edu

--



>> First, I want to be clear that I don’t think people are crippled by
>> certain viewpoint—I’ve said this elsewhere before, maybe not in this thread
>> or the article so much.
>
> In that case I'd suggest some more editing is in order, since the article
> stated this pretty overtly at least a couple times.
>
>> It’s more that some things that come up as questions become trivially
>> obvious when you understand that samples represent impulses (this is not so
>> much a viewpoint as the basis of sampling).
>
> Here's the way I see it. There are three classes of interesting objects
> here:
>
> 1) Discrete time signals, which are sequences of numbers.
> 2) Scaled, equally-spaced ideal impulse trains, which are a sort of
> generalized function of a real number.
> 3) Appropriately bandlimited functions of a real number.
>
> None of these are exactly identical, as sequences of numbers are not the
> same sort of beast as functions of a real number. But obviously there is a
> one-to-one correspondence between objects in classes 1 and 2. Less
> obviously -- but more interestingly and importantly! -- there is a
> one-to-one correspondence between objects in classes 1 and 3. So any
> operation on any of these three classes will have a corresponding operation
> in the other two.
>
> This is what the math tells us. It does not tell us that any of these
> classes are identical to each other or that thinking of one correspondence
> is more correct than the other.
>
>> The fact that 5,17,-12,2 at sample rate 1X and
>> 5,0,0,0,17,0,0,0,-12,0,0,0,2,0,0,0
>> at sample rate 4X are identical is obvious only for samples representing
>> impulses.
>
> I agree that the zero-stuff-then-lowpass technique is much more obvious
> when you consider the impulse train corresponding to the signal. But I
> find it peculiar to assert that these two sequences are "identical." If
> they're identical in any meaningful sense, why don't we just stop there and
> call it a resampler? The reason is that what we actually care about in the
> end is what the corresponding bandlimited functions look like, and
> zero-stuffing is far from being an identity operation in this domain. We're
> instead done constructing a resampler when we end up with an operation that
> preserves the bandlimited function -- or preserves as much of it as
> possible in the case of downsampling.
>
> This is why it is more natural for me to think of the discrete signal and
> the bandlimited function as being more closely identified. The impulse
> train is a related mathematical entity which is useful to pull out of the
> toolbox on some occasions.
>
> I'm not really interested in arguing that the way I think about things is
> superior -- as I've stated above I think the math is neutral on this point,
> and what mental model works best is different from person to person. It can
> be a bit like arguing what shoe size is best. But I do think it's
> counterproductive to discourage people from thinking about the discrete
> signal <-> bandlimited function correspondence. I think real insight and
> intuition in DSP is built up by comparing what basic operations look like
> in each of these different universes (as well as in their frequency domain
> equivalents).

Re: [music-dsp] Sampling theory "best" explanation

2017-09-04 Thread Ethan Fenn
>
> First, I want to be clear that I don’t think people are crippled by
> certain viewpoint—I’ve said this elsewhere before, maybe not in this thread
> or the article so much.


In that case I'd suggest some more editing is in order, since the article
stated this pretty overtly at least a couple times.

> It’s more that some things that come up as questions become trivially
> obvious when you understand that samples represent impulses (this is not so
> much a viewpoint as the basis of sampling).


 Here's the way I see it. There are three classes of interesting objects
here:

1) Discrete time signals, which are sequences of numbers.
2) Scaled, equally-spaced ideal impulse trains, which are a sort of
generalized function of a real number.
3) Appropriately bandlimited functions of a real number.

None of these are exactly identical, as sequences of numbers are not the
same sort of beast as functions of a real number. But obviously there is a
one-to-one correspondence between objects in classes 1 and 2. Less
obviously -- but more interestingly and importantly! -- there is a
one-to-one correspondence between objects in classes 1 and 3. So any
operation on any of these three classes will have a corresponding operation
in the other two.

This is what the math tells us. It does not tell us that any of these
classes are identical to each other or that thinking of one correspondence
is more correct than the other.

> The fact that 5,17,-12,2 at sample rate 1X and
> 5,0,0,0,17,0,0,0,-12,0,0,0,2,0,0,0
> at sample rate 4X are identical is obvious only for samples representing
> impulses.


I agree that the zero-stuff-then-lowpass technique is much more obvious
when you consider the impulse train corresponding to the signal. But I
find it peculiar to assert that these two sequences are "identical." If
they're identical in any meaningful sense, why don't we just stop there and
call it a resampler? The reason is that what we actually care about in the
end is what the corresponding bandlimited functions look like, and
zero-stuffing is far from being an identity operation in this domain. We're
instead done constructing a resampler when we end up with an operation that
preserves the bandlimited function -- or preserves as much of it as
possible in the case of downsampling.
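
A toy numpy check of this point (my sketch, under the ideal assumptions in 
play here): zero-stuffing alone just relabels the impulse train, and only 
after the lowpass do we get samples of the same bandlimited function, with the 
original values landing back exactly on the coarse grid.

import numpy as np

x = np.array([5.0, 17.0, -12.0, 2.0] * 8)   # test sequence at rate 1x
L = 4

up = np.zeros(len(x) * L)
up[::L] = x                                 # zero-stuff: same impulse train, new rate

n = np.arange(-64, 65)
h = np.sinc(n / L) * np.hamming(len(n))     # interpolation lowpass, cutoff at old Nyquist

y = np.convolve(up, h)[64:64 + len(up)]     # the rate-4x signal
print(np.max(np.abs(y[::L] - x)))           # ~1e-14: original samples preserved

Between those grid points, y carries interpolated values of the bandlimited 
function; before the filter, up does not.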

This is why it is more natural for me to think of the discrete signal and
the bandlimited function as being more closely identified. The impulse
train is a related mathematical entity which is useful to pull out of the
toolbox on some occasions.

I'm not really interested in arguing that the way I think about things is
superior -- as I've stated above I think the math is neutral on this point,
and what mental model works best is different from person to person. It can
be a bit like arguing what shoe size is best. But I do think it's
counterproductive to discourage people from thinking about the discrete
signal <-> bandlimited function correspondence. I think real insight and
intuition in DSP is built up by comparing what basic operations look like
in each of these different universes (as well as in their frequency domain
equivalents).

-Ethan



On Mon, Sep 4, 2017 at 2:14 PM, Ethan Fenn  wrote:

>> Time variance is a bit subtle in the multi-rate context. For integer
>> downsampling, as you point out, it might make more sense to replace the
>> classic n-shift-in/n-shift-out definition of time invariance with one that
>> works in terms of the common real time represented by the different
>> sampling rates. So an integer shift into a 2x downsampler should be a
>> half-sample shift in the output. In ideal terms (brickwall filters/sinc
>> functions) this all clearly works out.
>
>
>> I think the thing to say about integer downsampling with respect to time
>> variance is that it partitions the space of input shifts, where if
>> you restrict yourself to shifts from a given partition you will see time
>> invariance (in a certain sense).
>
>
> So this to me is a good example of how thinking of discrete time signals
> as representing bandlimited functions is useful. Because if we're thinking
> of things this way, we can simply define an operation in the space of
> discrete signals as being LTI iff the corresponding operation in the space
> of bandlimited functions is LTI. This generalizes the usual definition, and
> your partitioned-shift concept, in exactly the way we want, and we find
> that ideal resamplers (of any ratio, integer/rational/irrational) are in
> fact LTI as our intuition suggests they should be.
>
> -Ethan F
>
>
>
> On Mon, Sep 4, 2017 at 1:00 AM, Ethan Duni  wrote:
>
>> Hmm, there have been quite a few discussions of LTI with respect to resampling
>> that have gone badly on the list over the years...
>>
>> Time variance is a bit subtle in the multi-rate context. For integer
>> downsampling, as you point out, it might make more sense to replace the
>> classic n-shift-in/n-shift-out definition of time invariance with one that
>> works in terms of the common real time represented by the different sampling rates.

Re: [music-dsp] Sampling theory "best" explanation

2017-09-04 Thread Ethan Fenn
>
> Time variance is a bit subtle in the multi-rate context. For integer
> downsampling, as you point out, it might make more sense to replace the
> classic n-shift-in/n-shift-out definition of time invariance with one that
> works in terms of the common real time represented by the different
> sampling rates. So an integer shift into a 2x downsampler should be a
> half-sample shift in the output. In ideal terms (brickwall filters/sinc
> functions) this all clearly works out.


> I think the thing to say about integer downsampling with respect to time
> variance is that it partitions the space of input shifts, where if
> you restrict yourself to shifts from a given partition you will see time
> invariance (in a certain sense).


So this to me is a good example of how thinking of discrete time signals as
representing bandlimited functions is useful. Because if we're thinking of
things this way, we can simply define an operation in the space of discrete
signals as being LTI iff the corresponding operation in the space of
bandlimited functions is LTI. This generalizes the usual definition, and
your partitioned-shift concept, in exactly the way we want, and we find
that ideal resamplers (of any ratio, integer/rational/irrational) are in
fact LTI as our intuition suggests they should be.
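
That definition is easy to check numerically. A small sketch (mine, with 
finite sinc sums standing in for true bandlimited interpolation, and made-up 
names): delaying the input by one sample and then resampling gives the same 
numbers as resampling first and shifting the output grid by one input period, 
even at an irrational ratio.

import numpy as np

def bl_eval(x, t):
    """Evaluate the bandlimited function interpolating sequence x at times t."""
    n = np.arange(len(x))
    return np.sinc(np.subtract.outer(t, n)) @ x

rng = np.random.default_rng(1)
x = rng.standard_normal(512)
r = 2.0 ** 0.5                              # irrational resampling ratio
t = 200.0 + r * np.arange(64)               # output grid, kept away from the edges

a = bl_eval(np.concatenate(([0.0], x)), t)  # delay input one sample, then resample
b = bl_eval(x, t - 1.0)                     # resample, then shift the output grid
print(np.max(np.abs(a - b)))                # ~1e-12: LTI in the bandlimited sense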

-Ethan F



On Mon, Sep 4, 2017 at 1:00 AM, Ethan Duni  wrote:

> Hmm, there have been quite a few discussions of LTI with respect to resampling that
> have gone badly on the list over the years...
>
> Time variance is a bit subtle in the multi-rate context. For integer
> downsampling, as you point out, it might make more sense to replace the
> classic n-shift-in/n-shift-out definition of time invariance with one that
> works in terms of the common real time represented by the different
> sampling rates. So an integer shift into a 2x downsampler should be a
> half-sample shift in the output. In ideal terms (brickwall filters/sinc
> functions) this all clearly works out.
>
> On the other hand, I hesitate to say "resampling is LTI" because that
> seems to imply that resampling doesn't produce aliasing. And of course
> aliasing is a central concern in the design of resamplers. So I can see how
> this rubs people the wrong way.
> 
> It's not clear to me that a realizable downsampler (i.e., with non-zero
> aliasing) passes the "real time" definition of LTI?
>
> I think the thing to say about integer downsampling with respect to time
> variance is that it partitions the space of input shifts, where if
> you restrict yourself to shifts from a given partition you will see time
> invariance (in a certain sense).
>
> More generally, resampling is kind of an edge case with respect to time
> invariance, in the sense that resamplers are time-variant systems that are
> trying as hard as they can to act like time invariant systems. As opposed
> to, say, modulators or envelopes or such.
>
> Ethan D
>
>
> On Fri, Sep 1, 2017 at 10:09 PM, Nigel Redmon wrote:
>
>> Interesting comments, Ethan.
>>
>> Somewhat related to your points, I also had a situation on this board
>> years ago where I said that sample rate conversion was LTI. It was a
>> specific context, regarding downsampling, so a number of people, one by
>> one, basically quoted back the reason I was wrong. That is, basically that
>> for downsampling 2:1, you’d get a different result depending on which set
>> of points you discard (decimation), and that alone meant it isn’t LTI. Of
>> course, the fact that the sample values are different doesn’t mean what
>> they represent is different—one is just a half-sample delay of the other. I
>> was surprised a bit that they accepted so easily that SRC couldn’t be used
>> in a system that required LTI, just because it seemed to violate the
>> definition of LTI they were taught.
>>
>> On Sep 1, 2017, at 3:46 PM, Ethan Duni  wrote:
>>
>> Ethan F wrote:
>> >I see your nitpick and raise you. :o) Surely there are uncountably many
>> such functions,
>> >as the power at any apparent frequency can be distributed arbitrarily
>> among the bands.
>>
>> Ah, good point. Uncountable it is!
>>
>> Nigel R wrote:
>> >But I think there are good reasons to understand the fact that samples
>> represent a
>> >modulated impulse train.
>>
>> I entirely agree, and this is exactly how sampling was introduced to me
>> back in college (we used Oppenheim and Willsky's book "Signals and
>> Systems"). I've always considered it the canonical EE approach to the
>> subject, and am surprised to learn that anyone thinks otherwise.
>>
>> Nigel R wrote:
>> >That sounds like a dumb observation, but I once had an argument on this
>> board:
>> >After I explained why we stuff zeros in integer SRC, a guy said my
>> explanation was BS.
>>
>> I dunno, this can work the other way as well. There was a guy a while
>> back who was arguing that the zero-stuffing used in integer upsampling is
>> actually not a time-variant operation, on the basis that the zeros "are
>> already there" in the impulse train representation (so it's a "null
>> operation" basically).

Re: [music-dsp] Sampling theory "best" explanation

2017-09-03 Thread Ethan Duni
Hmm, there have been quite a few discussions of LTI with respect to resampling that
have gone badly on the list over the years...

Time variance is a bit subtle in the multi-rate context. For integer
downsampling, as you point out, it might make more sense to replace the
classic n-shift-in/n-shift-out definition of time invariance with one that
works in terms of the common real time represented by the different
sampling rates. So an integer shift into a 2x downsampler should be a
half-sample shift in the output. In ideal terms (brickwall filters/sinc
functions) this all clearly works out.

On the other hand, I hesitate to say "resampling is LTI" because that seems
to imply that resampling doesn't produce aliasing. And of course aliasing
is a central concern in the design of resamplers. So I can see how this
rubs people the wrong way.

It's not clear to me that a realizable downsampler (i.e., with non-zero
aliasing) passes the "real time" definition of LTI?

I think the thing to say about integer downsampling with respect to time
variance is that it partitions the space of input shifts, where if
you restrict yourself to shifts from a given partition you will see time
invariance (in a certain sense).
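
A quick numpy illustration of that partition (my toy, with a deliberately 
non-ideal halfband filter): a 2-sample input shift comes out as an exact 
1-sample output shift, while a 1-sample shift selects the other polyphase 
instead.

import numpy as np

def down2(x, h):
    """FIR lowpass, then keep every other sample: a realizable 2:1 downsampler."""
    return np.convolve(x, h)[::2]

h = np.sinc(np.arange(-32, 33) / 2) / 2 * np.hamming(65)  # non-ideal halfband lowpass
rng = np.random.default_rng(0)
x = rng.standard_normal(256)

y0 = down2(x, h)
y2 = down2(np.concatenate(([0.0, 0.0], x)), h)  # input shifted by 2: same partition
y1 = down2(np.concatenate(([0.0], x)), h)       # input shifted by 1: other partition

print(np.max(np.abs(y2[1:len(y0)] - y0[:len(y0) - 1])))  # ~0: a pure one-sample shift

On the sample level y1 is not a shift of y0 at all; it represents a 
half-sample shift of the same underlying function, up to whatever aliasing the 
non-ideal filter lets through.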

More generally, resampling is kind of an edge case with respect to time
invariance, in the sense that resamplers are time-variant systems that are
trying as hard as they can to act like time invariant systems. As opposed
to, say, modulators or envelopes or such.

Ethan D


On Fri, Sep 1, 2017 at 10:09 PM, Nigel Redmon  wrote:

> Interesting comments, Ethan.
>
> Somewhat related to your points, I also had a situation on this board
> years ago where I said that sample rate conversion was LTI. It was a
> specific context, regarding downsampling, so a number of people, one by
> one, basically quoted back the reason I was wrong. That is, basically that
> for downsampling 2:1, you’d get a different result depending on which set
> of points you discard (decimation), and that alone meant it isn’t LTI. Of
> course, the fact that the sample values are different doesn’t mean what
> they represent is different—one is just a half-sample delay of the other. I
> was surprised a bit that they accepted so easily that SRC couldn’t be used
> in a system that required LTI, just because it seemed to violate the
> definition of LTI they were taught.
>
> On Sep 1, 2017, at 3:46 PM, Ethan Duni  wrote:
>
> Ethan F wrote:
> >I see your nitpick and raise you. :o) Surely there are uncountably many
> such functions,
> >as the power at any apparent frequency can be distributed arbitrarily
> among the bands.
>
> Ah, good point. Uncountable it is!
>
> Nigel R wrote:
> >But I think there are good reasons to understand the fact that samples
> represent a
> >modulated impulse train.
>
> I entirely agree, and this is exactly how sampling was introduced to me
> back in college (we used Oppenheim and Willsky's book "Signals and
> Systems"). I've always considered it the canonical EE approach to the
> subject, and am surprised to learn that anyone thinks otherwise.
>
> Nigel R wrote:
> >That sounds like a dumb observation, but I once had an argument on this
> board:
> >After I explained why we stuff zeros of integer SRC, a guy said my
> explanation was BS.
>
> I dunno, this can work the other way as well. There was a guy a while back
> who was arguing that the zero-stuffing used in integer upsampling is
> actually not a time-variant operation, on the basis that the zeros "are
> already there" in the impulse train representation (so it's a "null
> operation" basically). He could not explain how this putatively-LTI system
> was introducing aliasing into the output. Or was this the same guy?
>
> So that's one drawback to the impulse train representation - you need the
> sample rate metadata to do *any* meaningful processing on such a signal.
> Otherwise you don't know which locations are "real" zeros and which are
> just "filler." Of course knowledge of sample rate is always required to
> make final sense of a discrete-time audio signal, but in the usual sequence
> representation we don't need it just to do basic operations, only for
> converting back to analog or interpreting discrete time operations in
> analog terms (i.e., what physical frequency is the filter cut-off at,
> etc.).
>
> The other big pedagogical problem with impulse train representation is
> that it can't be graphed in a useful way.
>
> People will also complain that it is poorly defined mathematically (and
> indeed the usual treatments handwave these concerns), but my rejoinder
> would be that it can all be made rigorous by adopting non-standard
> analysis/hyperreal numbers. So, no harm no foul, as far as "correctness" is
> concerned, although it does hobble the subject as a gateway into "real
> math."
>
> Ethan D
>
> On Fri, Sep 1, 2017 at 2:38 PM, Ethan Fenn  wrote:
>
>> This needs an additional qualifier, something about the bandlimited
>> function with the lowest possible bandwidth, or containing DC, or
>> "baseband," or such.

Re: [music-dsp] Sampling theory "best" explanation

2017-09-01 Thread Nigel Redmon
Interesting comments, Ethan.

Somewhat related to your points, I also had a situation on this board years ago 
where I said that sample rate conversion was LTI. It was a specific context, 
regarding downsampling, so a number of people, one by one, basically quoted 
back the reason I was wrong. That is, basically that for downsampling 2:1, 
you’d get a different result depending on which set of points you discard 
(decimation), and that alone meant it isn’t LTI. Of course, the fact that the 
sample values are different doesn’t mean what they represent is different—one 
is just a half-sample delay of the other. I was surprised a bit that they 
accepted so easily that SRC couldn’t be used in a system that required LTI, 
just because it seemed to violate the definition of LTI they were taught.

> On Sep 1, 2017, at 3:46 PM, Ethan Duni  wrote:
> 
> Ethan F wrote:
> >I see your nitpick and raise you. :o) Surely there are uncountably many such 
> >functions, 
> >as the power at any apparent frequency can be distributed arbitrarily among 
> >the bands.
> 
> Ah, good point. Uncountable it is! 
> 
> Nigel R wrote:
> >But I think there are good reasons to understand the fact that samples 
> >represent a 
> >modulated impulse train.
> 
> I entirely agree, and this is exactly how sampling was introduced to me back 
> in college (we used Oppenheim and Willsky's book "Signals and Systems"). I've 
> always considered it the canonical EE approach to the subject, and am 
> surprised to learn that anyone thinks otherwise. 
> 
> Nigel R wrote:
> >That sounds like a dumb observation, but I once had an argument on this 
> >board: 
> >After I explained why we stuff zeros in integer SRC, a guy said my 
> >explanation was BS.
> 
> I dunno, this can work the other way as well. There was a guy a while back 
> who was arguing that the zero-stuffing used in integer upsampling is actually 
> not a time-variant operation, on the basis that the zeros "are already there" 
> in the impulse train representation (so it's a "null operation" basically). 
> He could not explain how this putatively-LTI system was introducing aliasing 
> into the output. Or was this the same guy?
> 
> So that's one drawback to the impulse train representation - you need the 
> sample rate metadata to do *any* meaningful processing on such a signal. 
> Otherwise you don't know which locations are "real" zeros and which are just 
> "filler." Of course knowledge of sample rate is always required to make final 
> sense of a discrete-time audio signal, but in the usual sequence 
> representation we don't need it just to do basic operations, only for 
> converting back to analog or interpreting discrete time operations in analog 
> terms (i.e., what physical frequency is the filter cut-off at, etc.). 
> 
> The other big pedagogical problem with impulse train representation is that 
> it can't be graphed in a useful way. 
> 
> People will also complain that it is poorly defined mathematically (and 
> indeed the usual treatments handwave these concerns), but my rejoinder would 
> be that it can all be made rigorous by adopting non-standard 
> analysis/hyperreal numbers. So, no harm no foul, as far as "correctness" is 
> concerned, although it does hobble the subject as a gateway into "real math."
> 
> Ethan D
> 
> On Fri, Sep 1, 2017 at 2:38 PM, Ethan Fenn wrote:
> This needs an additional qualifier, something about the bandlimited function 
> with the lowest possible bandwidth, or containing DC, or "baseband," or such. 
> 
> Yes, by bandlimited here I mean bandlimited to [-Nyquist, Nyquist].
> 
> Otherwise, there are a countably infinite number of bandlimited functions 
> that interpolate any given set of samples. These get used in "bandpass 
> sampling," which is uncommon in audio but commonplace in radio applications. 
> 
> I see your nitpick and raise you. :o) Surely there are uncountably many such 
> functions, as the power at any apparent frequency can be distributed 
> arbitrarily among the bands.
> 
> -Ethan F
> 
> 
> On Fri, Sep 1, 2017 at 5:30 PM, Ethan Duni wrote:
> >I'm one of those people who prefer to think of a discrete-time signal as 
> >representing the unique bandlimited function interpolating its samples.
> 
> This needs an additional qualifier, something about the bandlimited function 
> with the lowest possible bandwidth, or containing DC, or "baseband," or such. 
> 
> Otherwise, there are a countably infinite number of bandlimited functions 
> that interpolate any given set of samples. These get used in "bandpass 
> sampling," which is uncommon in audio but commonplace in radio applications. 
> 
> Ethan D
> 
> On Fri, Sep 1, 2017 at 1:31 PM, Ethan Fenn wrote:
> Thanks for posting this! It's always interesting to get such a good glimpse 
> at someone else's mental model.

Re: [music-dsp] Sampling theory "best" explanation

2017-09-01 Thread Ethan Duni
Ethan F wrote:
>I see your nitpick and raise you. :o) Surely there are uncountably many
such functions,
>as the power at any apparent frequency can be distributed arbitrarily
among the bands.

Ah, good point. Uncountable it is!

Nigel R wrote:
>But I think there are good reasons to understand the fact that samples
represent a
>modulated impulse train.

I entirely agree, and this is exactly how sampling was introduced to me
back in college (we used Oppenheim and Willsky's book "Signals and
Systems"). I've always considered it the canonical EE approach to the
subject, and am surprised to learn that anyone thinks otherwise.

Nigel R wrote:
>That sounds like a dumb observation, but I once had an argument on this
board:
>After I explained why we stuff zeros in integer SRC, a guy said my
explanation was BS.

I dunno, this can work the other way as well. There was a guy a while back
who was arguing that the zero-stuffing used in integer upsampling is
actually not a time-variant operation, on the basis that the zeros "are
already there" in the impulse train representation (so it's a "null
operation" basically). He could not explain how this putatively-LTI system
was introducing aliasing into the output. Or was this the same guy?

So that's one drawback to the impulse train representation - you need the
sample rate metadata to do *any* meaningful processing on such a signal.
Otherwise you don't know which locations are "real" zeros and which are
just "filler." Of course knowledge of sample rate is always required to
make final sense of a discrete-time audio signal, but in the usual sequence
representation we don't need it just to do basic operations, only for
converting back to analog or interpreting discrete time operations in
analog terms (i.e., what physical frequency is the filter cut-off at,
etc.).

The other big pedagogical problem with impulse train representation is that
it can't be graphed in a useful way.

People will also complain that it is poorly defined mathematically (and
indeed the usual treatments handwave these concerns), but my rejoinder
would be that it can all be made rigorous by adopting non-standard
analysis/hyperreal numbers. So, no harm no foul, as far as "correctness" is
concerned, although it does hobble the subject as a gateway into "real
math."

Ethan D

On Fri, Sep 1, 2017 at 2:38 PM, Ethan Fenn  wrote:

>> This needs an additional qualifier, something about the bandlimited
>> function with the lowest possible bandwidth, or containing DC, or
>> "baseband," or such.
>
>
> Yes, by bandlimited here I mean bandlimited to [-Nyquist, Nyquist].
>
>> Otherwise, there are a countably infinite number of bandlimited functions
>> that interpolate any given set of samples. These get used in "bandpass
>> sampling," which is uncommon in audio but commonplace in radio
>> applications.
>
>
> I see your nitpick and raise you. :o) Surely there are uncountably many
> such functions, as the power at any apparent frequency can be distributed
> arbitrarily among the bands.
>
> -Ethan F
>
>
> On Fri, Sep 1, 2017 at 5:30 PM, Ethan Duni  wrote:
>
>> >I'm one of those people who prefer to think of a discrete-time signal
>> as
>> >representing the unique bandlimited function interpolating its samples.
>>
>> This needs an additional qualifier, something about the bandlimited
>> function with the lowest possible bandwidth, or containing DC, or
>> "baseband," or such.
>>
>> Otherwise, there are a countably infinite number of bandlimited functions
>> that interpolate any given set of samples. These get used in "bandpass
>> sampling," which is uncommon in audio but commonplace in radio
>> applications.
>>
>> Ethan D
>>
>> On Fri, Sep 1, 2017 at 1:31 PM, Ethan Fenn wrote:
>>
>>> Thanks for posting this! It's always interesting to get such a good
>>> glimpse at someone else's mental model.
>>>
>>> I'm one of those people who prefer to think of a discrete-time signal as
>>> representing the unique bandlimited function interpolating its samples. And
>>> I don't think this point of view has crippled my understanding of
>>> resampling or any other DSP techniques!
>>>
>>> I'm curious -- from the impulse train point of view, how do you
>>> understand fractional delays? Or taking the derivative of a signal? Do you
>>> have to pass into the frequency domain in order to understand these?
>>> Thinking of a signal as a bandlimited function, I find it pretty easy to
>>> understand both of these processes from first principles in the time
>>> domain, which is one reason I like to think about things this way.
>>>
>>> -Ethan
>>>
>>>
>>>
>>>
>>> On Mon, Aug 28, 2017 at 12:15 PM, Nigel Redmon wrote:
>>>
 Hi Remy,

 On Aug 28, 2017, at 2:16 AM, Remy Muller  wrote:

 I second Sampo about giving some more hints about Hilbert spaces,
 shift-invariance, Riesz representation theorem… etc



Re: [music-dsp] Sampling theory "best" explanation

2017-09-01 Thread Nigel Redmon
Hi Ethan,

Good comments and questions…I’m going to have to skip the questions for now 
(I’m in a race against time the next few days, then will been off the grid, 
relatively speaking, for a couple of weeks—but I didn’t want to seem like I was 
ignoring your reply; I think any quick answers to your questions will require 
some back and forth, and I won’t be here for the rest).

First, I want to be clear that I don’t think people are crippled by certain 
viewpoint—I’ve said this elsewhere before, maybe not in this thread or the 
article so much. It’s more that some things that come up as questions become 
trivially obvious when you understand that samples represent impulses (this is 
not so much a viewpoint as the basis of sampling). The fact that 5,17,-12,2 at 
sample rate 1X and 5,0,0,0,17,0,0,0,-12,0,0,0,2,0,0,0 at sample rate 4X are 
identical is obvious only for samples representing impulses. That sounds like a 
dumb observation, but I once had an argument on this board: After I explained 
why we stuff zeros in integer SRC, a guy said my explanation was BS. I said, 
OK, then why does inserting zeros work? He gave a one-word answer: 
“Serendipity.” So, he clearly knew how to get the job of SRC done—he wasn’t 
crippled—but he didn’t know why, was just following a formula (that’s OK—there 
are great cooks that only follow recipes).

But I think there are good reasons to understand the fact that samples 
represent a modulated impulse train. We all learn early on that we need the 
sample rate to be more than double the highest signal frequency. This is 
usually accompanied by a diagram showing an undersampled sine wave, a 
dotted-line drawing of an alias at a lower frequency, and maybe some chat 
about wagon wheels going 
backwards in movies. But if you think about the frequency spectrum of a PAM 
signal, it’s apparent that the aliased image “sidebands” (radio term) will 
stretch down into the audio band (the band below half SR) if the signal 
stretches above it. So, you’d better filter it so that doesn’t happen. The best 
part is this doesn’t apply to just initial sampling, but is equally apparent 
for any interim upsampling and processing in the digital domain.
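
A two-line numpy version of that picture (a toy of mine, not from the blog): 
sample a 600 Hz sine at 1 kHz with no filter in front, and the spectral peak 
shows up at fs - f0 = 400 Hz, inside the band below half the sample rate.

import numpy as np

fs, f0, N = 1000.0, 600.0, 4096          # 600 Hz input, but half the sample rate is 500 Hz
n = np.arange(N)
x = np.sin(2 * np.pi * f0 / fs * n)      # sampling with no anti-alias filter in front

spec = np.abs(np.fft.rfft(x * np.hanning(N)))
print(np.argmax(spec) * fs / N)          # ~400.0 Hz: the lower image sideband, now in band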

Cheers,

Nigel


> On Sep 1, 2017, at 1:31 PM, Ethan Fenn  wrote:
> 
> Thanks for posting this! It's always interesting to get such a good glimpse 
> at someone else's mental model.
> 
> I'm one of those people who prefer to think of a discrete-time signal as 
> representing the unique bandlimited function interpolating its samples. And I 
> don't think this point of view has crippled my understanding of resampling or 
> any other DSP techniques!
> 
> I'm curious -- from the impulse train point of view, how do you understand 
> fractional delays? Or taking the derivative of a signal? Do you have to pass 
> into the frequency domain in order to understand these? Thinking of a signal 
> as a bandlimited function, I find it pretty easy to understand both of these 
> processes from first principles in the time domain, which is one reason I 
> like to think about things this way.
> 
> -Ethan
> 
> 
> 
> 
> On Mon, Aug 28, 2017 at 12:15 PM, Nigel Redmon wrote:
> Hi Remy,
> 
>> On Aug 28, 2017, at 2:16 AM, Remy Muller wrote:
>> 
>> I second Sampo about giving some more hints about Hilbert spaces, 
>> shift-invariance, Riesz representation theorem… etc
> 
> I think you’ve hit upon precisely what my blog isn’t, and why it exists at 
> all. ;-)
> 
>> Correct me if you said it somewhere and I didn't see it, but an important 
>> implicit assumption in your explanation is that you are talking about 
>> "uniform bandlimited sampling”.
> 
> Sure, like the tag line in the upper right says, it’s a blog about "practical 
> digital audio signal processing".
> 
>> Personally, my biggest enlightening moment regarding sampling was when I 
>> read these 2 articles:
> 
> Nice, thanks for sharing.
> 
>> "Sampling—50 Years After Shannon"
>> http://bigwww.epfl.ch/publications/unser0001.pdf 
>> 
>> 
>> and 
>> 
>> "Sampling Moments and Reconstructing Signals of Finite Rate of Innovation: 
>> Shannon Meets Strang–Fix"
>> https://infoscience.epfl.ch/record/104246/files/DragottiVB07.pdf 
>> 
>> 
>> I wish I had discovered them much earlier during my signal processing 
>> classes.
>> 
>> Talking about generalized sampling may seem abstract and beyond what you 
>> are trying to explain. However, in my personal experience, sampling seen 
>> through the lens of approximation theory as 'just a projection' onto a 
>> signal subspace made everything clearer by giving more perspective: 
>> The choice of basis functions and norms is wide. The sinc function being 
>> just one of them and not a causal realizable one (infinite temporal support).

Re: [music-dsp] Sampling theory "best" explanation

2017-09-01 Thread Ethan Fenn
>
> This needs an additional qualifier, something about the bandlimited
> function with the lowest possible bandwidth, or containing DC, or
> "baseband," or such.


Yes, by bandlimited here I mean bandlimited to [-Nyquist, Nyquist].

> Otherwise, there are a countably infinite number of bandlimited functions
> that interpolate any given set of samples. These get used in "bandpass
> sampling," which is uncommon in audio but commonplace in radio
> applications.


I see your nitpick and raise you. :o) Surely there are uncountably many
such functions, as the power at any apparent frequency can be distributed
arbitrarily among the bands.

-Ethan F


On Fri, Sep 1, 2017 at 5:30 PM, Ethan Duni  wrote:

> >I'm one of those people who prefer to think of a discrete-time signal as
> >representing the unique bandlimited function interpolating its samples.
>
> This needs an additional qualifier, something about the bandlimited
> function with the lowest possible bandwidth, or containing DC, or
> "baseband," or such.
>
> Otherwise, there are a countably infinite number of bandlimited functions
> that interpolate any given set of samples. These get used in "bandpass
> sampling," which is uncommon in audio but commonplace in radio
> applications.
>
> Ethan D
>
> On Fri, Sep 1, 2017 at 1:31 PM, Ethan Fenn  wrote:
>
>> Thanks for posting this! It's always interesting to get such a good
>> glimpse at someone else's mental model.
>>
>> I'm one of those people who prefer to think of a discrete-time signal as
>> representing the unique bandlimited function interpolating its samples. And
>> I don't think this point of view has crippled my understanding of
>> resampling or any other DSP techniques!
>>
>> I'm curious -- from the impulse train point of view, how do you
>> understand fractional delays? Or taking the derivative of a signal? Do you
>> have to pass into the frequency domain in order to understand these?
>> Thinking of a signal as a bandlimited function, I find it pretty easy to
>> understand both of these processes from first principles in the time
>> domain, which is one reason I like to think about things this way.
>>
>> -Ethan
>>
>>
>>
>>
>> On Mon, Aug 28, 2017 at 12:15 PM, Nigel Redmon wrote:
>>
>>> Hi Remy,
>>>
>>> On Aug 28, 2017, at 2:16 AM, Remy Muller  wrote:
>>>
>>> I second Sampo about giving some more hints about Hilbert spaces,
>>> shift-invariance, Riesz representation theorem… etc
>>>
>>>
>>> I think you’ve hit upon precisely what my blog isn’t, and why it exists
>>> at all. ;-)
>>>
>>> Correct me if you said it somewhere and I didn't see it, but an
>>> important *implicit* assumption in your explanation is that you are
>>> talking about "uniform bandlimited sampling”.
>>>
>>>
>>> Sure, like the tag line in the upper right says, it’s a blog about
>>> "practical digital audio signal processing".
>>>
>>> Personally, my biggest enlightening moment regarding sampling was when
>>> I read these 2 articles:
>>>
>>>
>>> Nice, thanks for sharing.
>>>
>>> "Sampling—50 Years After Shannon"
>>> http://bigwww.epfl.ch/publications/unser0001.pdf
>>>
>>> and
>>>
>>> "Sampling Moments and Reconstructing Signals of Finite Rate of
>>> Innovation: Shannon Meets Strang–Fix"
>>> https://infoscience.epfl.ch/record/104246/files/DragottiVB07.pdf
>>>
>>> I wish I had discovered them much earlier during my signal processing
>>> classes.
>>>
>>> Talking about generalized sampling may seem abstract and beyond what
>>> you are trying to explain. However, in my personal experience, sampling
>>> seen through the lens of approximation theory as 'just a projection' onto
>>> a signal subspace made everything clearer by giving more perspective:
>>>
>>>- The choice of basis functions and norms is wide. The sinc function
>>>being just one of them and not a causal realizable one (infinite temporal
>>>support).
>>>- Analysis and synthesis functions don't have to be the same (cf
>>>wavelets bi-orthogonal filterbanks)
>>>- Perfect reconstruction is possible without requiring
>>>bandlimitedness!
>>>- The key concept is 'consistent sampling': *one seeks a signal
>>>approximation that is such that it would yield exactly the same
>>>measurements if it was reinjected into the system*.
>>>- All that is required is a "finite rate of innovation" (in the
>>>statistical sense).
>>>- Finite support kernels are easier to deal with in real-life
>>>because they can be realized (FIR) (reminder: time-limited <=>
>>>non-bandlimited)
>>>- Using the L2 norm is convenient because we can reason about best
>>>approximations in the least-squares sense and solve the projection 
>>> problem
>>>using Linear Algebra using the standard L2 inner product.
>>>- Shift-invariance is even nicer since it enables *efficient* signal
>>>processing.
>>>- Using sparser norms like the L1 norm enables 

Re: [music-dsp] Sampling theory "best" explanation

2017-09-01 Thread Ethan Duni
>I'm one of those people who prefer to think of a discrete-time signal as
>representing the unique bandlimited function interpolating its samples.

This needs an additional qualifier, something about the bandlimited
function with the lowest possible bandwidth, or containing DC, or
"baseband," or such.

Otherwise, there are a countably infinite number of bandlimited functions
that interpolate any given set of samples. These get used in "bandpass
sampling," which is uncommon in audio but commonplace in radio
applications.

Ethan D

On Fri, Sep 1, 2017 at 1:31 PM, Ethan Fenn  wrote:

> Thanks for posting this! It's always interesting to get such a good
> glimpse at someone else's mental model.
>
> I'm one of those people who prefer to think of a discrete-time signal as
> representing the unique bandlimited function interpolating its samples. And
> I don't think this point of view has crippled my understanding of
> resampling or any other DSP techniques!
>
> I'm curious -- from the impulse train point of view, how do you understand
> fractional delays? Or taking the derivative of a signal? Do you have to
> pass into the frequency domain in order to understand these? Thinking of a
> signal as a bandlimited function, I find it pretty easy to understand both
> of these processes from first principles in the time domain, which is one
> reason I like to think about things this way.
>
> -Ethan
>
>
>
>
> On Mon, Aug 28, 2017 at 12:15 PM, Nigel Redmon wrote:
>
>> Hi Remy,
>>
>> On Aug 28, 2017, at 2:16 AM, Remy Muller  wrote:
>>
>> I second Sampo about giving some more hints about Hilbert spaces,
>> shift-invariance, Riesz representation theorem… etc
>>
>>
>> I think you’ve hit upon precisely what my blog isn’t, and why it exists
>> at all. ;-)
>>
>> Correct me if you said it somewhere and I didn't see it, but an important
>> *implicit* assumption in your explanation is that you are talking about
>> "uniform bandlimited sampling”.
>>
>>
>> Sure, like the tag line in the upper right says, it’s a blog about
>> "practical digital audio signal processing".
>>
>> Personally, my biggest enlightening moment regarding sampling was when I
>> read these 2 articles:
>>
>>
>> Nice, thanks for sharing.
>>
>> "Sampling—50 Years After Shannon"
>> http://bigwww.epfl.ch/publications/unser0001.pdf
>>
>> and
>>
>> "Sampling Moments and Reconstructing Signals of Finite Rate of
>> Innovation: Shannon Meets Strang–Fix"
>> https://infoscience.epfl.ch/record/104246/files/DragottiVB07.pdf
>>
>> I wish I had discovered them much earlier during my signal processing
>> classes.
>>
>> Talking about generalized sampling may seem abstract and beyond what you
>> are trying to explain. However, in my personal experience, sampling seen
>> through the lens of approximation theory as 'just a projection' onto a
>> signal subspace made everything clearer by giving more perspective:
>>
>>- The choice of basis functions and norms is wide. The sinc function
>>being just one of them and not a causal realizable one (infinite temporal
>>support).
>>- Analysis and synthesis functions don't have to be the same (cf
>>wavelets bi-orthogonal filterbanks)
>>- Perfect reconstruction is possible without requiring
>>bandlimitedness!
>>- The key concept is 'consistent sampling': *one seeks a signal
>>approximation that is such that it would yield exactly the same
>>measurements if it was reinjected into the system*.
>>- All that is required is a "finite rate of innovation" (in the
>>statistical sense).
>>- Finite support kernels are easier to deal with in real-life because
>>they can be realized (FIR) (reminder: time-limited <=> non-bandlimited)
>>- Using the L2 norm is convenient because we can reason about best
>>approximations in the least-squares sense and solve the projection problem
>>using Linear Algebra using the standard L2 inner product.
>>- Shift-invariance is even nicer since it enables *efficient* signal
>>processing.
>>- Using sparser norms like the L1 norm enables sparse sampling and
>>the whole field of compressed sensing. But it comes at a price: we have to
>>use iterative projections to get there.
>>
>> All of this is beyond your original purpose, but from a pedagogical
>> viewpoint, I wish these 2 articles were systematically cited in a "Further
>> Reading" section at the end of any explanation regarding the sampling
>> theorem(s).
>>
>> At least the wikipedia page cites the first article and has a section
>> about non-uniform and sub-nyquist sampling but it's easy to miss the big
>> picture for a newcomer.
>>
>> Here's a condensed presentation by Michael Unser for those who would like
>> to have a quick historical overview:
>> http://bigwww.epfl.ch/tutorials/unser0906.pdf
>>
>>
>> On 27/08/17 08:20, Sampo Syreeni wrote:
>>
>> On 2017-08-25, Nigel Redmon wrote:
>>
>> 

Re: [music-dsp] Sampling theory "best" explanation

2017-09-01 Thread Ethan Fenn
Thanks for posting this! It's always interesting to get such a good glimpse
at someone else's mental model.

I'm one of those people who prefer to think of a discrete-time signal as
representing the unique bandlimited function interpolating its samples. And
I don't think this point of view has crippled my understanding of
resampling or any other DSP techniques!

I'm curious -- from the impulse train point of view, how do you understand
fractional delays? Or taking the derivative of a signal? Do you have to
pass into the frequency domain in order to understand these? Thinking of a
signal as a bandlimited function, I find it pretty easy to understand both
of these processes from first principles in the time domain, which is one
reason I like to think about things this way.
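
For what it's worth, the derivative question has a tidy first-principles 
answer in the bandlimited view: differentiate the sinc series term by term. 
The derivative of sinc(t) at integer offset k is (-1)^k / k (and 0 at k = 0), 
which gives the ideal differentiator taps directly. A windowed numpy sketch of 
mine, not from the thread (fractional delay works the same way, with taps 
sinc(k - tau) instead):

import numpy as np

K = 64
k = np.arange(-K, K + 1)
safe = np.where(k == 0, 1, k)                      # dodge the division at k = 0
d = np.where(k == 0, 0.0, (-1.0) ** k / safe) * np.hamming(2 * K + 1)

n = np.arange(1024)
w = 0.1 * np.pi                                    # test tone well below Nyquist
x = np.sin(w * n)
dx = np.convolve(x, d)[K:K + len(n)]               # samples of the derivative x'(t)
err = np.max(np.abs(dx[200:800] - w * np.cos(w * n[200:800])))
print(err)                                         # small: only windowing error remains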

-Ethan




On Mon, Aug 28, 2017 at 12:15 PM, Nigel Redmon wrote:

> Hi Remy,
>
> On Aug 28, 2017, at 2:16 AM, Remy Muller  wrote:
>
> I second Sampo about giving some more hints about Hilbert spaces,
> shift-invariance, Riesz representation theorem… etc
>
>
> I think you’ve hit upon precisely what my blog isn’t, and why it exists at
> all. ;-)
>
> Correct me if you said it somewhere and I didn't see it, but an important
> *implicit* assumption in your explanation is that you are talking about
> "uniform bandlimited sampling”.
>
>
> Sure, like the tag line in the upper right says, it’s a blog about
> "practical digital audio signal processing".
>
> Personally, my biggest enlightening moment regarding sampling was when I
> read these 2 articles:
>
>
> Nice, thanks for sharing.
>
> "Sampling—50 Years After Shannon"
> http://bigwww.epfl.ch/publications/unser0001.pdf
>
> and
>
> "Sampling Moments and Reconstructing Signals of Finite Rate of Innovation:
> Shannon Meets Strang–Fix"
> https://infoscience.epfl.ch/record/104246/files/DragottiVB07.pdf
>
> I wish I had discovered them much earlier during my signal processing
> classes.
>
> Talking about generalized sampling may seem abstract and beyond what you
> are trying to explain. However, in my personal experience, sampling seen
> through the lens of approximation theory as 'just a projection' onto a
> signal subspace made everything clearer by giving more perspective:
>
>- The choice of basis functions and norms is wide. The sinc function
>being just one of them and not a causal realizable one (infinite temporal
>support).
>- Analysis and synthesis functions don't have to be the same (cf
>wavelets bi-orthogonal filterbanks)
>- Perfect reconstruction is possible without requiring
>bandlimitedness!
>- The key concept is 'consistent sampling': *one seeks a signal
>approximation that is such that it would yield exactly the same
>measurements if it was reinjected into the system*.
>- All that is required is a "finite rate of innovation" (in the
>statistical sense).
>- Finite support kernels are easier to deal with in real-life because
>they can be realized (FIR) (reminder: time-limited <=> non-bandlimited)
>- Using the L2 norm is convenient because we can reason about best
>approximations in the least-squares sense and solve the projection problem
>using Linear Algebra using the standard L2 inner product.
>- Shift-invariance is even nicer since it enables *efficient* signal
>processing.
>- Using sparser norms like the L1 norm enables sparse sampling and the
>whole field of compressed sensing. But it comes at a price: we have to use
>iterative projections to get there.
>
> All of this is beyond your original purpose, but from a pedagogical
> viewpoint, I wish these 2 articles were systematically cited in a "Further
> Reading" section at the end of any explanation regarding the sampling
> theorem(s).
>
> At least the wikipedia page cites the first article and has a section
> about non-uniform and sub-nyquist sampling but it's easy to miss the big
> picture for a newcomer.
>
> Here's a condensed presentation by Michael Unser for those who would like
> to have a quick historical overview:
> http://bigwww.epfl.ch/tutorials/unser0906.pdf
>
>
> On 27/08/17 08:20, Sampo Syreeni wrote:
>
> On 2017-08-25, Nigel Redmon wrote:
>
> http://www.earlevel.com/main/tag/sampling-theory-series/?order=asc
>
>
> Personally I'd make it much simpler at the top. Just tell them sampling is
> what it is: taking an instantaneous value of a signal at regular intervals.
> Then tell them that is all it takes to reconstruct the waveform under the
> assumption of bandlimitation -- a high-falutin term for "doesn't change too
> fast between your samples".
>
> Even a simpleton can grasp that idea.
>
> Then if somebody wants to go into the nitty-gritty of it, start talking
> about shift-invariant spaces, eigenfunctions, harmonic analysis, and the
> rest of the cool stuff.
>
>

Re: [music-dsp] Sampling theory "best" explanation

2017-08-28 Thread Nigel Redmon
Hi Remy,

> On Aug 28, 2017, at 2:16 AM, Remy Muller  wrote:
> 
> I second Sampo about giving some more hints about Hilbert spaces, 
> shift-invariance, Riesz representation theorem… etc

I think you’ve hit upon precisely what my blog isn’t, and why it exists at all. 
;-)

> Correct me if you said it somewhere and I didn't see it, but an important 
> implicit assumption in your explanation is that you are talking about 
> "uniform bandlimited sampling”.

Sure, like the tag line in the upper right says, it’s a blog about "practical 
digital audio signal processing".

> Personally, my biggest enlightening moment regarding sampling was when I 
> read these 2 articles:

Nice, thanks for sharing.

> "Sampling—50 Years After Shannon"
> http://bigwww.epfl.ch/publications/unser0001.pdf 
> 
> 
> and 
> 
> "Sampling Moments and Reconstructing Signals of Finite Rate of Innovation: 
> Shannon Meets Strang–Fix"
> https://infoscience.epfl.ch/record/104246/files/DragottiVB07.pdf 
> 
> 
> I wish I had discovered them much earlier during my signal processing classes.
> 
> Talking about generalized sampling may seem abstract and beyond what you are 
> trying to explain. However, in my personal experience, sampling seen through 
> the lens of approximation theory as 'just a projection' onto a signal 
> subspace made everything clearer by giving more perspective: 
> The choice of basis functions and norms is wide. The sinc function being just 
> one of them and not a causal realizable one (infinite temporal support).
> Analysis and synthesis functions don't have to be the same (cf wavelets 
> bi-orthogonal filterbanks)
> Perfect reconstruction is possible without requiring bandlimitedness! 
> The key concept is 'consistent sampling': one seeks a signal approximation 
> that is such that it would yield exactly the same measurements if it was 
> reinjected into the system. 
> All that is required is a "finite rate of innovation" (in the statistical 
> sense).
> Finite support kernels are easier to deal with in real-life because they can 
> be realized (FIR) (reminder: time-limited <=> non-bandlimited)
> Using the L2 norm is convenient because we can reason about best 
> approximations in the least-squares sense and solve the projection problem 
> using Linear Algebra using the standard L2 inner product.
> Shift-invariance is even nicer since it enables efficient signal processing.
> Using sparser norms like the L1 norm enables sparse sampling and the whole 
> field of compressed sensing. But it comes at a price: we have to use 
> iterative projections to get there.
> All of this is beyond your original purpose, but from a pedagogical viewpoint, 
> I wish these 2 articles were systematically cited in a "Further Reading" 
> section at the end of any explanation regarding the sampling theorem(s).
> 
> At least the wikipedia page cites the first article and has a section about 
> non-uniform and sub-nyquist sampling but it's easy to miss the big picture 
> for a newcomer.
> 
> Here's a condensed presentation by Michael Unser for those who would like to 
> have a quick historical overview:
> http://bigwww.epfl.ch/tutorials/unser0906.pdf 
> 
> 
> 
> On 27/08/17 08:20, Sampo Syreeni wrote:
>> On 2017-08-25, Nigel Redmon wrote: 
>> 
>>> http://www.earlevel.com/main/tag/sampling-theory-series/?order=asc 
>>>  
>> 
>> Personally I'd make it much simpler at the top. Just tell them sampling is 
>> what it is: taking an instantaneous value of a signal at regular intervals. 
>> Then tell them that is all it takes to reconstruct the waveform under the 
>> assumption of bandlimitation -- a high-falutin term for "doesn't change too 
>> fast between your samples". 
>> 
>> Even a simpleton can grasp that idea. 
>> 
>> Then if somebody wants to go into the nitty-gritty of it, start talking 
>> about shift-invariant spaces, eigenfunctions, harmonic analysis, and the 
>> rest of the cool stuff. 
> 


Re: [music-dsp] Sampling theory "best" explanation

2017-08-28 Thread Theo Verelst

Nigel Redmon wrote:
> Well, it’s quiet here, why not…



Right, a good reiteration never hurts! I quickly read through and find your explanation 
fine; it's not right to expect everybody to be theoretically sound and solid up to the 
level of mathematical proof, but I'm a strong proponent of preventing the main errors:


   - the main definitions and theorem must be solid and phrased unambiguously
   - the understanding created should not lead people astray in divergent 
directions

It's like, it's good to know there's a mathematical theory that uniquely and like a proper 
bijection relates samples to the analog signal, under proper and necessary conditions.


Then there's the practical side: can an existing ADC create a file with samples that can 
be enjoyed back in their analog form to a high level of fidelity, and if so:


   - how
   - at which computational/electronic cost
   - for which (sub-)  class of "CDs" or other consumer digital signal forms
   - with qualifiable and quantifiable error ?

That's for engineers who want to really work in the subject, too, but a bit of an idea a 
lot of people will appreciate.


Theo



Re: [music-dsp] Sampling theory "best" explanation

2017-08-28 Thread Remy Muller
I second Sampo about giving some more hints about Hilbert spaces, 
shift-invariance, the Riesz representation theorem, etc.


Correct me if you said it somewhere and I didn't see it, but an 
important /implicit/ assumption in your explanation is that you are 
talking about "uniform bandlimited sampling".


Personally, my biggest enlightening moment regarding sampling was when 
I read these 2 articles:


"Sampling—50 Years After Shannon"
http://bigwww.epfl.ch/publications/unser0001.pdf

and

"Sampling Moments and Reconstructing Signals of Finite Rate of 
Innovation: Shannon Meets Strang–Fix"

https://infoscience.epfl.ch/record/104246/files/DragottiVB07.pdf

I wish I had discovered them much earlier during my signal processing 
classes.


Talking about generalized sampling may seem abstract and beyond what 
you are trying to explain. However, in my personal experience, sampling 
seen through the lens of approximation theory as 'just a projection' 
onto a signal subspace made everything clearer by giving more perspective:


 * The choice of basis functions and norms is wide. The sinc function
   being just one of them and not a causal realizable one (infinite
   temporal support).
 * Analysis and synthesis functions don't have to be the same (cf.
   biorthogonal wavelet filterbanks)
 * Perfect reconstruction is possible without requiring bandlimitedness!
 * The key concept is 'consistent sampling': /one seeks a signal
   approximation that is such that it would yield exactly the same
   measurements if it was reinjected into the system/.
 * All that is required is a "finite rate of innovation" (in the
   statistical sense).
 * Finite support kernels are easier to deal with in real-life because
   they can be realized (FIR) (reminder: time-limited <=> non-bandlimited)
 * Using the L2 norm is convenient because we can reason about best
   approximations in the least-squares sense and solve the projection
   problem with linear algebra via the standard L2 inner product (a
   minimal sketch follows this list).
 * Shift-invariance is even nicer since it enables /efficient/ signal
   processing.
 * Using sparser norms like the L1 norm enables sparse sampling and the
   whole field of compressed sensing. But it comes at a price: we have
   to use iterative projections to get there.
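
Here is a minimal numerical sketch of the projection view (my own 
illustration, with arbitrary grid sizes and test signal; it is not taken 
from the cited papers): approximate a signal on a fine grid by its 
least-squares projection onto shifted box functions, and check the 
"consistent sampling" property, i.e. that re-measuring the approximation 
reproduces the measurements of the original.

    # Sampling as least-squares projection onto a shift-invariant (box) basis.
    import numpy as np

    fs_fine = 1000                  # fine grid standing in for continuous time
    T = 0.05                        # period of the coarse sampling grid
    t = np.arange(0, 1, 1 / fs_fine)
    x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 11 * t)

    # Basis matrix: column k is the box function supported on [k*T, (k+1)*T)
    K = int(1 / T)
    B = np.zeros((t.size, K))
    for k in range(K):
        B[(t >= k * T) & (t < (k + 1) * T), k] = 1.0

    # Least-squares projection: coefficients c minimize ||B @ c - x||_2
    c, *_ = np.linalg.lstsq(B, x, rcond=None)
    x_hat = B @ c                   # best approximation in the subspace

    # Consistency: measuring x_hat gives the same numbers as measuring x
    print(np.allclose(B.T @ x, B.T @ x_hat))   # True

Note there is no bandlimitedness anywhere in this: the subspace is defined 
by the box kernel, and the reconstruction is simply the best one available 
in that subspace.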

All of this is beyond your original purpose, but from a pedagogical 
viewpoint, I wish these 2 articles were systematically cited in a 
"Further Reading" section at the end of any explanation regarding the 
sampling theorem(s).


At least the Wikipedia page cites the first article and has a section 
about non-uniform and sub-Nyquist sampling, but it's easy to miss the big 
picture for a newcomer.


Here's a condensed presentation by Michael Unser for those who would 
like to have a quick historical overview:

http://bigwww.epfl.ch/tutorials/unser0906.pdf


On 27/08/17 08:20, Sampo Syreeni wrote:

On 2017-08-25, Nigel Redmon wrote:


http://www.earlevel.com/main/tag/sampling-theory-series/?order=asc


Personally I'd make it much simpler at the top. Just tell them 
sampling is what it is: taking an instantaneous value of a signal at 
regular intervals. Then tell them that is all it takes to reconstruct 
the waveform under the assumption of bandlimitation -- a high-falutin 
term for "doesn't change too fast between your samples".


Even a simpleton can grasp that idea.

Then if somebody wants to go into the nitty-gritty of it, start 
talking about shift-invariant spaces, eigenfunctions, harmonic 
analysis, and the rest of the cool stuff.




Re: [music-dsp] Sampling theory "best" explanation

2017-08-27 Thread Nigel Redmon
> Nigel, i would be careful with superlative claims regarding one's own work.  
> it is too Trumpian in nature to be taken at face value. but i appreciate this 
> (and your other) work.
> 
Yeah, I just couldn’t think of anything that wasn’t “sampling theory—again”. As 
I noted elsewhere, it’s more like that Tenacious D song, “Tribute”—not the 
greatest song ever, but a song about the greatest song ever.

Anyway, the intent was not that everyone would think it was the best 
explanation they’ve ever heard, but that for some people, it will be the best 
explanation they’ve ever heard (that’s why “you’ve ever heard”, and not just 
the more compact claim of “the best explanation”). I never intended a title 
like that for the eventual youtube video, but I figured my blog could withstand 
it till I think of a better title to replace it, if I do.

;-)

> On Aug 27, 2017, at 9:01 AM, robert bristow-johnson 
> <r...@audioimagination.com> wrote:
> 
>  
> in my opinion, this old version of the Wikipedia article on it was 
> mathematically the most concise:
> 
> https://en.wikipedia.org/w/index.php?title=Nyquist%E2%80%93Shannon_sampling_theorem&oldid=234842277#Mathematical_basis_for_the_theorem
>  
> 
> then someone named BobK started the process of fucking it up and now it is 
> unrecognizable.  it *does* require the leap of faith that the dirac delta 
> function is a "function" (without worrying much about the "delta as 
> distribution" stuff).  if you wanna be mathematically correct and not make a 
> reference to the naked dirac or dirac comb, then the Poisson Summation 
> Formula is probably the way to go, but it is not as direct conceptually as 
> multiplying by the dirac comb as the sampling function. 
> https://en.wikipedia.org/wiki/Poisson_summation_formula 
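
For reference, the standard statement (in its DTFT form, which is the one 
relevant here) connects the samples x(nT) directly to a periodized 
spectrum, with no Dirac comb in sight:

    \sum_{n=-\infty}^{\infty} x(nT)\, e^{-j \Omega n T}
        \;=\; \frac{1}{T} \sum_{k=-\infty}^{\infty}
              X\!\left(\Omega - \frac{2\pi k}{T}\right)

where X(\Omega) is the continuous-time Fourier transform of x(t). The 
left-hand side is the DTFT of the sample sequence; the right-hand side is 
the baseband spectrum plus all its shifted images, which is the same 
conclusion the Dirac-comb derivation reaches.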
> 
> Nigel, i would be careful with superlative claims regarding one's own work.  
> it is too Trumpian in nature to be taken at face value. but i appreciate this 
> (and your other) work.
> 
> L8r,
> 
> r b-j
> 
> 
> 
>  Original Message 
> Subject: Re: [music-dsp] Sampling theory "best" explanation
> From: "Sampo Syreeni" <de...@iki.fi>
> Date: Sun, August 27, 2017 2:20 am
> To: "A discussion list for music-related DSP" <music-dsp@music.columbia.edu>
> --
> 
> > On 2017-08-25, Nigel Redmon wrote:
> >
> >> http://www.earlevel.com/main/tag/sampling-theory-series/?order=asc
> >
> > Personally I'd make it much simpler at the top. Just tell them sampling
> > is what it is: taking an instantaneous value of a signal at regular
> > intervals. Then tell them that is all it takes to reconstruct the
> > waveform under the assumption of bandlimitation -- a high-falutin term
> > for "doesn't change too fast between your samples".
> >
> > Even a simpleton can grasp that idea.
> >
> > Then if somebody wants to go into the nitty-gritty of it, start talking
> > about shift-invariant spaces, eigenfunctions, harmonic analysis, and
> > the rest of the cool stuff.
> > --
> > Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
> > +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
> >
> >
> 
> 
> --
> 
>  
> 
> r b-j  r...@audioimagination.com
> 
>  
> 
> "Imagination is more important than knowledge."
> 


Re: [music-dsp] Sampling theory "best" explanation

2017-08-27 Thread Nigel Redmon
Sampo, the purpose was to convince people that samples are impulses, and why 
that means the spectrum represented by a series of samples is the intended 
spectrum plus aliased images, forever, in the simplest, most intuitive way I 
could think of.
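
A few lines of numpy make that point concrete (my own sketch, not from the 
article; the rates and sizes are arbitrary choices): place the samples of a 
10 Hz tone, taken at 100 Hz, as impulses on a 4x denser grid, and the 
spectrum shows the tone plus images around every multiple of the original 
sample rate.

    # Samples as impulses: the represented spectrum is the tone plus its images.
    import numpy as np

    fs, f0, N = 100, 10, 400           # sample rate, tone frequency, sample count
    n = np.arange(N)
    x = np.sin(2 * np.pi * f0 * n / fs)

    L = 4                              # view the impulses on a 4x denser grid
    x_imp = np.zeros(N * L)
    x_imp[::L] = x                     # impulses at the original sample instants

    X = np.abs(np.fft.rfft(x_imp))
    freqs = np.fft.rfftfreq(N * L, d=1 / (fs * L))
    print(np.sort(freqs[np.argsort(X)[-4:]]))   # [ 10.  90. 110. 190.] Hz

The four equal-strength lines are the intended 10 Hz tone and its images at 
100 +/- 10 Hz and 200 - 10 Hz; widen the viewing grid and more images 
appear, forever.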

That’s why I put those points up front, before any background or explanation. 
The purpose was not to explain sampling to newbies or to satisfy the 
rigor-inclined who already know sampling theory.

> On Aug 26, 2017, at 11:20 PM, Sampo Syreeni  wrote:
> 
> On 2017-08-25, Nigel Redmon wrote:
> 
>> http://www.earlevel.com/main/tag/sampling-theory-series/?order=asc
> 
> Personally I'd make it much simpler at the top. Just tell them sampling is 
> what it is: taking an instantaneous value of a signal at regular intervals. 
> Then tell them that is all it takes to reconstruct the waveform under the 
> assumption of bandlimitation -- a high-falutin term for "doesn't change too 
> fast between your samples".
> 
> Even a simpleton can grasp that idea.
> 
> Then if somebody wants to go into the nitty-gritty of it, start talking about 
> shift-invariant spaces, eigenfunctions, harmonic analysis, and the rest of 
> the cool stuff.
> -- 
> Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
> +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
> 


Re: [music-dsp] Sampling theory "best" explanation

2017-08-27 Thread Nigel Redmon
Well, it’s a DSP blog. The intended audience is whoever reads it; I’m not 
judgmental. So, the question is probably more like “who can benefit from it”. 
At the novice end, I’d say they can probably benefit at least from the 
revelation that it comes from solving issues in analog communication, and 
subsequently figuring out the math of it. And if they don’t yet grok digital, 
but have a background (modular synthesist, electrical engineer) that gives them 
an intuitive grasp of amplitude modulation, I think they will benefit big over 
the typical classroom approach.

At the other end, there are certainly DSP experts who do not understand that 
samples represent impulses and the ramifications to the spectrum. This is no 
knock on them, there are good circuit designers who don’t know how generators 
work, or capable mechanics who don’t know how and why carburetors work. You 
don’t have to simultaneously know everything to be successful. But I think this 
lesson is an important one and that’s why I put it out there. For instance, 
sample rate conversion is a black art for many—they do the steps, but in cases 
that are a little out of the ordinary, they need to ask what to do. I think if 
you understand the points I made, SRC becomes incredibly obvious (particularly 
at integer ratios). Just an example.
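
To make the integer-ratio case concrete, here is a sketch of 2x upsampling 
done exactly as the impulse picture suggests (my own toy code; the 
windowed-sinc filter is an illustrative choice, not a production design): 
insert zeros, which creates the images, then lowpass at the old Nyquist to 
remove them.

    # 2x SRC the "obvious" way: zero-stuff (images appear), lowpass (images go).
    import numpy as np

    def upsample2(x, ntaps=63):
        y = np.zeros(2 * x.size)
        y[::2] = x                      # samples as impulses at the new rate
        n = np.arange(ntaps) - (ntaps - 1) // 2
        h = np.sinc(n / 2) * np.hamming(ntaps)  # gain-2 lowpass at the old Nyquist
        return np.convolve(y, h, mode="same")   # removes images, fills the gaps

    fs, f0 = 100.0, 10.0
    t = np.arange(400) / fs
    x = np.sin(2 * np.pi * f0 * t)
    y = upsample2(x)

    # The new in-between samples land on the underlying sinusoid:
    mid = np.sin(2 * np.pi * f0 * (t + 0.5 / fs))
    print(np.allclose(y[1::2][30:-30], mid[30:-30], atol=1e-2))   # True

The original samples pass through untouched, since h is zero at every 
nonzero even lag (a Nyquist filter); that is the "obvious" part: at integer 
ratios the old samples are already correct samples of the new-rate signal.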

I’m glad it was of some help to you, thanks for saying.

> On Aug 26, 2017, at 9:07 PM, Bernie Maier  wrote:
> 
>> Please check out my new series on sampling theory, and feel free to comment
>> here or there. The goal was to be brief, but thorough, and avoid abstract
>> mathematical explanations. In other words, accurate enough that you can
>> deduce correct calculations from it, but intuitive enough for the math-shy.
>> 
>> http://www.earlevel.com/main/tag/sampling-theory-series/?order=asc
>> 
> 
> Echoing the comments so far, thanks from me also for this and in particular 
> taking a new, or at least not commonly adopted, approach to explaining this.
> 
> That said, I felt unclear about who your intended audience is. I'm on this 
> list not out of any real DSP expertise, but more out of an interest in music, 
> audio software and (a long time ago) some mathematical background. But your 
> very opening section in part one appears to me to require quite a bit of 
> assumed background knowledge. The reader is expected to already know what an 
> impulse is, then a bandlimited impulse and so on.
> 
> Maybe your intended audience is DSP practitioners needing to solidify their 
> theoretical background. If so, perhaps you could be more clear about that in 
> the prologue. If your intention is, like I at first assumed, to make this a 
> thorough introduction to those with no DSP background then I suggest you may 
> need to spend some more time in the introduction defining terms at the very 
> least.
> 
> But even with my limited background theory, I did appreciate this 
> perspective, and it does correct some mistaken perceptions I had about 
> sampling theory.
> 


Re: [music-dsp] Sampling theory "best" explanation

2017-08-27 Thread robert . bocquier

On 2017-08-26 03:21, Nigel Redmon wrote:


http://www.earlevel.com/main/tag/sampling-theory-series/?order=asc


Hi Nigel,

For me, the best sampling theory explanation I ever saw is probably also 
one of the oldest (1980!).


This explanation can be found in the second chapter of the 2920 handbook 
manual.

(AFAIK, the 2920 is the first Intel DSP chip.)

You can find it here 
https://archive.org/download/bitsavers_intel29201ProcessorDesignHandbook_5762322/1980_2920_Analog_Signal_Processor_Design_Handbook.pdf

(starting from page 16)

This also covers jitter noise and reconstruction filter details.
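
On the jitter point, the usual back-of-envelope figure (a standard result, 
not something taken from the handbook itself) is that sampling a full-scale 
sinusoid at frequency f with rms clock jitter \sigma_j limits the achievable 
SNR to about

    \mathrm{SNR} \approx -20 \log_{10}\!\left( 2\pi f \, \sigma_j \right) \ \mathrm{dB}

so, for example, a 10 kHz tone with 1 ns rms jitter caps out near 84 dB.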

Re: [music-dsp] Sampling theory "best" explanation

2017-08-27 Thread Sampo Syreeni

On 2017-08-25, Nigel Redmon wrote:


http://www.earlevel.com/main/tag/sampling-theory-series/?order=asc


Personally I'd make it much simpler at the top. Just tell them sampling 
is what it is: taking an instantaneous value of a signal at regular 
intervals. Then tell them that is all it takes to reconstruct the 
waveform under the assumption of bandlimitation -- a high-falutin term 
for "doesn't change too fast between your samples".


Even a simpleton can grasp that idea.

Then if somebody wants to go into the nitty-gritty of it, start talking 
about shift-invariant spaces, eigenfunctions, harmonic analysis, and 
the rest of the cool stuff.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2



Re: [music-dsp] Sampling theory "best" explanation

2017-08-26 Thread Bernie Maier

Please check out my new series on sampling theory, and feel free to comment
here or there. The goal was to be brief, but thorough, and avoid abstract
mathematical explanations. In other words, accurate enough that you can
deduce correct calculations from it, but intuitive enough for the math-shy.

http://www.earlevel.com/main/tag/sampling-theory-series/?order=asc



Echoing the comments so far, thanks from me also for this and in particular 
taking a new, or at least not commonly adopted, approach to explaining this.


That said, I felt unclear about who your intended audience is. I'm on this 
list not out of any real DSP expertise, but more out of an interest in music, 
audio software and (a long time ago) some mathematical background. But your 
very opening section in part one appears to me to require quite a bit of 
assumed background knowledge. The reader is expected to already know what an 
impulse is, then a bandlimited impulse and so on.


Maybe your intended audience is DSP practitioners needing to solidify their 
theoretical background. If so, perhaps you could be more clear about that in 
the prologue. If your intention is, like I at first assumed, to make this a 
thorough introduction to those with no DSP background then I suggest you may 
need to spend some more time in the introduction defining terms at the very 
least.


But even with my limited background theory, I did appreciate this perspective, 
and it does correct some mistaken perceptions I had about sampling theory. 



Re: [music-dsp] Sampling theory "best" explanation

2017-08-26 Thread psy rabbit
Thank you very much!

2017-08-26 4:21 GMT+03:00 Nigel Redmon :

> Well, it’s quiet here, why not…
>
> Please check out my new series on sampling theory, and feel free to
> comment here or there. The goal was to be brief, but thorough, and
> avoid abstract mathematical explanations. In other words, accurate enough
> that you can deduce correct calculations from it, but intuitive enough for
> the math-shy.
>
> http://www.earlevel.com/main/tag/sampling-theory-series/?order=asc
>
> I’m not trying to be presumptuous with the series title, “the best
> explanation you’ve ever heard”, but I think it’s unique in that
> it separates sampling origins from the digital aspects, making the
> mathematical basis more obvious. I’ve had several arguments over the years
> about what lies between samples in the digital domain, an epic argument
> about why and how zero-stuffing works in sample rate conversion here more
> than a decade ago, etc. I think if people understand exactly what sampling
> means, and what PCM means, it would be a benefit. And, basically, I
> couldn’t think of a way to title it that didn’t sound like “yet another
> introduction to digital sampling”.
>
>

Re: [music-dsp] Sampling theory "best" explanation

2017-08-26 Thread Alan Wolfe
This is neat, thanks for sharing Nigel

On Aug 25, 2017 6:22 PM, "Nigel Redmon"  wrote:

> Well, it’s quiet here, why not…
>
> Please check out my new series on sampling theory, and feel free to
> comment here or there. The goal was to be brief, but thorough, and
> avoid abstract mathematical explanations. In other words, accurate enough
> that you can deduce correct calculations from it, but intuitive enough for
> the math-shy.
>
> http://www.earlevel.com/main/tag/sampling-theory-series/?order=asc
>
> I’m not trying to be presumptuous with the series title, “the best
> explanation you’ve ever heard”, but I think it’s unique in that
> it separates sampling origins from the digital aspects, making the
> mathematical basis more obvious. I’ve had several arguments over the years
> about what lies between samples in the digital domain, an epic argument
> about why and how zero-stuffing works in sample rate conversion here more
> than a decade ago, etc. I think if people understand exactly what sampling
> means, and what PCM means, it would be a benefit. And, basically, I
> couldn’t think of a way to title it that didn’t sound like “yet another
> introduction to digital sampling”.
>
>