Re: [music-dsp] Nyquist-Shannon sampling theorem

2014-03-28 Thread Emanuel Landeholm
tl;dr version: The justification for DSP (equi-distant samples) is the
Whittaker-Shannon interpolation formula, which follows from the Poisson
summation formula plus some hand-waving about distributions (dirac delta
theory). Am I right?



On Fri, Mar 28, 2014 at 4:50 AM, Ethan Duni ethan.d...@gmail.com wrote:

 Hi Robert-

  i dunno what non-standard analysis you mean.

 I'm referring to the stuff based on hyperreal numbers:

 http://en.wikipedia.org/wiki/Hyperreal_number

 These are an extension of the extended real numbers, where each hyperreal
 number has a standard part (which is an extended real) and an
  infinitesimal part (which corresponds to a convergence rate). The basic
 idea is that each hyperreal number represents an equivalence class of
 functions which converge (in the extended reals, so converging to
 infinity is allowed) to the same limit at the same rate. The limit is given
 by the standard part of the number, and the convergence rate by the
  infinitesimal part. So you can make sense of statements like 0/0 or
  infinity - infinity in this context, by comparing the infinitesimal parts.
 I.e., the usual epsilon-delta limit approach from standard analysis is
 embedded into the arithmetic of the hyperreals. So using this approach you
 can rigorously do the kinds of sloppy algebraic manipulations of dx
 type terms that we often see in introductory calculus classes, for one
 example.

  the only truly rigorous usage of the Dirac delta is to keep it clothed
 with a surrounding integral.

 That's true, but a Dirac delta in the context of non-standard analysis
 isn't naked - it comes clothed with an associated limiting process given
  by the infinitesimal part. I.e., consider a sequence of functions that
 converges to a Dirac delta, as is used in the standard approach (there's
 the boxcar example you've already given, or you can use a two-sided
 exponential decay, or a Gaussian distribution with variance shrinking to
 zero, or any number of other things). For any such sequence, there is an
 associated hyperreal Dirac delta, which expresses all of the relevant
 analytic properties of that class of sequences - the fact that it tends to
 zero everywhere except the origin and blows up there, and also the rates at
 which each point converges. Using that, we should be able to do the usual
 non-rigorous algebraic manipulations used in undergrad engineering
 proofs, but make them rigorous (with a bit of care - you have to work out
 what effects the non-standard versions of various operations have, take the
 standard part at appropriate places to get back to the final answer,
 etc.).
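The "clothed delta" idea above is easy to check numerically; a small sketch (Python with numpy assumed; the Gaussian widths and test function are arbitrary choices, not from the thread): pairing a shrinking-variance Gaussian with a test function recovers the value at the origin, and the error tracks the width parameter, which is roughly the rate information the infinitesimal part keeps.

```python
import numpy as np

def delta_eps(t, eps):
    # nascent delta: unit-area Gaussian of standard deviation eps
    return np.exp(-t**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

def act(phi, eps):
    # numerically evaluate the integral of phi(t) * delta_eps(t) dt
    t = np.linspace(-10.0, 10.0, 200001)
    dt = t[1] - t[0]
    return np.sum(phi(t) * delta_eps(t, eps)) * dt

# as eps -> 0 the pairing tends to phi(0) = cos(0) = 1, and the error
# shrinks with eps -- the convergence-rate data a naked delta discards
errs = [abs(act(np.cos, eps) - 1.0) for eps in (0.5, 0.1, 0.02)]
```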

 Anyway the whole thing is a bit of a curiosity. It's generally easier to
 just do the proofs the standard way if you're really interested, and just
 use the regular sloppy approach if you aren't. But still kind of neat I
 think, that the fake way can actually be made rigorous by embedding the
 relevant analytic framework into an extended number system.

 E



 On Thu, Mar 27, 2014 at 7:17 PM, robert bristow-johnson 
 r...@audioimagination.com wrote:

  On 3/27/14 5:27 PM, Ethan Duni wrote:
 
  it is, at least, if you accept the EE notion of the Dirac delta function
 
  and not worry so much about it not really being a function, which is
  literally what the math folks tell us.
 
  I may be misremembering, but can't non-standard analysis be used to make
  that whole Dirac delta function approach rigorous?
 
 
  i dunno what non-standard analysis you mean.  the only truly rigorous
  usage of the Dirac delta is to keep it clothed with a surrounding
 integral.
   so naked Dirac deltas are a no-no.  then we can't really have a notion of
   a Dirac comb function either.
 
 
 I know that it works for
  the whole algebraic manipulation of delta-x terms that we also like to do
  in engineering classes, intuitively seems like we could play the same trick
  with Dirac delta's and associated stuff. But I don't recall whether it
  actually works out entirely... although Wikipedia suggests that maybe it
  does (http://en.wikipedia.org/wiki/Dirac_delta_function#Infinitesimal_delta_functions).
 
  Not that it's worth the trouble to really work out - we already know
 what
  the correct answers are from measure theory/distributions - but it's
 nice
  to keep in mind that these pedantic math complaints are actually kind of
  baseless, at least if some care is taken to adhere to the rules of
  non-standard analysis and so avoid various pitfalls.
 
 
  i just treat the Dirac delta in time as if it has a Planck Time (10^(-43)
  second) width.  then it's a true function and it still has, to within an
   immeasurable degree of accuracy, the same properties that i want.
 
  L8r,
 
 
  --
 
  r b-j  r...@audioimagination.com
 
  Imagination is more important than knowledge.
 
 
 
 
   --
   dupswapdrop -- the music-dsp mailing list and website:
   subscription info, FAQ, source code archive, list archive, book reviews,
   dsp links
   http://music.columbia.edu/cmc/music-dsp
   http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Nyquist-Shannon sampling theorem

2014-03-28 Thread Theo Verelst

robert bristow-johnson wrote:

...
and in my opinion, a very small amount of hand-waving regarding the
Dirac delta (to get us to the same understanding one gets at the
sophomore or junior level EE) is *much* *much* easier to gain
understanding than farting around with the Dirac delta as a
distribution.  i.e. even though the mathematicians say it ain't true,
there *does* exist a function that is zero almost everywhere, yet the
integral is 1.  if you can get past that, the EE treatment (which i
think some physicists also use) is much much better.
...


Ahem, uhm, like, Dirac was a physicist (gmbl grmbl in mathematical space 
like functional integrals, grmbl).


It's not needed to be difficult about sampling a time signal. If there's 
a signal which is described as a function S of t (time): S(t), you fill 
in the time values in an equidistant way, and out come the samples. 
Usually it's the boundary conditions in DEs that are the hard part, and in 
other cases, the closing definitions. Sometimes (like complex plane 
analysis and Fourier proofs) a corollary can be proven decently, but 
outside of the arm-waving math levels. And everybody knows if you want 
smart math you go to EE or Physics.
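That description really is all there is to uniform sampling; a minimal sketch (Python with numpy assumed; the sample rate and test tone are made-up values):

```python
import numpy as np

fs = 8000.0            # assumed sample rate, Hz
T = 1.0 / fs
f0 = 440.0             # assumed test tone, Hz

def S(t):
    # the continuous-time signal, as a function of t
    return np.sin(2 * np.pi * f0 * t)

# "fill in the time values in an equidistant way, and out come the samples"
n = np.arange(16)
samples = S(n * T)
```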


T.



Re: [music-dsp] Nyquist-Shannon sampling theorem

2014-03-28 Thread Emanuel Landeholm
rb-j, you wrote


again, all you really need is

      +inf                      +inf
  T   SUM{ delta(t-nT) }   =    SUM{ e^(i 2 pi k/T t) }
     n=-inf                    k=-inf


Precisely, and one way to get there is by starting from the Poisson
Summation Formula and taking f(n) = T dirac(t-nT) (thus the distributional
hand waving requirement). This is what I meant by PSF + hand waving. I
think we're on the same page, basically.
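The Poisson summation formula itself can be checked numerically with no distribution theory at all, by using a rapidly decaying function in place of the comb; a sketch (Python with numpy assumed; T is an arbitrary spacing) using the self-dual Gaussian f(t) = exp(-pi t^2), whose Fourier transform is itself:

```python
import numpy as np

# f(t) = exp(-pi t^2) is its own Fourier transform, so Poisson summation
# reads:  T * sum_n f(n*T)  ==  sum_k f(k/T)
T = 0.7                          # arbitrary sample spacing
k = np.arange(-50, 51)           # both sums converge extremely fast
lhs = T * np.sum(np.exp(-np.pi * (k * T) ** 2))
rhs = np.sum(np.exp(-np.pi * (k / T) ** 2))
```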

cheers,
E


On Fri, Mar 28, 2014 at 1:32 PM, robert bristow-johnson 
r...@audioimagination.com wrote:

 On 3/28/14 4:25 AM, Emanuel Landeholm wrote:

 tl;dr version: The justification for DSP (equi-distant samples) is the
 Whittaker-Shannon interpolation formula, which follows from the Poisson
 summation formula plus some hand-waving about distributions (dirac delta
 theory). Am I right?


 i would say the word "plus" should be replaced by "or".

 and in my opinion, a very small amount of hand-waving regarding the Dirac
 delta (to get us to the same understanding one gets at the sophomore or
 junior level EE) is *much* *much* easier to gain understanding than farting
 around with the Dirac delta as a distribution.  i.e. even though the
 mathematicians say it ain't true, there *does* exist a function that is
 zero almost everywhere, yet the integral is 1.  if you can get past that,
 the EE treatment (which i think some physicists also use) is much much
 better.

 again, all you really need is



       +inf                      +inf
   T   SUM{ delta(t-nT) }   =    SUM{ e^(i 2 pi k/T t) }
      n=-inf                    k=-inf


 and the existing shifting theorems of the Fourier Transform.  but the
 mathematicians object to the identity above because they say the left side
 of the equation is meaningless without surrounding it with an integral.
  mathematicians do not like naked Dirac delta functions (they're not
 functions!), but EEs have no problem with them.

 from that EE POV, i still believe that this treatment:

  https://en.wikipedia.org/w/index.php?title=Nyquist%E2%80%93Shannon_sampling_theorem&oldid=217945915

 is the simplest and most direct of them all.  doesn't even require
 convolution in the frequency domain like most textbooks do.



 --

 r b-j  r...@audioimagination.com

 Imagination is more important than knowledge.






Re: [music-dsp] Nyquist-Shannon sampling theorem

2014-03-28 Thread Sampo Syreeni

On 2014-03-27, robert bristow-johnson wrote:

the *sampling* function is periodic (that's why we call it uniform 
sampling), but the function being sampled, x(t), is just any 
reasonably well-behaved function of t.


Ah, yes, that much is true. But in fact, if you look a bit further, 
actually the uniformity isn't a requirement. It only makes the proof 
easier and the translational symmetry that goes with it was the original 
simplification which enabled the theory to be discovered. In reality, we 
now have a proof somewhere in the compressed sensing literature which 
says that the sampling instants are almost completely irrelevant as far 
as the invertibility of the representation goes. All that matters is the 
bandlimit. And in fact even the characterization of the Nyquist 
frequency usually given is wrong: you don't have to sample at twice the 
highest frequency present, but in fact only at twice the total support 
width of the spectrum, even if the support is pretty 
much arbitrarily chopped up over many frequencies. And if you don't know 
where the support is, then another theorem says twice the critical 
frequency again suffices.


All that stuff follows from the weird and wonderful properties of 
complex analysis. When you impose a bandlimit, you at the same time make 
your signal analytic. That is a stupendously strong condition slipped in 
through the back door, and makes the class of bandlimited signals 
exceedingly rigid. In the strict sense, they have pretty much no 
genuinely local properties, but instead their information content is 
spread out over all of them, in both time and in frequency. As a result, 
not only are the signals so rigid that the dimensionality of them as a 
function space drops from a double continuum to a discrete one, it does 
so in a manner which lets you reconstitute the signal from pretty much 
any sufficient number of samples either in the frequency or in the 
temporal domain, no matter where they lie. Taking a million samples 
within some second now, in pretty much any arrangement, theoretically 
lets you perfectly reconstruct a bandlimited signal into the end of 
time. (More exactly, as long as the rate of innovation of the signal is 
lower than that of the information gathered by the sampling process, perfect 
reconstructibility is guaranteed. Though the interpolation formulae 
which result can be pretty horrendous.)


That's a slightly more unnerving way to put Ethan's earlier point: 
technically you can't fully satisfy the bandlimiting condition. His 
rationale was a bit different and didn't sound too bad, but this one's 
really dehumanizing: real bandlimitation implies perfect 
reconstructibility of events which have yet to happen, so that no finite 
delay process can even theoretically produce a truly bandlimited signal, 
by any process at all. But of course as Ethan explained, in the square 
norm sense you can easily approach that situation, to the degree that 
you don't have to worry about it in practice, just by intelligently 
cutting off the tails of the sinc interpolation kernel in time.


Furthermore, it generalizes to settings where periodicity isn't even an 
option.


oh, it oughta be an *option*.  we know how to take the Fourier transform 
of a sinusoid.  (it's not square integrable, but we hand-wave our way 
through it anyway.)


Those situations have to do with abstract harmonic analysis over groups 
other than the real numbers. The addition operator there doesn't have to 
have an interpretation as a shift like it has with the real line. Thus, 
periodicity as a concept doesn't make much sense there either.


back, before i was banned from editing wikipedia (now i just edit it 
anonymously), [...]


Jesus, your case went as far as the arbitration committee. What the hell 
did you do? Given gun control and the like in the record, was it the age 
old mistake of going full libertarian? If you don't mind my asking? ;)


[...] i spelled out the mathematical basis for the sampling and 
reconstruction theorem in this version of the page:


https://en.wikipedia.org/w/index.php?title=Nyquist%E2%80%93Shannon_sampling_theorem&oldid=70176128

since then two particular editors (Dicklyon and BobK) have really made 
the mathematical part less informative and useful. they just refer you 
to the Poisson summation formula as the mathematical basis.


Not good. That article is in dire need of TLC. While you can logically 
make it about Poisson summation, historically I seem to remember at 
least Nyquist's signalling work was independent of it. Plus the text as 
it stands really has nigh zero pedagogical value, compared to what you'd 
expect to find e.g. in Britannica.


the only thing lacking in this proof is the same thing lacking in most 
electrical engineering texts' treatment of the Dirac delta function (or 
the Dirac comb, which is the sampling function). to be strictly 
legitimate, i'm not s'pose to have naked Dirac impulses in a 
mathematical expression.


Naked 

Re: [music-dsp] Nyquist–Shannon sampling theorem

2014-03-27 Thread Nigel Redmon
On Mar 26, 2014, at 8:42 PM, Doug Houghton doug_hough...@sympatico.ca wrote:
 I'm guessing this somehow scratches at the surface of what I've read about no 
 signal being properly band limited unless it's infinite.

Sure, in the same sense, we don’t properly sample to digital or properly 
convert back to analog anything—if “properly” means perfectly. But if it means 
“adequately,” then we’re good.

Perfectly is a brick wall lowpass filter. Adequately is a steep filter that 
gives us flat response out as far as we can hear, and results in aliasing of an 
amplitude that is below our ability to hear.

BTW, there’s always something to learn or think about it seems. Having dinner 
by myself tonight, I started thinking about those zeros between samples when 
up-sampling. What if the sampled signal had a significant DC offset to start 
with? What's the difference between inserting zeros, and inserting the DC 
offset instead? Well, I did figure it out, and confirmed it when I got home, 
but it was amusing to give some thought to something I hadn’t thought much 
about, considering I’ve been writing oversampling code for 25+ years...
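Not Nigel's answer, but the dinner puzzle is easy to poke at numerically; a sketch (Python with numpy assumed; the signal, offset, and 2x factor are made up): inserting the DC value instead of zeros preserves the average, and the difference between the two stuffed signals is a bias pattern whose spectrum lives only at DC and Nyquist.

```python
import numpy as np

L = 2                                       # upsampling factor
n = np.arange(16)
x = 1.0 + np.sin(2 * np.pi * 3 * n / 16)    # DC offset 1.0 plus a tone

up_zero = np.zeros(L * len(x))              # classic zero-stuffing
up_zero[::L] = x
up_dc = np.full(L * len(x), 1.0)            # stuff the DC value instead
up_dc[::L] = x

# zero-stuffing divides the average by L; DC-stuffing keeps it
mean_zero, mean_dc = up_zero.mean(), up_dc.mean()

# the two differ by a 0,1,0,1,... pattern times the offset, whose
# spectrum is a line at DC plus a line at Nyquist
diff_mag = np.abs(np.fft.rfft(up_dc - up_zero))
```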


Re: [music-dsp] Nyquist–Shannon sampling theorem

2014-03-27 Thread Sampo Syreeni

On 2014-03-27, Doug Houghton wrote:

I understand the basics, my question is in the constraints that might 
be imposed on the signal or functon as referenced by the theory.


The basic theory presupposes that the signal is square integrable and 
bandlimited. That's pretty much it. If you want to make it hard and 
nasty, you can go well beyond that, but for the most part it suffices 
that you can ordinarily Riemann integrate the square of your signal, you 
get a finite total energy figure that way, and that once you integrate 
the product of your signal with any fixed sinusoid above some fixed 
frequency cutoff, every such product integrates identically to zero.


When you have that, you can derive the sampling theorem. It says that 
taking an equidistant train of instantaneous values of your signal at 
any rate at or above twice the bandlimit mentioned above is enough fully 
reconstruct the original waveform. Point for point, exactly, no ifs, ors 
or buts. So the only real limitation is the upper bandlimit.



Is it understood to be repeating?


No, it doesn't have to be. Yes, there are four separate forms of Fourier 
analysis which are commonly used, and which have their own analogues of 
the sampling theorem. Or perhaps rather the sampling theorem itself is a 
reflection of the same Fourier stuff which all of those forms of 
analysis rely upon. Two of the forms are periodic in time, which is why 
you might be thrown off here. But the basic form under which the 
Shannon-Nyquist sampling theorem is proved is not one of them; it covers 
all of real square integrable functions, R to R.


I'm thinking the math must consider it this way, or rather the 
difference is abstracted since the signal is assumed to be band 
limited, which means infinite, which means you can create any random 
signal by injecting the required frequencies at the required amplitudes and 
phase from start to finish, even a 20k 2ms blip in the middle of 
endless silence.


If you inject something with time structure, the Fourier transform will 
decompose it as an integral of a continuum of separate frequencies. This 
is part of the deeper structure of Fourier analysis, and what in the 
quantum physics circles is called the uncertainty principle. What we in 
the DSP circles think of as the tradeoff between time and frequency 
structure, and operationalize via the idea that time domain convolution 
becomes a multiplication in the frequency one, and vice versa, is 
thought of in the physics circles as the duality between any two 
conjugate variables, lower-bounded by the uncertainty principle. What they 
call a physical law, us math freaks always called just a basic 
eigenproperty of any linear operator on R, lower bounded by the 
eigenfunctions of the class of linear, shift-invariant operators, those 
being the exponential class, containing complex sinusoids and in the 
proper limit Gaussians, impulses, infinite sinusoids, and all of their 
shifted linear combinations.
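The convolution-multiplication duality mentioned above is easy to verify in the discrete, computable setting; a sketch (Python with numpy assumed; the sequences are random stand-ins): circular convolution in time equals pointwise multiplication of DFTs.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)
h = rng.standard_normal(N)

# circular convolution computed directly from its definition
y_direct = np.array([np.sum(x * h[(k - np.arange(N)) % N]) for k in range(N)])

# ...and via the DFT: convolution in time <-> multiplication in frequency
y_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real
```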


That is a deep, and rich, and beautiful theory in harmonic analysis if 
you choose to go that way. At its most beautiful it exhibits itself in 
the class of tempered distributions, which you most easily get via 
metric completion of the intersection of real functions in square and 
absolute value norm, and then going to the natural topological dual of 
the function space which results. If you go there, you can suddenly do 
things like derivatives of a delta function, and bandlimitation on top 
of it, in your head. And whatnot.


But for the most part you don't want to go there, because there's no 
return and no end to what follows, and it hardly helps you with anything 
practicable. The better way for a digital bloke is to just assume a 
bandlimit, and to see what comes out of that. Apply the sampling theorem 
as it was proven, and then get acquainted with discrete time math as 
fully as one can.


For there lies salvation, and the app which pays your bills.
--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [music-dsp] Nyquist–Shannon sampling theorem

2014-03-27 Thread Ross Bencina

On 27/03/2014 3:23 PM, Doug Houghton wrote:

Is that making any sense? I'm struggling with the fine points.  I bet
this is obvious if you understand the math in the proof.


I'm following along, vaguely.

My take is that this conversation is not making enough sense to give you 
the certainty you seem to be looking for.


Your question seems to be very particular regarding specifics of the 
definitions used in a theorem, but you have not quoted the theorem or 
the definitions.


Most of the answers so far seem to be talking about interpretations and 
consequences of the theorem.


May I suggest: quote the version of the theorem that you're working from 
and the definitions of terms used that you're assuming, and we can go 
from there. It may also be helpful to see the proof that you are working 
from, perhaps someone can help unpack that.


Here's a version of the theorem that you may or may not be happy with:

SHANNON-NYQUIST SAMPLING THEOREM [1]

If a function x(t) contains no frequencies higher than B hertz,
it is completely determined by giving its ordinates at a series
of points spaced 1/(2B) seconds apart.

---


I'm loath to contribute my limited interpretation, but let me try (feel 
free to ignore or ideally, correct me):


x(t) is an infinite duration continuous-time function.

a frequency is defined to be an infinite duration complex 
sinusoid with a particular period.


The theorem is saying something about an infinite duration continuous 
time signal x(t), and expressing a constraint on that signal in terms of 
the signal's frequency components.


To be able to talk about the frequency components of x(t) we can use a 
continuous Fourier representation of the signal, i.e. the Fourier 
transform [2], say x'(w), a complex valued function, w is a (continuous) 
real-valued frequency parameter:


          +inf
x'(w) = integrate x(t)*e^(-2*pi*i*t*w) dt
          -inf

The Fourier transform can represent any continuous signal that is 
integrable and continuous (I deduce this from the invertibility of the 
Fourier transform [3]). One consequence of this is that any practical 
analog signal x(t) may be represented by its Fourier transform.


The theorem expresses a constraint on the frequencies for which the Fourier 
transform may be non-zero. Specifically, it requires that x'(w) = 0 for 
all w < -N and all w > N, where N is the Nyquist frequency.


Note specifically that we are dealing with the continuous Fourier 
transform, therefore there is no requirement for x(t) to be periodic or 
of finite temporal extent.
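This reading can be exercised numerically with the Whittaker-Shannon interpolation formula; a sketch (Python with numpy assumed; the signal and rates are made up): x(t) = sinc(0.8 t)^2 has spectral support |f| <= 0.8, so samples every T = 0.5 (rate 2 >= 2B = 1.6) determine it at any instant, including off-grid ones, up to a small truncation error from using finitely many samples.

```python
import numpy as np

def x(t):
    # sinc(0.8 t)^2: bandlimited with spectral support |f| <= 0.8
    # (np.sinc(u) = sin(pi u) / (pi u))
    return np.sinc(0.8 * t) ** 2

T = 0.5                        # sample period; rate 1/T = 2 >= 2B = 1.6
n = np.arange(-4000, 4001)     # a long but finite window of samples
t0 = 0.3                       # an instant between sample points

# Whittaker-Shannon:  x(t) = sum_n x(nT) * sinc((t - nT) / T)
recon = np.sum(x(n * T) * np.sinc((t0 - n * T) / T))
err = abs(recon - x(t0))
```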


The theorem also does not say anything about the time extent of the 
discrete time signal (it is assumed to be infinite too).


That's my take on it anyway.

Ross.

[1] 
http://www.princeton.edu/~achaney/tmve/wiki100k/docs/Nyquist%E2%80%93Shannon_sampling_theorem.html


[2] http://en.wikipedia.org/wiki/Fourier_transform

[3] http://en.wikipedia.org/wiki/Fourier_inversion_formula


Re: [music-dsp] Nyquist-Shannon sampling theorem

2014-03-27 Thread Ethan Duni
Hi Doug-

To address some of your general questions about Fourier analysis and
relationship to sampling theory:

Broadly speaking any reasonably well-behaved signal can be decomposed into
a sum of sinusoids (actually complex exponentials but don't worry about
that detail for now). There are several flavors of Fourier analysis
corresponding to different classes of signals. I.e., there are variants for
continuous-time signals and discrete-time signals, and also for periodic
signals and general signals. For the case of periodic signals you use
what's called the Fourier Series (there you add up harmonic components),
for general signals you use the Fourier Transform (this uses both harmonic
and inharmonic components). The key difference between discrete time and
continuous time from a Fourier analysis perspective (either series or
transform) is that continuous time signals can have arbitrarily high
frequencies, but discrete-time signals can only admit a finite bandwidth
(related to the sampling rate).

So in all cases of Fourier analysis, we're decomposing the signal into
sinusoids. As you have noted, sinusoids all extend off to +/- infinity. You
are correct to note that this corresponds to steady-state analysis, when
used in, for example, a circuit analysis context. One consequence of this is
that any perfectly bandlimited signal, like in the Sampling Theorem, also
has to extend to +/- infinity. The other way around is also true: any
signal that only lasts a finite length of time must contain frequencies all
the way up to infinity. So, strictly speaking, it is true that the
conditions of the Sampling Theorem cannot ever be truly fulfilled in
practice, since all practical signals are necessarily time-limited. There
is *always* theoretically some level of aliasing in practice because of the
time-limiting constraint.

But it turns out that this doesn't present much of a difficulty in
practice. To see why, consider another question you raised: if arbitrary
signals can be decomposed into sums of sinusoids that all repeat off to
infinity, how do we represent signals that are concentrated in a particular
time, or have frequency components that turn off midway through, etc.? It's
easy enough to prove that the Fourier transform is invertible - i.e., that
all of the information in the signal is contained in the sinusoidal
parameters - but where does all that time information go? It turns out that
it gets folded into the relationship between the amplitudes and phases of
the various sinusoids in a complicated way - the information is still there
and recoverable, but it's not usually easy to discern from looking at the
output of the Fourier transform. For a signal where a given frequency
turns on at some particular time, the transform is only going to give you
a single parameter for that frequency, corresponding to the average energy
over all time. The information about it turning on and off shows up at
sidelobes around the frequency in question. Likewise, the time signal
already contains all the information about the sinusoid parameters, folded
up in a way that's hard to see.
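Where the "turn-on" information goes can be seen directly; a sketch (Python with numpy assumed; the length and bin numbers are arbitrary): a sinusoid gated on halfway through keeps its main spectral line but grows sidelobes around it, and the timing lives in those sidelobes.

```python
import numpy as np

N = 256
n = np.arange(N)
tone = np.sin(2 * np.pi * 32 * n / N)     # sinusoid sitting exactly on bin 32
gated = tone * (n >= N // 2)              # same tone, switched on halfway

full_mag = np.abs(np.fft.rfft(tone))      # single clean line at bin 32
gate_mag = np.abs(np.fft.rfft(gated))     # same peak, plus sidelobes
```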

Why does that help us with the Sampling Theorem? Well, it turns out that
while perfectly bandlimited signals can't also be perfectly time-limited,
they *can* still have their energy concentrated in one place in time.
They don't ever go to zero and stay there, but they can die off, even quite
strongly. The pertinent example here is the sinc signal, which is the ideal
reconstruction filter used in the Sampling Theorem. This signal has energy
concentrated at time 0, and falls off as 1/n from there. So if we truncate
such a signal at some reasonable length, we can capture nearly all of its
energy. And then if we take the Fourier Transform of the truncated signal,
we will see that it is no longer perfectly bandlimited, but the energy in
the signal outside the desired bandwidth will be very low. We can get
further control over this by using a window instead of simply truncating
(or even fancier ideas), but you get the general idea. In engineering
terms, it's possible to build reconstruction filters with reasonable delay
and very good stopband rejection - 100dB and beyond, pretty much the useful
range of human hearing. In practice it is not the finite-time constraint on
stopband rejection that limits sampling performance, but rather other more
arcane circuits and systems considerations.
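A sketch of that truncate-vs-window tradeoff (Python with numpy assumed; the length, cutoff, and stopband edge are arbitrary choices): plain truncation of the ideal sinc leaves Gibbs ripple in the stopband, while windowing the same kernel buys tens of dB more rejection.

```python
import numpy as np

N = 101                       # filter length (odd -> linear phase)
fc = 0.25                     # cutoff, normalized so fs = 1
n = np.arange(N) - (N - 1) / 2
ideal = 2 * fc * np.sinc(2 * fc * n)   # truncated ideal-lowpass impulse response

def peak_stopband_db(h, f_stop=0.28, nfft=8192):
    # worst-case stopband magnitude, in dB, for frequencies >= f_stop
    H = np.abs(np.fft.rfft(h, nfft))
    f = np.arange(H.size) / nfft
    return 20 * np.log10(H[f >= f_stop].max())

rect_db = peak_stopband_db(ideal)                  # plain truncation
hamm_db = peak_stopband_db(ideal * np.hamming(N))  # Hamming-windowed
```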

E


On Wed, Mar 26, 2014 at 10:07 PM, Doug Houghton
doug_hough...@sympatico.cawrote:

 so is there a requirement for the signal to be periodic? or can any series
 of numbers be considered periodic if it is bandlimited, or infinite?
  Periodic is the best word I can come up with.


Re: [music-dsp] Nyquist-Shannon sampling theorem

2014-03-27 Thread Stefan Sullivan
 On Mar 26, 2014, at 10:07 PM, Doug Houghton doug_hough...@sympatico.ca
wrote:
  so is there a requirement for the signal to be periodic? or can any
series of numbers be considered periodic if it is bandlimited, or infinite?
 Periodic is the best word I can come up with.
  --

 Well, no--you can decompose any portion of a waveform that you want...I'm not
sure at this point if you're talking about the discrete Fourier Transform
or continuous, but I assume discrete in this context...but it's not that
generally useful to, say, do a single transform of an entire song. Sorry,
I'm not sure where you're going here...


Actually, yes there IS a requirement that it be periodic. Fourier theorem
says that any periodic sequence can be represented as a sum of sinusoids.
Sampling theory says that any band-limited _periodic_ signal can be
properly sampled at the Nyquist rate. The trick is that any finite-duration
signal can be thought of as one period of a periodic signal. This is part
of the reason you get infinite repetitions in the frequency domain after
you sample. ...sort of.

As for frequencies jumping in and out, I think you were on the right track
when you said that it's a Fourier theorem thing. Imagine you had a signal
with one sinusoid that slowly fades in and out for the duration of the
signal. Imagine that the the envelope of this sinusoid is the first half of
a sinusoid. The envelope can be described as a sinusoid whose period is
twice the signal duration. If you were to simply take these two stationary
sinusoids (the envelope and the audible tone) and multiply them you end up
with a spectrum that contains their sum and difference tones. In that way
it can already be thought of as a tone that (slowly) pops in and out, but
which is represented as a sum of two stationary sinusoids.
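That fading-tone decomposition can be worked at exact DFT bins so the lines come out clean; a sketch (Python with numpy assumed; the bin numbers are made up, unlike the half-sinusoid envelope described above): the product of a tone and a sinusoidal envelope shows energy only at the sum and difference frequencies.

```python
import numpy as np

N = 64
n = np.arange(N)
carrier = np.sin(2 * np.pi * 10 * n / N)   # the "audible tone", bin 10
envelope = np.sin(2 * np.pi * 3 * n / N)   # a slow sinusoidal envelope, bin 3

# sin(a)*sin(b) = (cos(a-b) - cos(a+b)) / 2: energy only at bins 7 and 13
mag = np.abs(np.fft.rfft(carrier * envelope))
```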

If you wanted to have the tone come in and out more quickly you could add
the first harmonic of a square wave (or several) to the envelope. For each
additional harmonic you add to the envelope you get an additional two
sinusoids in spectrum of the whole signal. You can keep adding harmonics up
to the Nyquist frequency. This means that your frequencies can pop in and
out very quickly, but only as fast as your sampling rate allows.

Rinse and repeat for additional e.g. harmonics of your audible tone,
additional tones, etc.

Note that you can construct any envelope you could imagine, including
apparently asymmetrical ones, or ones which are approximately zero for most of
their duration and pop in and out for only a small portion of it. That all comes
from Fourier theorem. The important part is that each of these envelopes
would be periodic in the duration of your sampled signal, as far as the
sampling theorem is concerned.

I hope that helps a bit
-Stefan


Re: [music-dsp] Nyquist-Shannon sampling theorem

2014-03-27 Thread Sampo Syreeni

On 2014-03-27, Stefan Sullivan wrote:


Actually, yes there IS a requirement that it be periodic.


No, there is not. And the Shannon-Nyquist theorem isn't typically proven 
under any such assumption. Furthermore, it generalizes to settings 
where periodicity isn't even an option.


Granted, it *is* possible to prove the sampling theorem starting with a 
square integrable, continuous, periodic function. That would in fact be 
the most classical treatment of the subject, starting with Fourier 
himself, and the way he utilized his series in the context of the heat 
equation. But if you try to treat any general, real signals that way, 
you'll have to pass the period to the limit in infinity, and that then 
makes some of the attendant calculus unwieldy.


Nowadays pretty much nobody bothers with those details, except perhaps 
to show the connection between the periodic in time, original series, 
and the continuous time, far easier and more generalizable transform.


(The difference between the four signal theoretically meaningful modern 
forms of Fourier analysis is that the Fourier transform (FT) is 
continuous both in time and in frequency, the original Fourier series 
(FS) is periodic in time and discrete in frequency, the discrete time 
Fourier transform (DTFT) is discrete in time and periodic in frequency, 
and the discrete Fourier transform (DFT) is discrete and periodic in 
both time and in frequency. Obviously we can only compute with the 
all-discrete thingy in there in DSP, that being DFT; the rest of the lot 
are relevant only if you have to analyze mixed mode systems, like 
delta-sigma-converters, radar pulse compression schemes and whatnot. All 
of the domains can be related to each other in a regular fashion (a 
lattice of linear homomorphisms), using certain intuitively sensible 
limit arguments, but only DTFT and FS admit a full isomorphism between 
them. However the topological properties of the function spaces FT and 
DFT work on, and in the case of FT in particular the measure theoretic 
properties, are rather different from the middle pair.)
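The "discrete and periodic in both domains" property of the DFT — the one all-discrete member of Sampo's four — can be checked directly. A small sketch (assuming NumPy; sizes are arbitrary) evaluating the DFT sum at bin k and at bin k + N:

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
x = rng.standard_normal(N)
X = np.fft.fft(x)                  # the DFT, computed by FFT

def dft_bin(x, k):
    # the DFT sum evaluated directly at (possibly out-of-range) bin k
    n = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * k * n / len(x)))

print(np.allclose(dft_bin(x, 3), X[3]))       # True
print(np.allclose(dft_bin(x, 3 + N), X[3]))   # True: X[k+N] == X[k], periodic in frequency
```

Periodicity in time is the mirror statement: the inverse sum treats x[n] as one period of an N-periodic sequence.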


Fourier theorem says that any periodic sequence can be represented as 
a sum of sinusoids.


Which Fourier theorem? There are tons and tons of different Fourier 
theorems in the modern mathematical literature, all of them subtly and 
sometimes not so subtly different. The original one had to do with 
decomposing (many but by no means all) periodic (dis?)continuous real 
functions into a certain kind of an (in?)finite series of sinusoids, 
under a certain kind of a convergence criterion. But then pretty much 
every part of that sentence has since been mauled, reinterpreted, 
twisted, übergeneralized, and thoroughly fucked over, and usually in 
more than one direction at the same time. That's why harmonic analysis 
is to this date a lively -- and exceedingly finicky -- subspecialty of 
the mathematical science.


One does not simply go into the Fourier domain.

Sampling theory says that any band-limited _periodic_ signal can be 
properly sampled at the Nyquist rate.


No. Absolutely not. It says any bandlimited signal can be fully 
represented by its equidistant samples, at rate twice that of the 
highest sinusoid present. That in fact is the essence and the full shock 
of the theorem as originally presented and proven: bandlimitation in 
fully continuous R-R function terms can be translated into discreteness 
in time. That's the very thing which makes DSP possible in the first 
place, so everybody here still ought to understand how revolutionary, 
shocking, counterintuitive and far-reaching that idea really is. What it 
says is that, with no further realistic, physical constraint at all, any 
and every continuous time signal, be it periodic or not, can be 
losslessly, linearly represented by its equidistant samples, and since 
that is so, processed by discrete time circuitry to any given accuracy, 
given simply the sampled representation.
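The reconstruction the theorem promises is the Whittaker-Shannon interpolation formula, x(t) = sum_n x[n] sinc((t - nT)/T). A numerical sketch (assuming NumPy; the frequencies and the evaluation instant are illustrative, and truncating the infinite sum leaves a small residual):

```python
import numpy as np

fs = 100.0                       # sample rate
T = 1.0 / fs
n = np.arange(-500, 500)         # many samples, so truncation error stays tiny
# a bandlimited signal: 13 Hz and 31 Hz components, both below fs/2 = 50 Hz
x_n = np.sin(2*np.pi*13.0*n*T) + 0.5*np.cos(2*np.pi*31.0*n*T)

def reconstruct(t):
    # np.sinc(u) = sin(pi*u)/(pi*u), exactly the interpolation kernel needed
    return float(np.sum(x_n * np.sinc((t - n * T) / T)))

t0 = 0.01234                     # an instant strictly between sample points
exact = np.sin(2*np.pi*13.0*t0) + 0.5*np.cos(2*np.pi*31.0*t0)
print(abs(reconstruct(t0) - exact) < 1e-2)   # True; residual is truncation error
```

The samples alone, plus the sinc kernel, recover the continuous-time value between the sampling instants.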


That you can go to such discreteness was no surprise with periodic 
signals. Not for a hundred years or so. The precise reason why we invoke 
the theorem, and laud its inventors by name as Shannon and Nyquist, *is* 
that the *continuous* time, FT-derived, *aperiodic* version used to be 
so counter-intuitive. Yet it went through, obviously carried shocking 
implications from the start, and now we're suddenly here, with all of 
DSP as just one casually minded such implication.


Never forget.

The trick is that any finite-duration signal can be thought of as one 
period of a periodic signal.


No, it can not. If you stretch a single cycle of cosine to eternity, it 
won't be square integrable. There's no way to handle even that basic 
case in a systematic fashion, either.


This is part of the reason you get infinite repetitions in the 
frequency domain after you sample. ...sort of.


That's the DTFT, which you can sorta relate to the FT, using modulation 
by a train of delta functionals. But in order to do that for real, you 

Re: [music-dsp] Nyquist-Shannon sampling theorem

2014-03-27 Thread Ethan Duni
Hi Doug-

Regarding this:

Terms like well behaved when applied to the function make me wonder
what
stipulations might be implied by the language that you'd have to be a formal
mathematician to interpret.  As an example, I don't even know what the
intrinsic properties of a function may be in this context.

It turns out to be mostly math details that don't really come up in
practice, as somebody (Sampo? Robert?) already mentioned. You have to avoid
stuff where the signal blows up to infinity, or has badly-behaved
discontinuities or things like that. But in practice, there are no
realistic signals that display these issues. The only part where practical
signals are problematic is that they are necessarily time-limited, and so
cannot be perfectly band-limited. So the Sampling Theorem conditions can't
be exactly fulfilled, and we have to live with some (hopefully extremely
small) aliasing as a result. But, again, this turns out to be a quite
minor issue compared to various of the other practical concerns that come
up in designing A/D/A converters.

And this one:

If it was just a bunch of random numbers that started
somewhere and stopped somewhere, I doubt anyone would be writing equations
that mean anything.  I'd guess we would turn to statistics at that point to
supply some context.

Fourier analysis also works on random signals. But usually in that case we
are less interested in the Fourier Transform of the random signals
directly, and look more at the Fourier Transform of their correlation
functions (this is called the power spectrum). That quantity is generally
more useful for your usual engineering stuff like filter design, system
analysis, etc. If you go to get a graduate degree in signal processing, the
core first-year courses are typically what's called statistical signal
processing, which as the name suggests covers signal processing issues in
the context of random signals. This is an important, interesting, and
worthwhile subject, but also maybe getting a bit afield from the Sampling
Theorem issues you are most immediately interested in?
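The correlation-function route has a tidy discrete analogue: on a finite block, the DFT of the circular autocorrelation equals the squared magnitude of the DFT of the signal. A toy Wiener-Khinchin check (assuming NumPy; block size and data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
x = rng.standard_normal(N)      # a block of a random signal

# circular autocorrelation, straight from the definition
r = np.array([np.dot(x, np.roll(x, -m)) for m in range(N)])

# power spectrum: squared magnitude of the DFT
power_spectrum = np.abs(np.fft.fft(x)) ** 2
print(np.allclose(np.fft.fft(r).real, power_spectrum))   # True
```

Note the phase of x is gone from the power spectrum; only magnitudes survive, which is exactly why it suits random signals.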

E


On Thu, Mar 27, 2014 at 11:20 AM, Doug Houghton
doug_hough...@sympatico.cawrote:

 Some great replies, gives me a lot to think about

 Terms like well behaved when applied to the function make me wonder
 what
 stipulations might be implied by the language that you'd have to be a
 formal
 mathematician to interpret.  As an example, I don't even know what the
 intrinsic properties of a function may be in this context.

 Since it's an infinite series I suppose it doesn't really matter, given
 enough time you could prove out any rational requirement? which is why you
 can throw math at it.  If it was just a bunch of random numbers that
 started
 somewhere and stopped somewhere, I doubt anyone would be writing equations
 that mean anything.  I'd guess we would turn to statistics at that point to
 supply some context.

 As a broad answer to questions posted in a couple of the replies, my
 interest lies in improving my understanding of specifically what the SNST
 proves, and the requirements for it to be valid.





Re: [music-dsp] Nyquist-Shannon sampling theorem

2014-03-27 Thread robert bristow-johnson

On 3/27/14 2:20 PM, Doug Houghton wrote:

Some great replies, gives me a lot to think about

Terms like well behaved when applied to the function make me wonder 
what
stipulations might be implied by the language that you'd have to be a 
formal

mathematician to interpret.


i'm not so terribly worried about the existence of audio or music 
signals that are not sufficiently well behaved



  As an example, I don't even know what the
intrinsic properties of a function may be in this context.


the context is audio and music signals.  the end receptacle of these 
signals are our ears and brains.  i'm pretty sure that bandwidth 
restrictions apply.  that *really* nails down the well-behaved.  they 
are continuous-time, finite power signals that are also bandlimited.  
whether it's bandlimited to 22.05 kHz or to 48 kHz or 96 kHz, doesn't 
matter.  that is a quantitative issue.  doesn't change the validity nor 
the qualitative conditions for the theorem.




Since it's an infinite series I suppose it doesn't really matter, given
enough time you could prove out any rational requirement? which is why 
you

can throw math at it.


yup.  and then, when you get practical about reconstruction, you realize 
that the infinite series of sinc() functions will turn into a finite 
approximation to the same thing.  one approach, to a finite sum, is to 
truncate the sinc() function to a finite length.  that is the same as 
applying a rectangular window (which is often the worst kind), so then 
you try the sinc() function windowed by a good window function.  now 
that is a slightly different low-pass filter than the ideal brick-wall 
filter (which has a sinc() function for its impulse response).  so then 
you investigate how bad is it from different points of view (usually a 
spectral POV over some frequencies of interest).
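A sketch of that comparison (assuming NumPy; the kernel length, oversampling factor, and Hann window choice are all illustrative): truncate an ideal sinc with a rectangular window versus shaping it with a good window function, and compare peak stopband leakage:

```python
import numpy as np

ov = 4                                 # kernel tabulated at 4x the sample rate
L = 8 * ov                             # eight zero crossings per side
n = np.arange(-L, L + 1)
h_rect = np.sinc(n / ov)               # ideal brick-wall response, simply chopped off
h_hann = h_rect * np.hanning(len(n))   # same sinc, shaped by a Hann window

def stopband_peak_db(h):
    H = np.abs(np.fft.fft(h, 8192))
    H /= H[0]                          # normalize to DC gain
    f = np.fft.fftfreq(8192)           # cycles/sample at the high rate
    stop = np.abs(f) > 0.2             # safely past both transition bands
    return 20 * np.log10(H[stop].max())

rect_db = stopband_peak_db(h_rect)
hann_db = stopband_peak_db(h_hann)
print(hann_db < rect_db)   # True: the window buys much deeper image rejection
```

The tradeoff is the usual one: the windowed kernel's transition band is a bit wider, but its stopband (where the images live) is far lower.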



  If it was just a bunch of random numbers that started
somewhere and stopped somewhere, I doubt anyone would be writing 
equations

that mean anything.


???

um, you can model *any* linear and time-invariant signal reconstruction 
problem (or interpolation problem) as a specific case of a string of 
impulses weighted with the samples values, x[n], going into a particular 
low-pass filter.  you can write equations for that.  in both the time 
domain (you would use these equations to implement the interpolation) 
and in the frequency domain: [what does this LPF do to the baseband 
signal?  what does it do to the images?]


so, for even polynomial interpolation (like Lagrange or Hermite), you 
can model it as a convolution with an impulse response and you can 
compute the Fourier transform of that continuous-time impulse response 
and see how good or how bad the frequency response is.  how bad does it 
kill the images and how safe is it to your original signal?


you can write equations for that, and they mean something.
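As a concrete instance of that program: linear interpolation convolves with a triangle kernel of width 2T, whose continuous-time Fourier transform is T*sinc^2(fT), so the images are merely attenuated by sinc^2 rather than removed. A quick numerical check of that transform (assuming NumPy; the grid density is illustrative):

```python
import numpy as np

T = 1.0
t = np.linspace(-T, T, 20001)          # fine grid over the kernel's support
dt = t[1] - t[0]
tri = 1.0 - np.abs(t) / T              # triangle kernel = linear interpolation

for f in (0.5, 1.5, 2.5):              # one baseband point, two image-region points
    # Riemann sum equals the trapezoid rule here (the endpoints are zero)
    numeric = np.sum(tri * np.exp(-2j * np.pi * f * t)) * dt
    analytic = T * np.sinc(f * T) ** 2  # np.sinc(x) = sin(pi x)/(pi x)
    print(np.allclose(numeric, analytic, atol=1e-5))   # True each time
```

Reading off sinc^2 at the image frequencies tells you directly "how bad it kills the images."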


  I'd guess we would turn to statistics at that point to
supply some context.



but you can make some good guesses instead of doing this as a 
complicated statistical process





As a broad answer to questions posted in a couple of the replies, my
interest lies in improving my understanding of specifically what the SNST
proves, and the requirements for it to be valid.


take a look at that earlier wikipedia version to show you.  if you 
ideally uniformly sample, your spectrum is repeatedly shifted (by 
integer multiples of the sampling frequency, these are called images) 
and added together.  to recover the original signal, you must remove all 
of the images, yet preserve the original (that's what the brick-wall LPF 
does).  Only if there is no overlap of the adjacent images is it 
possible to recover the original spectrum.  To make that happen, the 
sampling frequency must exceed twice the frequency of the highest 
frequency component of a bandlimited signal.  if it does not do that, 
images will overlap and once you add two numbers, it's pretty hard to 
separate them.  so the overlapped images can be thought of as 
non-overlapped frequency components that just happened to be at those 
frequencies.  those frequency components are called aliases.
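A minimal numeric illustration of an alias (assuming NumPy; the frequencies are illustrative): a sinusoid above Nyquist produces exactly the same samples as a folded-back baseband sinusoid, so the two cannot be told apart after sampling:

```python
import numpy as np

fs = 10000.0
n = np.arange(64)                            # sample indices
hi = np.sin(2 * np.pi * 7000.0 * n / fs)     # 7 kHz: above Nyquist (5 kHz)
lo = -np.sin(2 * np.pi * 3000.0 * n / fs)    # its alias: 10000 - 7000 = 3000 Hz
print(np.allclose(hi, lo))                   # True: identical sample sequences
```

Once the two images have been added together in the sampled spectrum, no amount of post-processing can separate them, which is exactly the "pretty hard to separate" point above.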


take a look at


https://en.wikipedia.org/w/index.php?title=Nyquist%E2%80%93Shannon_sampling_theorem&oldid=217945915 




the sampling theorem is actually quite simple to be expressed 
rigorously.  it is, at least, if you accept the EE notion of the Dirac 
delta function and not worry so much about it not really being a 
function, which is literally what the math folks tell us.


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] Nyquist-Shannon sampling theorem

2014-03-27 Thread robert bristow-johnson

On 3/27/14 4:05 PM, Ethan Duni wrote:

Hi Doug-

Regarding this:

Terms like well behaved when applied to the function make me wonder
what
stipulations might be implied by the language that you'd have to be a formal
mathematician to interpret.  As an example, I don't even know what the
intrinsic properties of a function may be in this context.

It turns out to be mostly math details that don't really come up in
practice, as somebody (Sampo? Robert?) already mentioned. You have to avoid
stuff where the signal blows up to infinity, or has badly-behaved
discontinuities or things like that.


you only have to worry about discontinuities regarding the signal x(t) 
itself.  but if it's an audio/music signal that is bandlimited to some 
finite frequency, i think the signal is, by any definition, sufficiently 
well behaved.  of course there are huge discontinuities in the impulse 
train that is the sampling function (or Dirac comb), but we have math 
to deal with that.  but it requires accepting


       +inf                     +inf
  T *  SUM{ delta(t - nT) }  =  SUM{ e^(i 2 pi k t / T) }
      n=-inf                    k=-inf
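That identity can be poked at numerically by truncating the right-hand sum: over |k| <= K it is the Dirichlet kernel, which piles up height 2K+1 at integer multiples of T while staying small in between, approaching the impulse train as K grows. A sketch (assuming NumPy; T and K are illustrative):

```python
import numpy as np

T = 1.0
K = 50
k = np.arange(-K, K + 1)
# evaluate the truncated exponential sum at multiples and midpoints of T
t = np.array([-1.0, -0.5, 0.0, 0.5, 1.0]) * T
partial = np.array([np.sum(np.exp(2j * np.pi * k * ti / T)) for ti in t]).real
print(partial.round(6))   # 2K+1 = 101 at t = -T, 0, T; just 1 at t = +/- T/2
```

As K goes to infinity the peaks grow without bound while the integral over one period stays T, which is the distributional sense in which the right side "is" the comb on the left.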

  But in practice, there are no
realistic signals that display these issues.


certainly not that are finite power and bandlimited.


  The only part where practical
signals are problematic is that they are necessarily time-limited, and so
cannot be perfectly band-limited.


but we can get very, very close.  leave your audio device turned on for 
an hour or two.  how bandlimited can you make that?




  So the Sampling Theorem conditions can't
be exactly fulfilled, and we have to live with some (hopefully extremely
small) aliasing as a results.


yes.  but we can write equations that tell us an upper limit to the 
energy in those aliases.



  But, again, this turns out to be a quite
minor issue compared to various of the other practical concerns that come
up in designing A/D/A converters.


right, but it becomes about the only issue (or a main tradeoff issue) in 
software or totally digital interpolation or sample-rate-conversion 
processes.  we do that basically by imagining what we would be doing 
with a D/A converter running at one Fs connected to an A/D running at a 
different Fs.  so, we *reconstruct*, using Shannon-Nyquist, from the 
samples that were obtained at the source Fs, the continuous-time signal 
at instances of time that are the sampling instances of the destination Fs.



And this one:

If it was just a bunch of random numbers that started
somewhere and stopped somewhere, I doubt anyone would be writing equations
that mean anything.  I'd guess we would turn to statistics at that pint to
supply some context.

Fourier analysis also works on random signals. But usually in that case we
are less interested in the Fourier Transform of the random signals
directly, and look more at the Fourier Transform of their correlation
functions (this is called the power spectrum).


which is the ensemble average of the magnitude-squared Fourier Transform 
of the random signals themselves.


so the Fourier analysis we do on random signals really doesn't know 
anything about the phase of their spectrums.  their spectrum magnitudes 
may still be deterministic, but their phases are totally random.




  That quantity is generally
more useful for your usual engineering stuff like filter design, system
analysis, etc. If you go to get a graduate degree in signal processing, the
core first-year courses are typically what's called statistical signal
processing, which as the name suggests covers signal processing issues in
the context of random signals.


but, we can make some approximations and get good results without 
getting too deep in the weeds.



  This is an important, interesting, and
worthwhile subject, but also maybe getting a bit afield from the Sampling
Theorem issues you are most immediately interested in?


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] Nyquist-Shannon sampling theorem

2014-03-27 Thread Ethan Duni
 it is, at least, if you accept the EE notion of the Dirac delta function
and not worry so much about it not really being a function, which is
literally what the math folks tell us.

I may be misremembering, but can't non-standard analysis be used to make
that whole Dirac delta function approach rigorous? I know that it works for
the whole algebraic manipulation of delta-x terms that we also like to do
in engineering classes, intuitively seems like we could play the same trick
with Dirac delta's and associated stuff. But I don't recall whether it
actually works out entirely... although Wikipedia suggests that maybe it
does (
http://en.wikipedia.org/wiki/Dirac_delta_function#Infinitesimal_delta_functions).

Not that it's worth the trouble to really work out - we already know what
the correct answers are from measure theory/distributions - but it's nice
to keep in mind that these pedantic math complaints are actually kind of
baseless, at least if some care is taken to adhere to the rules of
non-standard analysis and so avoid various pitfalls.

E


On Thu, Mar 27, 2014 at 12:34 PM, robert bristow-johnson 
r...@audioimagination.com wrote:

 On 3/27/14 2:20 PM, Doug Houghton wrote:

 Some great replies, gives me a lot to think about

 Terms like well behaved when applied to the function make me wonder
 what

 stipulations might be implied by the language that you'd have to be a
 formal
 mathematician to interpret.


 i'm not so terribly worried about the existence of audio or music signals
 that are not sufficiently well behaved


As an example, I don't even know what the
 intrinsic properties of a function may be in this context.


 the context is audio and music signals.  the end receptacle of these
 signals are our ears and brains.  i'm pretty sure that bandwidth
 restrictions apply.  that *really* nails down the well-behaved.  they are
 continuous-time, finite power signals that are also bandlimited.  whether
 it's bandlimited to 22.05 kHz or to 48 kHz or 96 kHz, doesn't matter.  that
 is a quantitative issue.  doesn't change the validity nor the qualitative
 conditions for the theorem.



 Since it's an infinite series I suppose it doesn't really matter, given
 enough time you could prove out any rational requirement? which is why you
 can throw math at it.


 yup.  and then, when you get practical about reconstruction, you realize
 that the infinite series of sinc() functions will turn into a finite
 approximation to the same thing.  one approach, to a finite sum, is to
 truncate the sinc() function to a finite length.  that is the same as
 applying a rectangular window (which is often the worst kind), so then you
 try the sinc() function windowed by a good window function.  now that is a
 slightly different low-pass filter than the ideal brick-wall filter (which
 has a sinc() function for its impulse response).  so then you investigate
 how bad is it from different points of view (usually a spectral POV over
 some frequencies of interest).


If it was just a bunch of random numbers that started
 somewhere and stopped somewhere, I doubt anyone would be writing equations
 that mean anything.


 ???

 um, you can model *any* linear and time-invariant signal reconstruction
 problem (or interpolation problem) as a specific case of a string of
 impulses weighted with the samples values, x[n], going into a particular
 low-pass filter.  you can write equations for that.  in both the time
 domain (you would use these equations to implement the interpolation) and
 in the frequency domain: [what does this LPF do to the baseband signal?
  what does it do to the images?]

 so, for even polynomial interpolation (like Lagrange or Hermite), you can
 model it as a convolution with an impulse response and you can compute the
 Fourier transform of that continuous-time impulse response and see how good
 or how bad the frequency response is.  how bad does it kill the images and
 how safe is it to your original signal?

 you can write equations for that, and they mean something.


I'd guess we would turn to statistics at that point to
 supply some context.


 but you can make some good guesses instead of doing this as a complicated
 statistical process




  As a broad answer to questions posted in a couple of the replies, my
 interest lies in improving my understanding of specifically what the SNST
 proves, and the requirements for it to be valid.


 take a look at that earlier wikipedia version to show you.  if you ideally
 uniformly sample, your spectrum is repeatedly shifted (by integer multiples
 of the sampling frequency, these are called images) and added together.
  to recover the original signal, you must remove all of the images, yet
 preserve the original (that's what the brick-wall LPF does).  Only if there
 is no overlap of the adjacent images is it possible to recover the original
 spectrum.  To make that happen, the sampling frequency must exceed twice
 the frequency of the highest frequency component 

Re: [music-dsp] Nyquist-Shannon sampling theorem

2014-03-27 Thread Theo Verelst



In the time when Einstein started to work on his theories, the main hip 
and profound mathematics of the day grew out of the important physics 
problems of the time, and mostly (if I'm not forgetting some other 
factors) it was the higher maths, formulated as functional integrals. 
That's hard to explain in a way that achieves actual understanding; it 
comes easier to people with at least an undergraduate level in 
Mechanical Engineering or better.


Apart from catching some small fish with "find an error in ..." style 
exercises, it isn't very worthwhile to try to grasp the mind of Fourier 
or something; really, seriously, it's usually pretty futile.



I'm glad to see that my repeated mention of some of my theoretical 
concerns has led to thoughts getting formulated, and with quite some 
precision.


robert bristow-johnson wrote: On 3/27/14 5:27 PM, Ethan Duni wrote:
 it is, at least, if you accept the EE notion of the Dirac delta 
function

 and not worry so much about it not really being a function, which is
 literally what the math folks tell us.

 I may be misremembering, but can't non-standard analysis be used to make
 that whole Dirac delta function approach rigorous?

 i dunno what non-standard analysis you mean.  the only truly rigorous
 usage of the Dirac delta is to keep it clothed with a surrounding
 integral.  so naked Dirac deltas are a no-no.  ...

That, like what some others have phrased/quoted, sound good.

I had the good fortune, when I graduated, to have a nice subject with 
influence from the university and from a commercial lab (HP), and I 
can't help thinking that when people feel properly placed and motivated 
to influence the world of science and the interested people connected 
to it, there would be very, very many interesting subjects possible to 
think about and work on, without having to go rough on the lower 
regions of theory.


T.



Re: [music-dsp] Nyquist-Shannon sampling theorem

2014-03-27 Thread robert bristow-johnson

On 3/27/14 10:58 PM, Theo Verelst wrote:


I'm glad to see that my repeated mention of some of my theoretical 
concerns has led to thoughts getting formulated, and with quite some 
precision.


well, Theo, i've been thinking about (and writing about, 
http://www.aes.org/e-lib/browse.cfm?elib=5122 ) the sampling theorem and 
reconstruction issues for longer than this mailing list has existed.




robert bristow-johnson wrote: On 3/27/14 5:27 PM, Ethan Duni wrote:
 it is, at least, if you accept the EE notion of the Dirac delta 
function

 and not worry so much about it not really being a function, which is
 literally what the math folks tell us.

 I may be misremembering, but can't non-standard analysis be used to 
make

 that whole Dirac delta function approach rigorous?

 i dunno what non-standard analysis you mean.  the only truly rigorous
 usage of the Dirac delta is to keep it clothed with a surrounding
 integral.  so naked Dirac deltas are a no-no.  ...

That, like what some others have phrased/quoted, sound good.


but it's *convenient* to be able to express and make use of naked delta 
functions.  i *want* to be able to say:



       +inf                     +inf
  T *  SUM{ delta(t - nT) }  =  SUM{ e^(i 2 pi k t / T) }
      n=-inf                    k=-inf


and the summation on the left is a bunch of naked delta functions.  no 
integral surrounding them (at least not until later).


I had the good fortune, when I graduated, to have a nice subject 
with influence from the university and from a commercial lab (HP), 
and I can't help thinking that when people feel properly placed and 
motivated to influence the world of science and the interested people 
connected to it, there would be very, very many interesting subjects 
possible to think about and work on, without having to go rough on 
the lower regions of theory.


prognosticating about whether the Dirac delta is a function or not is 
less useful to me than just moving past that and treating it as if it 
were a function defined by the limit of some nascent delta function 
(which is not the way the math guys do it).  it still has all the 
properties i need from it.



--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] Nyquist-Shannon sampling theorem

2014-03-27 Thread Ethan Duni
Hi Robert-

 i dunno what non-standard analysis you mean.

I'm referring to the stuff based on hyperreal numbers:

http://en.wikipedia.org/wiki/Hyperreal_number

These are an extension of the extended real numbers, where each hyperreal
number has a standard part (which is an extended real) and an
infinitesimal part (which corresponds to a convergence rate). The basic
idea is that each hyperreal number represents an equivalence class of
functions which converge (in the extended reals, so converging to
infinity is allowed) to the same limit at the same rate. The limit is given
by the standard part of the number, and the convergence rate by the
infinitesimal part. So you can make sense of statements like 0/0 or
infinity - infinity in this context, by comparing the infinitesimal parts.
I.e., the usual epsilon-delta limit approach from standard analysis is
embedded into the arithmetic of the hyperreals. So using this approach you
can rigorously do the kinds of sloppy algebraic manipulations of dx
type terms that we often see in introductory calculus classes, for one
example.

 the only truly rigorous usage of the Dirac delta is to keep it clothed
with a surrounding integral.

That's true, but a Dirac delta in the context of non-standard analysis
isn't naked - it comes clothed with an associated limiting process given
by the infinitesimal part. I.e., consider a sequence of functions that
converges to a Dirac delta, as is used in the standard approach (there's
the boxcar example you've already given, or you can use a two-sided
exponential decay, or a Gaussian distribution with variance shrinking to
zero, or any number of other things). For any such sequence, there is an
associated hyperreal Dirac delta, which expresses all of the relevant
analytic properties of that class of sequences - the fact that it tends to
zero everywhere except the origin and blows up there, and also the rates at
which each point converges. Using that, we should be able to do the usual
non-rigorous algebraic manipulations used in undergrad engineering
proofs, but make them rigorous (with a bit of care - you have to work out
what effects the non-standard versions of various operations have, take the
standard part at appropriate places to get back to the final answer,
etc.).
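The "clothed" behaviour of those nascent deltas is easy to watch numerically. A sketch (assuming NumPy; the test function, width eps, and integration grid are all illustrative) integrating the three example sequences against a smooth f, with every integral landing near f(0):

```python
import numpy as np

f = lambda u: np.cos(u) + u             # smooth test function, f(0) = 1

t = np.linspace(-1.0, 1.0, 400001)      # fine grid; dt = 5e-6
dt = t[1] - t[0]
eps = 0.02                              # width parameter of the nascent deltas

# three unit-area kernels that narrow to a delta as eps -> 0
boxcar = np.where(np.abs(t) <= eps / 2, 1.0 / eps, 0.0)
twoexp = np.exp(-np.abs(t) / eps) / (2 * eps)
gauss  = np.exp(-t**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

for kernel in (boxcar, twoexp, gauss):
    # the delta only "acts" under the integral, as the standard approach demands
    approx = float(np.sum(f(t) * kernel) * dt)
    print(abs(approx - 1.0) < 2e-3)     # True for all three
```

Shrinking eps further drives each integral closer to f(0); the hyperreal delta is, in effect, a bookkeeping device for which of these limiting processes you mean.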

Anyway the whole thing is a bit of a curiosity. It's generally easier to
just do the proofs the standard way if you're really interested, and just
use the regular sloppy approach if you aren't. But still kind of neat I
think, that the fake way can actually be made rigorous by embedding the
relevant analytic framework into an extended number system.

E



On Thu, Mar 27, 2014 at 7:17 PM, robert bristow-johnson 
r...@audioimagination.com wrote:

 On 3/27/14 5:27 PM, Ethan Duni wrote:

 it is, at least, if you accept the EE notion of the Dirac delta function

 and not worry so much about it not really being a function, which is
 literally what the math folks tell us.

 I may be misremembering, but can't non-standard analysis be used to make
 that whole Dirac delta function approach rigorous?


 i dunno what non-standard analysis you mean.  the only truly rigorous
 usage of the Dirac delta is to keep it clothed with a surrounding integral.
  so naked Dirac deltas are a no-no.  then we can't really have a notion of
 a Dirac comb function either.


I know that it works for
 the whole algebraic manipulation of delta-x terms that we also like to
 do
 in engineering classes, intuitively seems like we could play the same
 trick
 with Dirac delta's and associated stuff. But I don't recall whether it
 actually works out entirely... although Wikipedia suggests that maybe it
 does (
 http://en.wikipedia.org/wiki/Dirac_delta_function#Infinitesimal_delta_functions).

 Not that it's worth the trouble to really work out - we already know what
 the correct answers are from measure theory/distributions - but it's nice
 to keep in mind that these pedantic math complaints are actually kind of
 baseless, at least if some care is taken to adhere to the rules of
 non-standard analysis and so avoid various pitfalls.


 i just treat the Dirac delta in time as if it has a Planck Time (10^(-43)
 second) width.  then it's a true function and it still has, to within an
 immeasurable degree of accuracy, the same properties that i want.

 L8r,


 --

 r b-j  r...@audioimagination.com

 Imagination is more important than knowledge.




 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews,
 dsp links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Nyquist–Shannon sampling theorem

2014-03-26 Thread Kenneth Ciszewski
As I remember it, the sampling theorem says that the sampling rate used to 
sample a signal must be at least twice the highest frequency being sampled in 
order to get a faithful reproduction when the samples are turned back into a 
(continuous) output signal. In practice, because it is necessary to band limit 
most signals to prevent aliasing artifacts, the sampling rate usually needs to 
be about 2.2 times the highest frequency being sampled, since it is impossible 
in practice to build low pass filters steep enough to allow a sampling rate of 
only 2 times the highest frequency involved without aliasing.

Take the standard 8000 Hz sampling rate for telephone toll quality voice used 
with mu-law and A-law voice codecs for long distance digital transmission.  The 
specified upper frequency is about 3400 Hz, as I recall.  8000 Hz is more than 
2 times 3400 Hz; it's roughly 2.35 times.
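The 3400 Hz limit matters: any component above the 4000 Hz Nyquist frequency that survives the anti-aliasing filter folds back into the band. A quick sketch of that folding (the example frequencies are mine, chosen to illustrate the telephone case):

```python
import math

fs = 8000.0  # telephone sampling rate in Hz; the Nyquist frequency is 4000 Hz

def sampled(freq, n):
    """First n samples of a unit cosine at `freq` Hz, sampled at fs."""
    return [math.cos(2.0 * math.pi * freq * k / fs) for k in range(n)]

# A 5000 Hz tone lies above Nyquist; its samples are identical to those of
# its 3000 Hz alias (|5000 - 8000| = 3000), so sampling cannot tell them apart.
a = sampled(5000.0, 16)
b = sampled(3000.0, 16)
print(max(abs(u - v) for u, v in zip(a, b)))  # effectively zero
```

This is exactly why the lowpass filter has to do its work before the sampler, not after.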

I think you can have frequencies changing amplitude and jumping in and out, 
subject to the constraints given above.  Obviously, a telephone voice signal 
will have different frequencies at different times depending on the speaker and 
the words being spoken. Music has different frequencies and amplitudes 
throughout a particular performance.

I'm not sure what you mean by discontinuous.  When people speak on the 
telephone, there are often periods of silence between periods of speech signal. 
 

Signals that have sharp rising edges, like the unit step function and the 
infinite impulse (Dirac delta) function, will obviously be band limited and 
their shape and frequency content changed by the anti-aliasing low pass filter 
used before sampling takes place.  

I don't think you get an exact reconstruction of the signal, but with proper 
filtering after being converted by a digital to analog converter, you can get a 
signal that sounds (or looks like, on an oscilloscope) a lot like the original 
signal, such that it is recognizable and intelligible.

Music is band limited to 20 kHz and sampled at a rate of at least 44,100 Hz for 
recording on CDs.  

What is your application?







 From: Doug Houghton doug_hough...@sympatico.ca
To: A discussion list for music-related DSP music-dsp@music.columbia.edu 
Sent: Wednesday, March 26, 2014 10:42 PM
Subject: [music-dsp] Nyquist–Shannon sampling theorem
 

I can't seem to get to the bottom of this with the usual internet pages.

Is the test signal, while possibly containing any number of wave components at 
various frequencies, required to be continuous and uniform?

By this I mean you can't have frequencies jumping in and out, changing in 
amplitude etc...

I'm guessing this somehow scratches at the surface of what I've read about no 
signal being properly band limited unless it's infinite.

I fail to see how a readable proof is possible to explain exact reconstruction 
of any real recorded sound, whether it's music or crickets chirping.

I sort of see maybe how an infinite signal could solve some of these issues, 
meaning any amplitude/frequency complexities over infinity may simply resolve 
to something that can be band limited and described as a frequency of a steady 
signal, something like that.

Curious. I am starting to suspect there are a lot of typical misconceptions 
about what the math really proves; since I can't read the equations, I'm 
turning to this list. 

Re: [music-dsp] Nyquist–Shannon sampling theorem

2014-03-26 Thread Doug Houghton
The application is music.  I understand the basics; my question is about the 
constraints that might be imposed on the signal or function as referenced 
by the theory.  Is it understood to be repeating? For lack of a better term, 
essentially just a mash of frequencies that never change from start to 
finish.


I'm thinking the math must consider it this way, or rather the difference is 
abstracted since the signal is assumed to be band limited, which means 
infinite, which means you can create any random signal by injecting the 
required frequencies at the required amplitudes and phases from start to 
finish, even a 20 kHz, 2 ms blip in the middle of endless silence.


Is that making any sense? I'm struggling with the fine points.  I bet this 
is obvious if you understand the math in the proof. 




Re: [music-dsp] Nyquist–Shannon sampling theorem

2014-03-26 Thread Doug Houghton

sorry about all the attachments, didn't see that coming.


Re: [music-dsp] Nyquist–Shannon sampling theorem

2014-03-26 Thread Nigel Redmon
Hi Doug,

I think you’re overthinking this…

There is the frequency-sensitive requirement that you can’t properly sample a 
signal that has frequencies higher than half the sample rate. For music, that’s 
not a problem, since our ears have a significant band limitation anyway.

So, if we have a musical signal with lots of discontinuities, resulting in 
strong frequencies to, say, 100 kHz, and we make a copy with everything 
stripped off above 20 kHz with a lowpass filter, the two waveforms will not 
look alike. But they will sound alike to us. Now sample that at 44.1 kHz, 
24-bit. Then push that out to a D/A converter back to a third analog waveform. 
It *will* look just like the second waveform, and it will sound like both it 
and the original waveform.

By “20k 2ms blip”, I assume you mean a 2 ms step that has been band limited to 
20 kHz. Sure, no problem.


On Mar 26, 2014, at 9:23 PM, Doug Houghton doug_hough...@sympatico.ca wrote:

 The application is music.  I understand the basics; my question is about the 
 constraints that might be imposed on the signal or function as referenced 
 by the theory.  Is it understood to be repeating? For lack of a better term, 
 essentially just a mash of frequencies that never change from start to finish.
 
 I'm thinking the math must consider it this way, or rather the difference is 
 abstracted since the signal is assumed to be band limited, which means 
 infinite, which means you can create any random signal by injecting the 
 required frequencies at the required amplitudes and phases from start to 
 finish, even a 20 kHz, 2 ms blip in the middle of endless silence.
 
 Is that making any sense? I'm struggling with the fine points.  I bet this is 
 obvious if you understand the math in the proof. 
 --



Re: [music-dsp] Nyquist–Shannon sampling theorem

2014-03-26 Thread Doug Houghton



There is the frequency-sensitive requirement that you can’t properly sample 
a signal that has frequencies higher than half the sample rate. For music, 
that’s not a problem, since our ears have a significant band limitation 
anyway.


This is intuitive.  I think perhaps what I'm asking has more to do directly 
with the Fourier series than sampling theory.


It's my understanding that Fourier theory says any signal can be 
created by summing various frequencies at various phases and amplitudes.  So 
this would answer my question then, that it's not really a stipulation of the 
function per se, since any signal at all can be described this way. 




Re: [music-dsp] Nyquist–Shannon sampling theorem

2014-03-26 Thread Nigel Redmon
It's my understanding that Fourier theory says any signal can be created 
by summing various frequencies at various phases and amplitudes.

OK, now recall that the Fourier series describes a subset of “any signal” with 
a subset of “various frequencies”. It’s more like one cycle of any waveform can 
be created by summing sine waves of multiples of that cycle at various phases 
and amplitudes (a little awkward, but trying to modify your words). Fourier 
figured that out by observing the way heat traveled around an iron ring (hence 
the focus on cycles)–he wasn’t really into the recording scene back then ;-)

(It’s true that Fourier techniques can be used to create more arbitrary 
signals, but that somewhat in the manner that movies are made from many still 
pictures.)
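Nigel's "one cycle as a sum of sine waves at multiples of that cycle" can be made concrete with the classic square-wave partial sums (the square wave is my example, not his):

```python
import math

def square_partial(t, n_harmonics):
    """Partial Fourier series for a unit square wave of period 1:
    (4/pi) * sum over k of sin(2*pi*(2k-1)*t) / (2k-1), k = 1..n_harmonics."""
    return (4.0 / math.pi) * sum(
        math.sin(2.0 * math.pi * (2 * k - 1) * t) / (2 * k - 1)
        for k in range(1, n_harmonics + 1)
    )

# Adding harmonics brings the sum closer to +1 on the first half-cycle.
for n in (1, 5, 50):
    print(n, round(square_partial(0.25, n), 4))
```

Only integer multiples of the fundamental appear, which is exactly the "subset of various frequencies" Nigel is pointing at.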

So, it seems that you’re trying to match the theory of a very specific, limited 
portion of a signal, with one that doesn’t have those limitations.

On Mar 26, 2014, at 9:46 PM, Doug Houghton doug_hough...@sympatico.ca wrote:
 
 There is the frequency-sensitive requirement that you can’t properly sample 
 a signal that has frequencies higher than half the sample rate. For music, 
 that’s not a problem, since our ears have a significant band limitation 
 anyway.
 
 This is intuitive.  I think perhaps what I'm asking has more to do directly 
 with the Fourier series than sampling theory.
 
 It's my understanding that Fourier theory says any signal can be 
 created by summing various frequencies at various phases and amplitudes.  So 
 this would answer my question then, that it's not really a stipulation of the 
 function per se, since any signal at all can be described this way. 
 --



Re: [music-dsp] Nyquist–Shannon sampling theorem

2014-03-26 Thread Doug Houghton
so is there a requirement for the signal to be periodic? or can any series 
of numbers be considered periodic if it is band limited, or infinite?  Periodic 
is the best word I can come up with. 




Re: [music-dsp] Nyquist–Shannon sampling theorem

2014-03-26 Thread Thor Harald Johansen

I'm guessing this somehow scratches at the surface of what I've read
about no signal being properly band limited unless it's infinite.


You're talking about sinc filtering (the ideal low pass filter), whose 
impulse response is infinitely long and non-causal: it needs infinite past 
and future samples. In practice, a very steep filter is used to attenuate 
the signal above the Nyquist frequency to almost nothing. A Lanczos 
(windowed sinc) filter will be close to ideal.
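A windowed sinc like the Lanczos kernel is easy to sketch. Here it is used for fractional-position interpolation of a sampled sine (the normalization step and the test frequencies are my choices, not from the thread):

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def lanczos(x, a=3):
    """Lanczos window applied to the ideal sinc; support limited to |x| < a."""
    return sinc(x) * sinc(x / a) if abs(x) < a else 0.0

def interpolate(samples, t, a=3):
    """Estimate the underlying signal at fractional index t with a
    normalized Lanczos (windowed-sinc) kernel of 2a taps."""
    i0 = int(math.floor(t))
    idx = range(max(0, i0 - a + 1), min(len(samples), i0 + a + 1))
    weights = [lanczos(t - i, a) for i in idx]
    total = sum(weights)
    return sum(samples[i] * w for i, w in zip(idx, weights)) / total

# Sample a 1 kHz sine at 48 kHz, then estimate a point between two samples.
fs, f = 48000.0, 1000.0
x = [math.sin(2.0 * math.pi * f * n / fs) for n in range(200)]
t = 100.5  # halfway between samples 100 and 101
print(interpolate(x, t), math.sin(2.0 * math.pi * f * t / fs))
```

Normalizing by the sum of the weights keeps the DC gain exact even though the truncated kernel's taps don't quite sum to one at fractional offsets.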


For synthesis of non-sinusoidal test waveforms, a BLIT (Band-Limited 
Impulse Train) oscillator will give you a perfectly band limited signal.
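A BLIT can be sketched as a direct sum of cosines: keep only the harmonics of the fundamental that fit below Nyquist. This is the additive definition, not an efficient BLIT algorithm, and the frequencies are my example values:

```python
import math

def blit(n, f0, fs):
    """One sample of a band-limited impulse train at fundamental f0:
    DC plus twice every harmonic of f0 lying strictly below fs/2."""
    k_max = int((fs / 2.0) // f0)
    if k_max * f0 == fs / 2.0:  # keep strictly below the Nyquist frequency
        k_max -= 1
    t = n / fs
    return 1.0 + 2.0 * sum(math.cos(2.0 * math.pi * k * f0 * t)
                           for k in range(1, k_max + 1))

fs, f0 = 48000.0, 440.0
train = [blit(n, f0, fs) for n in range(256)]
print(train[0], max(train), min(train))  # sharp peak at n = 0
```

The peak value equals 2*k_max + 1 (all cosines in phase), and there is no energy above fs/2 by construction, which is what "perfectly band limited" means here.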


Thor





Re: [music-dsp] Nyquist–Shannon sampling theorem

2014-03-26 Thread Nigel Redmon
On Mar 26, 2014, at 10:07 PM, Doug Houghton doug_hough...@sympatico.ca wrote:
 so is there a requirement for the signal to be periodic? or can any series of 
 numbers be considered periodic if it is band limited, or infinite?  Periodic is 
 the best word I can come up with. 
 --

Well, no—you can decompose any portion of waveform that you want…I’m not sure 
at this point if you’re talking about the discrete Fourier Transform or 
continuous, but I assume discrete in this context…but it’s not that generally 
useful to, say, do a single transform of an entire song. Sorry, I’m not sure 
where you’re going here…

So, let’s back off. The sampling theorem says that you can recreate any signal 
as long as you sample at a rate of more than twice the highest frequency 
component. Now, how do you feel that conflicts with Fourier theory, 
specifically?
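The "recreate any signal" claim is the Whittaker-Shannon interpolation formula: each sample contributes a sinc centered at its instant. A finite-window sketch (exact reconstruction needs infinitely many samples; the frequencies here are my example):

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, t, fs):
    """Whittaker-Shannon interpolation: x(t) = sum over n of x[n]*sinc(fs*t - n).
    Exact only with infinitely many samples; a finite window approximates."""
    return sum(s * sinc(fs * t - n) for n, s in enumerate(samples))

fs, f = 8.0, 1.0  # a 1 Hz sine sampled at 8 Hz (four times the Nyquist minimum)
x = [math.sin(2.0 * math.pi * f * n / fs) for n in range(400)]
t = 200.5 / fs  # a time between sample instants, far from the window edges
print(reconstruct(x, t, fs), math.sin(2.0 * math.pi * f * t))
```

Far from the edges of the window the reconstruction matches the original sine closely; near the edges the truncated sinc tails show up as error, which is the practical face of "no finite signal is properly band limited."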