---------------------------- Original Message ----------------------------
Subject: Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?
From: "Ethan Duni" <ethan.d...@gmail.com>
Date: Wed, November 11, 2015 7:36 pm
To: "robert bristow-johnson" <r...@audioimagination.com>,
    "A discussion list for music-related DSP" <music-dsp@music.columbia.edu>
--------------------------------------------------------------------------

>> no. we need ergodicity to take a definition of autocorrelation, which we
>> are all familiar with:
>>
>>    Rx[k] = lim_{N->inf} 1/(2N+1) sum_{n=-N}^{+N} x[n] x[n+k]
>>
>> and turn that into a probabilistic expression
>>
>>    Rx[k] = E{ x[n] x[n-k] }
>>
>> which we can figger out with the joint p.d.f.
>
> That's one way to do it. And if you're working only within the class of
> stationary signals, it's a convenient way to set everything up. But it's
> not necessary. There's nothing stopping you from simply defining
> autocorrelation as r(n,k) = E(x[n]x[n-k]) at the outset.

well, there's nothing stopping us from defining autocorrelation as Rx[k] = 5
(for all k). but such a definition is not particularly useful.

there is nothing *motivating* us to define Rx[k] = E{x[n] x[n+k]} except that
we expect that expectation value (which is an average) to be the same as the
other definition, which is what we use in all of this deterministic Fourier
signal theory we start with in communication systems. otherwise, given the
probabilistic definition, why would we expect the Fourier Transform of
Rx[k] = E{x[n] x[n+k]} to be the power spectrum? you get to that fact long
before any of this statistical communications theory.
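
a quick numerical sketch of that fact (the code and its parameters are mine,
not from the thread), using the random sample-and-hold noise from the subject
line: estimate Rx[k] as a time average, then check that its Fourier transform
lines up with an averaged periodogram of the same signal:

import numpy as np

rng = np.random.default_rng(0)
N, M, K = 1 << 16, 8, 64              # signal length, hold length, max lag

# random sample-and-hold noise: i.i.d. values, each held for M samples
x = np.repeat(rng.standard_normal(N // M), M)

# time-average autocorrelation  Rx[k] ~ 1/(N-k) sum_n x[n] x[n+k]
Rx = np.array([np.dot(x[:N - k], x[k:]) / (N - k) for k in range(K)])

# (1) power spectrum as the DFT of the symmetrized autocorrelation,
#     with Rx[-k] = Rx[k] wrapped to the end of the FFT buffer
R_pad = np.zeros(4 * K)
R_pad[:K] = Rx
R_pad[-(K - 1):] = Rx[1:][::-1]
S_from_R = np.fft.rfft(R_pad).real

# (2) power spectrum as an averaged periodogram, straight from the signal
segs = x.reshape(-1, 2 * K)
S_pgram = np.mean(np.abs(np.fft.rfft(segs, n=4 * K))**2, axis=0) / (2 * K)

# the two should agree up to estimation noise (shrinking as N grows)
err = np.max(np.abs(S_from_R - S_pgram)) / np.max(S_pgram)
print(f"max relative mismatch: {err:.3f}")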

> You then need (WS)
> stationarity to make that a function of only the lag, and then ergodicity
> to establish that the statistical estimate of autocorrelation (the sample
> autocorrelation, as it is commonly known) will converge,

to *what*? by definition, **iff** it's ergodic, then the statistical estimate
(by that you mean the average over time) converges to the probabilistic
expectation value. if it's *not* ergodic, then you don't know that they are
the same.
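
a tiny sketch of the failure mode (the example is mine): take the non-ergodic
process x[n] = A for all n, with A drawn once per realization, A ~ N(0,1).
the ensemble autocorrelation is E{x[n] x[n+k]} = E{A^2} = 1 for every lag,
but the time average over any one realization is just A^2, and no amount of
averaging over time moves it toward 1:

import numpy as np

rng = np.random.default_rng(1)
N, k = 100_000, 5

for trial in range(3):
    A = rng.standard_normal()          # frozen for the whole realization
    x = np.full(N, A)                  # one (very boring) sample path
    time_avg = np.dot(x[:N - k], x[k:]) / (N - k)
    print(f"time average = {time_avg:.3f}   ensemble average = 1.000")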

> but you can ignore
> it if you are just dealing with probabilistic quantities and not worrying
> about the statistics.

we got some semantic differences here. by "statistical estimate" i know you're
referring to the same result we get in our first-semester communications
course (long before statistical communications) as the **definition** of
autocorrelation. what you call the "statistical estimate" is what i call the
"primary definition".

>> i totally disagree. i consider this to be fundamental (and it's how i
>> remember doing statistical communication theory back in grad school).
>
> That was a common approach in classical signal processing
> literature/curricula, since you're typically assuming stationarity at the
> outset anyway. And this approach matches the historical development of the
> concepts (people were computing sample autocorrelations before they squared
> away the probabilistic interpretation). But this is kind of a historical
> artifact that has fallen out of favor.


in an electrical engineering communications class? are you sure?
�
see, they gotta teach these kids things about signals and Fourier and spectra 
and LTI and the like so they have a concept of what that stuff is about 
*without* necessarily bringing into the
conversation probability, random variables, p.d.f., and random processes. 
�pedagogically, i am quite dubious that this *historical artifact* is not how 
they teach statistical communications even now. �i know my vanTrees and 
Wozencraft&Jacobs books are old, but this is timeless and
classical. �i doubt it has fallen out of favor.

> In modern statistical signal processing contexts (and the wider prob/stat
> world) it's typically done the other way around: you define all the random
> variables/processes up front, and then define autocorrelation as r(n,k) =
> E(x[n]x[n-k]).

well, it's not just random processes that have autocorrelations. deterministic
signals have them too.
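
for instance (a sketch of mine, not from the thread), the time-average
definition applied to the deterministic signal x[n] = A cos(w n) gives
Rx[k] -> (A^2 / 2) cos(w k) as N grows, no probability required:

import numpy as np

N, A, w = 1 << 16, 2.0, 0.3
n = np.arange(N)
x = A * np.cos(w * n)                  # a deterministic signal

for k in (0, 1, 10):
    Rk = np.dot(x[:N - k], x[k:]) / (N - k)       # time-average Rx[k]
    print(f"Rx[{k}] = {Rk:.4f}   (A^2/2) cos(w k) = {A * A / 2 * np.cos(w * k):.4f}")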

and pedagogically, i can't imagine teaching statistical signal processing
before teaching the fundamentals of signal processing.
�
> Once you have that all sorted out, you turn to the question
> of whether the corresponding statistics (the sample autocorrelation

> function for example) converge, which is where the ergodicity stuff comes

> in.

yes.

> The advantage to doing it this way is that you start with the most
> general stuff requiring the least assumptions, and then build up more and
> more specific results as you add assumptions. Assuming ergodicity at the
> outset and defining everything in terms of the statistics produces the same
> results for that case, but leaves you unable to say anything about
> non-stationary signals, non-ergodic signals, etc.
>
> Leafing through my college books, I can't find a single one that does it
> the old way.

you must be a lot younger than me.

> They all start with definitions in the probability domain, and
> then tackle the statistics after that's all set up.

your first communications class (the book i had was A.B. Carlson) started out
with probability and stochastic processes??? i can't imagine teaching
undergraduates what they need to know (ya know, stuff like what AM and FM and
SSB and QAM and QPSK are) in one semester doing it that way. statistical
communications was a subsequent graduate course, for me.

--

r b-j                     r...@audioimagination.com

"Imagination is more important than knowledge."
_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
