Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-18 Thread Theo Verelst
I don't recall the various chapters of information theory and practical digital 
electronics that deal with these kinds of power estimates, so I'm not trying to steer this 
thread in any direction, except for two small considerations.


When the power of a signal is concerned, which in the continuous case for white noise has 
an interesting statistical convergence, you can square it and do a frequency analysis. For 
a blocky signal like a sequence of amplitude values, that interpretation can go quite a 
bit wrong when those are samples of a real (analog) signal, or of a theoretical signal 
where you don't distinguish the possible meanings of a spectrum. If the signal is white 
noise coming from an actual white noise source being sampled, there is going to be 
aliasing that shows up statistically, and/or there is an issue with the band-limiting 
filtering. Not so important, you might think, but how much "energy" of the signal actually 
produces aliasing, and what mapping then takes place into the frequency measurement? For 
the EE-minded computing acoustic power in the digital domain, you might in some cases even 
need to know the phase relation between the voltage (the value of the sample) and the 
corresponding current, or there could be quite noticeable discrepancies.


Of course you could say: I take a random generator, draw N samples from it, and compute an 
FFT. Sure, but theoretically that is not a very clear definition, and you might be 
surprised that practice is a bit different. Also, when the signal is only sort-of sampled 
(as for instance in HF systems), taking the samples as indicative of the power can give 
you errors, because the sum of the squared signal samples isn't the same as the integral 
of the original analog signal! Also, there are interesting little complex functions that 
you can run funny iterations with, which seem perfectly decent and easy to average or 
frequency-analyze, while in fact that isn't so trivial, so I'm not always convinced that a 
simplified statistical analysis comes close to the real analysis.


T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-17 Thread Martin Vicanek

> > well, pink is -3 dB/octave and red (a.k.a. brown) is -6 dB/octave.  a
> > roll-off of -12 dB/octave would be very brown. -- r b-j

> Those values are for amplitudes - for a power spectrum the slopes double.
no sir.  not with dB.  this is why we use
  
dB = 20 * log10(amplitude)
  
and
  
dB = 10 * log10(power)


r b-j

I see, so it is 6 dB/octave (brown).
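
For the record, one way to see the -6 dB/octave figure from Ethan's closed-form spectrum (a sketch, using the small-angle approximation; not from the thread itself):

```latex
S(\omega) = \frac{1}{3}\,\frac{1-Q^2}{1 - 2Q\cos\omega + Q^2},
\qquad Q = 1 - P .
```

Since \(1 - 2Q\cos\omega + Q^2 = (1-Q)^2 + 2Q(1-\cos\omega) \approx (1-Q)^2 + Q\omega^2\) for small \(\omega\), the spectrum is flat below \(\omega_c = (1-Q)/\sqrt{Q} = P/\sqrt{1-P}\) (the corner frequency quoted earlier in the thread) and falls as \(1/\omega^2\) above it; doubling \(\omega\) then lowers the power by \(10\log_{10}4 \approx 6\) dB, i.e. -6 dB/octave (red/brown).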

> The FT of (1/3)*(1-P)^|k| is (1/3)*(1-Q^2)/(1-2Qcos(w) + Q^2), where Q =
> (1-P).
>
> Looks like you were thinking of the expression for the transform of the
> one-sided decaying signal u[k]*(1-P)^k?
>
> E

Yes, exactly. Thanks for sorting that out.

[music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-16 Thread Martin Vicanek
Has this been answered yet? If not, I'll try a back-of-the-envelope 
derivation.


Consider two consecutive samples. By definition, the probability for 
them to be equal is (1-P), else they will be different and perfectly 
uncorrelated. Hence the expected correlation between two consecutive 
samples is


 <x[n] x[n+1]> = (1/3)*(1-P)

where 1/3 is the variance of x[n]. It is easy to see that for two 
samples separated by a lag 2 the probability for them to be equal is 
(1-P)^2, and, in general, (1-P)^k for a lag k. Hence the autocorrelation is


 <x[n] x[n+k]> = (1/3)*(1-P)^|k|

(I checked that with a little MC code before posting.) So the power 
spectrum is (1/3)/(1 + (1-P)z^-1), i.e. flat at DC and pink at higher 
frequencies. For reasonably small P the corner frequency is


w_c = P/sqrt(1-P).
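
The MC check mentioned above isn't shown; a minimal sketch of such a Monte-Carlo verification (my reconstruction, assuming x[n] uniform on [-1, 1] to match the stated variance of 1/3, with an illustrative resample probability P = 0.2):

```python
import numpy as np

def sample_and_hold_noise(n, p, rng):
    """White noise uniform on [-1, 1], resampled with probability p per step."""
    x = np.empty(n)
    x[0] = rng.uniform(-1.0, 1.0)
    for i in range(1, n):
        x[i] = rng.uniform(-1.0, 1.0) if rng.random() < p else x[i - 1]
    return x

def sample_autocorr(x, max_lag):
    """Sample autocorrelation mean(x[n] x[n+k]) for k = 0..max_lag."""
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / (n - k)
                     for k in range(max_lag + 1)])

p = 0.2
rng = np.random.default_rng(0)
x = sample_and_hold_noise(200_000, p, rng)
r = sample_autocorr(x, 5)
theory = (1.0 / 3.0) * (1.0 - p) ** np.arange(6)
print(r)        # close to (1/3)*(1-P)^k
print(theory)
```

With 200k samples the estimate agrees with (1/3)*(1-P)^|k| to a few parts per thousand.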


Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-16 Thread robert bristow-johnson







 Original Message 

Subject: Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

From: "Martin Vicanek" 

Date: Mon, November 16, 2015 3:50 pm

To: music-dsp@music.columbia.edu

--



> > well, pink is -3 dB/octave and red (a.k.a. brown) is -6 dB/octave.  a
> > roll-off of -12 dB/octave would be very brown. -- r b-j
>
> Those values are for amplitudes - for a power spectrum the slopes double.

no sir.  not with dB.  this is why we use

   dB = 20 * log10(amplitude)

and

   dB = 10 * log10(power)
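
A two-line numeric illustration of r b-j's point (my addition): power is the square of amplitude, and the 10-vs-20 factor exactly compensates, so the slope in dB is the same under either convention:

```python
import math

# one octave up on a -6 dB/octave amplitude slope: amplitude halves
amplitude_ratio = 0.5
power_ratio = amplitude_ratio ** 2        # power is amplitude squared

db_from_amplitude = 20 * math.log10(amplitude_ratio)
db_from_power = 10 * math.log10(power_ratio)

print(db_from_amplitude, db_from_power)   # both ~ -6.02: the dB slope does not double
```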





--

r b-j                   r...@audioimagination.com

"Imagination is more important than knowledge."

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-16 Thread Ethan Duni
>> [..] the autocorrelation is
>>
>> <x[n] x[n+k]> = (1/3)*(1-P)^|k|
>>
>> (I checked that with a little MC code before posting.) So the power
>> spectrum is (1/3)/(1 + (1-P)z^-1)

The FT of (1/3)*(1-P)^|k| is (1/3)*(1-Q^2)/(1-2Qcos(w) + Q^2), where Q =
(1-P).

Looks like you were thinking of the expression for the transform of the
one-sided decaying signal u[k]*(1-P)^k?
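
(Not in the original post: the closed form above can be checked against a brute-force truncated two-sided sum; a small sketch with an arbitrary Q = 0.8:)

```python
import numpy as np

q = 0.8                                   # illustrative value of Q = 1 - P
w = np.linspace(0.1, np.pi, 50)           # evaluation frequencies
k = np.arange(-200, 201)                  # truncation of the two-sided sum

# brute-force two-sided DTFT of the autocorrelation (1/3) * Q^|k|
numeric = np.array([np.sum((1/3) * q**np.abs(k) * np.exp(-1j * k * wi))
                    for wi in w])

# the closed form (1/3)*(1-Q^2)/(1-2Qcos(w) + Q^2)
closed = (1/3) * (1 - q**2) / (1 - 2*q*np.cos(w) + q**2)

print(np.max(np.abs(numeric - closed)))   # truncation error only, tiny
```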

E



On Mon, Nov 16, 2015 at 12:35 PM, robert bristow-johnson <
r...@audioimagination.com> wrote:

>
> > Am 16.11.2015 20:00, schrieb Martin Vicanek:
> >> [..] the autocorrelation is
> >>
> >> <x[n] x[n+k]> = (1/3)*(1-P)^|k|
> >>
> >> (I checked that with a little MC code before posting.) So the power
> >> spectrum is (1/3)/(1 + (1-P)z^-1), i.e flat at DC and pink at higher
> >> frequencies. For reasonably small P the corner frequency is
> >>
> >> w_c = P/sqrt(1-P).
> >
> > Erratum: The power spectrum is brown, not pink. The fall-off is 12
> > dB/octave, not 6. Sorry, next time I'll use a larger envelope. ;-)
> >
>
> well, pink is -3 dB/octave and red (a.k.a. brown) is -6 dB/octave.  a
> roll-off of -12 dB/octave would be very brown.
>

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-16 Thread robert bristow-johnson





> Am 16.11.2015 20:00, schrieb Martin Vicanek:
>> [..] the autocorrelation is
>>
>> <x[n] x[n+k]> = (1/3)*(1-P)^|k|
>>
>> (I checked that with a little MC code before posting.) So the power
>> spectrum is (1/3)/(1 + (1-P)z^-1), i.e. flat at DC and pink at higher
>> frequencies. For reasonably small P the corner frequency is
>>
>> w_c = P/sqrt(1-P).
>
> Erratum: The power spectrum is brown, not pink. The fall-off is 12
> dB/octave, not 6. Sorry, next time I'll use a larger envelope. ;-)
>
well, pink is -3 dB/octave and red (a.k.a. brown) is -6 dB/octave.  a roll-off 
of -12 dB/octave would be very brown.




--

r b-j                   r...@audioimagination.com

"Imagination is more important than knowledge."

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-16 Thread Martin Vicanek

Am 16.11.2015 20:00, schrieb Martin Vicanek:

[..] the autocorrelation is

 <x[n] x[n+k]> = (1/3)*(1-P)^|k|

(I checked that with a little MC code before posting.) So the power 
spectrum is (1/3)/(1 + (1-P)z^-1), i.e. flat at DC and pink at higher 
frequencies. For reasonably small P the corner frequency is


w_c = P/sqrt(1-P).


Erratum: The power spectrum is brown, not pink. The fall-off is 12 
dB/octave, not 6. Sorry, next time I'll use a larger envelope. ;-)




Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-16 Thread Martin Vicanek
> well, pink is -3 dB/octave and red (a.k.a. brown) is -6 dB/octave.  a
> roll-off of -12 dB/octave would be very brown. -- r b-j

Those values are for amplitudes - for a power spectrum the slopes double.


Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-12 Thread Theo Verelst
Everywhere in the exact sciences there has been a dualism between statistical analysis and 
deterministic engineering tools since the major breakthrough in quantum physics at the 
beginning of the 20th century. Whether that's some sort of diabolical duality or, as it 
actually is at the higher levels of mathematics, a natural state of affairs, with on one 
side theoretical science properly decorrelating what's not connected, while the better 
applied scientists construct working theories and machinery on the basis of deterministic 
("analytic" or number-based) hypotheses, depends in my opinion on the nature of the beast.


In physics, the strong and hard mathematical foundation of the main solutions of the 
famous quantum mechanical equations comes primarily from physical observation: nature 
appears to play a lot of dice at some level, whether we like it or not! That's a real 
given, not a lack of high-frequency measurements or of practical knowledge about 
electromagnetics, waveguides, and linear and non-linear electronic networks; from a 
century ago until this day, the laws of physics appear, to incredible accuracy, to be 
based on pure statistics and on hard givens about "causality".


Electronics in the higher frequency ranges has, since the beginning, usually been designed 
in terms of networks (oscillators, mixers, amplifiers, cables), EM field considerations 
(antennas, waveguides), and quantum mechanics at the level of transistor design. Many 
fields around communications have obviously progressed over the last decades, including 
better measurement equipment and better high-speed digital processing tools, as well as 
design software for creating (digital) transmitters and receivers.


Recently I've seen Agilent software for digital transmitters and other circuitry in mobile 
phones and other applications, and had some hands-on experience with Keysight oscilloscopes 
in the many-tens-of-gigahertz range. It is pretty interesting to actually be able to 
sample signals of tens of GHz into computer memory and, for instance, do eye-diagram 
analysis on digital signals, or play with the various statistics modules in such a device.


I heard the story that some of the latest Xilinx high-speed FPGAs, with their 28 Gb 
transceiver links connected over a backplane, create working "eye" diagrams, i.e. the 
communication works fine, but measurement equipment fails to confirm this by proper 
measurement. That's an interesting EE design dilemma right there: is the measurement 
equipment better than the design at hand, or do you need a bigger and faster computer 
than the target computer system you're designing, etc.


So the statistics being discussed come, I think, mainly from electronics by way of 
information theory, and some people, as is normal in information theory, find it fun to 
take out some singular (simpler) components, like basic statistical signal considerations, 
in the hope of easily designing some competing digital communication protocols. Scientific 
relevance: close to zero, unless maybe you'd get lucky.


With respect to musical signal analysis, it could be fun to theorize a bit about corner 
cases that have existed for a long time, like a noise source feeding a sample-and-hold 
circuit, and making interesting tones and processing with that. Like the S&H unit from a 
classic 1960s modular Moog synthesizer, which can probably be clocked with a varying clock 
and with feedback signals. The prime objective at the time was probably more related to 
finding out the deep effects of sampling on signals, and to encoding small-signal 
corrections in analog signals for when they were going to be on CD. My guess...


T.V.


Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-11 Thread Ethan Duni
>there is nothing *motivating* us to define Rx[k] = E{x[n] x[n+k]} except
>that we expect that expectation value (which is an average) to be the same
>as the other definition

Sure there is. That definition gets you everything you need to work out a
whole list of major results (for example, optimal linear predictors and how
they relate to the properties of the probabilistic model) without any
reference to statistics. You get all the insights into how everything fits
together, and then you move on to the extra wrinkles that arise when
dealing in statistical estimates of the quantities in question.

To relate this back to the OP: Ross gave us a probabilistic description of
a random process, from which we can work out the autocorrelation and psd
without any reference to ergodicity, or any signal realizations to compute
sample autocorrelations on.

>otherwise, given the probabilistic definition, why would we expect the
>Fourier Transform of Rx[k] = E{x[n] x[n+k]} to be the power spectrum?

In the modern context, that is the *definition* of the power spectral
density. The question of whether any particular statistic converges to it
is a separate question, considered after setting up the underlying
probabilistic models and relationships. You sort out what the underlying
quantity is, and only then do you consider how well a particular statistic
is able to approach it.

>by definition, **iff** it's ergodic, then the statistical estimate (by
>that you mean the average over time) converges to the probabilistic
>expectation value.  if it's *not* ergodic, then you don't know that they
>are the same.

Right, that's the definition of ergodicity. This seems phrased as a
disagreement or criticism but I'm not seeing the issue?

I certainly agree that autocorrelation and power spectral density are of
limited utility in the context of non-ergodic processes. And even more so
for non-stationary processes. But they're still well-defined (well, not so
much psd for non-WSS processes, but autocorrelation is perfectly general).

>what you call the "statistical estimate" is what i call the "primary
>definition".

Right.

>well, it's not just random processes that have autocorrelations.
>deterministic signals have them too.

Deterministic signals are a subset of random processes. The probabilistic
treatment is a generalization of the deterministic case. It's overkill if
you only want to deal with deterministic signals, but in the general case
it's all you need.

>your first communications class (the book i had was A.B. Carlson) started
>out with probability and stochastic processes???

My first communications class required as a prerequisite an entire course
on random processes. Which in turn required as a prerequisite yet another
entire course on basic probability and statistics. So there were two entire
courses of prob/stat/random processes pre-reqs before you get to day 1 of
communications systems.

Not sure what Carlson looked like in your time, but the modern editions do
a kind of weird parallel-track thing in this area. He does deterministic
signals first, and uses the same definitions as you. Then halfway through
he switches to random signals, and defines autocorrelation and psd directly
in terms of expected values as I describe. So it's "one definition for
deterministic case, another for random case," and then some paragraphs
bringing up the concept of ergodicity and how it bridges the two cases. The
way that the Carlson pedagogy would approach the OP - where we were given
an explicit description of a random signal - is in probabilistic terms
using definitions of acf and psd in terms of expected value.

E

On Wed, Nov 11, 2015 at 5:02 PM, robert bristow-johnson <
r...@audioimagination.com> wrote:

>
>
>
>  Original Message 
> Subject: Re: [music-dsp] how to derive spectrum of random sample-and-hold
> noise?
> From: "Ethan Duni" 
> Date: Wed, November 11, 2015 7:36 pm
> To: "robert bristow-johnson" 
> "A discussion list for music-related DSP" 
> --
>
> >>no. we need ergodicity to take a definition of autocorrelation, which we
> > are all familiar with:
> >
> >> Rx[k] = lim_{N->inf} 1/(2N+1) sum_{n=-N}^{+N} x[n] x[n+k]
> >
> >>and turn that into a probabilistic expression
> >
> >> Rx[k] = E{ x[n] x[n-k] }
> >
> >>which we can figger out with the joint p.d.f.
> >
> >
> > That's one way to do it. And if you're working only within the class of
> > stationary signals, it's a convenient way to set everything up. But it's
> > not necessary. There's nothing stopping you from simply defining
> > autocorrelation as r(n,k) = E(x[n]x[n-k]) at the outset.
>
>
>
> well there's nothing stopping us from defining autocorrelation as Rx[k] =
> 5 (for all k).  but such a definition is not particularly useful.
>
>
>
> there is nothing *motivating* 

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-11 Thread Ethan Duni
>no.  we need ergodicity to take a definition of autocorrelation, which we
>are all familiar with:

>  Rx[k] = lim_{N->inf} 1/(2N+1) sum_{n=-N}^{+N} x[n] x[n+k]

>and turn that into a probabilistic expression

>  Rx[k] = E{ x[n] x[n-k] }

>which we can figger out with the joint p.d.f.


That's one way to do it. And if you're working only within the class of
stationary signals, it's a convenient way to set everything up. But it's
not necessary. There's nothing stopping you from simply defining
autocorrelation as r(n,k) = E(x[n]x[n-k]) at the outset. You then need (WS)
stationarity to make that a function of only the lag, and then ergodicity
to establish that the statistical estimate of autocorrelation (the sample
autocorrelation, as it is commonly known) will converge, but you can ignore
it if you are just dealing with probabilistic quantities and not worrying
about the statistics.


>i totally disagree.  i consider this to be fundamental (and it's how i
remember doing statistical communication theory back in grad school).


That was a common approach in classical signal processing
literature/curricula, since you're typically assuming stationarity at the
outset anyway. And this approach matches the historical development of the
concepts (people were computing sample autocorrelations before they squared
away the probabilistic interpretation). But this is kind of a historical
artifact that has fallen out of favor.


In modern statistical signal processing contexts (and the wider prob/stat
world) it's typically done the other way around: you define all the random
variables/processes up front, and then define autocorrelation as r(n,k) =
E(x[n]x[n-k]). Once you have that all sorted out, you turn to the question
of whether the corresponding statistics (the sample autocorrelation
function for example) converge, which is where the ergodicity stuff comes
in. The advantage to doing it this way is that you start with the most
general stuff requiring the least assumptions, and then build up more and
more specific results as you add assumptions. Assuming ergodicity at the
outset and defining everything in terms of the statistics produces the same
results for that case, but leaves you unable to say anything about
non-stationary signals, non-ergodic signals, etc.


Leafing through my college books, I can't find a single one that does it
the old way. They all start with definitions in the probability domain, and
then tackle the statistics after that's all set up.


E

On Wed, Nov 11, 2015 at 4:04 PM, robert bristow-johnson <
r...@audioimagination.com> wrote:

>
>
>  Original Message 
> Subject: Re: [music-dsp] how to derive spectrum of random sample-and-hold
> noise?
> From: "Ethan Duni" 
> Date: Wed, November 11, 2015 5:57 pm
> To: "robert bristow-johnson" 
> "A discussion list for music-related DSP" 
> --
>
> >>all ergodic processes are stationary. (not necessarily the other way
> > around.)
> >
> > Ah, right, there is no constant mean for a time average to converge to if
> > the process isn't stationary in the first place. Been a while since I
> > worried about the details of ergodicity, mostly I have the intuitive
> notion
> > that there is no "unreachable" state or infinite memory (ala a fully
> > connected Markov chain).
> >
> >>the reason (besides forgetting stuff i learned 4 decades ago) i left out
> > "stationary" was that i was sorta conflating the two. i just wanted to be
> > able to turn the time-averages in the whatever norm (and L^2 is as good
> as
> > any) with probabilistic averages, which is the root meaning of the
> property
> > "ergodic". but probably "stationary" is a better (stronger) assumption to
> > make.
> >
> > Err, didn't we just establish that ergodicity is the stronger condition?
> >
>
> yeah, we did.
>
>
> > Also I don't think we need to worry about ergodicity in the first place.
> > The process in the OP is ergodic (for P not equal to 0) but we don't need
> > to use that anywhere.
>
>
>
> yeah, we do.
>
>
>
> > We can compute the autocorrelation directly without
> > any reference to time averages or other statistics.
>
>
>
> well, the original definition of autocorrelation *is* in reference to a
> time average.  same thing with the "A" in AMDF and ASDF (the latter is an
> upside-down version of autocorrelation).
>
>
>
> > We only need ergodicity
> > if we also want to estimate the autocorrelation/psd from example data.
>
> no.  we need ergodicity to take a definition of autocorrelation, which we
> are all familiar with:
>
>
>
>Rx[k] = lim_{N->inf} 1/(2N+1) sum_{n=-N}^{+N} x[n] x[n+k]
>
>
>
> and turn that into a probabilistic expression
>
>
>
>Rx[k] = E{ x[n] x[n-k] }
>
>
>
> which we can figger out with the joint p.d.f.
>
>
>
> > Which is important for making plots to verify that 

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-11 Thread robert bristow-johnson









 Original Message 

Subject: Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

From: "Ethan Duni" 

Date: Wed, November 11, 2015 7:36 pm

To: "robert bristow-johnson" 

"A discussion list for music-related DSP" 

--



>>no. we need ergodicity to take a definition of autocorrelation, which we

> are all familiar with:

>

>> Rx[k] = lim_{N->inf} 1/(2N+1) sum_{n=-N}^{+N} x[n] x[n+k]

>

>>and turn that into a probabilistic expression

>

>> Rx[k] = E{ x[n] x[n-k] }

>

>>which we can figger out with the joint p.d.f.

>

>

> That's one way to do it. And if you're working only within the class of

> stationary signals, it's a convenient way to set everything up. But it's

> not necessary. There's nothing stopping you from simply defining

> autocorrelation as r(n,k) = E(x[n]x[n-k]) at the outset.

well there's nothing stopping us from defining autocorrelation as Rx[k] = 5 
(for all k).  but such a definition is not particularly useful.

there is nothing *motivating* us to define Rx[k] = E{x[n] x[n+k]} except that 
we expect that expectation value (which is an average) to be the same as the 
other definition, which is what we use in all of this deterministic Fourier 
signal theory we start with in communication systems.  otherwise, given the 
probabilistic definition, why would we expect the Fourier Transform of 
Rx[k] = E{x[n] x[n+k]} to be the power spectrum?  you get to that fact long 
before any of this statistical communications theory.

> You then need (WS)
> stationarity to make that a function of only the lag, and then ergodicity
> to establish that the statistical estimate of autocorrelation (the sample
> autocorrelation, as it is commonly known) will converge,

to *what*?  by definition, **iff** it's ergodic, then the statistical estimate 
(by that you mean the average over time) converges to the probabilistic 
expectation value.  if it's *not* ergodic, then you don't know that they are 
the same.

> but you can ignore
> it if you are just dealing with probabilistic quantities and not worrying
> about the statistics.

we got some semantic differences here.  by "statistical estimate" i know 
you're referring to the same result that we get in our first semester 
communications (long before statistical communications) as the **definition** 
of autocorrelation.  what you call the "statistical estimate" is what i call 
the "primary definition".
�
>>i totally disagree. i consider this to be fundamental (and it's how i
> remember doing statistical communication theory back in grad school).

>

>

> That was a common approach in classical signal processing

> literature/cirricula, since you're typically assuming stationarity at the

> outset anyway. And this approach matches the historical development of the

> concepts (people were computing sample autocorrelations before they squared

> away the probabilistic interpretation). But this is kind of a historical

> artifact that has fallen out of favor.


in an electrical engineering communications class?  are you sure?

see, they gotta teach these kids things about signals and Fourier and spectra 
and LTI and the like so they have a concept of what that stuff is about 
*without* necessarily bringing into the conversation probability, random 
variables, p.d.f., and random processes.  pedagogically, i am quite dubious 
that this *historical artifact* is not how they teach statistical 
communications even now.  i know my vanTrees and Wozencraft books are old, 
but this is timeless and classical.  i doubt it has fallen out of favor.

>

> In modern statistical signal processing contexts (and the wider prob/stat

> world) it's typically done the other way around: you define all the random

> variables/processes up front, and then define autocorrelation as r(n,k) =

> E(x[n]x[n-k]).

well, it's not just random processes that have autocorrelations.  
deterministic signals have them too.

and pedagogically, i can't imagine teaching statistical signal processing 
before teaching the fundamentals of signal processing.

> Once you have that all sorted out, you turn to the question
> of whether the corresponding statistics (the sample autocorrelation

> function for example) converge, which is where the ergodicity stuff comes

> in.

yes.

> The advantage to doing it this way is that you start with the most

> general stuff requiring the least assumptions, and then build up more and

> more specific results as you add assumptions. Assuming ergodicity at the

> outset and defining everything in terms of the statistics produces the same

> results for that case, but leaves you unable to say anything about

> non-stationary signals, non-ergodic signals, etc.

>

>

> Leafing through my college books, I can't find a 

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-11 Thread robert bristow-johnson







 Original Message 

Subject: Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

From: "Ethan Duni" 

Date: Wed, November 11, 2015 5:57 pm

To: "robert bristow-johnson" 

"A discussion list for music-related DSP" 

--



>>all ergodic processes are stationary. (not necessarily the other way

> around.)

>

> Ah, right, there is no constant mean for a time average to converge to if

> the process isn't stationary in the first place. Been a while since I

> worried about the details of ergodicity, mostly I have the intuitive notion

> that there is no "unreachable" state or infinite memory (ala a fully

> connected Markov chain).

>

>>the reason (besides forgetting stuff i learned 4 decades ago) i left out

> "stationary" was that i was sorta conflating the two. i just wanted to be

> able to turn the time-averages in the whatever norm (and L^2 is as good as

> any) with probabilistic averages, which is the root meaning of the property

> "ergodic". but probably "stationary" is a better (stronger) assumption to

> make.

>

> Err, didn't we just establish that ergodicity is the stronger condition?

>
yeah, we did.


> Also I don't think we need to worry about ergodicity in the first place.

> The process in the OP is ergodic (for P not equal to 0) but we don't need

> to use that anywhere.

yeah, we do.

> We can compute the autocorrelation directly without

> any reference to time averages or other statistics.

well, the original definition of autocorrelation *is* in reference to a time 
average.  same thing with the "A" in AMDF and ASDF (the latter is an 
upside-down version of autocorrelation).

> We only need ergodicity
> if we also want to estimate the autocorrelation/psd from example data.
no.  we need ergodicity to take a definition of autocorrelation, which we are 
all familiar with:

   Rx[k] = lim_{N->inf} 1/(2N+1) sum_{n=-N}^{+N} x[n] x[n+k]

and turn that into a probabilistic expression

   Rx[k] = E{ x[n] x[n-k] }

which we can figger out with the joint p.d.f.

> Which is important for making plots to verify that the answer is correct,
> but not needed just to derive the autocorrelation/spectrum themselves.

i totally disagree.  i consider this to be fundamental (and it's how i 
remember doing statistical communication theory back in grad school).
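
(Editorial aside, not from the thread: the equality of the two definitions for an ergodic process can be illustrated numerically. A sketch using an AR(1) process with arbitrary parameters, comparing the long-time average on one realization with an ensemble average E{x[m] x[m+k]} over independent realizations:)

```python
import numpy as np

rng = np.random.default_rng(1)
q = 0.9      # AR(1) coefficient; an illustrative stationary, ergodic process
k = 3        # lag at which to compare the two definitions

# time average over one long realization: the "primary definition"
n = 500_000
x = np.zeros(n)
e = rng.standard_normal(n)
for i in range(1, n):
    x[i] = q * x[i - 1] + e[i]
time_avg = np.mean(x[:n - k] * x[k:])

# ensemble average E{x[m] x[m+k]} over many independent realizations
m = 200                                   # well past the start-up transient
trials = 20_000
e2 = rng.standard_normal((trials, m + k + 1))
y = np.zeros_like(e2)
for i in range(1, m + k + 1):
    y[:, i] = q * y[:, i - 1] + e2[:, i]
ensemble_avg = np.mean(y[:, m] * y[:, m + k])

theory = q**k / (1 - q**2)               # exact Rx[k] for this AR(1)
print(time_avg, ensemble_avg, theory)    # all three agree closely
```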


> Unless I missed something - where did this ergodicity assumption come from?

>
it came from me.  but i forgot that a simpler (and stronger) assumption would 
have been stationarity.



> On Tue, Nov 10, 2015 at 6:33 PM, robert bristow-johnson 
>  wrote:

>

>>

>>

>>  Original Message 

>> Subject: Re: [music-dsp] how to derive spectrum of random sample-and-hold

>> noise?

>>

From: "Ethan Duni" 

>> Date: Tue, November 10, 2015 8:58 pm

>> To: "A discussion list for music-related DSP" <

>> music-dsp@music.columbia.edu>

>> --

>>

>> >>(Semi-)stationarity, I'd say. Ergodicity is a weaker condition, true,

>> >>but it doesn't then really capture how your usual L^2 correlative

>> >>measures truly work.

>> >

>> > I think we need both conditions, no?

>>

>> all ergodic processes are stationary. (not necessarily the other way

>> around.)

>>

>>

>>

>> the reason (besides forgetting stuff i learned 4 decades ago) i left out

>> "stationary" was that i was sorta conflating the two. i just wanted to be

>> able to turn the time-averages in the whatever norm (and L^2 is as good as

>> any) with probabilistic averages, which is the root meaning of the property

>> "ergodic". but probably "stationary" is a better (stronger) assumption to

>> make.

>>

>>

>>

>>

>> >

>> >>Something like that, yes, except that you have to factor in aliasing.

>> >

>> > What aliasing? Isn't this process generated directly in the discrete time

>> > domain?

>>

>> i'm thinking the same thing. it's a discrete-time Markov process. just

>> model it and analyze it as such. assuming stationarity, we should be able

>> to derive an autocorrelation function (and i think you guys did) and from

>> that (and the DTFT) you have the (periodic) power spectrum.

>>

>> worry about frequency aliasing when you decide to output this to a DAC.

>>







--

r b-j                   r...@audioimagination.com

"Imagination is more important than knowledge."

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-11 Thread Ethan Duni
>all ergodic processes are stationary.  (not necessarily the other way
around.)

Ah, right, there is no constant mean for a time average to converge to if
the process isn't stationary in the first place. Been a while since I
worried about the details of ergodicity, mostly I have the intuitive notion
that there is no "unreachable" state or infinite memory (ala a fully
connected Markov chain).

>the reason (besides forgetting stuff i learned 4 decades ago) i left out
"stationary" was that i was sorta conflating the two. i just wanted to be
able to turn the time-averages in the whatever norm (and L^2 is >as good as
any) with probabilistic averages, which is the root meaning of the property
"ergodic".  but probably "stationary" is a better (stronger) assumption to
make.

Err, didn't we just establish that ergodicity is the stronger condition?

Also I don't think we need to worry about ergodicity in the first place.
The process in the OP is ergodic (for P not equal to 0) but we don't need
to use that anywhere. We can compute the autocorrelation directly without
any reference to time averages or other statistics. We only need ergodicity
if we also want to estimate the autocorrelation/psd from example data.
Which is important for making plots to verify that the answer is correct,
but not needed just to derive the autocorrelation/spectrum themselves.
Unless I missed something - where did this ergodicity assumption come from?
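That ensemble-average point can be checked directly. A quick numpy sketch (not from the thread; P here is the hold probability in Ethan D's notation, y ~ U(-1,1)) that estimates ac[3] = E[x[n]x[n-3]] across many independent realizations, i.e. with no time average and no appeal to ergodicity:

```python
import numpy as np

# Ensemble (not time-average) check of ac[k] = P^|k|, so no ergodicity is
# invoked: average x[n]*x[n-3] over many independent realizations.
# P = hold probability; fresh draws are ~U(-1,1), so E[y^2] = 1/3.
rng = np.random.default_rng(7)
P, n_real, n_len = 0.6, 50_000, 16
fresh = rng.uniform(-1.0, 1.0, (n_real, n_len))
hold = rng.random((n_real, n_len)) < P
x = np.empty((n_real, n_len))
x[:, 0] = fresh[:, 0]          # x[0] fresh => marginal is U(-1,1) from n = 0
for i in range(1, n_len):
    x[:, i] = np.where(hold[:, i], x[:, i - 1], fresh[:, i])
ac3 = np.mean(x[:, -1] * x[:, -4]) / (1.0 / 3.0)   # normalized ac[3]
```

With 50k realizations the estimate lands close to P^3, matching the closed-form derivation.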

E

On Tue, Nov 10, 2015 at 6:33 PM, robert bristow-johnson <
r...@audioimagination.com> wrote:

>
>
>  Original Message 
> Subject: Re: [music-dsp] how to derive spectrum of random sample-and-hold
> noise?
> From: "Ethan Duni" 
> Date: Tue, November 10, 2015 8:58 pm
> To: "A discussion list for music-related DSP" <
> music-dsp@music.columbia.edu>
> --
>
> >>(Semi-)stationarity, I'd say. Ergodicity is a weaker condition, true,
> >>but it doesn't then really capture how your usual L^2 correlative
> >>measures truly work.
> >
> > I think we need both conditions, no?
>
> all ergodic processes are stationary.  (not necessarily the other way
> around.)
>
>
>
> the reason (besides forgetting stuff i learned 4 decades ago) i left out
> "stationary" was that i was sorta conflating the two.  i just wanted to be
> able to turn the time-averages in the whatever norm (and L^2 is as good as
> any) with probabilistic averages, which is the root meaning of the property
> "ergodic".  but probably "stationary" is a better (stronger) assumption to
> make.
>
>
>
>
> >
> >>Something like that, yes, except that you have to factor in aliasing.
> >
> > What aliasing? Isn't this process generated directly in the discrete time
> > domain?
>
> i'm thinking the same thing.  it's a discrete-time Markov process.  just
> model it and analyze it as such. assuming stationarity, we should be able
> to derive an autocorrelation function (and i think you guys did) and from
> that (and the DTFT) you have the (periodic) power spectrum.
>
> worry about frequency aliasing when you decide to output this to a DAC.
>
>
>
> --
>
>
>
>
> r b-j   r...@audioimagination.com
>
>
>
>
> "Imagination is more important than knowledge."
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-10 Thread Theo Verelst
In the course of these discussions, let's not forget the difference between a convolution 
with 1/(Pi*t) (a Hilbert transform kernel) and the inversion of the transfer function of a 
linear system.


T.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-10 Thread robert bristow-johnson







 Original Message 

Subject: Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

From: "Ethan Duni" 

Date: Tue, November 10, 2015 8:58 pm

To: "A discussion list for music-related DSP" 

--



>>(Semi-)stationarity, I'd say. Ergodicity is a weaker condition, true,

>>but it doesn't then really capture how your usual L^2 correlative

>>measures truly work.

>

> I think we need both conditions, no?
all ergodic processes are stationary. (not necessarily the other way around.)

the reason (besides forgetting stuff i learned 4 decades ago) i left out 
"stationary" was that i was sorta conflating the two. i just
wanted to be able to turn the time-averages in the whatever norm (and L^2 is as 
good as any) with probabilistic averages, which is the root meaning of the 
property "ergodic". but probably "stationary" is a better (stronger) 
assumption to make.

>

>>Something like that, yes, except that you have to factor in aliasing.

>

> What aliasing? Isn't this process generated directly in the discrete time

> domain?
i'm thinking the same thing. it's a discrete-time Markov process. just model 
it and analyze it as such. assuming stationarity, we should be able to derive 
an autocorrelation function (and i think you guys did) and from that (and the 
DTFT) you have the (periodic) power
spectrum.
worry about frequency aliasing when you decide to output this to a DAC.



--

r b-j                  r...@audioimagination.com


"Imagination is more important than knowledge."
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-10 Thread Ethan Duni
>(Semi-)stationarity, I'd say. Ergodicity is a weaker condition, true,
>but it doesn't then really capture how your usual L^2 correlative
>measures truly work.

I think we need both conditions, no?

>Something like that, yes, except that you have to factor in aliasing.

What aliasing? Isn't this process generated directly in the discrete time
domain?

E

On Tue, Nov 10, 2015 at 5:43 PM, Sampo Syreeni  wrote:

> On 2015-11-04, robert bristow-johnson wrote:
>
> it is the correct way to characterize the spectra of random signals. the
>> spectra (PSD) is the Fourier Transform of autocorrelation and is scaled as
>> magnitude-squared.
>>
>
> The normal way to derive the spectrum of S/H-noise goes a bit around these
> kinds of considerations. It takes as given that we have a certain sampling
> frequency, which is the same as the S/H frequency. Under that assumption,
> sample-and-hold takes any value, and holds it constant for a sampling
> period. You can model that by a convolution with a rectangular function
> which takes the value one for one sampling period, and which is zero
> everywhere else. Then the rest of the modelling has to do with normal
> aliasing analysis.
>
> That's at least how they did it before the era of delta-sigma converters.
>
> with the assumption of ergodicity, [...]
>>
>
> (Semi-)stationarity, I'd say. Ergodicity is a weaker condition, true, but
> it doesn't then really capture how your usual L^2 correlative measures
> truly work.
>
> i have a sneaky suspicion that this Markov process is gonna be something
>> like pink noise.
>>
>
> Something like that, yes, except that you have to factor in aliasing.
>
>
> r[n] = uniform_random(0, 1)
> if (r[n] <= P)
>x[n] = uniform_random(-1, 1);
> else
>x[n] = x[n-1];
>
>
> If P==1, that gives uniform white noise. If P==0, it yields a constant. If
> P==.5, half of the time it holds the previous value.
>
> In a continuous time Markov process you'd get something like pink noise,
> yes. But in a discrete time process you have to factor in aliasing. It goes
> pretty bad, pretty fast.
>
> --
> Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
> +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-10 Thread Sampo Syreeni

On 2015-11-04, robert bristow-johnson wrote:

it is the correct way to characterize the spectra of random signals. 
the spectra (PSD) is the Fourier Transform of autocorrelation and is 
scaled as magnitude-squared.


The normal way to derive the spectrum of S/H-noise goes a bit around 
these kinds of considerations. It takes as given that we have a certain 
sampling frequency, which is the same as the S/H frequency. Under that 
assumption, sample-and-hold takes any value, and holds it constant for a 
sampling period. You can model that by a convolution with a rectangular 
function which takes the value one for one sampling period, and which is 
zero everywhere else. Then the rest of the modelling has to do with 
normal aliasing analysis.


That's at least how they did it before the era of delta-sigma 
converters.
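The rectangular hold kernel described above has the familiar zero-order-hold magnitude response; a small numerical sketch (not from the thread; the unit hold period T and the frequency grid are arbitrary choices):

```python
import numpy as np

# Magnitude response of a rectangular "hold" kernel (zero-order hold): a box
# of width T has |H(f)| = T*|sinc(f*T)| (numpy's sinc is sin(pi x)/(pi x)),
# with nulls at multiples of the hold rate 1/T.
T = 1.0                            # hold period, taken as the unit here
f = np.linspace(0.0, 4.0, 4001)    # frequency in units of 1/T
H = T * np.abs(np.sinc(f * T))     # unity at DC, zero at f = 1/T, 2/T, ...
```

The lobes beyond f = 1/(2T) are what fold back in the aliasing analysis the paragraph refers to.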



with the assumption of ergodicity, [...]


(Semi-)stationarity, I'd say. Ergodicity is a weaker condition, true, 
but it doesn't then really capture how your usual L^2 correlative 
measures truly work.


i have a sneaky suspicion that this Markov process is gonna be 
something like pink noise.


Something like that, yes, except that you have to factor in aliasing.


r[n] = uniform_random(0, 1)
if (r[n] <= P)
   x[n] = uniform_random(-1, 1);
else
   x[n] = x[n-1];


If P==1, that gives uniform white noise. If P==0, it yields a constant. 
If P==.5, half of the time it holds the previous value.


In a continuous time Markov process you'd get something like pink noise, 
yes. But in a discrete time process you have to factor in aliasing. It 
goes pretty bad, pretty fast.
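The pseudocode above translates to a runnable numpy sketch like this (not from the thread; the seed and length are arbitrary, and P is the resample probability as in the original question):

```python
import numpy as np

# Direct translation of the r[n]/x[n] pseudocode: with probability p draw a
# fresh value ~U(-1,1), otherwise hold the previous value.
def sample_and_hold_noise(n, p, seed=None):
    rng = np.random.default_rng(seed)
    r = rng.random(n)                     # r[n] ~ U(0,1)
    fresh = rng.uniform(-1.0, 1.0, n)
    x = np.empty(n)
    x[0] = fresh[0]                       # seed the recursion with a fresh draw
    for i in range(1, n):
        x[i] = fresh[i] if r[i] <= p else x[i - 1]
    return x

x = sample_and_hold_noise(10_000, 0.5, seed=0)
```

With p=1 every sample is fresh (white noise); with p=0 the output stays at its initial value, matching the cases above.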

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-09 Thread Dan Stowell
Hi Ross,

Just spotted this. I don't have an answer for you, but a possible
helpful literature connection...?

The system you describe is a simple Markov model. It's ergodic and
time-homogeneous and reversible, and has no hidden state, so I'd guess
that there must be results from the Markov model literature that can
help. In particular MCMC work, which uses reversible Markov chains and
their stationary distributions. In fact they often consider the
autocorrelation of the converged processes, so they can work out how to
take uncorrelated samples.

Best
Dan


On 03/11/15 17:42, Ross Bencina wrote:
> Hi Everyone,
> 
> Suppose that I generate a time series x[n] as follows:
> 

> P is a constant value between 0 and 1
> 
> At each time step n (n is an integer):
> 
> r[n] = uniform_random(0, 1)
> x[n] = (r[n] <= P) ? uniform_random(-1, 1) : x[n-1]
> 
> Where "(a) ? b : c" is the C ternary operator that takes on the value b
> if a is true, and c otherwise.
> <<<
> 
> What would be a good way to derive a closed-form expression for the
> spectrum of x? (Assuming that the series is infinite.)
> 
> 
> I'm guessing that the answer is an integral over the spectra of shifted
> step functions, but I don't know how to deal with the random magnitude
> of each step, or the random onsets. Please assume that I barely know how
> to take the Fourier transform of a step function.
> 
> Maybe the spectrum of a train of randomly spaced, random amplitude
> pulses is easier to model (i.e. w[n] = x[n] - x[n-1]). Either way, any
> hints would be appreciated.
> 
> Thanks in advance,
> 
> Ross.
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp

-- 
Dan Stowell
EPSRC Research Fellow
Centre for Digital Music
Queen Mary, University of London
Mile End Road, London E1 4NS
http://www.mcld.co.uk/research/
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-06 Thread Vadim Zavalishin

On 06-Nov-15 11:03, Vadim Zavalishin wrote:

Apologies if this question has already been answered, I didn't read the
entire thread, just wanted to share the following idea off the top of my
head FWIW.


Oops, nevermind, I didn't realize that the SnH period is also random in 
the original question.


--
Vadim Zavalishin
Reaktor Application Architect | R
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-06 Thread Vadim Zavalishin
Okay, an updated idea. Represent the signal as a sum of time-shifted box 
functions of random amplitudes and durations. We assume that the sum is 
finite and then we can take the limit (if the values approach the 
infinity as the result, we can normalize them according to the length of 
the signal).


Respectively the (complex) spectrum of such sum will be the sum of box 
function spectra which are randomly phase-rotated and randomly 
scaled/stretched in the frequency domain (according to their stretching 
in the time domain). The phase rotation can be assumed uniformly 
distributed. So we need to determine the distribution of the amplitudes 
and of the stretching of the box function spectra. Both of the latter 
can be found from the distribution of the box amplitudes and box lengths 
(under the assumption of uniform phase rotation distribution). The box 
amplitudes are uniformly distributed according to your specs. The 
distribution of box lengths must be IIRC one of the commonly known 
distributions, don't remember which one.
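The box-length distribution alluded to above is the geometric distribution: a box ends whenever a fresh draw occurs, which happens per-sample with the OP's probability P, so lengths have mean 1/P. A quick empirical check (not from the thread):

```python
import numpy as np

# Segment ("box") lengths: a new box starts whenever r[n] <= P, so the gaps
# between consecutive starts are geometrically distributed with mean 1/P.
rng = np.random.default_rng(0)
P = 0.25
starts = rng.random(1_000_000) <= P        # True where a fresh value is drawn
lengths = np.diff(np.flatnonzero(starts))  # gaps between fresh draws
mean_len = lengths.mean()                  # should be close to 1/P = 4
```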


On 06-Nov-15 11:06, Vadim Zavalishin wrote:

On 06-Nov-15 11:03, Vadim Zavalishin wrote:

Apologies if this question has already been answered, I didn't read the
entire thread, just wanted to share the following idea off the top of my
head FWIW.


Oops, nevermind, I didn't realize that the SnH period is also random in
the original question.




--
Vadim Zavalishin
Reaktor Application Architect | R
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-05 Thread Ethan Fenn
>
> What is the method that you used to go from ac[k] to psd[w]? Robert
> mentioned that psd was the Fourier transform of ac. Is this particular case
> a standard transform that you knew off the top of your head?


Yes, this is the Fourier transform of P^|k| (following Ethan D's notation).
To derive it you can use the fact that sum(P^k*exp(jwk)) =
sum((P*exp(jw))^k) = 1/(1-P*exp(jw)), where the sums are over k >= 0. Then
you have to add the same thing for negative frequencies, then subtract a
constant because you've accidentally included the k=0 term twice, then turn
the algebra crank a couple times. Or get the answer from a table or piece
of software. :)
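Turning the crank can also be checked numerically; a sketch (not from the thread) comparing a truncated version of the two-sided sum against the closed form:

```python
import numpy as np

# Truncated numerical check of sum_k P^|k| e^{-jwk}
#   = (1 - P^2) / (1 - 2*P*cos(w) + P^2).
P, K = 0.7, 400                       # P^400 is negligible, so truncation is safe
w = np.linspace(0.0, np.pi, 64)
k = np.arange(-K, K + 1)
dtft = (P ** np.abs(k) * np.exp(-1j * np.outer(w, k))).sum(axis=1)
closed = (1 - P**2) / (1 - 2 * P * np.cos(w) + P**2)
err = np.max(np.abs(dtft - closed))   # rounding + truncation error only
```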

> And is psd[w] in exactly the same units as the magnitude squared spectrum
> of x[n] (i.e. |ft(x)|^2)?


More or less, with the proviso that you have to be careful whether you're
talking about power per unit frequency which the psd will give you, and
power per frequency bin which is often the correct interpretation of
magnitude squared FFT results -- the latter depending on the FFT scaling
conventions used.

The psd makes no reference to any transform length, since it's based on the
statistical properties of the process. So I think it would be wrong (or at
least inexact) to have a scale related to N applied to it. If you want the
magnitude squared results of an FFT to match the psd, it seems more correct
to scale the FFT and try a few different N's to see what factor of N will
give consistent results.

As to the exact scale that should be applied... I think there should be a
1/3 in the expression for psd, because E[x^2]=1/3 where x is uniform in
[-1,1]. Aside from that, there might be a factor of 2pi depending on
whether we're talking about power per linear or angular frequency. And
there could be others I'm not thinking of; maybe someone else can shed
more light here.
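One consistency check on those factors (not from the thread): with the 1/3 in place, the psd averaged over one period of w should give back the process power E[x^2] = 1/3.

```python
import numpy as np

# The two-sided psd, scaled by E[y^2] = 1/3 for y ~ U(-1,1), should satisfy
# (1/(2*pi)) * integral over [-pi, pi] of psd dw = E[x^2] = 1/3.
P = 0.5
w = np.linspace(-np.pi, np.pi, 200_001)
psd = (1 / 3) * (1 - P**2) / (1 - 2 * P * np.cos(w) + P**2)
power = psd.sum() * (w[1] - w[0]) / (2 * np.pi)   # Riemann sum of the integral
```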

Hope that's somewhat helpful!

-Ethan




On Thu, Nov 5, 2015 at 11:00 AM, Ross Bencina 
wrote:

> Thanks Ethan(s),
>
> I was able to follow your derivation. A few questions:
>
> On 4/11/2015 7:07 PM, Ethan Duni wrote:
>
>> It's pretty straightforward to derive the autocorrelation and psd for
>> this one. Let me restate it with some convenient notation. Let's say
>> there are a parameter P in (0,1) and 3 random processes:
>> r[n] i.i.d. ~U(0,1)
>> y[n] i.i.d. ~(some distribution with at least first and second moments
>> finite)
>> x[n] = (r[n] < P) ? x[n-1] : y[n]
>> Note that I've switched the probability of holding to P from (1-P), and
>> that the signal being sampled-and-held can have an arbitrary (if well
>> behaved) distribution. Let's also assume wlog that E[y[n]y[n]] = 1
>> (Scale the final results by the power of whatever distribution you
>> prefer).
>>
>
> So for y[n] ~U(-1,1) I should multiply psd[w] by what exactly?
>
>
> Now, the autocorrelation function is ac[k] = E[x[n]x[n-k]]. Let's work
>> through the first few values:
>> k=0:
>> ac[0] = E[x[n]x[n]] = E[y[n]y[n]] = 1
>> k=1:
>> ac[1] = E[x[n]x[n-1]] = P*E[x[n-1]x[n-1]] + (1-P)*E[x[n-1]y[n]] =
>> P*E[y[n]y[n]] = P
>>
>> The idea is that P of the time, x[n] = x[n-1] (resulting in the first
>> term) and (1-P) of the time, x[n] is a new, uncorrelated sample from
>> y[n]. So we're left with P times the power (assumed to be 1 above).
>>
>> k=2:
>> ac[2] = P*P*E[x[n-2]x[n-2]] = P^2
>>
>> Again, we decompose the expected value into the case where x[n] = x[n-2]
>> - this only happens if both of the previous samples were held
>> (probability P^2). The rest of the time - if there was at least one
>> sample event - we have uncorrelated variables and the term drops out.
>>
>> So, by induction and symmetry, we conclude:
>>
> >
>
>> ac[k] = P^abs(k)
>>
> >
>
>> And so the psd is given by:
>>
>> psd[w] = (1 - P^2)/(1 - 2Pcos(w) + P^2)
>>
>
> What is the method that you used to go from ac[k] to psd[w]? Robert
> mentioned that psd was the Fourier transform of ac. Is this particular case
> a standard transform that you knew off the top of your head?
>
> And is psd[w] in exactly the same units as the magnitude squared spectrum
> of x[n] (i.e. |ft(x)|^2)?
>
>
> Unless I've screwed up somewhere?
>>
>
> A quick simulation suggests that it might be okay:
>
> https://www.dropbox.com/home/Public?preview=SH1_1.png
>
>
> But I don't seem to have the scale factors correct. The psd has
> significantly smaller magnitude than the fft.
>
> Here's the numpy code I used (also pasted below).
>
> https://gist.github.com/RossBencina/a15a696adf0232c73a55
>
> The FFT output is scaled by (2.0/N) prior to computing the magnitude
> squared spectrum.
>
> I have also scaled the PSD by (2.0/N). That doesn't seem quite right to me
> for two reasons: (1) the scale factor is applied to the linear FFT, but to
> the mag squared PSD and (2) I don't have the 1/3 factor anywhere.
>
> Any thoughts on what I'm doing wrong?
>
> Thanks,
>
> Ross.
>
>
> P.S. Pasting the numpy code below:
>
> # ---8<
> # see
> 

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-05 Thread Ross Bencina

Thanks Ethan(s),

I was able to follow your derivation. A few questions:

On 4/11/2015 7:07 PM, Ethan Duni wrote:

It's pretty straightforward to derive the autocorrelation and psd for
this one. Let me restate it with some convenient notation. Let's say
there are a parameter P in (0,1) and 3 random processes:
r[n] i.i.d. ~U(0,1)
y[n] i.i.d. ~(some distribution with at least first and second moments
finite)
x[n] = (r[n] < P) ? x[n-1] : y[n]

ac[k] = P^abs(k)

>

And so the psd is given by:

psd[w] = (1 - P^2)/(1 - 2Pcos(w) + P^2)


What is the method that you used to go from ac[k] to psd[w]? Robert 
mentioned that psd was the Fourier transform of ac. Is this particular 
case a standard transform that you knew off the top of your head?


And is psd[w] in exactly the same units as the magnitude squared 
spectrum of x[n] (i.e. |ft(x)|^2)?




Unless I've screwed up somewhere?


A quick simulation suggests that it might be okay:

https://www.dropbox.com/home/Public?preview=SH1_1.png


But I don't seem to have the scale factors correct. The psd has 
significantly smaller magnitude than the fft.


Here's the numpy code I used (also pasted below).

https://gist.github.com/RossBencina/a15a696adf0232c73a55

The FFT output is scaled by (2.0/N) prior to computing the magnitude 
squared spectrum.


I have also scaled the PSD by (2.0/N). That doesn't seem quite right to 
me for two reasons: (1) the scale factor is applied to the linear FFT, 
but to the mag squared PSD and (2) I don't have the 1/3 factor anywhere.


Any thoughts on what I'm doing wrong?

Thanks,

Ross.


P.S. Pasting the numpy code below:

# ---8<
# see 
https://lists.columbia.edu/pipermail/music-dsp/2015-November/000424.html

# psd derivation due to Ethan Duni
import numpy as np
from numpy.fft import fft, fftfreq
import matplotlib.pyplot as plt

N = 16384*2*2*2*2*2 # FFT size

y = (np.random.random(N) * 2) - 1 # ~U(-1,1)
r = np.random.random(N) # ~U(0,1)
x = np.empty(N) # (r[n]

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-05 Thread Ethan Fenn
>
> Let's see if I got this right: each bin contains the power for a frequency
> interval of 2pi/N radians. If I multiply each bin's power by N/2pi I should
> get power values in units of power/radian.
>

Sounds reasonable to me, but I'm not sure I've got it right so who knows!

I think I was slightly off when I said that the units of psd are power per
unit frequency -- since the whole signal has infinite power, the units
really need to be power per unit frequency per unit time, which
(confusingly) is the same thing as power. This could be another reason why
some special scaling is needed as compared to a finite-length FFT.

I'm not sure whether the FFT values should be fringing above the psd line
> or not:


The psd line is the expected value, so some FFT values should be above it
and some below. You could try averaging the squared spectra from a bunch of
separate FFT trials and see if that makes things converge toward the line.

-Ethan


On Thu, Nov 5, 2015 at 3:48 PM, Ross Bencina 
wrote:

> Thanks Ethan,
>
> I think that I have it working. It would be great if someone could check
> the scaling though. I'm not sure whether the FFT values should be fringing
> above the psd line or not:
>
> https://www.dropbox.com/s/txc0txhxqr1t274/SH1_2.png?dl=0
>
> I removed the hamming window, which was causing scaling problems. The FFT
> output is now scaled so that the sum of power over all bins matches the
> power of the time domain signal:
>
>
> https://gist.github.com/RossBencina/a15a696adf0232c73a55/bdefe5ab0b5c218a966bd6a04d9d998a708faf99
>
>
> On 6/11/2015 12:02 AM, Ethan Fenn wrote:
>
>> And is psd[w] in exactly the same units as the magnitude squared
>> spectrum of x[n] (i.e. |ft(x)|^2)?
>>
>>
>> More or less, with the proviso that you have to be careful whether
>> you're talking about power per unit frequency which the psd will give
>> you, and power per frequency bin which is often the correct
>> interpretation of magnitude squared FFT results -- the latter depending
>> on the FFT scaling conventions used.
>>
>
> Let's see if I got this right: each bin contains the power for a frequency
> interval of 2pi/N radians. If I multiply each bin's power by N/2pi I should
> get power values in units of power/radian.
>
>
> The psd makes no reference to any transform length, since it's based on
>> the statistical properties of the process. So I think it would be wrong
>> (or at least inexact) to have a scale related to N applied to it. If you
>> want the magnitude squared results of an FFT to match the psd, it seems
>> more correct to scale the FFT and try a few different N's to see what
>> factor of N will give consistent results.
>>
>
> That makes sense.
>
>
> As to the exact scale that should be applied... I think there should be
>> a 1/3 in the expression for psd, because E[x^2]=1/3 where x is uniform
>> in [-1,1]. Aside from that, there might be a factor of 2pi depending on
>> whether we're talking about power per linear or angular frequency. And
>> there could be others I'm not thinking of; maybe someone else can
>> shed more light here.
>>
>
> I multiplied the psd by 1/3 and as you can see from the graph it looks as
> though the FFT and the psd are more-or-less aligned.
>
>
> Hope that's somewhat helpful!
>>
>
> Very clear thanks,
>
> Ross.
>
>
>
> -Ethan
>>
>>
>>
>>
>> On Thu, Nov 5, 2015 at 11:00 AM, Ross Bencina
>> > wrote:
>>
>> Thanks Ethan(s),
>>
>> I was able to follow your derivation. A few questions:
>>
>> On 4/11/2015 7:07 PM, Ethan Duni wrote:
>>
>> It's pretty straightforward to derive the autocorrelation and
>> psd for
>> this one. Let me restate it with some convenient notation. Let's
>> say
>> there are a parameter P in (0,1) and 3 random processes:
>> r[n] i.i.d. ~U(0,1)
>> y[n] i.i.d. ~(some distribution with at least first and second
>> moments
>> finite)
>> x[n] = (r[n] < P) ? x[n-1] : y[n]
>> Note that I've switched the probability of holding to P from
>> (1-P), and
>> that the signal being sampled-and-held can have an arbitrary (if
>> well
>> behaved) distribution. Let's also assume wlog that E[y[n]y[n]] = 1
>> (Scale the final results by the power of whatever distribution
>> you prefer).
>>
>>
>> So for y[n] ~U(-1,1) I should multiply psd[w] by what exactly?
>>
>>
>> Now, the autocorrelation function is ac[k] = E[x[n]x[n-k]].
>> Let's work
>> through the first few values:
>> k=0:
>> ac[0] = E[x[n]x[n]] = E[y[n]y[n]] = 1
>> k=1:
>> ac[1] = E[x[n]x[n-1]] = P*E[x[n-1]x[n-1]] + (1-P)*E[x[n-1]y[n]] =
>> P*E[y[n]y[n]] = P
>>
>> The idea is that P of the time, x[n] = x[n-1] (resulting in the
>> first
>> term) and (1-P) of the time, x[n] is a new, uncorrelated sample

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-05 Thread Ethan Fenn
Yes, thank you! I guess most of the places I typed the word power I really
meant energy... units are hard...

-Ethan


On Thu, Nov 5, 2015 at 7:33 PM, Ethan Duni  wrote:

> >since the whole signal has infinite power, the units really
> >need to be power per unit frequency per unit time, which
> >(confusingly) is the same thing as power.
>
> I think you mean to say "infinite energy" and then "energy per unit
> frequency per unit time," no?
>
> E
>
> On Thu, Nov 5, 2015 at 8:21 AM, Ethan Fenn  wrote:
>
>> Let's see if I got this right: each bin contains the power for a
>>> frequency interval of 2pi/N radians. If I multiply each bin's power by
>>> N/2pi I should get power values in units of power/radian.
>>>
>>
>> Sounds reasonable to me, but I'm not sure I've got it right so who knows!
>>
>> I think I was slightly off when I said that the units of psd are power
>> per unit frequency -- since the whole signal has infinite power, the units
>> really need to be power per unit frequency per unit time, which
>> (confusingly) is the same thing as power. This could be another reason why
>> some special scaling is needed as compared to a finite-length FFT.
>>
>> I'm not sure whether the FFT values should be fringing above the psd line
>>> or not:
>>
>>
>> The psd line is the expected value, so some FFT values should be above it
>> and some below. You could try averaging the squared spectra from a bunch of
>> separate FFT trials and see if that makes things converge toward the line.
>>
>> -Ethan
>>
>>
>> On Thu, Nov 5, 2015 at 3:48 PM, Ross Bencina 
>> wrote:
>>
>>> Thanks Ethan,
>>>
>>> I think that I have it working. It would be great if someone could check
>>> the scaling though. I'm not sure whether the FFT values should be fringing
>>> above the psd line or not:
>>>
>>> https://www.dropbox.com/s/txc0txhxqr1t274/SH1_2.png?dl=0
>>>
>>> I removed the hamming window, which was causing scaling problems. The
>>> FFT output is now scaled so that the sum of power over all bins matches
>>> the power of the time domain signal:
>>>
>>>
>>> https://gist.github.com/RossBencina/a15a696adf0232c73a55/bdefe5ab0b5c218a966bd6a04d9d998a708faf99
>>>
>>>
>>> On 6/11/2015 12:02 AM, Ethan Fenn wrote:
>>>
 And is psd[w] in exactly the same units as the magnitude squared
 spectrum of x[n] (i.e. |ft(x)|^2)?


 More or less, with the proviso that you have to be careful whether
 you're talking about power per unit frequency which the psd will give
 you, and power per frequency bin which is often the correct
 interpretation of magnitude squared FFT results -- the latter depending
 on the FFT scaling conventions used.

>>>
>>> Let's see if I got this right: each bin contains the power for a
>>> frequency interval of 2pi/N radians. If I multiply each bin's power by
>>> N/2pi I should get power values in units of power/radian.
>>>
>>>
>>> The psd makes no reference to any transform length, since it's based on
 the statistical properties of the process. So I think it would be wrong
 (or at least inexact) to have a scale related to N applied to it. If you
 want the magnitude squared results of an FFT to match the psd, it seems
 more correct to scale the FFT and try a few different N's to see what
 factor of N will give consistent results.

>>>
>>> That makes sense.
>>>
>>>
>>> As to the exact scale that should be applied... I think there should be
 a 1/3 in the expression for psd, because E[x^2]=1/3 where x is uniform
 in [-1,1]. Aside from that, there might be a factor of 2pi depending on
 whether we're talking about power per linear or angular frequency. And
 there could be others I'm not thinking of; maybe someone else can
 shed more light here.

>>>
>>> I multiplied the psd by 1/3 and as you can see from the graph it looks
>>> as though the FFT and the psd are more-or-less aligned.
>>>
>>>
>>> Hope that's somewhat helpful!

>>>
>>> Very clear thanks,
>>>
>>> Ross.
>>>
>>>
>>>
>>> -Ethan




 On Thu, Nov 5, 2015 at 11:00 AM, Ross Bencina
 > wrote:

 Thanks Ethan(s),

 I was able to follow your derivation. A few questions:

 On 4/11/2015 7:07 PM, Ethan Duni wrote:

 It's pretty straightforward to derive the autocorrelation and
 psd for
 this one. Let me restate it with some convenient notation.
 Let's say
 there are a parameter P in (0,1) and 3 random processes:
 r[n] i.i.d. ~U(0,1)
 y[n] i.i.d. ~(some distribution with at least first and second
 moments
 finite)
 x[n] = (r[n] < P) ? x[n-1] : y[n]

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-05 Thread Ethan Duni
>So for y[n] ~U(-1,1) I should multiply psd[w] by what exactly?

The variance of y[n]. For U(-1,1) this is 1/3. From your subsequent post it
sounds like you got this ironed out?

>What is the method that you used to go from ac[k] to psd[w]?
>Robert mentioned that psd was the Fourier transform of ac.
>Is this particular case a standard transform that you knew off the top of
your head?

Yeah it's just the DTFT of the autocorrelation function. You can find that
one in a suitably complete table of transform pairs, or just evaluate it
directly by using the geometric series formula and a bit of manipulation as
Ethan F described (that's what I did, for old time's sake).

Are you comparing this to a big FFT of a long sequence of samples from this
process (i.e., periodogram)? The basic shape should be visible there, but
with quite a lot of noise (because the frequency resolution increases at
the same rate as the number of samples, you have a constant number of
samples per frequency bin so that error never converges to zero). To really
see the effect you can use a more sophisticated spectral density estimate
like Welch's method. Basically you chop the signal into chunks (with length
determined by your frequency resolution) and then average the FFT
magnitudes of those. That way you have a constant frequency resolution and
increasing samples per bin, so the result will converge to the underlying
PSD. There are more details with windowing and overlap and scaling, but
that's the basic idea.

https://en.wikipedia.org/wiki/Spectral_density_estimation
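
A minimal sketch of that chunk-and-average estimate (rectangular window, no overlap; numpy assumed, and P_hold, M, K are illustrative values of my choosing, not anything from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

P_hold = 0.5   # probability of holding the previous sample (Ethan's P)
M = 128        # chunk length, i.e. the frequency resolution
K = 2000       # number of chunks averaged
N = M * K

# Generate the sample-and-hold process: hold with probability P_hold,
# otherwise draw a fresh sample from U(-1, 1).
r = rng.random(N)
y = rng.uniform(-1.0, 1.0, N)
x = np.empty(N)
x[0] = y[0]
for n in range(1, N):
    x[n] = x[n - 1] if r[n] <= P_hold else y[n]

# Welch-style estimate: average the squared FFT magnitudes of the chunks,
# scaled by 1/M so the estimate converges to the DTFT-convention PSD.
chunks = x.reshape(K, M)
est = np.mean(np.abs(np.fft.fft(chunks, axis=1)) ** 2, axis=0) / M

# Theoretical PSD: variance of U(-1,1) is 1/3, shape (1-P^2)/(1-2Pcos w+P^2).
w = 2 * np.pi * np.arange(M) / M
psd = (1.0 / 3.0) * (1 - P_hold**2) / (1 - 2 * P_hold * np.cos(w) + P_hold**2)
```

With the 1/M scaling the average of the chunk periodograms converges to the DTFT-convention psd, so est should hug the psd curve; growing K tightens the estimate at fixed frequency resolution, which is exactly the point being made above.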

E



On Thu, Nov 5, 2015 at 2:00 AM, Ross Bencina 
wrote:

> Thanks Ethan(s),
>
> I was able to follow your derivation. A few questions:
>
> On 4/11/2015 7:07 PM, Ethan Duni wrote:
>
>> It's pretty straightforward to derive the autocorrelation and psd for
>> this one. Let me restate it with some convenient notation. Let's say
>> there are a parameter P in (0,1) and 3 random processes:
>> r[n] i.i.d. ~U(0,1)
>> y[n] i.i.d. ~(some distribution with at least first and second moments
>> finite)
>> x[n] = (r[n] <= P) ? x[n-1] : y[n]
>> Note that I've switched the probability of holding to P from (1-P), and
>> that the signal being sampled-and-held can have an arbitrary (if well
>> behaved) distribution. Let's also assume wlog that E[y[n]y[n]] = 1
>> (Scale the final results by the power of whatever distribution you
>> prefer).
>>
>
> So for y[n] ~U(-1,1) I should multiply psd[w] by what exactly?
>
>
> Now, the autocorrelation function is ac[k] = E[x[n]x[n-k]]. Let's work
>> through the first few values:
>> k=0:
>> ac[0] = E[x[n]x[n]] = E[y[n]y[n]] = 1
>> k=1:
>> ac[1] = E[x[n]x[n-1]] = P*E[x[n-1]x[n-1]] + (1-P)*E[x[n-1]y[n]] =
>> P*E[y[n]y[n]] = P
>>
>> The idea is that P of the time, x[n] = x[n-1] (resulting in the first
>> term) and (1-P) of the time, x[n] is a new, uncorrelated sample from
>> y[n]. So we're left with P times the power (assumed to be 1 above).
>>
>> k=2:
>> ac[2] = P*P*E[x[n-2]x[n-2]] = P^2
>>
>> Again, we decompose the expected value into the case where x[n] = x[n-2]
>> - this only happens if both of the previous samples were held
>> (probability P^2). The rest of the time - if there was at least one
>> sample event - we have uncorrelated variables and the term drops out.
>>
>> So, by induction and symmetry, we conclude:
>>
> >
>
>> ac[k] = P^abs(k)
>>
> >
>
>> And so the psd is given by:
>>
>> psd[w] = (1 - P^2)/(1 - 2Pcos(w) + P^2)
>>
>
> What is the method that you used to go from ac[k] to psd[w]? Robert
> mentioned that psd was the Fourier transform of ac. Is this particular case
> a standard transform that you knew off the top of your head?
>
> And is psd[w] in exactly the same units as the magnitude squared spectrum
> of x[n] (i.e. |ft(x)|^2)?
>
>
> Unless I've screwed up somewhere?
>>
>
> A quick simulation suggests that it might be okay:
>
> https://www.dropbox.com/home/Public?preview=SH1_1.png
>
>
> But I don't seem to have the scale factors correct. The psd has
> significantly smaller magnitude than the fft.
>
> Here's the numpy code I used (also pasted below).
>
> https://gist.github.com/RossBencina/a15a696adf0232c73a55
>
> The FFT output is scaled by (2.0/N) prior to computing the magnitude
> squared spectrum.
>
> I have also scaled the PSD by (2.0/N). That doesn't seem quite right to me
> for two reasons: (1) the scale factor is applied to the linear FFT, but to
> the mag squared PSD and (2) I don't have the 1/3 factor anywhere.
>
> Any thoughts on what I'm doing wrong?
>
> Thanks,
>
> Ross.
>
>
> P.S. Pasting the numpy code below:
>
> # ---8<
> # see
> https://lists.columbia.edu/pipermail/music-dsp/2015-November/000424.html
> # psd derivation due to Ethan Duni
> import numpy as np
> from numpy.fft import fft, fftfreq
> import matplotlib.pyplot as plt
>
> N = 16384*2*2*2*2*2 # FFT size
>
> y = (np.random.random(N) * 2) - 1 # ~U(-1,1)
> r = np.random.random(N) # ~U(0,1)
> x = 
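
(The archive truncates the code at this point. A plain reconstruction of the generator loop under Ross's original definition, where P is the probability of accepting a new sample, might look like the following; the names and seed are mine, not from the gist:)

```python
import numpy as np

rng = np.random.default_rng(42)  # arbitrary seed
N = 1 << 16
P = 0.5  # probability of drawing a NEW sample (Ross's convention)

r = rng.random(N)                # ~U(0,1)
y = rng.uniform(-1.0, 1.0, N)    # ~U(-1,1)
x = np.empty(N)
x[0] = y[0]
for n in range(1, N):
    # accept a new sample with probability P, otherwise hold the old one
    x[n] = y[n] if r[n] <= P else x[n - 1]
```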

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-05 Thread Ethan Duni
>since the whole signal has infinite power, the units really
>need to be power per unit frequency per unit time, which
>(confusingly) is the same thing as power.

I think you mean to say "infinite energy" and then "energy per unit
frequency per unit time," no?

E

On Thu, Nov 5, 2015 at 8:21 AM, Ethan Fenn  wrote:

> Let's see if I got this right: each bin contains the power for a frequency
>> interval of 2pi/N radians. If I multiply each bin's power by N/2pi I should
>> get power values in units of power/radian.
>>
>
> Sounds reasonable to me, but I'm not sure I've got it right so who knows!
>
> I think I was slightly off when I said that the units of psd are power per
> unit frequency -- since the whole signal has infinite power, the units
> really need to be power per unit frequency per unit time, which
> (confusingly) is the same thing as power. This could be another reason why
> some special scaling is needed as compared to a finite-length FFT.
>
> I'm not sure whether the FFT values should be fringing above the psd line
>> or not:
>
>
> The psd line is the expected value, so some FFT values should be above it
> and some below. You could try averaging the squared spectra from a bunch of
> separate FFT trials and see if that makes things converge toward the line.
>
> -Ethan
>
>
> On Thu, Nov 5, 2015 at 3:48 PM, Ross Bencina 
> wrote:
>
>> Thanks Ethan,
>>
>> I think that I have it working. It would be great if someone could check
>> the scaling though. I'm not sure whether the FFT values should be fringing
>> above the psd line or not:
>>
>> https://www.dropbox.com/s/txc0txhxqr1t274/SH1_2.png?dl=0
>>
>> I removed the hamming window, which was causing scaling problems. The FFT
>> output is now scaled so that the sum of power over all bins matches the
>> power of the time domain signal:
>>
>>
>> https://gist.github.com/RossBencina/a15a696adf0232c73a55/bdefe5ab0b5c218a966bd6a04d9d998a708faf99
>>
>>
>> On 6/11/2015 12:02 AM, Ethan Fenn wrote:
>>
>>> And is psd[w] in exactly the same units as the magnitude squared
>>> spectrum of x[n] (i.e. |ft(x)|^2)?
>>>
>>>
>>> More or less, with the proviso that you have to be careful whether
>>> you're talking about power per unit frequency which the psd will give
>>> you, and power per frequency bin which is often the correct
>>> interpretation of magnitude squared FFT results -- the latter depending
>>> on the FFT scaling conventions used.
>>>
>>
>> Let's see if I got this right: each bin contains the power for a
>> frequency interval of 2pi/N radians. If I multiply each bin's power by
>> N/2pi I should get power values in units of power/radian.
>>
>>
>> The psd makes no reference to any transform length, since it's based on
>>> the statistical properties of the process. So I think it would be wrong
>>> (or at least inexact) to have a scale related to N applied to it. If you
>>> want the magnitude squared results of an FFT to match the psd, it seems
>>> more correct to scale the FFT and try a few different N's to see what
>>> factor of N will give consistent results.
>>>
>>
>> That makes sense.
>>
>>
>> As to the exact scale that should be applied... I think there should be
>>> a 1/3 in the expression for psd, because E[x^2]=1/3 where x is uniform
>>> in [-1,1]. Aside from that, there might be a factor of 2pi depending on
>>> whether we're talking about power per linear or angular frequency. And
>>> there could be others I'm not thinking of; maybe someone else can
>>> shed more light here.
>>>
>>
>> I multiplied the psd by 1/3 and as you can see from the graph it looks as
>> though the FFT and the psd are more-or-less aligned.
>>
>>
>> Hope that's somewhat helpful!
>>>
>>
>> Very clear thanks,
>>
>> Ross.
>>
>>
>>
>> -Ethan
>>>
>>>
>>>
>>>
>>> On Thu, Nov 5, 2015 at 11:00 AM, Ross Bencina
>>> > wrote:
>>>
>>> Thanks Ethan(s),
>>>
>>> I was able to follow your derivation. A few questions:
>>>
>>> On 4/11/2015 7:07 PM, Ethan Duni wrote:
>>>
>>> It's pretty straightforward to derive the autocorrelation and
>>> psd for
>>> this one. Let me restate it with some convenient notation. Let's
>>> say
>>> there are a parameter P in (0,1) and 3 random processes:
>>> r[n] i.i.d. ~U(0,1)
>>> y[n] i.i.d. ~(some distribution with at least first and second
>>> moments
>>> finite)
>>> x[n] = (r[n] <= P) ? x[n-1] : y[n]
>>> Note that I've switched the probability of holding to P from
>>> (1-P), and
>>> that the signal being sampled-and-held can have an arbitrary (if
>>> well
>>> behaved) distribution. Let's also assume wlog that E[y[n]y[n]] =
>>> 1
>>> (Scale the final results by the power of whatever distribution
>>> you prefer).
>>>
>>>
>>> So for y[n] ~U(-1,1) I should multiply psd[w] by what exactly?
>>>
>>>
>>> 

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-05 Thread Sampo Syreeni

On 2015-11-05, robert bristow-johnson wrote:

I think I was slightly off when I said that the units of psd are 
power per unit frequency -- since the whole signal has infinite 
power,


no, i don't think so.


Me neither. Power is already by definition energy per unit time. Even if 
an infinitely long signal often (not always) has infinite energy, *any* 
practical signal you would be dealing with has finite power over its 
entire length. And in fact physical signals can't have even infinite 
length or energy, so even if you go crazy and conceptually integrate 
everything from the beginning of the time to the End Times, it's 
reasonable to model any realistic signal as being globally, not just 
locally, square integrable (the square integral from -inf to +inf being 
the total energy).


the units really need to be power per unit frequency per unit time, 
which (confusingly) is the same thing as power.


the signal has infinite energy because it goes on (with power greater 
than some positive lower-bound) forever.  but it's not infinite power 
unless it's something like "true" white noise (which has infinite 
bandwidth).


Precisely. So what probably trips some up here is "power per unit time". 
That's nonsensical. What we need is power per unit frequency, i.e. 
energy per unit time per unit frequency.


what comes out of a random number generator (a good one) is white only 
up to Nyquist. not infinite-bandwidth white noise.


And, as a matter of fact, if you go to the kind of distributional stuff 
we talked about a while back, you can even deal with real white noise to 
a degree. That's because you can do local integration in the Fourier 
domain, where white noise has unity norm.


The same argument then explains why a signal which has infinite length 
and infinite energy in the time domain is absolutely no problem for the 
kind of analysis we're talking about here: STFT analysis already makes 
your stuff local in time, so as long as the signal is of finite power, 
you'll get sensible local results, even if the signal is globally 
speaking of infinite energy (say like an ideal sinusoid).


The only real kink is that when you localise your analysis, you're 
bringing in an extra degree of freedom: what precisely do the length and 
shape of your windowing/apodization function do to the results of the 
analysis. In spectral analysis work, that then mostly revolves around 
energy concentration within an FFT band, and spectral spillover to the 
adjacent ones. Sometimes statistical estimation theory, and what phase 
does over successive windows.


so the integral, from -Nyquist to +Nyquist of the PSD must equal the 
variance, as derived from the p.d.f. and that value also has to be the 
zeroth-lag value of the autocorrelation.


Yes, and that by definition. If you have to deal with DC, then you have 
to separate autocorrelation from autocovariance. Also in that case the 
DC part spills over asymmetrically after windowing, because essentially 
it will be AM modulated by the window, and will alias upwards across the 
zero frequency point.


This could be another reason why some special scaling is needed as 
compared to a finite-length FFT.


really, the only scaling would be that comparing the Fourier integral 
(with truncated and finite limits) to a Riemann summation (which could 
be expressed as the DFT).


As I understand it, scaling is mostly necessary because of numerical 
concerns. I mean...


When you do longer STFT's, the implicit filter represented by each bin 
grows narrower and more selective. In other words, more and more 
resonant. If it then so happens that you hit a sinusoid right in the 
middle of the passband, a growing analysis window leads to an unlimited 
amount of power gathered on that coefficient. After all, the continuous 
time Fourier transform of a sinusoid is a Dirac distribution, and with a 
growing analysis window you'll approach that -- the series doesn't 
converge in the normal but only the weak sense, so that your STFT bin 
blows up. So there's a tradeoff between headroom and noise floor, here.


Though, I could be talking about a different scaling problem than you 
folks. I did jump into the fray pretty late. :)

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-05 Thread robert bristow-johnson





> I think I was slightly off when I said that the units of psd are power per
> unit frequency -- since the whole signal has infinite power,

no, i don't think so.

> the units really need to be power per unit frequency per unit time, which
> (confusingly) is the same thing as power.

the signal has infinite energy because it goes on (with power greater than some
positive lower-bound) forever.  but it's not infinite power unless it's
something like "true" white noise (which has infinite bandwidth).

what comes out of a random number generator (a good one) is white only up to
Nyquist.  not infinite-bandwidth white noise.

the power of a random process is the mean-square, which is the variance plus
DC^2.  i think the DC component is 0 in the present case.

so the integral, from -Nyquist to +Nyquist, of the PSD must equal the variance,
as derived from the p.d.f.  and that value also has to be the zeroth-lag value
of the autocorrelation.

> This could be another reason why
> some special scaling is needed as compared to a finite-length FFT.

really, the only scaling would be that comparing the Fourier integral (with
truncated and finite limits) to a Riemann summation (which could be expressed
as the DFT).
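
That constraint is easy to sanity-check numerically against the psd derived earlier in the thread, scaled by the 1/3 variance of U(-1,1): the average of the psd over one period of normalized frequency, i.e. (1/2pi) times the integral over [-pi, pi), should come out to 1/3. A sketch, assuming numpy; P is an arbitrary test value:

```python
import numpy as np

P = 0.7   # hold probability (Ethan Duni's convention); arbitrary test value
M = 4096

# uniform grid over one period; the grid average equals (1/2pi) * integral
w = np.linspace(-np.pi, np.pi, M, endpoint=False)
psd = (1.0 / 3.0) * (1 - P**2) / (1 - 2 * P * np.cos(w) + P**2)

total_power = psd.mean()  # should equal the variance of U(-1,1), i.e. 1/3
```

The grid average is the trapezoid rule for a periodic integrand, so it is extremely accurate here, and the value matches the zeroth-lag autocorrelation (1/3) as claimed.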





--

r b-j                     r...@audioimagination.com

"Imagination is more important than knowledge."

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-04 Thread Theo Verelst

Ross Bencina wrote:

Hi Everyone,

Suppose that I generate a time series x[n] as follows:

 >>>
P is a constant value between 0 and 1

At each time step n (n is an integer):

r[n] = uniform_random(0, 1)
x[n] = (r[n] <= P) ? uniform_random(-1, 1) : x[n-1]

Where "(a) ? b : c" is the C ternary operator that takes on the value b if a is 
true, and
c otherwise.
<<<

What would be a good way to derive a closed-form expression for the spectrum of 
x?
(Assuming that the series is infinite.)

...


Hi, from me at the moment only some generalities that usually appear difficult (at any 
level): if you want the Fourier transform of a "signal", in this case a sequence of 
numbers between 0 and 1 (inclusive), the interpretations are important: is this coming 
from a physical process you sampled (without much regard for Shannon), or is this some sort 
of discrete Markov chain output, where you interpret the sequence of samples, zeroth-order 
interpolated, as a continuous signal that you take the FT of? In the latter case, you'll get 
a spectrum that shows clear multiples of the "sampling frequency" and that is highly 
irregular because of the randomness, and I don't know if the FT's infinite integral of 
this signal converges and is unambiguous; you might have to prove that first.


Statistically, often a problem: this sequence of numbers is like two experiments in 
sequence, one depending on the other. The randomness of the P-invoked choice still easily 
works with the usual "big numbers" approximation, clearly, but the second one, and therefore 
the result of the function prescription, has a **dependency** which makes normal 
statistical shortcuts invalid. I don't know a proper way to give a correct 
statistical analysis of the number sequence, and I am not even sure there is an infinite-
summation-based proper DC average computable. Two statistical variables with an 
inter-dependency require the use of proper summations or maybe Poisson sequence 
analysis, I don't recall exactly, but the dependency makes it hard to do an "easy" analysis.


It could be a problem from an electronics circuit for switched supplies or something, or 
maybe in a more restricted form it's an antenna signal processing step or something; 
usually there are more givens in these cases, analog or digital, that you might want to 
know before a proper statistical analysis can be in order. But anyhow, you could write a 
simple program and do some very large sum computations, as separate experiments a number 
of times with different random seeds or generators, and see what happens, for instance 
whether simulation results soon give the impression of a fixed signal average.


T.V.


Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-04 Thread Ethan Duni
It's pretty straightforward to derive the autocorrelation and psd for this
one. Let me restate it with some convenient notation. Let's say there are a
parameter P in (0,1) and 3 random processes:
r[n] i.i.d. ~U(0,1)
y[n] i.i.d. ~(some distribution with at least first and second moments
finite)
x[n] = (r[n] <= P) ? x[n-1] : y[n]
wrote:

> On 4/11/2015 5:26 AM, Ethan Duni wrote:
>
>> Do you mean the literal Fourier spectrum of some realization of this
>> process, or the power spectral density? I don't think you're going to
>> get a closed-form expression for the former (it has a random component).
>>
>
> I am interested in the long-term magnitude spectrum. I had assumed
> (wrongly?) that in the limit (over an infinite length series), that the
> fourier integral would converge. And modeling in that way would be
> (slightly) more familiar to me. However, If autocorrelation or psd is the
> better way to characterize the spectra of random signals then I should
> learn about that.
>
>
> For the latter what you need to do is work out an expression for the
>> autocorrelation function of the process.
>>
> >
>
>> As far as the autocorrelation function goes you can get some hints by
>> thinking about what happens for different values of P. For P=1 you get
>> an IID uniform noise process, which will have autocorrelation equal to a
>> kronecker delta, and so psd equal to 1. For P=0 you get a constant
>> signal. If that's the zero signal, then the autocorrelation and psd are
>> both zero. If it's a non-zero signal (depends on your initial condition
>> at n=-inf) then the autocorrelation is a constant and the psd is a dirac
>> delta.Those are the extreme cases. For P in the middle, you have a
>> piecewise-constant signal where the length of each segment is given by a
>> stopping time criterion on the uniform process (and P). If you grind
>> through the math, you should end up with an autocorrelation that decays
>> down to zero, with a rate of decay related to P (the larger P, the
>> longer the decay). The FFT of that will have a similar shape, but with
>> the rate of decay inversely proportional to P (ala Heisenberg
>> Uncertainty principle).
>>
>> So in broad strokes, what you should see is a lowpass spectrum
>> parameterized by P - for P very small, you approach a flat spectrum, and
>> for P close to 1 you approach a spectrum that's all DC.
>>
>> Deriving the exact expression for the autocorrelation/spectrum is left
>> as an exercise for the reader :]
>>
>
> Ok, thanks. That gives me a place to start looking.
>
> Ross.
>
>
>
> E
>>
>> On Tue, Nov 3, 2015 at 9:42 AM, Ross Bencina > > wrote:
>>
>> Hi Everyone,
>>
>> Suppose that I generate a time series x[n] as follows:
>>
>>  >>>
>> P is a constant value between 0 and 1
>>
>> At each time step n (n is an integer):
>>
>> r[n] = uniform_random(0, 1)
>> x[n] = (r[n] <= P) ? uniform_random(-1, 1) : x[n-1]
>>
>> Where "(a) ? b : c" is the C ternary operator that takes on the
>> value b if a is true, and c otherwise.
>> <<<
>>
>> What would be a good way to derive a closed-form expression for the
>> spectrum of x? (Assuming that the series is infinite.)
>>
>>
>> I'm guessing that the answer is an integral over the spectra of
>> shifted step functions, but I don't know how to deal with the random
>> magnitude of each step, or the random onsets. Please assume that I
>> barely know how to take the Fourier transform of a step function.
>>
>> Maybe the spectrum of a train of randomly spaced, random amplitude
>> pulses is easier to model (i.e. w[n] = x[n] - x[n-1]). Either way,
>> any 

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-04 Thread Ethan Duni
Yep that's the same approach I just posted :]

E

On Tue, Nov 3, 2015 at 11:48 PM, Ethan Fenn  wrote:

> How about this:
>
> For a lag of t, the probability that no new samples have been accepted is
> (1-P)^|t|.
>
> So the autocorrelation should be:
>
> AF(t) = E[x(n)x(n+t)] = (1-P)^|t| * E[x(n)^2] + (1 -
> (1-P)^|t|)*E[x(n)*x_new]
>
> The second term covers the case that a new sample has popped up, so x(n)
> and x(n+t) are uncorrelated. So, this term vanishes. The first term is
> (1/3)*(1-P)^|t|, so I reckon:
>
> AF(t) = (1/3)*(1-P)^|t|
>
> Does that make sense?
>
> -Ethan
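>
> A quick Monte-Carlo check of that expression (my own sketch, assuming numpy;
> P and N are arbitrary choices): estimate E[x(n)x(n+t)] from a long
> realization and compare it with (1/3)*(1-P)^|t|.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400_000
P = 0.3  # probability that a NEW sample is accepted at each step

# build the sample-and-hold process under Ross's original definition
r = rng.random(N)
y = rng.uniform(-1.0, 1.0, N)
x = np.empty(N)
x[0] = y[0]
for n in range(1, N):
    x[n] = y[n] if r[n] <= P else x[n - 1]

# empirical vs. theoretical autocorrelation at small lags
emp = [float(np.mean(x[t:] * x[: N - t])) for t in range(6)]
theory = [(1.0 / 3.0) * (1.0 - P) ** t for t in range(6)]
```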
>
>
>
>
>
> On Wed, Nov 4, 2015 at 8:21 AM, robert bristow-johnson <
> r...@audioimagination.com> wrote:
>
>>
>>
>>  Original Message 
>> Subject: Re: [music-dsp] how to derive spectrum of random sample-and-hold
>> noise?
>> From: "Ross Bencina" 
>> Date: Wed, November 4, 2015 12:22 am
>> To: r...@audioimagination.com
>> music-dsp@music.columbia.edu
>> --
>>
>>
>>
>> with mods
>>
>>
>> > Using ASDF instead of autocorrelation:
>> >
>> > let n be an arbitrary time index
>> > let t be the ASDF lag time of interest
>> >
>> > ASDF[t] = (x[n] - x[n-t])^2
>> >
>> > there are two cases:
>> >
>> > case 1, (holding): x[n-t] == x[n]
>>
>> this has probability of P^|t|
>>
>>
>>
>>
>> > case 2, (not holding) x[n-t] == uniform_random(-1, 1)
>>
>> this has probability of 1 - P^|t|
>>
>>
>> >
>> > In case 1, ASDF[t] = 0
>> > In case 2, ASDF[t] = (1/3)^2  (i think)
>>
>>
>>
>> so maybe it's
>>
>>
>>
>> ASDF[t] = 0 * P^|t|  +  (1/3)^2 * (1 - P^|t|)
>>
>>
>>
>> now the autocorrelation function (AF) is related to the ASDF as
>>
>>
>>
>> AF[t] =  mean{ x[n] * x[n-t] }
>>
>> AF[t] =  mean{ (x[n])^2 }  - (1/2)*mean{ (x[n] - x[n-t])^2 }
>>
>>
>> AF[t] =  mean{ (x[n])^2 }  - (1/2)*ASDF[t]
>>
>>
>>
>> AF[t]  =  (1/3)  -  (1/2) * (1/3)^2 * (1 - P^|t|)
>>
>>
>>
>> this doesn't quite look right to me.  somehow i was expecting  AF[t] to
>> go to zero as t goes to infinity.
>>
>>
>>
>>
>> > To get the limit of ASDF[t], weight the values of the two cases by the
>> > probability of each case case. (Which seems like a textbook waiting-time
>> > problem, but will require me to return to my textbook).
>> >
>> > Then I just need to convert the ASDF to PSD somehow.
>>
>>
>>
>>   ASDF[t] = 2*AF[0] - 2*AF[t]
>>
>>
>>
>> or
>>
>>
>>
>>   AF[t]  =  AF[0]  - (1/2)*ASDF[t]
>>
>>
>>
>>
>>
>> PSD = Fourier_Transform{ AF[t] }
>>
>>
>> > Does that seem like a reasonable approach?
>>
>>
>> it's the approach i am struggling with.   somehow, i don't like the AF i
>> get.
>>
>>
>>
>> --
>>
>>
>>
>>
>> r b-j   r...@audioimagination.com
>>
>>
>>
>>
>> "Imagination is more important than knowledge."
>>
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>>
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-03 Thread Ethan Duni
Wait, just realized I wrote that last part backwards. It should be:

So in broad strokes, what you should see is a lowpass spectrum
parameterized by P - for P very small, you approach a DC spectrum, and for
P close to 1 you approach a spectrum that's flat.

On Tue, Nov 3, 2015 at 10:26 AM, Ethan Duni  wrote:

> Do you mean the literal Fourier spectrum of some realization of this
> process, or the power spectral density? I don't think you're going to get a
> closed-form expression for the former (it has a random component). For the
> latter what you need to do is work out an expression for the
> autocorrelation function of the process.
>
> As far as the autocorrelation function goes you can get some hints by
> thinking about what happens for different values of P. For P=1 you get an
> IID uniform noise process, which will have autocorrelation equal to a
> kronecker delta, and so psd equal to 1. For P=0 you get a constant signal.
> If that's the zero signal, then the autocorrelation and psd are both zero.
> If it's a non-zero signal (depends on your initial condition at n=-inf)
> then the autocorrelation is a constant and the psd is a dirac delta. Those
> are the extreme cases. For P in the middle, you have a piecewise-constant
> signal where the length of each segment is given by a stopping time
> criterion on the uniform process (and P). If you grind through the math,
> you should end up with an autocorrelation that decays down to zero, with a
> rate of decay related to P (the larger P, the longer the decay). The FFT of
> that will have a similar shape, but with the rate of decay inversely
> proportional to P (ala Heisenberg Uncertainty principle).
>
> So in broad strokes, what you should see is a lowpass spectrum
> parameterized by P - for P very small, you approach a flat spectrum, and
> for P close to 1 you approach a spectrum that's all DC.
>
> Deriving the exact expression for the autocorrelation/spectrum is left as
> an exercise for the reader :]
>
> E
>
> On Tue, Nov 3, 2015 at 9:42 AM, Ross Bencina 
> wrote:
>
>> Hi Everyone,
>>
>> Suppose that I generate a time series x[n] as follows:
>>
>> >>>
>> P is a constant value between 0 and 1
>>
>> At each time step n (n is an integer):
>>
>> r[n] = uniform_random(0, 1)
>> x[n] = (r[n] <= P) ? uniform_random(-1, 1) : x[n-1]
>>
>> Where "(a) ? b : c" is the C ternary operator that takes on the value b
>> if a is true, and c otherwise.
>> <<<
>>
>> What would be a good way to derive a closed-form expression for the
>> spectrum of x? (Assuming that the series is infinite.)
>>
>>
>> I'm guessing that the answer is an integral over the spectra of shifted
>> step functions, but I don't know how to deal with the random magnitude of
>> each step, or the random onsets. Please assume that I barely know how to
>> take the Fourier transform of a step function.
>>
>> Maybe the spectrum of a train of randomly spaced, random amplitude pulses
>> is easier to model (i.e. w[n] = x[n] - x[n-1]). Either way, any hints would
>> be appreciated.
>>
>> Thanks in advance,
>>
>> Ross.
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>>
>
>

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-03 Thread Ethan Duni
Do you mean the literal Fourier spectrum of some realization of this
process, or the power spectral density? I don't think you're going to get a
closed-form expression for the former (it has a random component). For the
latter what you need to do is work out an expression for the
autocorrelation function of the process.

As far as the autocorrelation function goes you can get some hints by
thinking about what happens for different values of P. For P=1 you get an
IID uniform noise process, which will have autocorrelation equal to a
kronecker delta, and so psd equal to 1. For P=0 you get a constant signal.
If that's the zero signal, then the autocorrelation and psd are both zero.
If it's a non-zero signal (depends on your initial condition at n=-inf)
then the autocorrelation is a constant and the psd is a dirac delta. Those
are the extreme cases. For P in the middle, you have a piecewise-constant
signal where the length of each segment is given by a stopping time
criterion on the uniform process (and P). If you grind through the math,
you should end up with an autocorrelation that decays down to zero, with a
rate of decay related to P (the larger P, the longer the decay). The FFT of
that will have a similar shape, but with the rate of decay inversely
proportional to P (ala Heisenberg Uncertainty principle).

So in broad strokes, what you should see is a lowpass spectrum
parameterized by P - for P very small, you approach a flat spectrum, and
for P close to 1 you approach a spectrum that's all DC.

Deriving the exact expression for the autocorrelation/spectrum is left as
an exercise for the reader :]

E

On Tue, Nov 3, 2015 at 9:42 AM, Ross Bencina 
wrote:

> Hi Everyone,
>
> Suppose that I generate a time series x[n] as follows:
>
> >>>
> P is a constant value between 0 and 1
>
> At each time step n (n is an integer):
>
> r[n] = uniform_random(0, 1)
> x[n] = (r[n] <= P) ? uniform_random(-1, 1) : x[n-1]
>
> Where "(a) ? b : c" is the C ternary operator that takes on the value b if
> a is true, and c otherwise.
> <<<
>
> What would be a good way to derive a closed-form expression for the
> spectrum of x? (Assuming that the series is infinite.)
>
>
> I'm guessing that the answer is an integral over the spectra of shifted
> step functions, but I don't know how to deal with the random magnitude of
> each step, or the random onsets. Please assume that I barely know how to
> take the Fourier transform of a step function.
>
> Maybe the spectrum of a train of randomly spaced, random amplitude pulses
> is easier to model (i.e. w[n] = x[n] - x[n-1]). Either way, any hints would
> be appreciated.
>
> Thanks in advance,
>
> Ross.
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>

[music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-03 Thread Ross Bencina

Hi Everyone,

Suppose that I generate a time series x[n] as follows:

>>>
P is a constant value between 0 and 1

At each time step n (n is an integer):

r[n] = uniform_random(0, 1)
x[n] = (r[n] <= P) ? uniform_random(-1, 1) : x[n-1]

Where "(a) ? b : c" is the C ternary operator that takes on the value b 
if a is true, and c otherwise.

<<<

What would be a good way to derive a closed-form expression for the 
spectrum of x? (Assuming that the series is infinite.)



I'm guessing that the answer is an integral over the spectra of shifted 
step functions, but I don't know how to deal with the random magnitude 
of each step, or the random onsets. Please assume that I barely know how 
to take the Fourier transform of a step function.


Maybe the spectrum of a train of randomly spaced, random amplitude 
pulses is easier to model (i.e. w[n] = x[n] - x[n-1]). Either way, any 
hints would be appreciated.


Thanks in advance,

Ross.


Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-03 Thread robert bristow-johnson



i have to confess that this is hard and i don't have a concrete solution for
you.  it seems to me that, by this description:

r[n] = uniform_random(0, 1)
if (r[n] <= P)
    x[n] = uniform_random(-1, 1);
else
    x[n] = x[n-1];

from that, and from the assumption of ergodicity (where all time averages can
be replaced with probabilistic averages), then it should be possible to derive
an autocorrelation function from this.
but i haven't done it.

rots o'
ruk.

r b-j

 Original Message 
Subject: Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?
From: "Ethan Duni"
Date: Tue, November 3, 2015 1:29 pm
To: "A discussion list for music-related DSP"
--

> Wait, just realized I wrote that last part backwards. It should be:
>
> So in broad strokes, what you should see is a lowpass spectrum
> parameterized by P - for P very small, you approach a DC spectrum, and for
> P close to 1 you approach a spectrum that's flat.
>
> On Tue, Nov 3, 2015 at 10:26 AM, Ethan Duni wrote:
>
>> Do you mean the literal Fourier spectrum of some realization of this
>> process, or the power spectral density? I don't think you're going to get a
>> closed-form expression for the former (it has a random component). For the
>> latter what you need to do is work out an expression for the
>> autocorrelation function of the process.
>>
>> As far as the autocorrelation function goes you can get some hints by
>> thinking about what happens for different values of P. For P=1 you get an
>> IID uniform noise process, which will have autocorrelation equal to a
>> kronecker delta, and so psd equal to 1. For P=0 you get a constant signal.
>> If that's the zero signal, then the autocorrelation and psd are both zero.
>> If it's a non-zero signal (depends on your initial condition at n=-inf)
>> then the autocorrelation is a constant and the psd is a dirac delta. Those
>> are the extreme cases. For P in the middle, you have a piecewise-constant
>> signal where the length of each segment is given by a stopping time
>> criterion on the uniform process (and P). If you grind through the math,
>> you should end up with an autocorrelation that decays down to zero, with a
>> rate of decay related to P (the larger P, the longer the decay). The FFT of
>> that will have a similar shape, but with the rate of decay inversely
>> proportional to P (ala Heisenberg Uncertainty principle).
>>
>> So in broad strokes, what you should see is a lowpass spectrum
>> parameterized by P - for P very small, you approach a flat spectrum, and
>> for P close to 1 you approach a spectrum that's all DC.
>>
>> Deriving the exact expression for the autocorrelation/spectrum is left as
>> an exercise for the reader :]
>>
>> E
>>
>> On Tue, Nov 3, 2015 at 9:42 AM, Ross Bencina wrote:
>>
>>> Hi Everyone,
>>>
>>> Suppose that I generate a time series x[n] as follows:
>>>
>>> >>>
>>> P is a constant value between 0 and 1
>>>
>>> At each time step n (n is an integer):
>>>
>>> r[n] = uniform_random(0, 1)
>>> x[n] = (r[n] <= P) ? uniform_random(-1, 1) : x[n-1]
>>>
>>> Where "(a) ? b : c" is the C ternary operator that takes on the value b
>>> if a is true, and c otherwise.
>>> <<<
>>>
>>> What would be a good way to derive a closed-form expression for the
>>> spectrum of x? (Assuming that the series is infinite.)
>>>
>>> I'm guessing that the answer is an integral over the spectra of shifted
>>> step functions, but I don't know how to deal with the random magnitude of
>>> each step, or the random onsets. Please assume that I barely know how to
>>> take the Fourier transform of a step function.
>>>
>>> Maybe the spectrum of a train of randomly spaced, random amplitude pulses
>>> is easier to model (i.e. w[n] = x[n] - x[n-1]). Either way, any hints would
>>> be appreciated.
>>>
>>> Thanks in advance,
>>>
>>> Ross.

--

r b-j                     r...@audioimagination.com

"Imagination is more important than knowledge."

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-03 Thread Ross Bencina

On 4/11/2015 9:39 AM, robert bristow-johnson wrote:

i have to confess that this is hard and i don't have a concrete solution
for you.


Knowing that this isn't well known helps. I have an idea (see below). It 
might be wrong.




it seems to me that, by this description:

r[n] = uniform_random(0, 1)
if (r[n] <= P)
    x[n] = uniform_random(-1, 1);
else
    x[n] = x[n-1];

from that, and from the assumption of ergodicity (where all time
averages can be replaced with probabilistic averages), then it should be
possible to derive an autocorrelation function from this.

but i haven't done it.


Using AMDF instead of autocorrelation:

let n be an arbitrary time index
let t be the AMDF lag time of interest

AMDF[t] = fabs(x[n] - x[n-t])

there are two cases:

case 1, (holding): x[n-t] == x[n]
case 2, (not holding) x[n-t] == uniform_random(-1, 1)

In case 1, AMDF[t] = 0
In case 2, AMDF[t] = 2/3 (i think?)

To get the limit of AMDF[t], weight the values of the two cases by the 
probability of each case. (Which seems like a textbook waiting-time 
problem, but will require me to return to my textbook).


Then I just need to convert the AMDF to PSD somehow.

Does that seem like a reasonable approach?
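The case-2 value can be checked by Monte Carlo: for independent a, b ~ uniform(-1, 1), estimate E|a - b| directly (throwaway Python sketch, the names are mine):

```python
import random

def mc_case2(trials=200_000, seed=7):
    """Monte Carlo estimate of E|a - b| and E[(a - b)^2] for independent
    a, b ~ uniform(-1, 1), i.e. the 'not holding' case at lag t."""
    rng = random.Random(seed)
    s_abs = s_sq = 0.0
    for _ in range(trials):
        d = rng.uniform(-1.0, 1.0) - rng.uniform(-1.0, 1.0)
        s_abs += abs(d)
        s_sq += d * d
    return s_abs / trials, s_sq / trials

if __name__ == "__main__":
    e_abs, e_sq = mc_case2()
    print(f"E|a-b|     ~= {e_abs:.3f}")
    print(f"E[(a-b)^2] ~= {e_sq:.3f}")
```

The exact values are E|a - b| = 2/3, matching the guess above, and E[(a - b)^2] = Var(a) + Var(b) = 1/3 + 1/3 = 2/3, which is the case-2 number for the squared-difference (ASDF) variant that comes up later in the thread.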

Ross.



Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-03 Thread Ross Bencina

On 4/11/2015 5:26 AM, Ethan Duni wrote:

Do you mean the literal Fourier spectrum of some realization of this
process, or the power spectral density? I don't think you're going to
get a closed-form expression for the former (it has a random component).


I am interested in the long-term magnitude spectrum. I had assumed 
(wrongly?) that in the limit (over an infinite length series), the 
Fourier integral would converge. And modeling in that way would be 
(slightly) more familiar to me. However, if autocorrelation or psd is 
the better way to characterize the spectra of random signals then I 
should learn about that.




For the latter what you need to do is work out an expression for the
autocorrelation function of the process.


As far as the autocorrelation function goes you can get some hints by
thinking about what happens for different values of P. For P=1 you get
an IID uniform noise process, which will have autocorrelation equal to a
kronecker delta, and so psd equal to 1. For P=0 you get a constant
signal. If that's the zero signal, then the autocorrelation and psd are
both zero. If it's a non-zero signal (depends on your initial condition
at n=-inf) then the autocorrelation is a constant and the psd is a dirac
delta. Those are the extreme cases. For P in the middle, you have a
piecewise-constant signal where the length of each segment is given by a
stopping time criterion on the uniform process (and P). If you grind
through the math, you should end up with an autocorrelation that decays
down to zero, with a rate of decay related to P (the larger P, the
longer the decay). The FFT of that will have a similar shape, but with
the rate of decay inversely proportional to P (ala Heisenberg
Uncertainty principle).

So in broad strokes, what you should see is a lowpass spectrum
parameterized by P - for P very small, you approach a flat spectrum, and
for P close to 1 you approach a spectrum that's all DC.

Deriving the exact expression for the autocorrelation/spectrum is left
as an exercise for the reader :]


Ok, thanks. That gives me a place to start looking.

Ross.




E

On Tue, Nov 3, 2015 at 9:42 AM, Ross Bencina wrote:

Hi Everyone,

Suppose that I generate a time series x[n] as follows:

 >>>
P is a constant value between 0 and 1

At each time step n (n is an integer):

r[n] = uniform_random(0, 1)
x[n] = (r[n] <= P) ? uniform_random(-1, 1) : x[n-1]

Where "(a) ? b : c" is the C ternary operator that takes on the
value b if a is true, and c otherwise.
<<<

What would be a good way to derive a closed-form expression for the
spectrum of x? (Assuming that the series is infinite.)


I'm guessing that the answer is an integral over the spectra of
shifted step functions, but I don't know how to deal with the random
magnitude of each step, or the random onsets. Please assume that I
barely know how to take the Fourier transform of a step function.

Maybe the spectrum of a train of randomly spaced, random amplitude
pulses is easier to model (i.e. w[n] = x[n] - x[n-1]). Either way,
any hints would be appreciated.

Thanks in advance,

Ross.



Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-03 Thread robert bristow-johnson







 Original Message 
Subject: Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?
From: "Ross Bencina"
Date: Tue, November 3, 2015 11:51 pm
To: music-dsp@music.columbia.edu
--

> On 4/11/2015 5:26 AM, Ethan Duni wrote:
>> Do you mean the literal Fourier spectrum of some realization of this
>> process, or the power spectral density? I don't think you're going to
>> get a closed-form expression for the former (it has a random component).
>
> I am interested in the long-term magnitude spectrum. I had assumed
> (wrongly?) that in the limit (over an infinite length series), the
> Fourier integral would converge. And modeling in that way would be
> (slightly) more familiar to me. However, if autocorrelation or psd is
> the better way to characterize the spectra of random signals then I
> should learn about that.

it is the correct way to characterize the spectra of random signals.  the
spectrum (PSD) is the Fourier Transform of the autocorrelation and is scaled
as magnitude-squared.  so if you're gonna look at the spectrum in dB, it's
10*log10() not 20*log10().

but it ain't gonna be easy.  however, i *think* you gotta 'nuf information.
this is basically a Markov process.

setting aside a complex random signal, autocorrelation is first expressed as
a time average of the product of your random signal times itself with a given
lag.  it's an even function, so the PSD will be real.

with the assumption of ergodicity, the time average can be replaced with a
probabilistic average for the same quantity.  i think there is enough
information in your description to calculate the probabilistic average of the
product of your random signal times itself displaced by a given lag.

i have a sneaky suspicion that this Markov process is gonna be something like
pink noise.  maybe with different slopes (of dB vs. log frequency) depending
on parameter P.  probabilistically holding on to a previous sample will have
an LPF effect.
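The ergodicity step (replacing time averages with probabilistic averages) can be sanity-checked numerically: estimate mean{ x[n] * x[n-1] } once along a single long run, and once as an ensemble average over many independent runs (throwaway Python sketch, my own naming):

```python
import random

def sh_noise(n, P, rng):
    """Sample-and-hold noise as defined at the top of the thread.
    Takes a random.Random instance so the ensemble loop can reuse one."""
    x = rng.uniform(-1.0, 1.0)
    out = []
    for _ in range(n):
        if rng.uniform(0.0, 1.0) <= P:
            x = rng.uniform(-1.0, 1.0)
        out.append(x)
    return out

def time_average_lag1(n, P, seed=0):
    """mean{ x[n] * x[n-1] } along one long realization."""
    xs = sh_noise(n, P, random.Random(seed))
    return sum(xs[i] * xs[i - 1] for i in range(1, n)) / (n - 1)

def ensemble_average_lag1(runs, warmup, P, seed=0):
    """E{ x[n] * x[n-1] } across many independent realizations,
    sampled after a warm-up so the initial value is forgotten."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        xs = sh_noise(warmup + 2, P, rng)
        total += xs[-1] * xs[-2]
    return total / runs

if __name__ == "__main__":
    P = 0.3
    print(f"time average:     {time_average_lag1(200_000, P):.4f}")
    print(f"ensemble average: {ensemble_average_lag1(50_000, 50, P, seed=1):.4f}")
```

The two estimates agree to within sampling error, which is the ergodicity assumption doing its job.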







Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-03 Thread robert bristow-johnson







 Original Message 
Subject: Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?
From: "Ross Bencina"
Date: Wed, November 4, 2015 12:22 am
To: r...@audioimagination.com
music-dsp@music.columbia.edu
--

> On 4/11/2015 9:39 AM, robert bristow-johnson wrote:
>> i have to confess that this is hard and i don't have a concrete solution
>> for you.
>
> Knowing that this isn't well known helps. I have an idea (see below). It
> might be wrong.
>
>> it seems to me that, by this description:
>>
>> r[n] = uniform_random(0, 1)
>> if (r[n] <= P)
>>     x[n] = uniform_random(-1, 1);
>> else
>>     x[n] = x[n-1];
>>
>> from that, and from the assumption of ergodicity (where all time
>> averages can be replaced with probabilistic averages), then it should be
>> possible to derive an autocorrelation function from this.
>>
>> but i haven't done it.
>
> Using AMDF instead of autocorrelation:
>
> let n be an arbitrary time index
> let t be the AMDF lag time of interest
>
> AMDF[t] = fabs(x[n] - x[n-t])
>
> there are two cases:
>
> case 1, (holding): x[n-t] == x[n]
> case 2, (not holding) x[n-t] == uniform_random(-1, 1)
>
> In case 1, AMDF[t] = 0
> In case 2, AMDF[t] = 2/3 (i think?)

so if it's ASDF, it might be 1/3

> To get the limit of AMDF[t], weight the values of the two cases by the
> probability of each case. (Which seems like a textbook waiting-time
> problem, but will require me to return to my textbook).
>
> Then I just need to convert the AMDF to PSD somehow.

it *is* possible to convert from ASDF to autocorrelation (and then to PSD).
square the difference instead of fabs().

> Does that seem like a reasonable approach?

perhaps.
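The ASDF-to-autocorrelation conversion mentioned above is the stationarity identity ASDF[t] = 2*AF[0] - 2*AF[t]; it can be checked numerically on the sample-and-hold signal itself (throwaway Python sketch, my own naming):

```python
import random

def sh_noise(n, P, seed=None):
    """Sample-and-hold noise as defined at the top of the thread."""
    rng = random.Random(seed)
    x = rng.uniform(-1.0, 1.0)
    out = []
    for _ in range(n):
        if rng.uniform(0.0, 1.0) <= P:
            x = rng.uniform(-1.0, 1.0)
        out.append(x)
    return out

def af(xs, t):
    """Time-average autocorrelation AF[t] = mean{ x[n] * x[n-t] }."""
    return sum(xs[i] * xs[i - t] for i in range(t, len(xs))) / (len(xs) - t)

def asdf(xs, t):
    """Time-average squared difference ASDF[t] = mean{ (x[n] - x[n-t])^2 }."""
    return sum((xs[i] - xs[i - t]) ** 2 for i in range(t, len(xs))) / (len(xs) - t)

if __name__ == "__main__":
    xs = sh_noise(200_000, P=0.3, seed=5)
    for t in (1, 2, 5):
        lhs = asdf(xs, t)
        rhs = 2 * af(xs, 0) - 2 * af(xs, t)
        print(f"t={t}: ASDF={lhs:.4f}  2*AF[0]-2*AF[t]={rhs:.4f}")
```

The identity follows from expanding (x[n] - x[n-t])^2 and using stationarity, so mean{ x[n]^2 } = mean{ x[n-t]^2 } = AF[0]; the two sides agree up to edge effects of the finite run.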





Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-03 Thread robert bristow-johnson







 Original Message 
Subject: Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?
From: "Ross Bencina"
Date: Wed, November 4, 2015 12:22 am
To: r...@audioimagination.com
music-dsp@music.columbia.edu
--

with mods

> Using ASDF instead of autocorrelation:
>
> let n be an arbitrary time index
> let t be the ASDF lag time of interest
>
> ASDF[t] = (x[n] - x[n-t])^2
>
> there are two cases:
>
> case 1, (holding): x[n-t] == x[n]

this has probability of P^|t|

> case 2, (not holding) x[n-t] == uniform_random(-1, 1)

this has probability of 1 - P^|t|

> In case 1, ASDF[t] = 0
> In case 2, ASDF[t] = (1/3)^2  (i think)

so maybe it's

ASDF[t] = 0 * P^|t|  +  (1/3)^2 * (1 - P^|t|)

now the autocorrelation function (AF) is related to the ASDF as

AF[t] =  mean{ x[n] * x[n-t] }

AF[t] =  mean{ (x[n])^2 }  -  (1/2)*mean{ (x[n] - x[n-t])^2 }

AF[t] =  mean{ (x[n])^2 }  -  (1/2)*ASDF[t]

AF[t] =  (1/3)  -  (1/2) * (1/3)^2 * (1 - P^|t|)

this doesn't quite look right to me.  somehow i was expecting AF[t] to go to
zero as t goes to infinity.

> To get the limit of ASDF[t], weight the values of the two cases by the
> probability of each case. (Which seems like a textbook waiting-time
> problem, but will require me to return to my textbook).
>
> Then I just need to convert the ASDF to PSD somehow.

  ASDF[t] = 2*AF[0] - 2*AF[t]

or

  AF[t]  =  AF[0]  -  (1/2)*ASDF[t]

PSD = Fourier_Transform{ AF[t] }

> Does that seem like a reasonable approach?

it's the approach i am struggling with.  somehow, i don't like the AF i get.







Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-03 Thread Ethan Fenn
How about this:

For a lag of t, the probability that no new samples have been accepted is
(1-P)^|t|.

So the autocorrelation should be:

AF(t) = E[x(n)x(n+t)] = (1-P)^|t| * E[x(n)^2] + (1 - (1-P)^|t|) * E[x(n)*x_new]

The second term covers the case that a new sample has popped up, so x(n)
and x(n+t) are uncorrelated. So, this term vanishes. The first term is
(1/3)*(1-P)^|t|, so I reckon:

AF(t) = (1/3)*(1-P)^|t|

Does that make sense?

-Ethan
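This closed form is easy to check against a simulated run (throwaway Python sketch, my own naming):

```python
import random

def sh_noise(n, P, seed=None):
    """Sample-and-hold noise as defined at the top of the thread."""
    rng = random.Random(seed)
    x = rng.uniform(-1.0, 1.0)
    out = []
    for _ in range(n):
        if rng.uniform(0.0, 1.0) <= P:
            x = rng.uniform(-1.0, 1.0)
        out.append(x)
    return out

def af(xs, t):
    """Time-average autocorrelation AF[t] = mean{ x[n] * x[n-t] }."""
    return sum(xs[i] * xs[i - t] for i in range(t, len(xs))) / (len(xs) - t)

if __name__ == "__main__":
    P = 0.25
    xs = sh_noise(400_000, P, seed=9)
    for t in range(6):
        measured = af(xs, t)
        predicted = (1.0 / 3.0) * (1.0 - P) ** t
        print(f"t={t}: measured {measured:.4f}  predicted {predicted:.4f}")
```

Assuming AF(t) = (1/3)*a^|t| with a = 1 - P, the discrete-time Fourier transform gives the PSD in closed form: S(w) = (1/3) * (1 - a^2) / (1 - 2*a*cos(w) + a^2), a first-order lowpass that is flat as P approaches 1 and concentrated at DC as P approaches 0, matching Ethan Duni's broad-strokes description earlier in the thread. It also implies the holding probability in the earlier ASDF derivation should be (1-P)^|t| rather than P^|t|, since a refresh occurs at each step with probability P.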




