Hi Carsten,

On 2022-05-13 09:25, Carsten Andrich wrote:
On 11.05.22 08:15, Carsten Andrich wrote:

Also, any reason to do this via forward and inverse FFT? AFAIK the
Fourier transform of white noise is white noise, [...]
I had the same question when I first saw this. Unfortunately I don't have a good answer, besides that forward + inverse ensures that the noise looks like it is supposed to, while I'm not 100% sure whether there is an easy way to generate
time-domain Gaussian i.i.d. noise directly in the frequency domain.

If you know how, please let me know.
Got an idea on that, will report back.

Disclaimer: I'm an electrical engineer, not a mathematician, so someone trained in the latter craft should verify my train of thought. Feedback is much appreciated.

Turns out the derivation of the DFT of time-domain white noise is straightforward. The DFT formula

X[k] = \Sigma_{n=0}^{N-1} x[n] * e^{-2j*\pi*k*n/N}

illustrates that a single frequency-domain sample is just a sum of scaled time-domain samples. Now let x[n] be N normally distributed samples with zero mean and variance \sigma^2, thus each X[k] is a sum of scaled i.i.d. random variables. According to the central limit theorem, the sum of these random variables is normally distributed.
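As a quick sanity check (a minimal numpy sketch of my own, not from the original post), the DFT formula above can be evaluated term by term and compared against a library FFT:

```python
import numpy as np

# Each DFT bin X[k] is literally a sum of the time-domain samples
# scaled by complex exponentials e^{-2j*pi*k*n/N}.
rng = np.random.default_rng(0)
N = 8
x = rng.normal(0.0, 1.0, N)

n = np.arange(N)
X_sum = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N))
                  for k in range(N)])

# agrees with the library FFT up to rounding
assert np.allclose(X_sum, np.fft.fft(x))
```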

Do note that the assumption of no correlation is not a correct model of reality. Several effects make "white noise" slightly correlated, even if for most practical uses the correlation is very small. Not that it significantly changes your conclusions, but you should remember that the model only goes so far. To avoid aliasing, you need an anti-aliasing filter, which causes correlation between samples. Also, the noise has inherent bandwidth limitations, and further, thermal noise is convergent because of the power distribution of thermal noise as established by Max Planck, which really reflects the existence of photons. The physics cannot be fully ignored as one goes into the math; rather, one should be aware that the simplified models may fool you in the mathematical exercise.

To ascertain the variance of X[k], rely on the linearity of variance [1], i.e., Var(a*X+b*Y) = a^2*Var(X) + b^2*Var(Y) + 2ab*Cov(X,Y), and the fact that the covariance of uncorrelated variables is zero, so, separating into real and imaginary components, one gets:

Var(Re{X[k]}) = \Sigma_{n=0}^{N-1} Var(x[n]) * Re{e^{-2j*\pi*k*n/N}}^2
              = \Sigma_{n=0}^{N-1} \sigma^2  * cos(2*\pi*k*n/N)^2
              = \sigma^2 * \Sigma_{n=0}^{N-1} cos(2*\pi*k*n/N)^2

Var(Im{X[k]}) = \Sigma_{n=0}^{N-1} Var(x[n]) * Im{e^{-2j*\pi*k*n/N}}^2
              = ...
              = \sigma^2 * \Sigma_{n=0}^{N-1} sin(2*\pi*k*n/N)^2

The sum over squared sin(…)/cos(…) is always N/2, except for k=0 and k=N/2, where the cos(…)^2 sum is N and the sin(…)^2 sum is 0, resulting in X[k] with real DC and Nyquist components, as is to be expected for a real x[n]. Finally, for x[n] ~ N(0, \sigma^2), the DFT's X[k] has the following variance:

                { N   * \sigma^2, k = 0
Var(Re{X[k]}) = { N   * \sigma^2, k = N/2
                { N/2 * \sigma^2, else


                { 0             , k = 0
Var(Im{X[k]}) = { 0             , k = N/2
                { N/2 * \sigma^2, else
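This variance table can be verified empirically (a numpy sketch of my own, not from the post): FFT many independent white-noise records and measure the per-bin variances of the real and imaginary parts.

```python
import numpy as np

# Monte Carlo check of Var(Re{X[k]}) and Var(Im{X[k]}) for Gaussian x[n].
rng = np.random.default_rng(1)
N, sigma, trials = 64, 2.0, 50_000
X = np.fft.fft(rng.normal(0.0, sigma, (trials, N)), axis=1)

var_re = X.real.var(axis=0)   # expect N*sigma^2 at k=0, N/2; N/2*sigma^2 else
var_im = X.imag.var(axis=0)   # expect 0 at k=0, N/2; N/2*sigma^2 else
```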

Therefore, a normally distributed time-domain sequence x[n] ~ N(0, \sigma^2) with N samples has the following DFT (note: N is the number of samples and N(0, 1) is a normally distributed random variable with mean 0 and variance 1):

       { N^0.5     * \sigma *  N(0, 1)                , k = 0
X[k] = { N^0.5     * \sigma *  N(0, 1)                , k = N/2
       { (N/2)^0.5 * \sigma * (N(0, 1) + 1j * N(0, 1)), else
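In other words, the noise can be synthesized directly in the frequency domain. A minimal numpy sketch of my own (not from the post), which draws X[k] per the formula above, enforces the Hermitian symmetry X[N-k] = conj(X[k]) required for a real x[n], and inverse-transforms:

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma = 4096, 1.5

X = np.zeros(N, dtype=complex)
X[0] = np.sqrt(N) * sigma * rng.normal()            # real DC bin
X[N // 2] = np.sqrt(N) * sigma * rng.normal()       # real Nyquist bin
pos = np.sqrt(N / 2) * sigma * (rng.normal(size=N // 2 - 1)
                                + 1j * rng.normal(size=N // 2 - 1))
X[1:N // 2] = pos
X[N // 2 + 1:] = np.conj(pos[::-1])   # mirror bins for a real-valued x[n]

x = np.fft.ifft(X)
# x is real up to rounding, with sample variance close to sigma^2
```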

Here you skipped a few steps compared to your other derivation. You should explain how X[k] comes out of Var(Re(X[k])) and Var(Im(X[k])).
Greenhall has the same results, with two noteworthy differences [2]. First, normalization with the sample count occurs after the IFFT. Second, his FFT is of size 2N, resulting in an N^0.5 factor between his results and the above. Finally, Greenhall discards half (minus one) of the samples returned by the IFFT to realize linear convolution instead of circular convolution, fundamentally implementing a single iteration of overlap-save fast convolution [3]. If I didn't miss anything skimming over the bruiteur source code, it seems to skip that very step and therefore generates periodic output [4].

This is a result of using real-only values in the complex Fourier transform. It creates mirror images. Greenhall uses one method to circumvent the issue.
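The linear-vs-circular convolution point can be sketched as follows (my own numpy illustration, not Greenhall's or bruiteur's actual code): a size-2N FFT convolution of 2N input samples with a length-N filter yields valid linear-convolution output only after the first N-1 wrapped-around samples are discarded.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 256
h = rng.normal(size=N)        # stand-in for a length-N filter response
x = rng.normal(size=2 * N)    # 2N white-noise input samples

# size-2N circular (FFT) convolution, h zero-padded to 2N
y_circ = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, 2 * N)).real
y_lin = np.convolve(x, h)     # reference linear convolution

# the first N-1 circular outputs are corrupted by wrap-around;
# the remaining N+1 samples match the linear convolution exactly
```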

Cheers,
Magnus


Best regards,
Carsten

[1] https://en.wikipedia.org/wiki/Variance#Basic_properties
[2] https://apps.dtic.mil/sti/pdfs/ADA485683.pdf#page=5
[3] https://en.wikipedia.org/wiki/Overlap%E2%80%93save_method
[4] https://github.com/euldulle/SigmaTheta/blob/master/source/filtre.c#L130
_______________________________________________
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com