On August 13, 2015, Peter S peter.schoffhau...@gmail.com wrote:
So far I haven't
tried Fourier-domain decomposition to estimate entropy, but it sounds
like a fun project.
On 13/08/2015, Peter S peter.schoffhau...@gmail.com wrote:
Bonus experiment: try to see if you can hear the difference between
sine_fadeout16_noise.wav and sine_fadeout8_noise.wav in a blind ABX
test. If not, then having extra bits of noise makes zero sense.
I did a blind ABX test between
On 13/08/2015, Risto Holopainen ebel...@ristoid.net wrote:
Spectral entropy is commonly used as a feature extractor in the MIR
community. Pure sinusoids have the lowest spectral entropy whereas white
noise has maximal entropy, so it's useful for distinguishing pitched and
noisy signals.
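The claim above (pure tone = minimal spectral entropy, white noise = maximal) is easy to check numerically. This is my own sketch, not code from the thread; it computes the Shannon entropy of the normalized power spectrum:

```python
import numpy as np

def spectral_entropy(x):
    """Shannon entropy of the normalized power spectrum, in bits."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]                      # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

n = 4096
sine = np.sin(2 * np.pi * 200 * np.arange(n) / n)    # pure tone on an exact FFT bin
noise = np.random.default_rng(0).standard_normal(n)  # white noise
# the tone's power sits in a single bin (entropy near 0 bits), while the
# noise spreads over ~n/2 bins (entropy near log2(n/2) ≈ 11 bits)
```

The gap between the two values is what makes this usable as a pitched-vs-noisy discriminator in MIR feature sets.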
This paper discusses Shannon entropy, Wiener entropy, Rényi entropy,
spectral entropy, and some related measures for the purpose of speech
processing:
On the Generalization of Shannon Entropy for Speech Recognition
http://architexte.ircam.fr/textes/Obin12e/index.pdf
Quote:
2. SPECTRAL ENTROPY
WTF, who is trying to unsubscribe me:
---
Mailing list removal confirmation notice for mailing list music-dsp
We have received a request from 92.103.69.51 for the removal of your
email address, tdu...@tascam.com from the
music-dsp@music.columbia.edu mailing list.
Here's an experiment that I always wanted to do:
The dither added is 1 bit (or 2 if doing TPDF),
so generating it from a PRNG is easy: you get
one bit at a time, and the bits are all completely
uncorrelated with each other, giving a white noise spectrum.
When implemented in DSP, sometimes you get a
PRNG
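One reading of the bit-at-a-time construction above, sketched in Python (the detail that TPDF is built as the sum of two uncorrelated PRNG bits is my assumption; the thread only says "2 if doing TPDF"):

```python
import random

random.seed(0)  # deterministic for the sketch

def dither_lsbs(tpdf=True):
    """One dither sample, in LSBs, built from raw PRNG bits.

    One uncorrelated bit gives rectangular {0, 1} LSB dither; the sum of
    two such bits gives a discrete triangular {0, 1, 2} LSB distribution
    (probabilities 1/4, 1/2, 1/4), i.e. the 2-bit TPDF case.
    """
    if tpdf:
        return random.getrandbits(1) + random.getrandbits(1)
    return random.getrandbits(1)

samples = [dither_lsbs() for _ in range(10000)]
mean = sum(samples) / len(samples)  # should sit near 1.0 LSB for TPDF
```

Because each PRNG bit is independent of the last, the dither sequence has a flat (white) spectrum, which is the property the paragraph above relies on.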
On 2015-08-09, robert bristow-johnson wrote:
1) a dithered sigma-delta converter is typically better quality than
one without dithering
Correct.
there is and always has been **some** discussion and controversy about
that every time i've seen it discussed at an AES convention. i remember
On 2015-08-09, robert bristow-johnson wrote:
even so, Shannon information theory is sorta static. it does not
deal with the kind of redundancy of a repeated symbol or string.
In fact it does so fully,
really? like run-length encoding?
Shannon's original statement of his theory was
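The point in dispute — whether Shannon's framework captures the redundancy of a repeated symbol or string — can be illustrated with block (extension) entropies. This sketch is mine, not from the thread: per-symbol entropy of single symbols misses the repetition, but the entropy rate of longer blocks exposes it:

```python
from collections import Counter
from math import log2

def block_entropy_rate(s, k):
    """Empirical Shannon entropy of length-k blocks, per symbol, in bits."""
    blocks = [s[i:i + k] for i in range(0, len(s) - k + 1, k)]
    n = len(blocks)
    counts = Counter(blocks)
    h = -sum(c / n * log2(c / n) for c in counts.values())
    return h / k

s = "ab" * 5000
# symbol by symbol, 'a' and 'b' each appear half the time: 1 bit/symbol,
# as if the source were memoryless
h1 = block_entropy_rate(s, 1)
# over length-2 blocks the string is just "ab" repeated, so the per-symbol
# entropy rate collapses to 0 — the repetition is fully accounted for
h2 = block_entropy_rate(s, 2)
```

In Shannon's terms, the entropy *rate* of a source with memory is the limit of these block entropies, which is exactly how the theory handles run-length-style redundancy.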
On Thu, Aug 13, 2015 at 1:05 PM, Tom Duffy tdu...@tascam.com wrote: