This paper discusses Shannon entropy, Wiener entropy, Rényi entropy, spectral entropy, and some related measures in the context of speech processing:
"ON THE GENERALIZATION OF SHANNON ENTROPY FOR SPEECH RECOGNITION" http://architexte.ircam.fr/textes/Obin12e/index.pdf Quote: "2. SPECTRAL ENTROPY With an appropriate normalization, the power spectrum of an audio signal can be interpreted as a probability density. According to this interpretation, some techniques belonging to the domains of probability and information theory can be applied to sound representation and recognition: in particular, the concept of entropy can be extended to provide a concentration measure of a time-frequency density - which can be interpreted as a measure of the degree of voicing (alternatively, noisiness) of an audio signal. The representation adopted in this study (see [9] for the original formulation) is based on the use of Rényi entropies, as a generalization of the Shannon entropy [10]. In this section, the Rényi entropy is introduced with regard to information theory, and some relevant properties for the representation of audio signals are presented. In particular, the Rényi entropy presents the advantage over the Shannon entropy of focusing on the noise or the harmonic content." _______________________________________________ music-dsp mailing list music-dsp@music.columbia.edu https://lists.columbia.edu/mailman/listinfo/music-dsp