On 2019-01-15, David Reaves wrote:

I’m wondering about why the ever-prevalent auto-tune effect in much of today's (cough!) music (cough!) seems, to my ears, to have such a vocoder-y sound to it. Are the two effects related?

If you want to autotune not just one (over)tone but many, you will have to do something approaching a constant time-bandwidth product: something approaching a 1/f-versus-t stretch in your processing.
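To make the constant time-bandwidth idea concrete, here is a small NumPy sketch of constant-Q analysis bins, where the window length scales as 1/f so that f times window duration stays fixed. All parameters (sample rate, bins per octave, frequency range) are my own illustrative choices, not anything from the post:

```python
import numpy as np

# Constant-Q bin layout: geometrically spaced center frequencies, with
# window lengths proportional to 1/f so f * T is the same for every bin.
sr = 44100
bins_per_octave = 12
f_min, f_max = 55.0, sr / 2

# Quality factor implied by this bin spacing (adjacent bins just touch)
Q = 1.0 / (2 ** (1.0 / bins_per_octave) - 1)

n_bins = int(np.floor(bins_per_octave * np.log2(f_max / f_min)))
freqs = f_min * 2 ** (np.arange(n_bins) / bins_per_octave)

# Window length in samples: longer windows for low frequencies (the
# 1/f-versus-t stretch), shorter for high ones.
win_lens = np.ceil(Q * sr / freqs).astype(int)

# The product f * T (in seconds) is approximately Q for every bin.
products = freqs * win_lens / sr
```

Every bin then trades time resolution for frequency resolution at the same exchange rate, which is the invariance the rest of the argument leans on.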

When you try that out, the only invariant which fully works out is the minimal Fourier coupling between frequency and impulse response; the same thing can be extended into a coupling between all harmonic series in music, once we impose a time-bandwidth symmetry on our stuff. After that, we only need to analyze our cough-cough stuff, continuous or discrete, as one exemplar of the Lorentz group of time-frequency dilations, modulo an imperfect comb filter in frequency.
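That "minimal Fourier coupling" is the time-frequency uncertainty bound, and it can be checked numerically: a Gaussian window attains the minimum product sigma_t * sigma_f = 1/(4*pi). The grid sizes below are my own illustrative choices:

```python
import numpy as np

# Numerically verify that a Gaussian window attains the minimal Fourier
# time-bandwidth product sigma_t * sigma_f = 1/(4*pi).
n = 1 << 18
dt = 64.0 / n
t = (np.arange(n) - n // 2) * dt
g = np.exp(-t ** 2 / 2.0)                      # unit-width Gaussian window

# Temporal spread: standard deviation of the energy density |g|^2
p_t = g ** 2 / np.sum(g ** 2)
sigma_t = np.sqrt(np.sum(p_t * t ** 2))

# Spectral spread: standard deviation of |G(f)|^2 on the FFT grid
G = np.fft.fft(g)
f = np.fft.fftfreq(n, d=dt)
p_f = np.abs(G) ** 2 / np.sum(np.abs(G) ** 2)
sigma_f = np.sqrt(np.sum(p_f * f ** 2))

tb_product = sigma_t * sigma_f                 # approaches 1/(4*pi)
```

Any other window comes out strictly above 1/(4*pi), which is why Gaussian-shaped analysis keeps showing up in this kind of symmetry argument.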

What necessarily results is a more or less spectrally spread-out, 1/f-decaying, inverse-harmonic decomposition which is more or less periodic in period instead of frequency. That is the only way retuning *can* work: even with just two sinusoidal signals at a time, the mathematical rigidity of the overall problem will eventually force us into something like a spectrally nice remodulation, with few enough inharmonic intermodulation products.
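For concreteness, the textbook instance of this kind of spectrally nice remodulation is the phase vocoder, which is also why autotune-style retuning shares so much DNA with the vocoder sound the original question asked about. Below is a minimal NumPy sketch of phase-vocoder retuning (time-stretch, then resample); the frame size, hop, and function names are all my own illustrative choices:

```python
import numpy as np

def phase_vocoder_stretch(x, stretch, n_fft=1024, hop=256):
    """Time-stretch x by `stretch` at constant pitch (classic phase vocoder)."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    X = np.array([np.fft.rfft(win * x[i * hop : i * hop + n_fft])
                  for i in range(n_frames)])
    mag, phase = np.abs(X), np.angle(X)

    # Expected per-hop phase advance of each bin's center frequency
    omega = 2 * np.pi * np.arange(n_fft // 2 + 1) * hop / n_fft
    hop_s = int(round(hop * stretch))

    # Accumulate synthesis phase from the estimated instantaneous frequency
    syn_phase = np.zeros_like(phase)
    syn_phase[0] = phase[0]
    for m in range(1, n_frames):
        d = phase[m] - phase[m - 1] - omega
        d = (d + np.pi) % (2 * np.pi) - np.pi          # principal value
        syn_phase[m] = syn_phase[m - 1] + (omega + d) * (hop_s / hop)

    Y = mag * np.exp(1j * syn_phase)

    # Overlap-add with window-squared normalization
    out = np.zeros(n_fft + hop_s * (n_frames - 1))
    norm = np.zeros_like(out)
    for m in range(n_frames):
        out[m * hop_s : m * hop_s + n_fft] += win * np.fft.irfft(Y[m])
        norm[m * hop_s : m * hop_s + n_fft] += win ** 2
    return out / np.maximum(norm, 1e-8)

def pitch_shift(x, ratio, **kw):
    """Retune by `ratio` at constant duration: stretch, then read back faster."""
    y = phase_vocoder_stretch(x, ratio, **kw)
    idx = np.arange(len(x)) * ratio
    idx = idx[idx < len(y) - 1]
    lo = idx.astype(int)
    frac = idx - lo
    return (1 - frac) * y[lo] + frac * y[lo + 1]
```

The remodulation step is exactly the `mag * exp(1j * syn_phase)` line: magnitudes are kept, phases are rebuilt so each partial lands on the retuned frequency, and whatever the frame analysis gets wrong shows up as the familiar vocoder-ish smear.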

Since we have to estimate, statistically, all of our incoming "voices", we will also have to take care of their cross terms after remodulation. Those terms will be of infinite degree, because of the 1/f term in both time and frequency after remodulation. This makes the optimum solution hard to find, even under the 1/f, 1/t symmetry which we already know the continuous statistics of the problem dictate.
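The cross terms are easy to see even in the simplest possible case: naively remodulating a two-tone "voice" by a single carrier already scatters all the energy to sum and difference frequencies, none of which were in the input. A small NumPy sketch, with all frequencies my own illustrative choices:

```python
import numpy as np

# Two "voice" partials and one carrier, over exactly one second so every
# component sits on an integer FFT bin (no leakage to worry about).
sr = 8000
t = np.arange(sr) / sr
voices = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 700 * t)
carrier = np.sin(2 * np.pi * 300 * t)

# Naive remodulation: plain multiplication (ring modulation)
y = voices * carrier

# sin(a)*sin(b) = [cos(a-b) - cos(a+b)] / 2, so the spectrum now holds
# only the cross terms: |500±300| and |700±300| Hz = 200, 800, 400, 1000.
spec = np.abs(np.fft.rfft(y))
```

With more voices, and with the 1/f-weighted spectra real signals have, these products pile up at every order, which is what makes the statistically optimal retuner so hard to pin down.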
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
dupswapdrop: music-dsp mailing list
