---------------------------- Original Message ----------------------------
Subject: Re: [music-dsp] Auto-tune sounds like vocoder
From: "Eder Souza" <ederwan...@gmail.com>
Date: Thu, January 17, 2019 6:46 am
To: "A discussion list for music-related DSP" <music-dsp@music.columbia.edu>
--------------------------------------------------------------------------



> When I read the original patent US5973252A (Pitch detection and intonation
> correction apparatus and method), with its vague description of how the
> pitch shift is made, I wondered whether everything was really as simple as
> I was imagining...
>
> For pitch shift (Auto-Tune):
> Just get the fractional period, then use the fractional period position to
> cut out or add periods without applying overlap-add (just splice and
> add/remove at the exact period position; yeah, this is why you need a very
> strong pitch detector, to join or discard at the exact period position and
> not get clicks). This will expand or compress the signal, and now you just
> resample the signal to pitch shift (OK, now the formants go down).
this is the standard kinda time-domain pitch shifting that goes by a variety of
names: TDHS, maybe WSOLA.  it's what Eventide originally did (and i think what
Autotune originally did).  when you splice out a period, that is
time-compression (which speeds things up), and then for a pitch shifter you
have to resample that to slow down the time-compressed audio.  that moves both
the pitch and the formants down.  for upshifting you are splicing in an extra
period (which is time-stretching) and then resampling that to speed it up,
which moves the pitch and formants up.  in this method, the cycles of the
quasi-periodic waveform are stretched or scrunched in the resampling.  and in
this method octave errors might not hurt you, because all that means is you
splice in or out two entire periods instead of one, and it's still a reasonably
glitch-free splice.  this is particularly the case for WSOLA, which is not
directly concerned with the pitch at all, but with waveform similarity (which
*is* indirectly related to pitch).
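
to make this concrete, here is a rough python sketch of the splice-then-resample
idea (my own illustration, not Autotune's or Eventide's actual code; it assumes
the pitch period comes from an external pitch detector and stays roughly
constant over the block, and the resampler is just linear interpolation):

import numpy as np

def time_scale_by_splicing(x, period, stretch):
    # time-scale x by `stretch` by repeating or dropping whole pitch
    # periods at period boundaries (a plain splice, no overlap-add).
    # a real implementation would track the period as it drifts and
    # crossfade slightly at each splice to hide small errors.
    period = int(round(period))
    out = []
    acc = 0.0                          # output periods "owed" so far
    for k in range(len(x) // period):
        cycle = x[k * period:(k + 1) * period]
        acc += stretch
        while acc >= 1.0:              # emit this cycle 0, 1 or more times
            out.append(cycle)
            acc -= 1.0
    return np.concatenate(out) if out else x[:0]

def resample_linear(x, ratio):
    # naive resampler: read the input at `ratio` times the original rate
    # (ratio > 1 shortens and speeds up, ratio < 1 lengthens and slows down)
    n_out = int(len(x) / ratio)
    return np.interp(np.arange(n_out) * ratio, np.arange(len(x)), x)

def splice_resample_pitch_shift(x, period, ratio):
    # pitch shift by `ratio`: splice periods in (ratio > 1) or out
    # (ratio < 1) to time-scale by `ratio`, then resample by `ratio`
    # to restore the original duration.  pitch and formants move
    # together, which is where the munchkin sound comes from.
    return resample_linear(time_scale_by_splicing(x, period, ratio), ratio)

the splices land right on period boundaries, which is why the pitch detector
has to be solid: get the period wrong and every splice clicks.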
this is different from the Lent/Hamon method (sometimes called PSOLA), in which
you window off a single cycle and call it a "wavelet" or a "grain".  you do
*not* stretch nor scrunch that wavelet or grain (unless you *do* wanna move the
formants), but output the most current wavelet or grain, overlapping and
adding, at the rate of the output pitch.  when upshifting, there is more
overlapping and some grains will be used twice.  when downshifting, there is
less overlapping and some grains will be skipped and not used.
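
and here is a similarly rough python sketch of the Lent/Hamon (PSOLA-ish)
version, again my own illustration under the same assumption of a known,
roughly constant period from an external pitch detector:

import numpy as np

def psola_pitch_shift(x, period, ratio):
    # each grain is two periods of input, Hann-windowed, centered on an
    # input pitch mark.  the grain itself is never stretched or scrunched;
    # grains are overlap-added at output marks spaced period/ratio apart,
    # so the formants stay put while the pitch moves by `ratio`.
    # upshifting (ratio > 1) packs the output marks closer together, so
    # some grains get used twice; downshifting spreads them out, so some
    # grains get skipped.
    period = int(round(period))
    grain_len = 2 * period
    win = np.hanning(grain_len)
    out = np.zeros(len(x))
    t_out = 0.0                        # next output pitch mark
    while t_out + grain_len < len(x):
        k = int(round(t_out / period)) # nearest input pitch mark
        start = k * period
        if start + grain_len > len(x):
            break
        lo = int(round(t_out))
        out[lo:lo + grain_len] += x[start:start + grain_len] * win
        t_out += period / ratio        # space output marks at the new pitch
    return out

(the overlap-add gain ends up scaling roughly with `ratio`; a real
implementation would normalize for that and would place grains on actual
pitch marks rather than assuming a constant period.)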
>
> So I think that the current "robotic or vocoded" effect happens when you
> try to change the formants (warping back to the original formants or just
> warping to a new position).
>
> In the past I wrote the Keith Lent code to do Auto-Tune (pitch correction)
> and my results were cool...
>
> PS: I wrote code to test the pitch detector described in the patent above
> (just as a proof of concept), and yeah, this works great!



well, i dunno where Autotune is now, but back in the '90s i thought it sucked
(the Wave Mechanics products, PurePitch and PitchDoctor, were much better).
and Autotune back then did not use the Lent alg, so if there was a lot of
shifting (like 3 or 4 semitones or more), Autotune definitely munchkinized the
voice.
--


r b-j                              r...@audioimagination.com



"Imagination is more important than knowledge."

_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
