R B-J, Steffan, Xue, Igor,

Thank you all for your expert help. I meant to research all of the options
you've raised (grain synthesis (Lent's) vs. wavetable synthesis) before
responding, but I also wanted to let you know in a timely manner that I
appreciate your input. It's obvious now that, with the wealth of info
available, it'll be a while before I can post a meaningful response.

I did have a couple of follow-up questions that I hope aren't too irrelevant.
Because we are working with the VST spec (and temporarily within an
implementation of the Java MIDI Interface), we will have access to all MIDI
information. Assuming the instruments are carefully tuned, I think they should
be very close to the equal-temperament pitch. Given that we will have the
pitch detected ahead of time (the fundamental, at least), could we reduce
latency?
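For what it's worth, the MIDI note number already pins down the expected
equal-temperament fundamental before any audio arrives, which is what makes
the "pitch known ahead of time" assumption concrete. A minimal Java sketch
(class and method names are mine, purely illustrative):

```java
// Illustrative sketch: expected equal-temperament fundamental from a MIDI
// note number, using A4 = MIDI note 69 = 440 Hz. Not from any existing code.
public class MidiPitch {
    // f = 440 * 2^((n - 69) / 12)
    public static double noteToHz(int midiNote) {
        return 440.0 * Math.pow(2.0, (midiNote - 69) / 12.0);
    }

    public static void main(String[] args) {
        System.out.printf("MIDI 60 (middle C) -> %.3f Hz%n", noteToHz(60));
    }
}
```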

I should clarify that we won't have access to the instrument's (synth's)
implementation but will only have the output buffer. We had used R B-J's
wavetable interpolation idea when adjusting the SF2 output, but I'm not sure
how that would be possible when all we have is the MIDI event and the audio
signal.

Finally, it's worth mentioning that the specific use is fine-tuning equal
temperament to just intonation, so we are adjusting by ~40 cents at most, but
mostly ~10 cents. Since the goal is beatless harmony, we want precision to
2 cents, because the threshold of pitch perception is right around there.
1 cent would be even better; maybe 0.5 is too ambitious. Xue has a good
point, but if we disregard accuracy and trust that the synth is carefully
tuned, shouldn't this be possible, theoretically?
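To put the cent figures in perspective: the frequency ratio for a shift of c
cents is 2^(c/1200), so the 2-cent target means holding the pitch ratio to
roughly a tenth of a percent. A quick Java sketch (names are illustrative):

```java
// Illustrative sketch: cent offset -> frequency ratio, to show the precision
// implied by the 2-cent target. 1200 cents = one octave.
public class CentRatio {
    public static double centsToRatio(double cents) {
        return Math.pow(2.0, cents / 1200.0);
    }

    public static void main(String[] args) {
        // A 2-cent shift changes frequency by only about 0.12 %.
        System.out.printf("2 cents  -> ratio %.6f%n", centsToRatio(2.0));
        // The largest just-intonation correction mentioned (~40 cents).
        System.out.printf("40 cents -> ratio %.6f%n", centsToRatio(40.0));
    }
}
```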

Again, thank you all for the thoughtful information.

Regards,
Dylan
________________________________________
From: music-dsp-boun...@music.columbia.edu 
[music-dsp-boun...@music.columbia.edu] on behalf of robert bristow-johnson 
[r...@audioimagination.com]
Sent: Tuesday, August 02, 2011 1:17 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Precise, Real Time Pitch Shift with Formant Control

On 8/2/11 2:04 PM, Steffan Diedrichsen wrote:
> Since you implement for a synthesizer, you may look into the option for an 
> off-line pitch detection and real-time grain-synthesis. Grain synthesis has a 
> nice formant control and is fairly easy to implement.
i think that this "grain synthesis" is essentially what Lent's alg is.
with enough delay, it can be done real-time, but for a synth, i wouldn't
recommend doing that.

in fact, for a synth, if you can do the analysis and processing
off-line, you can accomplish the same effect with wavetable synthesis
but the waveforms for different regions of pitch have to be sorta mixed
(or interpolated on a sample-by-sample basis between adjacent
wavetables).  of course, if you don't do this (and just pitch up the
wavetable from a single analyzed note) you'll get Alvin the Chipmunk (or
Satan when downshifting).
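A minimal sketch of the sample-by-sample crossfade between adjacent
wavetables described above; the tables and the mix weight here are
hypothetical inputs, and a real synth would derive the weight from where the
current pitch falls between the two analyzed notes:

```java
// Illustrative sketch, not anyone's actual implementation: linearly
// interpolate each sample between a "lower-pitch" table and a "higher-pitch"
// table. w = 0 gives tableLo, w = 1 gives tableHi.
public class WavetableMix {
    public static float[] mix(float[] tableLo, float[] tableHi, float w) {
        float[] out = new float[tableLo.length];
        for (int i = 0; i < out.length; i++) {
            out[i] = (1.0f - w) * tableLo[i] + w * tableHi[i];
        }
        return out;
    }
}
```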

--

r b-j                  r...@audioimagination.com

"Imagination is more important than knowledge."



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp