Wow, nice examples! :-)

It's really amazing to get satisfactory results without needing a pitch track!

I will try to put some examples up somewhere. I have coded some algorithms
like phase vocoders, TDHS, and PSOLA (Lent). I'm using a pitch track (AMDF,
monophonic sound only) to decide where to splice and cross-fade with a Hann
window. My AMDF needs improvement, but I still get very nice results too...
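
Just to show what I mean by AMDF, here is a minimal sketch in C++ (not my
real code; the function name, ranges, and defaults are only for
illustration):

    // Minimal AMDF (average magnitude difference function) pitch estimator.
    // Searches lags between sampleRate/fMax and sampleRate/fMin and returns
    // the lag with the smallest average difference, converted to Hz.
    #include <vector>
    #include <cmath>
    #include <cstddef>

    float estimatePitchAMDF(const std::vector<float>& x, float sampleRate,
                            float fMin = 60.0f, float fMax = 1000.0f)
    {
        const std::size_t minLag = static_cast<std::size_t>(sampleRate / fMax);
        const std::size_t maxLag = static_cast<std::size_t>(sampleRate / fMin);
        if (minLag == 0 || x.size() < 2 * maxLag) return 0.0f;  // not enough signal

        std::size_t bestLag = minLag;
        float bestScore = 1e30f;

        for (std::size_t lag = minLag; lag <= maxLag; ++lag)
        {
            float sum = 0.0f;
            for (std::size_t n = 0; n < maxLag; ++n)
                sum += std::fabs(x[n] - x[n + lag]);        // magnitude difference
            const float score = sum / static_cast<float>(maxLag);
            if (score < bestScore) { bestScore = score; bestLag = lag; }
        }
        return sampleRate / static_cast<float>(bestLag);     // lag -> frequency
    }

A real one needs at least a voicing/silence check and some refinement around
the best lag, which is the part I still want to improve.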


Yeah, just for fun I made an automatic pitch correction ("AutoTune") using
the Keith Lent principle (it works well and keeps the formants). I will try
to upload it for you to listen to!


For now, listen to my phase vocoder in action in real time:

https://www.youtube.com/watch?v=QzVfkgUkLIY&t=6s

https://www.youtube.com/watch?v=YT-zAX3S850&t=3s


Regards,

Eder



On Tue, Mar 6, 2018 at 1:49 AM, robert bristow-johnson <r...@audioimagination.com> wrote:

> hi Nigel and Alan,
>
> i beg to differ that frequency analysis (i.e. FFT) is always necessary to
> do formant-preserving pitch shifting.  that's what this Lent
> (https://www.jstor.org/stable/3679554) or Hamon (i can't find a direct
> reference, but it is referenced here:
> https://pdfs.semanticscholar.org/4b8c/facc5e5af6850052fec6931d96124c7ee74c.pdf )
> thing is about.  and i sorta analyze it here:
> https://www.researchgate.net/profile/Robert_Bristow-Johnson/publication/255966071_A_Detailed_Analysis_of_a_Time-Domain_Formant-Corrected_Pitch-Shifting_Algorithm/links/5625676308aeabddac91cd08/A-Detailed-Analysis-of-a-Time-Domain-Formant-Corrected-Pitch-Shifting-Algorithm.pdf
> (don't tell AES that a copy of this paper lives at that location.  they'll
> be pissed and want it taken down, even if it's a quarter century old.)
>
> it needs a good pitch detector, but it can be used to shift pitch and
> formants independently of each other.  (the separate formants are *not*
> independent of each other, just that the formants together can be shifted
> independently of pitch.)  and it surely seems to me to be a form of
> granular synthesis (or resynthesis).  the fricatives will get their asses
> all chopped up and mangled, but if they're quick, it won't sound too awful
> bad.  the Digitech Vocalist and the Roland Voice Transformer were based on
> this.  i dunno if either Lent or Hamon got a dime from any profits from
> these products.
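
(Since I've implemented the Lent approach myself, here is a very rough,
simplified C++ sketch of the core idea, only as an illustration and not
rbj's or Lent's exact formulation. It assumes pitchMarks[] already holds
one ascending pitch-mark position per period, coming from some external
pitch detector:)

    // Rough sketch of Lent-style formant-preserving pitch shifting.
    // Each grain is a two-period, Hann-windowed slice of the input centered
    // on a pitch mark; grains are overlap-added at the *output* period
    // spacing (input period / pitchRatio).  The spacing changes the pitch
    // while each grain's spectral envelope (the formants) stays put.
    #include <vector>
    #include <cmath>
    #include <cstddef>

    std::vector<float> lentPitchShiftSketch(const std::vector<float>& x,
                                            const std::vector<int>& pitchMarks,
                                            float pitchRatio)
    {
        std::vector<float> y(x.size(), 0.0f);
        if (pitchMarks.size() < 2 || pitchRatio <= 0.0f) return y;

        const float PI = 3.14159265f;
        double outPos = pitchMarks.front();

        for (std::size_t m = 1; m + 1 < pitchMarks.size(); ++m)
        {
            const int period = pitchMarks[m] - pitchMarks[m - 1];
            if (period <= 0) continue;
            const int grainLen = 2 * period;              // two periods per grain

            // lay down copies of this grain until the output pointer
            // has moved past the current input pitch mark
            while (outPos < pitchMarks[m])
            {
                const int inStart  = pitchMarks[m] - period;
                const int outStart = static_cast<int>(outPos) - period;
                for (int n = 0; n < grainLen; ++n)
                {
                    const int in  = inStart + n;
                    const int out = outStart + n;
                    if (in < 0 || out < 0 ||
                        in >= (int)x.size() || out >= (int)y.size()) continue;
                    const float w = 0.5f * (1.0f - std::cos(2.0f * PI * n / (grainLen - 1)));
                    y[out] += w * x[in];                  // Hann-windowed overlap-add
                }
                outPos += period / pitchRatio;            // output marks at shifted period
            }
        }
        return y;
    }

Upshifting packs more (identical) grains per second, downshifting drops
some, and the unvoiced parts get chopped up exactly as described above.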
>
> not implying that this el-cheapo time-domain technique beats a good phase
> vocoder.  but it's cheap and does not require soooo much delay that it is
> ruled out for real-time or live application.
>
> bestest,
>
>
> --
>
> r b-j                         r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
>
>
> ---------------------------- Original Message ----------------------------
> Subject: Re: [music-dsp] granular synth write up / samples / c++
> From: "Nigel Redmon" <earle...@earlevel.com>
> Date: Mon, March 5, 2018 9:50 pm
> To: "A discussion list for music-related DSP" <
> music-dsp@music.columbia.edu>
> --------------------------------------------------------------------------
>
> > Hi Alan—extremely cool article, nice job!
> >
> > This is related to your closing question, but if you have time, I found
> this course very interesting:
> >
> > Audio Signal Processing for Music Applications
> > https://www.coursera.org/learn/audio-signal-processing
> >
> > It’s a different thing—the main part pertaining to analysis and
> re-synthesis in the frequency domain, with pitch and time changing and
> including formant handling. It’s not as well suited for real time, since
> you need to capture something in its entirety before analyzing and
> re-synthesizing (unless you go to a lot more work). On the other hand, it
> knows about pitch and individual harmonics, so it allows more sophisticated
> processing, more akin to Melodyne. I completed the first offering of it
> over three years ago, don’t know what improvements it might have now. The
> lab/home work revolves around a python toolkit.
> >
> > If nothing else, it would give you an idea about how to deal with formants
> (which will need frequency analysis anyway). You can also audit and just
> check out the course videos pertaining to the harmonic models, for instance.
> >
> > Nigel
> >
> >> On Mar 5, 2018, at 5:45 PM, Alan Wolfe <alan.wo...@gmail.com> wrote:
> >>
> >> Hello rbj!
> >>
> >> My techniques are definitely not that sophisticated. It's really neat
> to hear what the deeper layers of sophistication are.
> >>
> >> I'm particularly surprised to hear it is in the neighborhood of
> vocoders. That is another technique I'd like to learn sometime, but sounds
> "scary" (granular synthesis sounded scary too before I did this post hehe).
> >>
> >> Anyways, all I'm doing is placing grains one after another, but repeating
> or omitting them as needed to make the output buffer get to the target
> length for whatever percentage the input buffer is at. I only place whole
> grains into the output buffer.
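
(One way to read that, as a hypothetical interpretation rather than Alan's
actual code, is a proportional slot-to-grain mapping:)

    // Hypothetical helper: map an output grain slot to an input grain index
    // so that whole grains repeat when stretching and get skipped when
    // compressing, matching the "repeat or omit" description above.
    int pickInputGrain(int outputSlot, int numOutputGrains, int numInputGrains)
    {
        if (numOutputGrains <= 0) return 0;   // nothing to place
        return static_cast<int>(
            (static_cast<long long>(outputSlot) * numInputGrains) / numOutputGrains);
    }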
> >>
> >> There is a parameter that specifies a playback-rate multiplier for the
> grains (to make them slower or faster, aka affecting pitch without really
> affecting length).
> >>
> >> Whenever a grain is placed down, say grain index N, if the previous
> grain placed down isn't grain index N-1, but is grain index M, it does a
> cross fade from grain index M+1 to N to keep things continuous.
> >>
> >> In my setup, there is no overlapping of grains except for this cross
> fading, and no discontinuities.
> >>
> >> I use cubic Hermite interpolation to get fractional samples; my grain
> size is 20 milliseconds and the cross-fade time is 2 milliseconds.
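
(For anyone following along, a standard 4-point cubic Hermite / Catmull-Rom
interpolator of the kind mentioned above looks something like this; it's the
generic textbook form, not necessarily Alan's exact code:)

    // 4-point, 3rd-order Hermite (Catmull-Rom) interpolation.
    // x1 and x2 bracket the read position; t in [0,1) is the fractional part.
    float cubicHermite(float x0, float x1, float x2, float x3, float t)
    {
        const float c0 = x1;
        const float c1 = 0.5f * (x2 - x0);
        const float c2 = x0 - 2.5f * x1 + 2.0f * x2 - 0.5f * x3;
        const float c3 = 0.5f * (x3 - x0) + 1.5f * (x1 - x2);
        return ((c3 * t + c2) * t + c1) * t + c0;
    }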
> >>
> >> Would you consider this enough in the family of granular synthesis to
> call it GS for a layman / introduction?
> >>
> >> Thanks so much for the info!
> >>
> >> PS do you happen to know any gentle / short introductions to formants
> or vocoders?
> >>
> >> On Mar 5, 2018 3:58 PM, "robert bristow-johnson" <r...@audioimagination.com> wrote:
> >>
> >> this is very cool. i had not read through everything, but i listened to
> all of the sound examples.
> >>
> >> so there are two things i want to ask about. the first is about this
> "granular" semantic:
> >>
> >> Thing #1: so the pitch shifting is apparently *not* "formant-corrected"
> or "formant-preserving". when you shift up, the voice becomes a little
> "munchkinized" and when you shift down, Darth Vader (or Satan) comes
> through. that's okay (unless one does not want it), but i thought that with
> granular synthesis (or resynthesis), that the grains that are windowed off
> and overlap-added were not stretched (for downshifting) nor scrunched (for
> up-shifting). i.e. i thought that in granular synthesis, the amount of
> overlap increases in up shifting and decreases during downshifting. this
> kind of pitch shifting is what Keith Lent writes about in Computer Music
> Journal in 1989 (boy that's a long time ago) and i did a paper in the JAES
> in, i think, 1995.
> >>
> >> without this formant-preserving operation, i think i would call this
> either "TDHS" (time-domain harmonic scaling), "OLA" (overlap-add), or
> "WOLA" (windowed overlap-add), or if pitch detection is done "SOLA"
> (synchronous overlap-add) or "PSOLA" (pitch synchronous overlap-add).
> however i have read somewhere the usage of the term PSOLA to mean this
> formant-preserving pitch shifting à la Lent (or also a
> French dude named Hamon). BTW, IVL Technologies (they did the
> pitch-shifting products for Digitech) was heavy into this and had a few
> patents, some i believe are now expired.
> >>
> >> Thing #2: are you doing any pitch detection or some attempt to keep
> waveforms coherent in either the time-scaling or pitch-shifting
> applications? they sound pretty good (the windowing smoothes things out)
> but might sound more transparent if you could space the input grains by an
> integer number of periods.
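
(A crude illustration of that "integer number of periods" idea, with a
hypothetical helper; the period would come from whatever pitch detector is
used, and unvoiced frames just keep the nominal hop:)

    // Round a nominal input hop to the nearest whole multiple of the detected
    // pitch period so successive grains stay phase-coherent.
    int snapHopToPeriod(int nominalHopSamples, int periodSamples)
    {
        if (periodSamples <= 0) return nominalHopSamples;   // unvoiced: leave as-is
        int k = (nominalHopSamples + periodSamples / 2) / periodSamples;
        if (k < 1) k = 1;                                   // hop at least one period
        return k * periodSamples;
    }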
> >>
> >> with pitch-detection and careful cross-fading (and windowing can be
> thought of as a fade-up function concatenated to a fade-down function) you
> can make time-scaling or pitch-shifting a monophonic voice or harmonic
> instrument glitch free. it can sound *very* good and companies like
> Eventide have been doing something like that since the early-to-mid 80s.
> (ever since the H949.) and i imagine any modern DAW does this (and some
> might do frequency-domain pitch-shifting and/or time-scaling using
> something we usually call a "phase vocoder").
> >>
> >>
> >> but your examples sound pretty good.
> >>
> >> r b-j
> >>
> >>
> >> ---------------------------- Original Message ----------------------------
> >> Subject: [music-dsp] granular synth write up / samples / c++
> >> From: "Alan Wolfe" <alan.wo...@gmail.com>
> >> Date: Mon, March 5, 2018 5:14 pm
> >> To: "A discussion list for music-related DSP" <music-dsp@music.columbia.edu>
> >> --------------------------------------------------------------------------
> >>
> >> > Hey Guys,
> >> >
> >> > Figured I'd share this here.
> >> >
> >> > An explanation of basic granular synth stuff, and some simple
> standalone
> >> > C++ i wrote that implements it.
> >> >
> >> > https://blog.demofox.org/2018/03/05/granular-audio-synthesis/
> >> >
> >> > Kind of amazed at how well it works (:
> >> >
> >> > Thanks for the answer to my question BTW Jeff.
> >>
> >>
> >>
> >>
> >> --
> >>
> >> r b-j                         r...@audioimagination.com
> >>
> >> "Imagination is more important than knowledge."
> >>
> >>
> >>
> >>
> >>
> >>
> >
>
> --
>
> r b-j                         r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
>
_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
